Sample records for minimal source reconstructions

  1. Improved bioluminescence and fluorescence reconstruction algorithms using diffuse optical tomography, normalized data, and optimized selection of the permissible source region

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2011-01-01

    Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green’s functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for the FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
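    The shrinking-permissible-region scheme summarized in this abstract can be illustrated with a small, hedged sketch (not the authors' code): a toy nonnegative L1-regularized fit is solved on the current permissible set, the grid points carrying most of the reconstructed power are kept, and the loop tracks the objective so the best iteration can be returned. The forward matrix, thresholds, and iteration counts below are invented for illustration only.

```python
import numpy as np

def shrink_region_reconstruction(G, y, lam=1e-2, keep_frac=0.5, n_rounds=6):
    """Toy shrinking-permissible-region reconstruction.

    G : (n_meas, n_vox) Green's-function-like forward matrix (assumed known)
    y : (n_meas,) normalized measurements
    Returns the best source estimate and the objective value per round.
    """
    n_vox = G.shape[1]
    region = np.arange(n_vox)              # start: every voxel is permissible
    best_obj, best_x, history = np.inf, np.zeros(n_vox), []
    for _ in range(n_rounds):
        Gr = G[:, region]
        # crude nonnegative L1-regularized fit via projected gradient descent
        q = np.zeros(len(region))
        step = 1.0 / (np.linalg.norm(Gr, 2) ** 2 + 1e-12)
        for _ in range(500):
            q = np.maximum(q - step * (Gr.T @ (Gr @ q - y) + lam), 0.0)
        obj = 0.5 * np.sum((Gr @ q - y) ** 2) + lam * np.sum(q)
        history.append(obj)
        if obj < best_obj:
            best_obj = obj
            best_x = np.zeros(n_vox)
            best_x[region] = q
        # shrink: keep only the voxels most likely to contribute to the source
        n_keep = max(1, int(keep_frac * len(region)))
        region = region[np.argsort(q)[::-1][:n_keep]]
    return best_x, history

# toy example: two point sources on a 100-voxel grid
rng = np.random.default_rng(0)
G = rng.random((40, 100))
x_true = np.zeros(100)
x_true[[20, 70]] = [1.0, 0.5]
y = G @ x_true + 0.01 * rng.standard_normal(40)
x_hat, objectives = shrink_region_reconstruction(G, y)
print("recovered support:", np.flatnonzero(x_hat > 0.05))
```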

  2. Stable source reconstruction from a finite number of measurements in the multi-frequency inverse source problem

    NASA Astrophysics Data System (ADS)

    Karamehmedović, Mirza; Kirkeby, Adrian; Knudsen, Kim

    2018-06-01

    We consider the multi-frequency inverse source problem for the scalar Helmholtz equation in the plane. The goal is to reconstruct the source term in the equation from measurements of the solution on a surface outside the support of the source. We study the problem in a certain finite dimensional setting: from measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier–Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method is implemented numerically and our theoretical findings are supported by numerical experiments.
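    As a rough illustration of the general recipe (stacking the forward operators from several frequencies and inverting on the stable part of the spectrum with a truncated SVD), the sketch below uses random matrices in place of the Fourier–Bessel operators analyzed in the paper; the cutoff and problem sizes are arbitrary assumptions.

```python
import numpy as np

def multifrequency_svd_reconstruct(A_list, y_list, rel_cutoff=1e-3):
    """Reconstruct a source vector from measurements taken at several frequencies
    by stacking the forward operators and applying a truncated-SVD pseudo-inverse.
    A_list : list of (m_k, n) forward matrices, one per frequency (assumed known)
    y_list : list of (m_k,) measurement vectors
    """
    A = np.vstack(A_list)
    y = np.concatenate(y_list)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_cutoff * s[0]              # discard unstable singular directions
    return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

# toy example with three "frequencies" and random stand-in operators
rng = np.random.default_rng(1)
n = 50
x_true = rng.standard_normal(n)
A_list = [rng.standard_normal((30, n)) for _ in range(3)]
y_list = [A @ x_true + 0.01 * rng.standard_normal(30) for A in A_list]
x_hat = multifrequency_svd_reconstruct(A_list, y_list)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```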

  3. Three-dimensional reconstruction of neutron, gamma-ray, and x-ray sources using spherical harmonic decomposition

    NASA Astrophysics Data System (ADS)

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D.; Geppert-Kleinrath, V.; Grim, G.; Merrill, F. E.; Wilde, C. H.

    2017-11-01

    Neutron, gamma-ray, and x-ray imaging are important diagnostic tools at the National Ignition Facility (NIF) for measuring the two-dimensional (2D) size and shape of the neutron producing region, for probing the remaining ablator and measuring the extent of the DT plasmas during the stagnation phase of Inertial Confinement Fusion implosions. Due to the difficulty and expense of building these imagers, at most only a few two-dimensional projection images will be available to reconstruct the three-dimensional (3D) sources. In this paper, we present a technique that has been developed for the 3D reconstruction of neutron, gamma-ray, and x-ray sources from a minimal number of 2D projections using spherical harmonics decomposition. We present the detailed algorithms used for this characterization and the results of reconstructed sources from experimental neutron and x-ray data collected at OMEGA and NIF.
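    A minimal sketch of the spherical-harmonic idea, under the assumption that the quantity being expanded is a radius-like shape function sampled along known view directions: low-order real harmonic coefficients are fitted by least squares. This is an illustration of the decomposition, not the NIF reconstruction code.

```python
import numpy as np

def real_sph_basis(theta, phi):
    """Low-order real spherical-harmonic-like basis (up to l = 2) evaluated at
    polar angle theta and azimuth phi; normalization is absorbed by the fit."""
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    return np.column_stack([np.ones_like(x), x, y, z,
                            x * y, y * z, z * x, x ** 2 - y ** 2, 3.0 * z ** 2 - 1.0])

def fit_shape(theta, phi, radius):
    """Least-squares fit of harmonic coefficients to sampled radii r(theta, phi)."""
    coeff, *_ = np.linalg.lstsq(real_sph_basis(theta, phi), radius, rcond=None)
    return coeff

# toy example: a P2-type (prolate) hot spot sampled along a few hundred directions
rng = np.random.default_rng(2)
theta = np.arccos(rng.uniform(-1.0, 1.0, 200))
phi = rng.uniform(0.0, 2.0 * np.pi, 200)
r_true = 1.0 + 0.2 * (3.0 * np.cos(theta) ** 2 - 1.0)
coeff = fit_shape(theta, phi, r_true + 0.01 * rng.standard_normal(200))
print("fitted (3z^2 - 1) coefficient (true value 0.2):", coeff[-1])
```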

  4. Algorithms for bioluminescence tomography incorporating anatomical information and reconstruction of tissue optical properties

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2010-01-01

    Reconstruction algorithms are presented for a two-step solution of the bioluminescence tomography (BLT) problem. In the first step, a priori anatomical information provided by x-ray computed tomography or by other methods is used to solve the continuous wave (cw) diffuse optical tomography (DOT) problem. A Taylor series expansion approximates the light fluence rate dependence on the optical properties of each region where first and second order direct derivatives of the light fluence rate with respect to scattering and absorption coefficients are obtained and used for the reconstruction. In the second step, the reconstructed optical properties at different wavelengths are used to calculate the Green’s function of the system. Then an iterative minimization solution based on the L1 norm shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. This provides an efficient BLT reconstruction algorithm with the ability to determine relative source magnitudes and positions in the presence of noise. PMID:21258486

  5. Full statistical mode reconstruction of a light field via a photon-number-resolved measurement

    NASA Astrophysics Data System (ADS)

    Burenkov, I. A.; Sharma, A. K.; Gerrits, T.; Harder, G.; Bartley, T. J.; Silberhorn, C.; Goldschmidt, E. A.; Polyakov, S. V.

    2017-05-01

    We present a method to reconstruct the complete statistical mode structure and optical losses of multimode conjugated optical fields using an experimentally measured joint photon-number probability distribution. We demonstrate that this method evaluates classical and nonclassical properties using a single measurement technique and is well suited for quantum mesoscopic state characterization. We obtain a nearly perfect reconstruction of a field comprised of up to ten modes based on a minimal set of assumptions. To show the utility of this method, we use it to reconstruct the mode structure of an unknown bright parametric down-conversion source.

  6. On three-dimensional reconstruction of a neutron/x-ray source from very few two-dimensional projections

    DOE PAGES

    Volegov, P. L.; Danly, C. R.; Merrill, F. E.; ...

    2015-11-24

    The neutron imaging system at the National Ignition Facility is an important diagnostic tool for measuring the two-dimensional size and shape of the source of neutrons produced in the burning deuterium-tritium plasma during the stagnation phase of inertial confinement fusion implosions. Only a few two-dimensional projections of neutron images are available to reconstruct the three-dimensional neutron source. In our paper, we present a technique that has been developed for the 3D reconstruction of neutron and x-ray sources from a minimal number of 2D projections. Here, we present the detailed algorithms used for this characterization and the results of reconstructed sources from experimental data collected at Omega.

  7. Development of a directivity-controlled piezoelectric transducer for sound reproduction

    NASA Astrophysics Data System (ADS)

    Bédard, Magella; Berry, Alain

    2008-04-01

    Present sound reproduction systems do not attempt to simulate the spatial radiation of musical instruments, or sound sources in general, even though the spatial directivity has a strong impact on the psychoacoustic experience. A transducer consisting of four piezoelectric elemental sources made from curved PVDF films is used to generate a target directivity pattern in the horizontal plane, in the frequency range of 5-20 kHz. The vibratory and acoustical response of an elemental source is addressed, both theoretically and experimentally. Two approaches to synthesize the input signals to apply to each elemental source are developed in order to create a prescribed, frequency-dependent acoustic directivity. The circumferential Fourier decomposition of the target directivity provides a compromise between the magnitude and the phase reconstruction, whereas the minimization of a quadratic error criterion provides the best magnitude reconstruction. This transducer can improve sound reproduction by introducing the spatial radiation aspect of the original source at high frequency.

  8. Hyperspectral image reconstruction for x-ray fluorescence tomography

    DOE PAGES

    Gürsoy, Doǧa; Biçer, Tekin; Lanzirotti, Antonio; ...

    2015-01-01

    A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts, and uses a penalty term that has the effect of encouraging local continuity of model parameter estimates in both spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of Arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates with the proposed approach show significantly better reconstruction quality than the conventional analytical inversion approaches, and allow for a high data compression factor which can reduce data acquisition times remarkably. In particular, this technique provides the capability to tomographically reconstruct full energy dispersive spectra without being compromised by reconstruction artifacts that impact the interpretation of results.
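    The penalized Poisson maximum-likelihood idea can be sketched in a few lines, assuming a generic linear forward model and a quadratic roughness penalty as a stand-in for the paper's spatio-spectral continuity term; the system matrix, penalty weight and a one-step-late style update are toy choices, not the paper's implementation.

```python
import numpy as np

def penalized_mlem(A, y, beta=0.05, n_iter=200):
    """Penalized Poisson maximum-likelihood sketch (one-step-late style update)
    for counts y ~ Poisson(A @ x), with a quadratic roughness penalty that
    encourages locally smooth estimates (1-D neighbours here)."""
    n = A.shape[1]
    sens = A.sum(axis=0)                      # sensitivity term of MLEM
    x = np.full(n, max(y.mean(), 1.0) / max(sens.mean(), 1e-9))
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-9)
        # gradient of the roughness penalty sum_j (x[j+1] - x[j])**2
        rough = np.zeros(n)
        rough[1:] += 2.0 * (x[1:] - x[:-1])
        rough[:-1] -= 2.0 * (x[1:] - x[:-1])
        x = x * (A.T @ ratio) / np.maximum(sens + beta * rough, 1e-9)
    return x

# toy example: smooth 1-D "element map" observed through a random system matrix
rng = np.random.default_rng(3)
A = rng.random((80, 40))
x_true = np.convolve(rng.random(40), np.ones(5) / 5.0, mode="same")
y = rng.poisson(A @ x_true)
x_hat = penalized_mlem(A, y)
print("correlation with truth:", np.corrcoef(x_hat, x_true)[0, 1])
```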

  9. Reconstruction of source location in a network of gravitational wave interferometric detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavalier, Fabien; Barsuglia, Matteo; Bizouard, Marie-Anne

    2006-10-15

    This paper deals with the reconstruction of the direction of a gravitational wave source using the detection made by a network of interferometric detectors, mainly the LIGO and Virgo detectors. We suppose that an event has been seen in coincidence using a filter applied on the three detector data streams. Using the arrival time (and its associated error) of the gravitational signal in each detector, the direction of the source in the sky is computed using a χ² minimization technique. For reasonably large signals (SNR>4.5 in all detectors), the mean angular error between the real location and the reconstructed one is about 1 deg. We also investigate the effect of the network geometry assuming the same angular response for all interferometric detectors. It appears that the reconstruction quality is not uniform over the sky and is degraded when the source approaches the plane defined by the three detectors. Adding at least one other detector to the LIGO-Virgo network reduces the blind regions and in the case of 6 detectors, a precision less than 1 deg. on the source direction can be reached for 99% of the sky.
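    A hedged sketch of the timing-based χ² localization: predict pairwise arrival-time differences for a plane wave from each direction on a sky grid and pick the direction minimizing the χ². The detector positions and timing errors below are invented toy values (four non-coplanar sites to avoid the mirror degeneracy), not the LIGO-Virgo geometry.

```python
import numpy as np

C_LIGHT = 299792458.0  # m/s

def chi2_sky_localization(det_pos, t_arr, sigma_t, n_theta=90, n_phi=180):
    """Grid-search chi^2 fit of a source direction from arrival times at several
    detectors. Detector positions (m), times (s) and errors are toy values."""
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    n_hat = np.stack([np.sin(T) * np.cos(P),
                      np.sin(T) * np.sin(P),
                      np.cos(T)], axis=-1)            # unit vectors to the sky grid
    chi2 = np.zeros(T.shape)
    n_det = len(det_pos)
    for i in range(n_det):
        for j in range(i + 1, n_det):
            dt_obs = t_arr[j] - t_arr[i]
            # plane wave from n_hat: delay set by the baseline projected on n_hat
            dt_pred = (n_hat @ (det_pos[i] - det_pos[j])) / C_LIGHT
            sig = np.hypot(sigma_t[i], sigma_t[j])
            chi2 += ((dt_obs - dt_pred) / sig) ** 2
    k = np.unravel_index(np.argmin(chi2), chi2.shape)
    return T[k], P[k], chi2

# toy 4-detector network (positions in metres, not the real LIGO/Virgo sites)
pos = np.array([[0.0, 0.0, 0.0], [6.0e6, 0.0, 0.0],
                [0.0, 6.0e6, 0.0], [0.0, 0.0, 6.0e6]])
true_n = np.array([0.3, 0.5, np.sqrt(1.0 - 0.3 ** 2 - 0.5 ** 2)])
t_arr = -(pos @ true_n) / C_LIGHT                     # noiseless arrival times
theta_hat, phi_hat, _ = chi2_sky_localization(pos, t_arr, sigma_t=[1e-4] * 4)
print("true (theta, phi):", np.arccos(true_n[2]), np.arctan2(true_n[1], true_n[0]))
print("recovered (theta, phi):", theta_hat, phi_hat)
```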

  10. Application of Polychromatic µCT for Mineral Density Determination

    PubMed Central

    Zou, W.; Hunter, N.; Swain, M.V.

    2011-01-01

    Accurate assessment of mineral density (MD) provides information critical to the understanding of mineralization processes of calcified tissues, including bones and teeth. High-resolution three-dimensional assessment of the MD of teeth has been demonstrated by relatively inaccessible synchrotron radiation microcomputed tomography (SRµCT). While conventional desktop µCT (CµCT) technology is widely available, polychromatic source and cone-shaped beam geometry confound MD assessment. Recently, considerable attention has been given to optimizing quantitative data from CµCT systems with polychromatic x-ray sources. In this review, we focus on the approaches that minimize inaccuracies arising from beam hardening, in particular, beam filtration during the scan, beam-hardening correction during reconstruction, and mineral density calibration. Filtration along with the lowest possible source voltage results in a narrow and near-single-peak spectrum, favoring high contrast and minimal beam-hardening artifacts. More effective beam monochromatization approaches are described. We also examine the significance of beam-hardening correction in determining the accuracy of mineral density estimation. In addition, standards for the calibration of reconstructed grey-scale attenuation values against MD, including K2HPO4 liquid phantoms and polymer-hydroxyapatite and solid hydroxyapatite (HA) phantoms, are discussed. PMID:20858779

  11. Characterizing open and non-uniform vertical heat sources: towards the identification of real vertical cracks in vibrothermography experiments

    NASA Astrophysics Data System (ADS)

    Castelo, A.; Mendioroz, A.; Celorrio, R.; Salazar, A.; López de Uralde, P.; Gorosmendi, I.; Gorostegui-Colinas, E.

    2017-05-01

    Lock-in vibrothermography is used to characterize vertical kissing and open cracks in metals. In this technique the crack heats up during ultrasound excitation due mainly to friction between the defect's faces. We have solved the inverse problem, consisting of determining the heat source distribution produced at cracks under amplitude modulated ultrasound excitation, which is an ill-posed inverse problem. As a consequence, the minimization of the residual is unstable. We have stabilized the algorithm by introducing a penalty term based on the Total Variation functional. In the inversion, we combine amplitude and phase surface temperature data obtained at several modulation frequencies. Inversions of synthetic data with added noise indicate that compact heat sources are characterized accurately and that the particular upper contours can be retrieved for shallow heat sources. The overall shape of open and homogeneous semicircular strip-shaped heat sources representing open half-penny cracks can also be retrieved, but the reconstruction of the deeper end of the heat source loses contrast. Angle-, radius- and depth-dependent inhomogeneous heat flux distributions within these semicircular strips can also be qualitatively characterized. Reconstructions of experimental data taken on samples containing calibrated heat sources confirm the predictions from reconstructions of synthetic data. We also present inversions of experimental data obtained from a real welded Inconel 718 specimen. The results are in good qualitative agreement with the results of liquid penetrant testing.

  12. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
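    The replicated-reconstruction-object pattern can be illustrated with a short sketch (not the Trace code): each worker accumulates its chunk of projections into a private image and the replicas are summed in a final reduction, so no locking is needed during accumulation. The naive unfiltered backprojection used here is only a placeholder kernel, and all sizes are arbitrary.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backproject_chunk(sino_chunk, angles_chunk, n_pix):
    """Accumulate a chunk of projections into a private (replicated) image,
    so workers never write to shared state. Very simplified parallel-beam model."""
    replica = np.zeros((n_pix, n_pix))
    centre = (n_pix - 1) / 2.0
    yy, xx = np.mgrid[0:n_pix, 0:n_pix] - centre
    for proj, ang in zip(sino_chunk, angles_chunk):
        # detector coordinate of every pixel for this view
        t = xx * np.cos(ang) + yy * np.sin(ang) + centre
        idx = np.clip(np.round(t).astype(int), 0, n_pix - 1)
        replica += proj[idx]
    return replica

def parallel_backprojection(sinogram, angles, n_workers=4):
    chunks = np.array_split(np.arange(len(angles)), n_workers)
    n_pix = sinogram.shape[1]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(backproject_chunk, sinogram[c], angles[c], n_pix)
                   for c in chunks]
        replicas = [f.result() for f in futures]
    return np.sum(replicas, axis=0)           # reduction over replicated objects

# toy usage
angles = np.linspace(0, np.pi, 64, endpoint=False)
sinogram = np.random.default_rng(5).random((64, 128))
image = parallel_backprojection(sinogram, angles)
print(image.shape)
```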

  13. Minimal Model of Prey Localization through the Lateral-Line System

    NASA Astrophysics Data System (ADS)

    Franosch, Jan-Moritz P.; Sobotka, Marion C.; Elepfandt, Andreas; van Hemmen, J. Leo

    2003-10-01

    The clawed frog Xenopus is an aquatic predator catching prey at night by detecting water movements caused by its prey. We present a general method, a “minimal model” based on a minimum-variance estimator, to explain prey detection through the frog's many lateral-line organs, even when several of them are defunct. We show how waveform reconstruction allows Xenopus' neuronal system to determine both the direction and the character of the prey and even to distinguish two simultaneous wave sources. The results can be applied to many aquatic amphibians, fish, or reptiles such as crocodilians.
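    A minimal sketch of a minimum-variance (inverse-variance weighted) combination of noisy sensor readings, with defunct organs simply excluded; the bearing model and noise values are invented for illustration and are not the paper's wave-propagation model.

```python
import numpy as np

def minimum_variance_estimate(readings, variances, alive):
    """Inverse-variance weighted combination of sensor readings, ignoring
    defunct sensors. readings/variances are per-organ toy values."""
    readings = np.asarray(readings, dtype=float)
    weights = np.where(alive, 1.0 / np.asarray(variances, dtype=float), 0.0)
    return np.sum(weights * readings) / np.sum(weights)

# toy usage: ten lateral-line organs estimating a wave-source bearing (degrees),
# two of them defunct
rng = np.random.default_rng(6)
true_bearing = 42.0
variances = rng.uniform(1.0, 16.0, 10)
readings = true_bearing + np.sqrt(variances) * rng.standard_normal(10)
alive = np.ones(10, dtype=bool)
alive[[3, 7]] = False
print("estimated bearing:", minimum_variance_estimate(readings, variances, alive))
```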

  14. Spatiotemporal reconstruction of auditory steady-state responses to acoustic amplitude modulations: Potential sources beyond the auditory pathway.

    PubMed

    Farahani, Ehsan Darestani; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid

    2017-03-01

    Investigating the neural generators of auditory steady-state responses (ASSRs), i.e., auditory evoked brain responses with a wide range of screening and diagnostic applications, has been the focus of various studies for many years. Most of these studies employed a priori assumptions regarding the number and location of neural generators. The aim of this study is to reconstruct ASSR sources with minimal assumptions in order to gain in-depth insight into the number and location of brain regions that are activated in response to low- as well as high-frequency acoustically amplitude-modulated signals. In order to reconstruct ASSR sources, we applied independent component analysis with subsequent equivalent dipole modeling to single-subject EEG data (young adults, 20-30 years of age). These data were based on white noise stimuli, amplitude modulated at 4, 20, 40, or 80 Hz. The independent components that exhibited a significant ASSR were clustered among all participants by means of a probabilistic clustering method based on a Gaussian mixture model. Results suggest that a widely distributed network of sources, located in cortical as well as subcortical regions, is active in response to 4, 20, 40, and 80 Hz amplitude-modulated noises. Some of these sources are located beyond the central auditory pathway. Comparison of brain sources in response to different modulation frequencies suggested that the identified brain sources in the brainstem, the left and the right auditory cortex show a higher responsiveness to 40 Hz than to the other modulation frequencies. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Waveform inversion of volcano-seismic signals for an extended source

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.

    2007-01-01

    We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signature of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. The triggering excitations of the oscillating cracks are successfully reconstructed. Copyright 2007 by the American Geophysical Union.

  16. Observing gravitational-wave transient GW150914 with minimal assumptions

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackburn, L.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chatterji, S.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Clark, M.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R. T.; De Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. 
C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Haas, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinder, I.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinsey, M.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Laguna, P.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Page, J.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. 
S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-06-01

    The gravitational-wave signal GW150914 was first identified on September 14, 2015, by searches for short-duration gravitational-wave transients. These searches identify time-correlated transients in multiple detectors with minimal assumptions about the signal morphology, allowing them to be sensitive to gravitational waves emitted by a wide range of sources including binary black hole mergers. Over the observational period from September 12 to October 20, 2015, these transient searches were sensitive to binary black hole mergers similar to GW150914 to an average distance of ~600 Mpc. In this paper, we describe the analyses that first detected GW150914 as well as the parameter estimation and waveform reconstruction techniques that initially identified GW150914 as the merger of two black holes. We find that the reconstructed waveform is consistent with the signal from a binary black hole merger with a chirp mass of ~30 M⊙ and a total mass before merger of ~70 M⊙ in the detector frame.

  17. A GPU-Based Architecture for Real-Time Data Assessment at Synchrotron Experiments

    NASA Astrophysics Data System (ADS)

    Chilingaryan, Suren; Mirone, Alessandro; Hammersley, Andrew; Ferrero, Claudio; Helfen, Lukas; Kopmann, Andreas; Rolo, Tomy dos Santos; Vagovic, Patrik

    2011-08-01

    Advances in digital detector technology are presently leading to rapidly increasing data rates in imaging experiments. Using fast two-dimensional detectors in computed tomography, the data acquisition can be much faster than the reconstruction if no adequate measures are taken, especially when a high photon flux at synchrotron sources is used. We have optimized the reconstruction software employed at the micro-tomography beamlines of our synchrotron facilities to use the computational power of modern graphics cards. The main paradigm of our approach is the full utilization of all system resources. We use a pipelined architecture, where the GPUs are used as compute coprocessors to reconstruct slices, while the CPUs are preparing the next ones. Special attention is devoted to minimizing data transfers between the host and GPU memory and to executing memory transfers in parallel with the computations. We were able to reduce the reconstruction time by a factor of 30 and process a typical data set of 20 GB in 40 seconds. The time needed for the first evaluation of the reconstructed sample is reduced significantly, and quasi real-time visualization is now possible.
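    The pipelining pattern described above (one stage prepares the next slices while another reconstructs the current ones) can be sketched with a bounded queue and two threads; plain NumPy stands in for the GPU kernels, and the sizes and placeholder "reconstruction" are arbitrary.

```python
import queue
import threading
import numpy as np

def prepare_slices(n_slices, q):
    """Producer: the 'CPU' stage loads/preprocesses slices and queues them
    while the consumer is still reconstructing earlier ones."""
    rng = np.random.default_rng(7)
    for i in range(n_slices):
        sinogram = rng.random((180, 256))          # stand-in for loaded data
        q.put((i, sinogram))
    q.put(None)                                    # end-of-stream marker

def reconstruct_slices(q, results):
    """Consumer: the 'GPU' stage; here just a cheap NumPy placeholder kernel."""
    while True:
        item = q.get()
        if item is None:
            break
        i, sinogram = item
        results[i] = sinogram.T @ sinogram         # placeholder for reconstruction

n_slices = 16
q = queue.Queue(maxsize=4)                         # bounded: keeps both stages busy
results = {}
producer = threading.Thread(target=prepare_slices, args=(n_slices, q))
consumer = threading.Thread(target=reconstruct_slices, args=(q, results))
producer.start(); consumer.start()
producer.join(); consumer.join()
print("reconstructed", len(results), "slices")
```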

  18. Towards disparity joint upsampling for robust stereoscopic endoscopic scene reconstruction in robotic prostatectomy

    NASA Astrophysics Data System (ADS)

    Luo, Xiongbiao; McLeod, A. Jonathan; Jayarathne, Uditha L.; Pautler, Stephen E.; Schlacta, Christopher M.; Peters, Terry M.

    2016-03-01

    Three-dimensional (3-D) scene reconstruction from stereoscopic binocular laparoscopic videos is an effective way to expand the limited surgical field and augment the structure visualization of the organ being operated on in minimally invasive surgery. However, currently available reconstruction approaches are limited by image noise, occlusions, and textureless and blurred structures. In particular, an endoscope inside the body has only a limited light source, resulting in illumination non-uniformities in the visualized field. These limitations unavoidably deteriorate the stereo image quality and hence lead to low-resolution and inaccurate disparity maps, resulting in blurred edge structures in 3-D scene reconstruction. This paper proposes an improved stereo correspondence framework that integrates cost-volume filtering with joint upsampling for robust disparity estimation. Joint bilateral upsampling, joint geodesic upsampling, and tree filtering upsampling were compared to enhance the disparity accuracy. The experimental results demonstrate that joint upsampling provides an effective way to boost the disparity estimation and hence to improve surgical endoscopic 3-D scene reconstruction. Moreover, bilateral upsampling generally outperforms the other two upsampling methods in disparity estimation.

  19. Two-dimensional grid-free compressive beamforming.

    PubMed

    Yang, Yang; Chu, Zhigang; Xu, Zhongming; Ping, Guoli

    2017-08-01

    Compressive beamforming realizes the direction-of-arrival (DOA) estimation and strength quantification of acoustic sources by solving an underdetermined system of equations relating microphone pressures to a source distribution via compressive sensing. The conventional method assumes the DOAs of sources to lie on a grid. Its performance degrades due to basis mismatch when this assumption is not satisfied. To overcome this limitation for measurements with planar microphone arrays, a two-dimensional grid-free compressive beamforming is developed. First, a continuum-based atomic norm minimization is defined to denoise the measured pressure and thus obtain the pressure from sources. Next, a positive semidefinite program is formulated to approximate the atomic norm minimization. Subsequently, a reasonably fast algorithm based on the alternating direction method of multipliers is presented to solve the positive semidefinite program. Finally, the matrix enhancement and matrix pencil method is introduced to process the obtained pressure and reconstruct the source distribution. Both simulations and experiments demonstrate that under certain conditions, grid-free compressive beamforming can provide high-resolution and low-contamination imaging, allowing accurate and fast estimation of two-dimensional DOAs and quantification of source strengths, even with non-uniform arrays and noisy measurements.

  20. A new Mumford-Shah total variation minimization based model for sparse-view x-ray computed tomography image reconstruction.

    PubMed

    Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong

    2018-04-12

    Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise-constant assumption of the TV model, the reconstructed images often suffer from over-smoothness at image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV.' To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted by using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, contrast-to-noise ratio and edge preservation.

  21. Sequentially reweighted TV minimization for CT metal artifact reduction.

    PubMed

    Zhang, Xiaomeng; Xing, Lei

    2013-07-01

    Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
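    The reweighting idea can be sketched on a 1-D signal under toy assumptions (a simple denoising data term instead of the constrained projection model, and an iteratively reweighted least-squares inner solver): weights for the next pass are computed from the current solution's gradients, so edges already found are penalized less and gradient sparsity is promoted. Parameters are illustrative only.

```python
import numpy as np

def weighted_tv_denoise_irls(f, w, lam=0.5, eps=1e-3, n_inner=30):
    """Approximately solve min_u 0.5*||u - f||^2 + lam * sum_i w_i |u_{i+1} - u_i|
    for a 1-D signal via iteratively reweighted least squares (lagged diffusivity)."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)                # forward-difference matrix
    u = f.copy()
    for _ in range(n_inner):
        d = D @ u
        W = np.diag(w / np.sqrt(d * d + eps))     # quadratic surrogate of w*|d|
        u = np.linalg.solve(np.eye(n) + lam * D.T @ W @ D, f)
    return u

def sequentially_reweighted_tv(f, n_rounds=4, delta=1e-2):
    """Outer reweighting: weights derived from the current solution's gradients,
    so small gradients are penalized more strongly on the next pass."""
    w = np.ones(len(f) - 1)
    u = f.copy()
    for _ in range(n_rounds):
        u = weighted_tv_denoise_irls(f, w)
        w = 1.0 / (np.abs(np.diff(u)) + delta)
    return u

# toy piecewise-constant signal with noise
rng = np.random.default_rng(8)
truth = np.repeat([0.0, 1.0, 0.3], 60)
noisy = truth + 0.1 * rng.standard_normal(truth.size)
print("RMSE:", np.sqrt(np.mean((sequentially_reweighted_tv(noisy) - truth) ** 2)))
```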

  22. A limited-angle CT reconstruction method based on anisotropic TV minimization.

    PubMed

    Chen, Zhiqiang; Jin, Xin; Li, Liang; Wang, Ge

    2013-04-07

    This paper presents a compressed sensing (CS)-inspired reconstruction method for limited-angle computed tomography (CT). Currently, CS-inspired CT reconstructions are often performed by minimizing the total variation (TV) of a CT image subject to data consistency. A key to obtaining high image quality is to optimize the balance between TV-based smoothing and data fidelity. In the case of the limited-angle CT problem, the strength of data consistency is angularly varying. For example, given a parallel beam of x-rays, information extracted in the Fourier domain is mostly orthogonal to the direction of x-rays, while little is probed otherwise. However, the TV minimization process is isotropic, suggesting that it is unfit for limited-angle CT. Here we introduce an anisotropic TV minimization method to address this challenge. The advantage of our approach is demonstrated in numerical simulation with both phantom and real CT images, relative to the TV-based reconstruction.

  23. Study and comparison of different sensitivity models for a two-plane Compton camera.

    PubMed

    Muñoz, Enrique; Barrio, John; Bernabéu, José; Etxebeste, Ane; Lacasta, Carlos; Llosá, Gabriela; Ros, Ana; Roser, Jorge; Oliver, Josep F

    2018-06-25

    Given the strong variations in the sensitivity of Compton cameras for the detection of events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage over the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain an expression for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated using Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model and with other models have been compared. Images have been reconstructed from several simulated sources, including point-like sources and extended distributions of activity, and also from experimental data measured with ²²Na sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model allows us to effectively recover the intensity of point-like sources at different positions in the FoV and to reconstruct regions of homogeneous activity with minimal variance. Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage over the scatterer.

  24. Propeller thoracodorsal artery perforator flap for breast reconstruction.

    PubMed

    Angrigiani, Claudio; Rancati, Alberto; Escudero, Ezequiel; Artero, Guillermo; Gercovich, Gustavo; Deza, Ernesto Gil

    2014-08-01

    The thoracodorsal artery perforator (TDAP) flap has been described for breast reconstruction. This flap requires intramuscular dissection of the pedicle. A modification of the conventional TDAP surgical technique for breast reconstruction is described, utilizing instead a propeller TDAP flap. The authors present their clinical experience with the propeller TDAP flap in breast reconstruction alone or in combination with expanders or permanent implants. From January 2009 to February 2013, sixteen patients had breast reconstruction utilizing a propeller TDAP flap. Retrospective analysis of patient characteristics, clinical indications, procedure and outcomes was performed. The follow-up period ranged from 4 to 48 months. Sixteen patients had breast reconstruction using a TDAP flap with or without simultaneous insertion of an expander or implant. All flaps survived, while two cases required minimal resection due to distal flap necrosis, healing by second intention. There were no donor-site seromas, while minimal wound dehiscence was detected in two cases. The propeller TDAP flap appears to be safe and effective for breast reconstruction, resulting in minimal donor site morbidity. The use of this propeller flap emerges as a true alternative to the traditional TDAP flap.

  25. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    PubMed

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulation based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
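    The alternating structure can be sketched in a simplified 1-D setting: a sparse-coding step (soft-thresholding of patch coefficients in a fixed orthonormal DCT dictionary standing in for a learned one) alternates with an image-update step (gradient descent on a generic linear data-fidelity term plus a patch-consistency term). All parameters are illustrative, and the balancing-principle parameter selection is not reproduced.

```python
import numpy as np

def dct_dictionary(p):
    """Orthonormal DCT-II dictionary for patches of length p (a fixed stand-in
    for a learned dictionary)."""
    k, n = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
    D = np.cos(np.pi * (n + 0.5) * k / p)
    return D / np.linalg.norm(D, axis=1, keepdims=True)

def extract_patches(x, p):
    return np.stack([x[i:i + p] for i in range(len(x) - p + 1)])

def alternating_min_recon(A, y, p=8, lam=0.05, mu=1.0, n_outer=20):
    """Alternate sparse coding of patches with an image update on a generic
    linear model A: a toy 1-D analogue of SIR with an l1 dictionary penalty."""
    n = A.shape[1]
    D = dct_dictionary(p)
    x = A.T @ y / np.linalg.norm(A, 2) ** 2          # crude initialization
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + mu * p)
    for _ in range(n_outer):
        # (1) sparse coding: closed-form l1 prox since D has orthonormal rows
        P = extract_patches(x, p)
        C = P @ D.T                                  # patch coefficients
        C = np.sign(C) * np.maximum(np.abs(C) - lam, 0.0)
        recon_patches = C @ D
        # (2) image update: pull x toward the data and toward the sparse patches
        target = np.zeros(n)
        counts = np.zeros(n)
        for i in range(len(recon_patches)):
            target[i:i + p] += recon_patches[i]
            counts[i:i + p] += 1
        target /= counts
        for _ in range(20):
            grad = A.T @ (A @ x - y) + mu * (x - target)
            x -= step * grad
    return x

# toy usage: recover a smooth-ish signal from noisy random projections
rng = np.random.default_rng(9)
n = 64
x_true = np.cumsum(rng.standard_normal(n)) / 5.0
A = rng.standard_normal((48, n)) / np.sqrt(48)
y = A @ x_true + 0.05 * rng.standard_normal(48)
x_hat = alternating_min_recon(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```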

  26. Contrast adaptive total p-norm variation minimization approach to CT reconstruction for artifact reduction in reduced-view brain perfusion CT

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Won; Kim, Jong-Hyo

    2011-03-01

    Perfusion CT (PCT) examinations are increasingly used for diagnosis of acute brain diseases such as hemorrhage and infarction, because the functional map images it produces, such as regional cerebral blood flow (rCBF), regional cerebral blood volume (rCBV), and mean transit time (MTT), may provide critical information in the emergency work-up of patient care. However, a typical PCT scans the same slices several tens of times after injection of contrast agent, which leads to a much increased radiation dose and is inevitably of growing concern because of the radiation-induced cancer risk. Reducing the number of projection views in combination with a TV minimization reconstruction technique is regarded as an option for radiation reduction. However, reconstruction artifacts due to an insufficient number of X-ray projections become problematic, especially when high-contrast enhancement signals are present or patient motion occurs. In this study, we present a novel reconstruction technique using contrast-adaptive TpV minimization that can reduce reconstruction artifacts effectively by using different p-norms for high-contrast and low-contrast objects. In the proposed method, high-contrast components are first reconstructed using thresholded projection data and low p-norm total variation to reflect sparseness in both projection and reconstruction spaces. Next, projection data are modified to contain only low-contrast objects by creating projection data of the reconstructed high-contrast components and subtracting them from the original projection data. Then, the low-contrast projection data are reconstructed by using a relatively high p-norm TV minimization technique, and are combined with the reconstructed high-contrast component images to produce the final reconstructed images. The proposed algorithm was applied to a numerical phantom and a clinical data set from a brain PCT exam, and the resultant images were compared with those obtained using filtered back projection (FBP) and a conventional TV reconstruction algorithm. Our results show the potential of the proposed algorithm for image quality improvement, which in turn may lead to dose reduction.

  27. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
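    A tiny sketch of the three-stage timing model (data transfer + queue wait + computation) used to rank candidate sites; the site names, bandwidths, queue waits and throughputs below are made-up numbers, not measured values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    bandwidth_gbps: float      # storage -> compute link
    queue_wait_s: float        # expected scheduler wait
    slice_throughput: float    # reconstructed slices per second

def workflow_time(site: Site, dataset_gb: float, n_slices: int) -> float:
    """Estimated end-to-end time = transfer + queue wait + reconstruction."""
    transfer = dataset_gb * 8.0 / site.bandwidth_gbps
    compute = n_slices / site.slice_throughput
    return transfer + site.queue_wait_s + compute

# toy comparison of three hypothetical sites for a 200 GB, 2048-slice dataset
sites = [Site("cluster_A", 10.0, 600.0, 4.0),
         Site("cluster_B", 40.0, 1800.0, 8.0),
         Site("cluster_C", 5.0, 60.0, 2.0)]
for s in sites:
    print(f"{s.name}: {workflow_time(s, 200.0, 2048):.0f} s")
best = min(sites, key=lambda s: workflow_time(s, 200.0, 2048))
print("selected:", best.name)
```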

  28. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149

  9. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, we focus on time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing resources and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on the evaluated resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
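
    The three-stage performance model summarized above can be illustrated with a toy calculation. The sketch below is a hypothetical simplification (all names and numbers are assumptions, not the authors' calibrated models): total workflow time is treated as the sum of transfer, queue, and compute times.

        # Minimal sketch of the three-stage cost model: T = T_transfer + T_queue + T_compute.
        # All parameters are illustrative assumptions, not measured APS values.
        def estimate_workflow_time(data_bytes, bandwidth_bps, queue_wait_s,
                                   n_tasks, flops_per_task, cluster_flops):
            t_transfer = data_bytes * 8.0 / bandwidth_bps         # storage -> compute transfer
            t_compute = n_tasks * flops_per_task / cluster_flops  # iterative reconstruction tasks
            return t_transfer + queue_wait_s + t_compute

        # Example: 0.5 TB dataset over a 10 Gb/s link, 15 min expected queue wait,
        # 1024 slice reconstructions at ~2 TFLOP each on a 100 TFLOP/s allocation.
        print(estimate_workflow_time(0.5e12, 10e9, 900.0, 1024, 2e12, 1e14))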

  10. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    PubMed

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted l1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity compared to the optimal l1 minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which either select too many or too few values per iteration, RMP aims at selecting a sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves higher reconstruction accuracy at significantly lower computational complexity compared to existing greedy recovery algorithms. It is even superior to l1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. The superior performance of RMP is illustrated with both noiseless and noisy samples.
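
    For readers unfamiliar with greedy compressed-sensing recovery, the sketch below shows standard Orthogonal Matching Pursuit (OMP), the family of methods RMP belongs to. It is not the RMP algorithm itself (RMP additionally selects a reduced set of correlation values per iteration and prunes the estimate), and all names are illustrative.

        import numpy as np

        def omp(A, y, k, tol=1e-8):
            """Greedy recovery of a k-sparse x from y ~ A x (standard OMP, not RMP)."""
            m, n = A.shape
            support = []
            residual = np.asarray(y, dtype=float).copy()
            coeffs = np.zeros(0)
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching column
                if j not in support:
                    support.append(j)
                # least-squares fit of y on the columns selected so far
                coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coeffs
                if np.linalg.norm(residual) < tol:
                    break
            x = np.zeros(n)
            x[support] = coeffs
            return x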

  11. Edge guided image reconstruction in linear scan CT by weighted alternating direction TV minimization.

    PubMed

    Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin

    2014-01-01

    Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency, but its data sets are under-sampled and angularly limited, which makes high-quality image reconstruction challenging. In this work, an edge guided total variation minimization reconstruction (EGTVM) algorithm is developed to deal with this problem. The proposed method is based on the combination of total variation (TV) regularization and an iterative edge detection strategy. In the proposed method, the edge weights of intermediate reconstructions are incorporated into the TV objective function. The optimization is efficiently solved by applying the alternating direction method of multipliers. A prudent and conservative edge detection strategy proposed in this paper can obtain the true edges while restricting the errors to an acceptable degree. Based on comparisons on both simulation studies and a real CT data set, EGTVM provides comparable or even better quality than non-edge-guided reconstruction and the adaptive steepest descent-projection onto convex sets method. With the utilization of weighted alternating direction TV minimization and edge detection, EGTVM achieves fast and robust convergence and reconstructs high-quality images when applied to linear scan CT with under-sampled data sets.
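
    As a rough sketch of what an edge-weighted TV objective looks like (the exact weighting and splitting used in EGTVM may differ; the symbols below are generic assumptions), one common formulation is

        \min_{x \ge 0} \; \tfrac{1}{2}\,\|Ax - b\|_2^2 \;+\; \lambda \sum_i w_i\,\big\|(\nabla x)_i\big\|_2 ,

    where A is the linear-scan projection operator, b the measured data, and the weights w_i are set close to zero across detected edges (so they are not penalized) and close to one elsewhere; the resulting objective is then minimized with the alternating direction method of multipliers.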

  12. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.
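
    The paper solves its l1-regularized deconvolution model with a primal-dual interior point method; as a simpler, hedged illustration of the same sparse-deconvolution model, the sketch below uses ISTA (proximal gradient) with a dense transfer matrix H. All names are assumptions, not the authors' implementation.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista_l1_deconvolution(H, y, lam, n_iter=500):
            """Minimize 0.5*||H f - y||^2 + lam*||f||_1 (sparse impact-force model) by ISTA."""
            step = 1.0 / (np.linalg.norm(H, 2) ** 2)   # 1 / Lipschitz constant of the gradient
            f = np.zeros(H.shape[1])
            for _ in range(n_iter):
                grad = H.T @ (H @ f - y)
                f = soft_threshold(f - step * grad, step * lam)
            return f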

  13. Scapular tip and latissimus dorsi osteomyogenous free flap for the reconstruction of a maxillectomy defect: A minimally invasive transaxillary approach.

    PubMed

    Park, Sung Joon; Jeong, Woo-Jin; Ahn, Soon-Hyun

    2017-11-01

    The purpose of this study was to propose a novel, minimally invasive transaxillary approach for harvesting the scapular tip and latissimus dorsi osteomyogenous free flap for the reconstruction of a maxillectomy defect. A retrospective case series study of 4 patients who underwent reconstruction using a scapular tip composite free flap through the transaxillary approach was conducted. The data (age, sex, pathology, previous treatment and adjuvant treatment) were collected and analysed. Total operation time, number of hospital days and the cosmetic and functional outcome of reconstruction were analysed. Two male and two female patients were enrolled in this study. The patients' ages ranged from 52 to 59 years. All the patients had maxillectomy defects, classified as at least Okay type II, which were successfully reconstructed using a scapular tip and latissimus dorsi free flap through a minimally invasive transaxillary approach. The entire operation time for the primary tumour surgery and reconstruction ranged from 6.2 to 12.1 h (mean, 11.1 h). The average length of the hospital stay was 13 days (range, 10-16 days). No major donor site morbidity was observed, and there was no graft failure that required revision or exploration surgery. The minimally invasive transaxillary approach for harvesting the scapular tip and latissimus dorsi osteomyogenous free flap for the reconstruction of a maxillectomy defect is a promising approach for achieving more favourable functional and aesthetic outcomes compared with other bone-containing free flaps and the classic approach for harvesting the scapular tip and latissimus dorsi free flap. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  14. Optimization of view weighting in tilted-plane-based reconstruction algorithms to minimize helical artifacts in multi-slice helical CT

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang

    2003-05-01

    In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone beam artifacts by tilting a reconstruction plane to fit a helical source trajectory optimally. Furthermore, to improve the noise characteristics or dose efficiency of the single-tilted-plane-based reconstruction algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation along the third axis. As a result, the capability of suppressing helical and cone beam artifacts in the multi-tilted-plane-based reconstruction algorithm is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy is used to optimize both the capability of suppressing helical and cone beam artifacts and the noise characteristics. A helical body phantom is employed to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating artifact index and noise characteristics, showing that matched view weighting significantly improves both helical artifact suppression and noise characteristics or dose efficiency in comparison to the case in which non-matched view weighting is applied. Finally, it is believed that the matched view weighting approach is of practical importance in the development of multi-slice helical CT, because it maintains the computational structure of fan beam filtered backprojection and demands no extra computational cost.

  15. The Characterization of Military Aircraft Jet Noise Using Near-Field Acoustical Holography Methods

    NASA Astrophysics Data System (ADS)

    Wall, Alan Thomas

    The noise emissions of jets from full-scale engines installed on military aircraft pose a significant hearing loss risk to military personnel. Noise reduction technologies and the development of operational procedures that minimize noise exposure to personnel are enhanced by the accurate characterization of noise sources within a jet. Hence, more than six decades of research have gone into jet noise measurement and prediction. In the past decade, the noise-source visualization tool near-field acoustical holography (NAH) has been applied to jets. NAH fits a weighted set of expansion wave functions, typically planar, cylindrical, or spherical, to measured sound pressures in the field. NAH measurements were made of a jet from an installed engine on a military aircraft. In the present study, the algorithm of statistically optimized NAH (SONAH) is modified to account for the presence of acoustic reflections from the concrete surface over which the jet was measured. The three dimensional field in the jet vicinity is reconstructed, and information about sources is inferred from reconstructions at the boundary of the turbulent jet flow. Then, a partial field decomposition (PFD) is performed, which represents the total field as the superposition of multiple, independent partial fields. This is the most direct attempt to equate partial fields with independent sources in a jet to date.
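
    At its core, the NAH/SONAH step described above amounts to a regularized least-squares fit of wave-function amplitudes to the measured pressures, followed by re-evaluation of those wave functions at reconstruction points. The sketch below is a generic Tikhonov-regularized version of that idea, not the modified SONAH with ground reflections used in this work; all symbols are assumptions.

        import numpy as np

        def wavefunction_fit(Phi_meas, p_meas, Phi_recon, reg=1e-2):
            """Fit complex wave-function amplitudes to measured pressures and
            propagate them to reconstruction points (generic regularized least squares).
            Phi_meas: (n_mics, n_waves), p_meas: (n_mics,), Phi_recon: (n_pts, n_waves)."""
            G = Phi_meas.conj().T @ Phi_meas
            rhs = Phi_meas.conj().T @ p_meas
            amps = np.linalg.solve(G + reg * np.eye(G.shape[0]), rhs)
            return Phi_recon @ amps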

  16. Reconstruction of Craniomaxillofacial Bone Defects Using Tissue-Engineering Strategies with Injectable and Non-Injectable Scaffolds

    PubMed Central

    Gaihre, Bipin; Uswatta, Suren; Jayasuriya, Ambalangodage C.

    2017-01-01

    Engineering craniofacial bone tissues is challenging due to their complex structures. Current standard autografts and allografts have many drawbacks for craniofacial bone tissue reconstruction, including donor site morbidity and the limited ability to reinstate the aesthetic characteristics of the host tissue. To overcome these problems, tissue engineering and regenerative medicine strategies have been developed as a potential way to reconstruct damaged bone tissue. Different types of new biomaterials, including natural polymers, synthetic polymers and bioceramics, have emerged to treat these damaged craniofacial bone tissues in the form of injectable and non-injectable scaffolds, which are examined in this review. Injectable scaffolds can be considered a better approach to craniofacial tissue engineering as they can be inserted with minimally invasive surgery, thus protecting the aesthetic characteristics. In this review, we also focus on recent research innovations with different types of stem-cell sources harvested from oral tissue and growth factors used to develop craniofacial bone tissue-engineering strategies. PMID:29156629

  17. Neural network Hilbert transform based filtered backprojection for fast inline x-ray inspection

    NASA Astrophysics Data System (ADS)

    Janssens, Eline; De Beenhouwer, Jan; Van Dael, Mattias; De Schryver, Thomas; Van Hoorebeke, Luc; Verboven, Pieter; Nicolai, Bart; Sijbers, Jan

    2018-03-01

    X-ray imaging is an important tool for quality control since it allows the interior of products to be inspected in a non-destructive way. Conventional x-ray imaging, however, is slow and expensive. Inline x-ray inspection, on the other hand, can pave the way towards fast and individual quality control, provided that a sufficiently high throughput can be achieved at a minimal cost. To meet these criteria, an inline inspection acquisition geometry is proposed where the object moves and rotates on a conveyor belt while it passes a fixed source and detector. Moreover, for this acquisition geometry, a new neural-network-based reconstruction algorithm is introduced: the neural network Hilbert transform based filtered backprojection. The proposed algorithm is evaluated on both simulated and real inline x-ray data and has been shown to generate high-quality 400 × 400 pixel reconstructions within 200 ms, thereby meeting the high throughput criteria.

  18. The use of Achilles tendon allograft for latissimus dorsi tendon reconstruction: a minimally invasive technique.

    PubMed

    Sabzevari, Soheil; Chao, Tom; Kalawadia, Jay; Lin, Albert

    2018-01-01

    Treatment of subacute, retracted latissimus dorsi and teres major tendon ruptures in young overhead athletes is challenging. This case report describes management of a subacute retracted latissimus dorsi and teres major rupture with Achilles tendon allograft reconstruction using a two-incision minimally invasive technique. Level of evidence V.

  19. Laparoscopic Harvest of the Rectus Abdominis for Perineal Reconstruction

    PubMed Central

    Agochukwu, Nneamaka; Bonaroti, Alisha; Beck, Sandra

    2017-01-01

    Summary: The rectus abdominis is a workhorse flap for perineal reconstruction, in particular after abdominoperineal resection (APR). Laparoscopic and robotic techniques for abdominoperineal surgery are becoming more common. The open harvest of the rectus abdominis negates the advantages of these minimally invasive approaches. We present our early experience with laparoscopic harvest of the rectus muscle for perineal reconstruction. Three laparoscopic unilateral rectus abdominis muscle harvests were performed for perineal reconstruction following minimally invasive colorectal and urological procedures. The 2 patients who underwent APR also had planned external perineal skin reconstruction with local flaps. All rectus muscle harvests were performed laparoscopically. Two were for perineal reconstruction following laparoscopic APR, and 1 was for anterior vaginal wall reconstruction. This was done with 4 ports positioned on the contralateral abdomen. The average laparoscopic harvest time was 60–90 minutes. The rectus muscle remained viable in all cases. One patient developed partial necrosis of a posterior thigh fasciocutaneous flap after cancer recurrence. There were no pelvic abscesses or abdominal wall hernias. Laparoscopic harvest of the rectus appears to be a cost-effective, reliable, and reproducible procedure for perineal reconstruction with minimal donor-site morbidity. Larger clinical studies are needed to further establish the efficacy and advantages of the laparoscopic rectus for perineal reconstruction. PMID:29263976

  20. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.
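
    The adaptive standardized LORETA/FOCUSS procedure itself is beyond a short sketch, but the regularized minimum-norm estimate that such inverse methods build on can be written in a few lines. The code below is that generic building block only (the lead-field L, data y, and regularization lam are assumed names), not the adaptive algorithm described in the record.

        import numpy as np

        def minimum_norm_estimate(L, y, lam=1e-2):
            """Tikhonov-regularized minimum-norm source estimate:
            x = L^T (L L^T + lam*I)^{-1} y, with L the (n_sensors x n_sources) lead-field."""
            n_sensors = L.shape[0]
            return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)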

  1. The transverse musculocutaneous gracilis flap for breast reconstruction: guidelines for flap and patient selection.

    PubMed

    Schoeller, Thomas; Huemer, Georg M; Wechselberger, Gottfried

    2008-07-01

    The transverse musculocutaneous gracilis (TMG) flap has received little attention in the literature as a valuable alternative source of donor tissue in the setting of breast reconstruction. The authors give an in-depth review of their experience with breast reconstruction using the TMG flap. A retrospective review of 111 patients treated with a TMG flap for breast reconstruction in an immediate or a delayed setting between August of 2002 and July of 2007 was undertaken. Of these, 26 patients underwent bilateral reconstruction and 68 underwent unilateral reconstruction, and 17 patients underwent reconstruction unilaterally with a double TMG flap. Patient age ranged between 24 and 65 years (mean, 37 years). Twelve patients had to be taken back to the operating room because of flap-related problems and nine patients underwent successful revision microsurgically, resulting in three complete flap losses in a series of 111 patients with 154 transplanted TMG flaps. Partial flap loss was encountered in two patients, whereas fat tissue necrosis was managed conservatively in six patients. Donor-site morbidity was an advantage of this flap, with a concealed scar and minimal contour irregularities of the thigh, even in unilateral harvest. Complications included delayed wound healing (n = 10), hematoma (n = 5), and transient sensory deficit over the posterior thigh (n = 49). The TMG flap is more than an alternative to the deep inferior epigastric perforator (DIEP) flap in microsurgical breast reconstruction in selected patients. In certain indications, such as bilateral reconstructions, it possibly surpasses the DIEP flap because of a better concealed donor scar and easier harvest.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Virador, Patrick R.G.

    The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses, for the first time, the problem of fully 3D tomographic reconstruction using a septa-less, stationary (i.e. no rotation or linear motion), rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled, which leads to missing information. The author presents new Fourier Methods based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of response (LORs) between the measured interaction points instead of rebinning the events into predefined crystal face LORs, which is the only other method to handle DOI information proposed thus far. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs). In order to address the persisting ZEBs and the issue of missing information originating from packing considerations, the algorithms (a) perform nearest neighbor smoothing in 2D in the radial bins, (b) employ a semi-iterative procedure in order to estimate the unsampled data and (c) mash the in-plane projections, i.e. 2D data, with the projection data from the first oblique angles, which are then used to reconstruct the preliminary image in the 3D Reprojection algorithm. The author presents reconstructed images of point sources and extended sources in both 2D and 3D. The images show that the camera is anticipated to eliminate radial elongation and produce artifact-free and essentially spatially isotropic images throughout the entire FOV. It has a resolution of 1.50 ± 0.75 mm FWHM near the center, 2.25 ± 0.75 mm FWHM in the bulk of the FOV, and 3.00 ± 0.75 mm FWHM near the edge and corners of the FOV.

  3. Higher order reconstruction for MRI in the presence of spatiotemporal field perturbations.

    PubMed

    Wilm, Bertram J; Barmet, Christoph; Pavan, Matteo; Pruessmann, Klaas P

    2011-06-01

    Despite continuous hardware advances, MRI is frequently subject to field perturbations that are of higher than first order in space and thus violate the traditional k-space picture of spatial encoding. Sources of higher order perturbations include eddy currents, concomitant fields, thermal drifts, and imperfections of higher order shim systems. In conventional MRI with Fourier reconstruction, they give rise to geometric distortions, blurring, artifacts, and error in quantitative data. This work describes an alternative approach in which the entire field evolution, including higher order effects, is accounted for by viewing image reconstruction as a generic inverse problem. The relevant field evolutions are measured with a third-order NMR field camera. Algebraic reconstruction is then formulated such as to jointly minimize artifacts and noise in the resulting image. It is solved by an iterative conjugate-gradient algorithm that uses explicit matrix-vector multiplication to accommodate arbitrary net encoding. The feasibility and benefits of this approach are demonstrated by examples of diffusion imaging. In a phantom study, it is shown that higher order reconstruction largely overcomes variable image distortions that diffusion gradients induce in EPI data. In vivo experiments then demonstrate that the resulting geometric consistency permits straightforward tensor analysis without coregistration. Copyright © 2011 Wiley-Liss, Inc.
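
    The iterative conjugate-gradient reconstruction referred to above solves the normal equations of the encoding model E x = y, with E applied only through matrix-vector products. The sketch below is a generic CG on the normal equations (the operator handles apply_E/apply_Eh, starting image x0, and iteration count are assumptions), not the third-order field-camera pipeline of the paper.

        import numpy as np

        def cg_normal_equations(apply_E, apply_Eh, y, x0, n_iter=20):
            """Solve E^H E x = E^H y by conjugate gradients, with the (possibly
            higher-order) encoding E given only as forward/adjoint operators."""
            x = x0.copy()
            r = apply_Eh(y) - apply_Eh(apply_E(x))   # residual of the normal equations
            p = r.copy()
            rs_old = np.vdot(r, r)
            for _ in range(n_iter):
                Ap = apply_Eh(apply_E(p))
                alpha = rs_old / np.vdot(p, Ap)
                x = x + alpha * p
                r = r - alpha * Ap
                rs_new = np.vdot(r, r)
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x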

  4. Developing bioproxies of past ocean ecosystem change through compound-specific stable isotope analysis of proteinaceous deep-sea corals.

    NASA Astrophysics Data System (ADS)

    McMahon, K.; Williams, B.; Mccarthy, M. D.; Etnoyer, P. J.

    2015-12-01

    Our understanding of current and future ocean conditions is framed by our ability to reconstruct past changes in ecosystem structure and function recorded in paleoarchives. One such archive, proteinaceous deep-sea corals, acts as a "living sediment trap" with the potential to greatly improve our ability to reconstruct long-term, high-resolution biogeochemical records of export production. Compound-specific stable isotope analysis (CSIA) of individual amino acids (AAs) in deep-sea corals has provided highly detailed new tools to reconstruct changes in both plankton community composition and sources of nitrogen. However, to realize the full potential of CSIA in deep-sea corals, it is critical to better understand the link between the biogeochemical signatures of deep-sea coral polyp tissue and the diagenetically resistant proteinaceous skeletal material. We conducted the first detailed comparison of δ13C and δ15N values for individual AAs between tissue and skeleton for three deep-sea coral genera (Primnoa, Isidella, and Kulamanamana). For δ13C values, we found minimal offsets in both essential and non-essential AAs across genera, strongly supporting coral skeleton AA fingerprinting as a new tool to reconstruct plankton community structure. Similarly, there was no significant offset in source AA δ15N values between tissue and skeleton, supporting the use of Phe δ15N as a proxy for baseline nitrogen sources. However, and rather unexpectedly, we found that the δ15N values of the trophic AA group were consistently 3-4‰ lighter in skeleton than in polyp tissue for all three genera. We hypothesize that this may reflect a partitioning of either N flux or pathways associated with AA transamination between polyp and skeleton tissues. This offset leads to an underestimate of trophic position using current CSIA-based calculations. Overall, our work strongly supports the applicability of CSIA in proteinaceous deep-sea corals for reconstructing past changes in biogeochemical cycling and plankton community dynamics. However, it also indicates that a new correction factor will be required to reconstruct accurate records of change in plankton trophic structure.
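
    For context, CSIA-based trophic position estimates of the kind referred to above are typically computed from the offset between a trophic amino acid (e.g. glutamic acid) and a source amino acid (e.g. phenylalanine). The constants below are only commonly cited literature values, not those used in this study:

        TP \approx 1 + \frac{\delta^{15}\mathrm{N}_{\mathrm{Glu}} - \delta^{15}\mathrm{N}_{\mathrm{Phe}} - \beta}{\mathrm{TDF}}

    with β on the order of 3.4‰ and a trophic discrimination factor (TDF) on the order of 7.6‰ often assumed. A trophic-AA δ15N that is consistently lighter in skeleton than in tissue therefore biases TP low unless a correction factor is applied.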

  5. Brief report: reconstruction of joint hyaline cartilage by autologous progenitor cells derived from ear elastic cartilage.

    PubMed

    Mizuno, Mitsuru; Kobayashi, Shinji; Takebe, Takanori; Kan, Hiroomi; Yabuki, Yuichiro; Matsuzaki, Takahisa; Yoshikawa, Hiroshi Y; Nakabayashi, Seiichiro; Ik, Lee Jeong; Maegawa, Jiro; Taniguchi, Hideki

    2014-03-01

    In healthy joints, hyaline cartilage covering the joint surfaces of bones provides cushioning due to its unique mechanical properties. However, because of its limited regenerative capacity, age- and sports-related injuries to this tissue may lead to degenerative arthropathies, prompting researchers to investigate a variety of cell sources. We recently succeeded in isolating human cartilage progenitor cells from ear elastic cartilage. Human cartilage progenitor cells have high chondrogenic and proliferative potential to form elastic cartilage with long-term tissue maintenance. However, it is unknown whether ear-derived cartilage progenitor cells can be used to reconstruct hyaline cartilage, which has different mechanical and histological properties from elastic cartilage. In our efforts to develop foundational technologies for joint hyaline cartilage repair and reconstruction, we conducted this study to obtain an answer to this question. We created an experimental canine model of knee joint cartilage damage and transplanted ear-derived autologous cartilage progenitor cells. The reconstructed cartilage was rich in proteoglycans and showed unique histological characteristics similar to joint hyaline cartilage. In addition, mechanical properties of the reconstructed tissues were higher than those of ear cartilage and equal to those of joint hyaline cartilage. This study suggested that joint hyaline cartilage was reconstructed from ear-derived cartilage progenitor cells. It also demonstrated that ear-derived cartilage progenitor cells, which can be harvested by a minimally invasive method, would be useful for reconstructing joint hyaline cartilage in patients with degenerative arthropathies. © AlphaMed Press.

  6. Sparse reconstruction of breast MRI using homotopic L0 minimization in a regional sparsified domain.

    PubMed

    Wong, Alexander; Mishra, Akshaya; Fieguth, Paul; Clausi, David A

    2013-03-01

    The use of MRI for early breast examination and screening of asymptomatic women has become increasingly popular, given its ability to provide detailed tissue characteristics that cannot be obtained using other imaging modalities such as mammography and ultrasound. Recent application-oriented developments in compressed sensing theory have shown that certain types of magnetic resonance images are inherently sparse in particular transform domains, and as such can be reconstructed with a high level of accuracy from highly undersampled k-space data below Nyquist sampling rates using homotopic L0 minimization schemes, which holds great potential for significantly reducing acquisition time. An important consideration in the use of such homotopic L0 minimization schemes is the choice of sparsifying transform. In this paper, a regional differential sparsifying transform is investigated for use within a homotopic L0 minimization framework for reconstructing breast MRI. By taking local regional characteristics into account, the regional differential sparsifying transform can better account for signal variations and fine details that are characteristic of breast MRI than the popular finite differential transform, while still maintaining strong structure fidelity. Experimental results show that good breast MRI reconstruction accuracy can be achieved compared to existing methods.
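
    As a hedged sketch of the kind of homotopic L0-minimization scheme referred to above (the specific regional differential transform and continuation schedule of the paper may differ; Ψ, F_u, and ρ_σ below are generic placeholders), the reconstruction can be written as

        \hat{x} = \arg\min_{x} \sum_i \rho_\sigma\!\big(|(\Psi x)_i|\big) \quad \text{s.t.} \quad \|F_u x - y\|_2 \le \epsilon ,

    where F_u is the undersampled Fourier (k-space) operator, Ψ the sparsifying transform (here, a regional differential transform), and ρ_σ a smooth surrogate of the L0 penalty whose parameter σ is driven toward zero over a homotopy of progressively harder problems.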

  7. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods and greatly increase the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.

  8. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods and greatly increase the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853

  9. The advantages of a swept source optical coherence tomography system in the evaluation of occlusal disorders

    NASA Astrophysics Data System (ADS)

    Marcauteanu, Corina; Bradu, Adrian; Sinescu, Cosmin; Topala, Florin Ionel; Negrutiu, Meda Lavinia; Duma, Virgil Florin; Podoleanu, Adrian Gh.

    2014-01-01

    Occlusal disorders are characterized by multiple dental and periodontal signs. Some of these are reversible (such as excessive tooth mobility, fremitus, tooth pain, migration of teeth in the absence of periodontitis), while others are not (pathological occlusal/incisal wear, abfractions, enamel cracks, tooth fractures, gingival recessions). In this paper we demonstrate the advantages of a fast swept source OCT system in the diagnosis of pathological incisal wear, a key sign of the occlusal disorders. On 15 extracted frontal teeth, four levels of pathological incisal wear facets were artificially created. After every level of induced defect, OCT scanning was performed. B-scans were acquired and 3D reconstructions were generated. A swept source OCT instrument is used in this study. The swept source has a central wavelength of 1050 nm and a sweeping rate of 100 kHz. A depth resolution of 12 μm in air, determined by the swept source, was experimentally measured. The pathological incisal wear is qualitatively observed on the B-scans as 2D images and on 3D reconstructions (volumes). For quantitative evaluations of volumes, we used the Image J software. Our swept source OCT system has several advantages, including the ability to measure (in air) a minimal volume of 2352 μm³ and to collect high resolution volumetric images in 2.5 s. By calculating the areas of lost tissue corresponding to each difference between B-scans, the final volumes of incisal wear were obtained. This swept source OCT method is very useful for the dynamic evaluation of pathological incisal wear.
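
    The volume bookkeeping described above (lost-tissue areas measured on successive B-scans, then combined into a wear volume) reduces to a simple Riemann sum. The snippet below is only that arithmetic with hypothetical names and made-up numbers, not the Image J workflow used in the study.

        def wear_volume_um3(lost_areas_um2, b_scan_spacing_um):
            """Approximate incisal wear volume: sum of per-B-scan lost areas (um^2)
            times the spacing between adjacent B-scans (um)."""
            return sum(lost_areas_um2) * b_scan_spacing_um

        # Example with illustrative numbers: five B-scans spaced 10 um apart.
        print(wear_volume_um3([120.0, 180.0, 210.0, 160.0, 90.0], 10.0))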

  10. Laparoendoscopic Management of Midureteral Strictures

    PubMed Central

    Komninos, Christos; Koo, Kyo Chul

    2014-01-01

    The incidence of ureteral strictures has increased worldwide owing to the widespread use of laparoscopic and endourologic procedures. Midureteral strictures can be managed by either an endoscopic approach or surgical reconstruction, including open or minimally invasive (laparoscopic/robotic) techniques. Minimally invasive surgical ureteral reconstruction is gaining in popularity in the management of midureteral strictures. However, only a few studies have been published so far regarding the safety and efficacy of laparoscopic and robotic ureteral reconstruction procedures. Nevertheless, most of the studies have reported at least equivalent outcomes with the open approach. In general, strictures more than 2 cm, injury strictures, and strictures associated either with radiation or with reduced renal function of less than 25% may be managed more appropriately by minimally invasive surgical reconstruction, although the evidence to establish these recommendations is not yet adequate. Defects of 2 to 3 cm in length may be treated with laparoscopic or robot-assisted uretero-ureterostomy, whereas defects of 12 to 15 cm may be managed either via ureteral reimplantation with a Boari flap or via transuretero-ureterostomy in case of low bladder capacity. Cases with more extended defects can be reconstructed with the incorporation of the ileum in ureteral repair. PMID:24466390

  11. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data fidelity constrained total variation (TV) minimization, both algorithms adopt the alternating two-stage strategy: projection onto convex sets (POCS) for data fidelity and non-negativity constraints and steepest descent for TV minimization. The novelty of this work is to determine iterative parameters automatically from data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
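
    The alternating two-stage structure described above (a POCS/ART pass for data fidelity and non-negativity, followed by TV steepest-descent steps whose size is tied to the POCS change) can be sketched as below. This is a generic ASD-POCS-style illustration with simplified boundary handling and hand-picked constants, not the adaptive parameter rules proposed in the paper.

        import numpy as np

        def tv_gradient(img, eps=1e-8):
            """Gradient of isotropic TV of a 2D image (forward differences,
            simplified/periodic boundary handling)."""
            dx = np.diff(img, axis=1, append=img[:, -1:])
            dy = np.diff(img, axis=0, append=img[-1:, :])
            mag = np.sqrt(dx**2 + dy**2 + eps)
            gx, gy = dx / mag, dy / mag
            div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
            return -div

        def pocs_tv_iteration(img, A, b, relax=0.2, tv_steps=10, tv_frac=0.3):
            """One outer iteration: ART rows + non-negativity, then TV descent."""
            x = np.asarray(img, dtype=float).ravel().copy()
            for i in range(A.shape[0]):                      # ART (Kaczmarz) sweep
                a = A[i]
                x += relax * (b[i] - a @ x) / (a @ a + 1e-12) * a
            x = np.clip(x, 0.0, None).reshape(img.shape)     # non-negativity projection
            step = tv_frac * np.linalg.norm(x - img)         # step size tied to the POCS change
            for _ in range(tv_steps):                        # TV steepest descent
                g = tv_gradient(x)
                x = x - step * g / (np.linalg.norm(g) + 1e-12)
            return x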

  12. Inverse Electrocardiographic Source Localization of Ischemia: An Optimization Framework and Finite Element Solution

    PubMed Central

    Wang, Dafang; Kirby, Robert M.; MacLeod, Rob S.; Johnson, Chris R.

    2013-01-01

    With the goal of non-invasively localizing cardiac ischemic disease using body-surface potential recordings, we attempted to reconstruct the transmembrane potential (TMP) throughout the myocardium with the bidomain heart model. The task is an inverse source problem governed by partial differential equations (PDE). Our main contribution is solving the inverse problem within a PDE-constrained optimization framework that enables various physically-based constraints in both equality and inequality forms. We formulated the optimality conditions rigorously in the continuum before deriving finite element discretization, thereby making the optimization independent of discretization choice. Such a formulation was derived for the L2-norm Tikhonov regularization and the total variation minimization. The subsequent numerical optimization was fulfilled by a primal-dual interior-point method tailored to our problem’s specific structure. Our simulations used realistic, fiber-included heart models consisting of up to 18,000 nodes, much finer than any inverse models previously reported. With synthetic ischemia data we localized ischemic regions with roughly a 10% false-negative rate or a 20% false-positive rate under conditions up to 5% input noise. With ischemia data measured from animal experiments, we reconstructed TMPs with roughly 0.9 correlation with the ground truth. While precisely estimating the TMP in general cases remains an open problem, our study shows the feasibility of reconstructing TMP during the ST interval as a means of ischemia localization. PMID:23913980

  13. Technique and outcomes of laparoscopic bulge repair after abdominal free flap reconstruction.

    PubMed

    Lee, Johnson C; Whipple, Lauren A; Binetti, Brian; Singh, T Paul; Agag, Richard

    2016-01-21

    Bulges and hernias after abdominal free flap surgery are uncommon, with reported rates ranging from 0% to 36%. In the free flap breast reconstruction population, there are no clear guidelines or optimal strategies for treating postoperative bulges. We describe our minimally invasive technique and outcomes in managing bulge complications in abdominal free flap breast reconstruction patients. A retrospective review was performed on all abdominal free flap breast reconstruction patients at Albany Medical Center from 2011 to 2014. All patients with bulges on clinical exam underwent abdominal CT imaging prior to consultation with a minimally invasive surgeon. Confirmed symptomatic bulges were repaired laparoscopically and patients were monitored regularly in the outpatient setting. Sixty-two patients received a total of 80 abdominal free flap breast reconstructions. Flap types included 41 deep inferior epigastric perforator (DIEP), 36 muscle-sparing transverse rectus abdominus myocutaneous (msTRAM), 2 superficial inferior epigastric artery, and 1 transverse rectus abdominus myocutaneous flap. There were a total of 9 (14.5%) bulge complications, with the majority of patients having undergone msTRAM or DIEP reconstruction. There were no complications, revisions, or recurrences from laparoscopic bulge repair after an average follow-up of 181 days. Although uncommon, bulge formation after abdominal free flap reconstruction can create significant morbidity for patients. Laparoscopic hernia repair using composite mesh underlay offers an alternative to traditional open hernia repair and can be successfully used to minimize scarring, infection, and pain for free flap patients who have already undergone significant reconstructive procedures. © 2016 Wiley Periodicals, Inc. Microsurgery, 2016.

  14. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of high quality recovery from sparsely sampled data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which leads to deteriorating reconstruction quality as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replaced the L2-norm regularization term with the L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2-minimization and preserve more image detail. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms, especially when further reducing the sampling rate or increasing the noise. The proposed L1-DL algorithm can utilize more prior information of image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with the L1-norm one and solving the L1-minimization problem with the IRLS strategy, L1-DL can reconstruct the image more accurately.
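
    The core IRLS idea referred to above, solving an L1-regularized problem through a sequence of weighted L2 (ridge) problems, is sketched below for a generic linear system. The paper applies this inside a dictionary-learning CT reconstruction, which is not reproduced here; all names are assumptions.

        import numpy as np

        def irls_l1(A, b, lam=0.1, n_iter=30, eps=1e-6):
            """Approximate argmin_x 0.5*||A x - b||^2 + lam*||x||_1 by iteratively
            reweighted least squares: each pass solves a weighted ridge problem with
            weights 1/(|x_i| + eps), so the quadratic penalty mimics the L1 penalty."""
            AtA, Atb = A.T @ A, A.T @ b
            x, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares initialization
            for _ in range(n_iter):
                W = np.diag(1.0 / (np.abs(x) + eps))
                x = np.linalg.solve(AtA + lam * W, Atb)
            return x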

  15. Validation of luminescent source reconstruction using spectrally resolved bioluminescence images

    NASA Astrophysics Data System (ADS)

    Virostko, John M.; Powers, Alvin C.; Jansen, E. D.

    2008-02-01

    This study examines the accuracy of the Living Image® Software 3D Analysis Package (Xenogen, Alameda, CA) in reconstruction of light source depth and intensity. Constant intensity light sources were placed in an optically homogeneous medium (chicken breast). Spectrally filtered images were taken at 560, 580, 600, 620, 640, and 660 nanometers. The Living Image® Software 3D Analysis Package was employed to reconstruct source depth and intensity using these spectrally filtered images. For sources shallower than the mean free path of light there was proportionally higher inaccuracy in reconstruction. For sources deeper than the mean free path, the average error in depth and intensity reconstruction was less than 4% and 12%, respectively. The ability to distinguish multiple sources decreased with increasing source depth and typically required a spatial separation of twice the depth. The constant intensity light sources were also implanted in mice to examine the effect of optical inhomogeneity. The reconstruction accuracy suffered in inhomogeneous tissue with accuracy influenced by the choice of optical properties used in reconstruction.

  16. The application of multilobed flap designs for anatomic and functional oropharyngeal reconstructions.

    PubMed

    Choi, Jong Woo; Lee, Min Young; Oh, Tae Suk

    2013-11-01

    The oropharynx has a variety of functions, such as mastication, deglutition, articulation, taste, and airway protection. Because of its many roles, recent goals in head and neck reconstruction have focused on anatomic and functional reconstructions to minimize functional deficits. Since chemoradiation has earned a good reputation in the management of head and neck cancer, the manifestation of oropharyngeal defects has changed. Although we could not control the anatomic defects that were known to be related to the oropharyngeal functions, we hypothesized that optimizing the flap designs would be helpful for minimizing the functional deficits. Two hundred fifty cases of the head and neck reconstruction using free flaps were carried out between March 2006 and December 2010, where modified flap designs were applied. Among these, 37 tongue and 15 tonsillar reconstructions were analyzed for functional outcomes. The patients were of Asian ethnic background, and the average age was 52 years, including 38 males and 17 females. The average follow-up period was 20.5 months. Based on previous studies, the flap designs were categorized into type I, unilobe; type II, bilobe; type III, trilobe; type IV, quadrilobe; type V, additional lobe for lateral and posterior pharyngeal wall; and type VI, additional lobe for tongue base. The functional outcomes of both tongue and tonsillar reconstructions were investigated. To quantify the outcome in terms of swallowing and pronunciation, we analyzed the patients' function based on the 7-scale parameter. In terms of swallowing, the tongue reconstruction group scored 5.70 on average, whereas the tonsillar reconstruction group showed an average score of 4.53. With regard to speech intelligibility, the tongue reconstruction group revealed an average score of 5.67, whereas the tonsillar reconstruction group scored 5.46 on average. Our findings indicate that specification of the flap designs is helpful for minimizing the functional deficits in head and neck reconstructions.

  17. Algorithm-enabled exploration of image-quality potential of cone-beam CT in image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Pearson, Erik; Pelizzari, Charles; Al-Hallaq, Hania; Sidky, Emil Y.; Bian, Junguo; Pan, Xiaochuan

    2015-06-01

    A kilo-voltage (kV) cone-beam computed tomography (CBCT) unit mounted onto a linear accelerator treatment system, often referred to as an on-board imager (OBI), plays an increasingly important role in image-guided radiation therapy. While the FDK algorithm is currently used for reconstructing images from clinical OBI data, optimization-based reconstruction has also been investigated for OBI CBCT. An optimization-based reconstruction involves numerous parameters, which can significantly impact reconstruction properties (or utility). The success of an optimization-based reconstruction for a particular class of practical applications thus relies strongly on appropriate selection of parameter values. In this work, we focus on tailoring constrained-TV-minimization-based reconstruction, an optimization-based reconstruction previously shown to have potential for CBCT imaging conditions of practical interest, to OBI imaging through appropriate selection of parameter values. In particular, for real phantom and patient data collected with OBI CBCT, we first devise utility metrics specific to OBI-quality-assurance tasks and then apply them to guide the selection of parameter values in constrained-TV-minimization-based reconstruction. The study results show that the reconstructions improve on clinical FDK reconstruction in both visual and quantitative assessments in terms of the devised utility metrics.

  18. 40 CFR 63.1191 - What notifications must I submit?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... becomes a major source. (2) A source that has an initial startup before the effective date of the standard. (3) A new or reconstructed source that has an initial startup after the effective date of the... major source or reconstruct a major source where the initial startup of the new or reconstructed source...

  19. 40 CFR 63.1191 - What notifications must I submit?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... becomes a major source. (2) A source that has an initial startup before the effective date of the standard. (3) A new or reconstructed source that has an initial startup after the effective date of the... major source or reconstruct a major source where the initial startup of the new or reconstructed source...

  20. 40 CFR 63.1191 - What notifications must I submit?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... becomes a major source. (2) A source that has an initial startup before the effective date of the standard. (3) A new or reconstructed source that has an initial startup after the effective date of the... major source or reconstruct a major source where the initial startup of the new or reconstructed source...

  1. On the assessment of spatial resolution of PET systems with iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Cherry, Simon R.; Qi, Jinyi

    2016-03-01

    Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
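
    Measuring the FWHM from a reconstructed point-source profile, as discussed above, takes only a few lines of code. The sketch below linearly interpolates the half-maximum crossings of a 1D profile; names are illustrative, and background subtraction plus the recommended low-contrast/fixed-iteration protocol are assumed to have been handled beforehand.

        import numpy as np

        def fwhm(profile, pixel_size=1.0):
            """Full-width-at-half-maximum of a 1D point-source profile, using linear
            interpolation of the two half-maximum crossings."""
            p = np.asarray(profile, dtype=float)
            half = p.max() / 2.0
            above = np.where(p >= half)[0]
            left, right = int(above[0]), int(above[-1])
            l_edge = left - (p[left] - half) / (p[left] - p[left - 1]) if left > 0 else float(left)
            r_edge = right + (p[right] - half) / (p[right] - p[right + 1]) if right < p.size - 1 else float(right)
            return (r_edge - l_edge) * pixel_size

        # Example: a sampled Gaussian with sigma = 2 pixels has FWHM ~ 4.71 pixels.
        xs = np.arange(-10, 11)
        print(fwhm(np.exp(-xs**2 / (2 * 2.0**2))))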

  2. Fast ancestral gene order reconstruction of genomes with unequal gene content.

    PubMed

    Feijão, Pedro; Araujo, Eloi

    2016-11-11

    During evolution, genomes are modified by large scale structural events, such as rearrangements, deletions or insertions of large blocks of DNA. Of particular interest, in order to better understand how this type of genomic evolution happens, is the reconstruction of ancestral genomes, given a phylogenetic tree with extant genomes at its leaves. One way of solving this problem is to assume a rearrangement model, such as Double Cut and Join (DCJ), and find a set of ancestral genomes that minimizes the number of events on the input tree. Since this problem is NP-hard for most rearrangement models, exact solutions are practical only for small instances, and heuristics have to be used for larger datasets. This type of approach can be called event-based. Another common approach is based on finding conserved structures between the input genomes, such as adjacencies between genes, possibly also assigning weights that indicate a measure of confidence or probability that a particular structure is present on each ancestral genome, and then finding a set of non-conflicting adjacencies that optimize some given function, usually trying to maximize total weight and minimize character changes in the tree. We call this type of method homology-based. In previous work, we proposed an ancestral reconstruction method that combines homology- and event-based ideas, using the concept of intermediate genomes, which arise in DCJ rearrangement scenarios. This method showed a better rate of correctly reconstructed adjacencies than other methods, while also being faster, since the use of intermediate genomes greatly reduces the search space. Here, we generalize the intermediate genome concept to genomes with unequal gene content, extending our method to account for gene insertions and deletions of any length. In many of the simulated datasets, our proposed method had better results than MLGO and MGRA, two state-of-the-art algorithms for ancestral reconstruction with unequal gene content, while running much faster, making it more scalable to larger datasets. Studying ancestral reconstruction problems in a new light, using the concept of intermediate genomes, allows the design of very fast algorithms by greatly reducing the solution search space, while also giving very good results. The algorithms introduced in this paper were implemented in an open-source software called RINGO (ancestral Reconstruction with INtermediate GenOmes), available at https://github.com/pedrofeijao/RINGO .

  3. Analysis of an Optimized MLOS Tomographic Reconstruction Algorithm and Comparison to the MART Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2011-11-01

    An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre and post processing to the reconstructed data sets.

  4. Network reconstruction via graph blending

    NASA Astrophysics Data System (ADS)

    Estrada, Rolando

    2016-05-01

    Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.

  5. Waveform Design for Multimedia Airborne Networks: Robust Multimedia Data Transmission in Cognitive Radio Networks

    DTIC Science & Technology

    2011-03-01

    at the sensor. According to Candes, Tao and Romberg [1], a small number of random projections of a signal that is compressible is all the... [The remainder of this record is block-diagram residue describing the processing chain: random projection of the original or noisy signal, transform (DWT, FFT, or DCT), solution of the minimization problem, signal reconstruction over an AWGN or noiseless channel, and de-noising.]

  6. Reconstructing cortical current density by exploring sparseness in the transform domain

    NASA Astrophysics Data System (ADS)

    Ding, Lei

    2009-05-01

    In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.

  7. SU-E-J-02: 4D Digital Tomosynthesis Based On Algebraic Image Reconstruction and Total-Variation Minimization for the Improvement of Image Quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, D; Kang, S; Kim, T

    2014-06-01

    Purpose: In this paper, we implemented four-dimensional (4D) digital tomosynthesis (DTS) imaging based on an algebraic image reconstruction technique and a total-variation minimization method in order to compensate for the undersampled projection data and improve the image quality. Methods: The projection data were acquired, as if from the cone-beam computed tomography system of a linear accelerator, by Monte Carlo simulation and an in-house 4D digital phantom generation program. We performed 4D DTS based upon the simultaneous algebraic reconstruction technique (SART), an iterative image reconstruction technique, together with the total-variation minimization method (TVMM). To verify the effectiveness of this reconstruction algorithm, we performed systematic simulation studies to investigate the imaging performance. Results: The 4D DTS algorithm based upon SART and TVMM seems to give better results than that based upon the existing method, i.e., filtered backprojection. Conclusion: The advanced image reconstruction algorithm for 4D DTS would be useful to validate each intra-fraction motion during radiation therapy. In addition, it will make real-time imaging for adaptive radiation therapy possible. This research was supported by the Leading Foreign Research Institute Recruitment Program (Grant No. 2009-00420) and the Basic Atomic Energy Research Institute (BAERI; Grant No. 2009-0078390) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (MSIP).
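
    The SART-plus-TV combination described in this record can be sketched, for a 2D toy problem, roughly as follows (dense system matrix, step sizes, and the simple gradient-descent TV step are illustrative assumptions, not the authors' implementation):

      import numpy as np

      def sart_tv(A, b, shape, n_outer=50, n_tv=10, lam=1.0, tv_step=0.02):
          """Alternate a SART pass with a few gradient steps on total variation.

          A     : (n_rays, n_pixels) system matrix
          b     : (n_rays,) measured projections
          shape : (ny, nx) image shape with ny * nx == n_pixels
          """
          x = np.zeros(A.shape[1])
          row_sum = A.sum(axis=1)
          col_sum = A.sum(axis=0)
          row_sum[row_sum == 0] = 1.0
          col_sum[col_sum == 0] = 1.0
          for _ in range(n_outer):
              # SART: simultaneous, ray-normalized backprojection of residuals
              x += lam * (A.T @ ((b - A @ x) / row_sum)) / col_sum
              x = np.clip(x, 0, None)                    # non-negativity
              # TV minimization: gradient descent on a smoothed isotropic TV term
              for _ in range(n_tv):
                  img = x.reshape(shape)
                  gx = np.diff(img, axis=1, append=img[:, -1:])
                  gy = np.diff(img, axis=0, append=img[-1:, :])
                  norm = np.sqrt(gx ** 2 + gy ** 2 + 1e-8)
                  div = (np.diff(gx / norm, axis=1, prepend=0.0 * img[:, :1])
                         + np.diff(gy / norm, axis=0, prepend=0.0 * img[:1, :]))
                  x = (img + tv_step * div).ravel()
          return x.reshape(shape)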

  8. Piecewise-Constant-Model-Based Interior Tomography Applied to Dentin Tubules

    DOE PAGES

    He, Peng; Wei, Biao; Wang, Steve; ...

    2013-01-01

    Dentin is a hierarchically structured biomineralized composite material, and dentin’s tubules are difficult to study in situ. Nano-CT provides the requisite resolution, but the field of view typically contains only a few tubules. Using a plate-like specimen allows reconstruction of a volume containing specific tubules from a number of truncated projections typically collected over an angular range of about 140°, which is practically accessible. Classical computed tomography (CT) theory cannot exactly reconstruct an object only from truncated projections, let alone from a limited angular range. Recently, interior tomography was developed to reconstruct a region-of-interest (ROI) from truncated data in a theoretically exact fashion via total variation (TV) minimization under the condition that the ROI is piecewise constant. In this paper, we employ a TV minimization interior tomography algorithm to reconstruct interior microstructures in dentin from truncated projections over a limited angular range. Compared to the filtered backprojection (FBP) reconstruction, our reconstruction method reduces noise and suppresses artifacts. Volume rendering confirms the merits of our method in terms of preserving the interior microstructure of the dentin specimen.

  9. A variational reconstruction method for undersampled dynamic x-ray tomography based on physical motion models

    NASA Astrophysics Data System (ADS)

    Burger, Martin; Dirks, Hendrik; Frerking, Lena; Hauptmann, Andreas; Helin, Tapio; Siltanen, Samuli

    2017-12-01

    In this paper we study the reconstruction of moving object densities from undersampled dynamic x-ray tomography in two dimensions. A particular motivation of this study is to use realistic measurement protocols for practical applications, i.e. we do not assume to have a full Radon transform in each time step, but only projections in a few angular directions. This restriction enforces a space-time reconstruction, which we perform by incorporating physical motion models and regularization of motion vectors in a variational framework. The methodology of optical flow, which is one of the most common methods to estimate motion between two images, is utilized to formulate a joint variational model for reconstruction and motion estimation. We provide a basic mathematical analysis of the forward model and the variational model for the image reconstruction. Moreover, we discuss the efficient numerical minimization based on alternating minimizations between images and motion vectors. A variety of results are presented for simulated and real measurement data with different sampling strategies. A key observation is that random sampling combined with our model allows reconstructions of quality similar to a single static reconstruction from a similar amount of measurements.

  10. Hybrid light transport model based bioluminescence tomography reconstruction for early gastric cancer detection

    NASA Astrophysics Data System (ADS)

    Chen, Xueli; Liang, Jimin; Hu, Hao; Qu, Xiaochao; Yang, Defu; Chen, Duofang; Zhu, Shouping; Tian, Jie

    2012-03-01

    Gastric cancer is the second leading cause of cancer-related death in the world, and it remains difficult to cure because it is usually at a late stage by the time it is found. Early detection is therefore an effective approach to decreasing gastric cancer mortality. Bioluminescence tomography (BLT) has been applied to detect early liver cancer and prostate cancer metastasis. However, gastric cancer commonly originates from the gastric mucosa and grows outwards, so the bioluminescent light passes through a non-scattering region formed by the gastric pouch as it propagates through tissue. Thus, current BLT reconstruction algorithms based on approximation models of the radiative transfer equation are not optimal for this problem. To address this gastric-cancer-specific problem, this paper presents a novel reconstruction algorithm that uses a hybrid light transport model to describe bioluminescent light propagation in tissues. Radiosity theory is integrated with the diffusion equation to form the hybrid light transport model, which describes light propagation in the non-scattering region. After finite element discretization, the hybrid light transport model is converted into a minimization problem that incorporates an l1-norm-based regularization term to exploit the sparsity of the bioluminescent source distribution. The performance of the reconstruction algorithm is first demonstrated with a digital-mouse-based simulation, with a reconstruction error of less than 1 mm. An experiment on an in situ gastric cancer-bearing nude mouse is then conducted. The primary result demonstrates the ability of the novel BLT reconstruction algorithm for early gastric cancer detection.

  11. SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, B; Gao, H

    Purpose: Accelerated dynamic MRI is important for MRI guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work investigates sub-Nyquist dynamic MRI via a previously developed CS model, namely the Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify the residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from the L1-sparsity method and the PRISM method with 30% undersampled data and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub-Nyquist MRI reconstruction method based on PRISM is developed with improved image quality compared to the L1-sparsity method.
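
    The low-rank-plus-sparse separation that underlies models of this kind can be illustrated with a simple alternating proximal scheme on a fully sampled space-time matrix (the thresholds and the alternation below are assumptions for illustration, not the PRISM algorithm, which additionally enforces k-space data fidelity):

      import numpy as np

      def soft_threshold(x, tau):
          """Element-wise soft thresholding (proximal operator of the l1 norm)."""
          return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

      def svt(X, tau):
          """Singular value thresholding (proximal operator of the nuclear norm)."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(soft_threshold(s, tau)) @ Vt

      def low_rank_plus_sparse(D, tau_rank=1.0, tau_sparse=0.1, n_iter=100):
          """Split a space-time matrix D (pixels x frames) into a low-rank
          background L and a sparse dynamic residual S by alternating proximal steps."""
          L = np.zeros_like(D)
          S = np.zeros_like(D)
          for _ in range(n_iter):
              L = svt(D - S, tau_rank)                 # low-rank background
              S = soft_threshold(D - L, tau_sparse)    # sparse dynamic residual
          return L, S

      # Example: 64-pixel x 40-frame toy matrix = static background + moving blip
      t = np.arange(40)
      D = np.outer(np.hanning(64), np.ones(40))
      D[20 + (t % 5), t] += 1.0
      L, S = low_rank_plus_sparse(D)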

  12. Hyperedge bundling: A practical solution to spurious interactions in MEG/EEG source connectivity analyses.

    PubMed

    Wang, Sheng H; Lobier, Muriel; Siebenhühner, Felix; Puoliväli, Tuomas; Palva, Satu; Palva, J Matias

    2018-06-01

    Inter-areal functional connectivity (FC), neuronal synchronization in particular, is thought to constitute a key systems-level mechanism for coordination of neuronal processing and communication between brain regions. Evidence to support this hypothesis has been gained largely using invasive electrophysiological approaches. In humans, neuronal activity can be non-invasively recorded only with magneto- and electroencephalography (MEG/EEG), which have been used to assess FC networks with high temporal resolution and whole-scalp coverage. However, even in source-reconstructed MEG/EEG data, signal mixing, or "source leakage", is a significant confounder for FC analyses and network localization. Signal mixing leads to two distinct kinds of false-positive observations: artificial interactions (AI) caused directly by mixing and spurious interactions (SI) arising indirectly from the spread of signals from true interacting sources to nearby false loci. To date, several interaction metrics have been developed to solve the AI problem, but the SI problem has remained largely intractable in MEG/EEG all-to-all source connectivity studies. Here, we advance a novel approach for correcting SIs in FC analyses using source-reconstructed MEG/EEG data. Our approach is to bundle observed FC connections into hyperedges by their adjacency in signal mixing. Using realistic simulations, we show here that bundling yields hyperedges with good separability of true positives and little loss in the true positive rate. Hyperedge bundling thus significantly decreases graph noise by minimizing the false-positive to true-positive ratio. Finally, we demonstrate the advantage of edge bundling in the visualization of large-scale cortical networks with real MEG data. We propose that hypergraphs yielded by bundling represent well the set of true cortical interactions that are detectable and dissociable in MEG/EEG connectivity analysis. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Biplane reconstruction and visualization of virtual endoscopic and fluoroscopic views for interventional device navigation

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.

    2016-03-01

    Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system as well as navigating based on the 2D projection images can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for a better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology preserving thinning algorithm is then applied and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations like virtual endoscopic views or glass pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass pipe views is proposed, where the virtual endoscopic camera position is determined based on the device tip location as well as the previous camera position using a Kalman filter in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass pipe view to further improve the spatial orientation. The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
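
    The Kalman-filter smoothing of the virtual endoscopic camera position mentioned above can be illustrated with a minimal constant-velocity filter (the state model, noise levels, and device-tip input below are assumptions for illustration):

      import numpy as np

      class ConstantVelocityKalman:
          """Smooth a sequence of noisy 3D positions (e.g. device-tip locations)
          with a constant-velocity Kalman filter."""

          def __init__(self, dt=1.0, process_var=1e-3, meas_var=1e-1):
              # state: [x, y, z, vx, vy, vz]
              self.F = np.eye(6)
              self.F[:3, 3:] = dt * np.eye(3)
              self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
              self.Q = process_var * np.eye(6)
              self.R = meas_var * np.eye(3)
              self.x = np.zeros(6)
              self.P = np.eye(6)

          def update(self, z):
              # predict
              self.x = self.F @ self.x
              self.P = self.F @ self.P @ self.F.T + self.Q
              # correct with the measured 3D position z
              S = self.H @ self.P @ self.H.T + self.R
              K = self.P @ self.H.T @ np.linalg.inv(S)
              self.x = self.x + K @ (z - self.H @ self.x)
              self.P = (np.eye(6) - K @ self.H) @ self.P
              return self.x[:3]          # smoothed camera position

      kf = ConstantVelocityKalman()
      for tip in np.cumsum(np.random.randn(50, 3) * 0.2, axis=0):
          smoothed = kf.update(tip)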

  14. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Gu, Xuejun

    2013-10-15

    Purpose: Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). Methods: The proposed SMEIR algorithm consists of two alternating steps: (1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and (2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and backprojection of SART, measured projections from an entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The quality of reconstructed 4D images and the accuracy of tumor motion trajectory are assessed by comparing with those resulting from conventional sequential 4D-CBCT reconstructions (FDK and total variation minimization) and motion estimation (demons algorithm). The performance of the SMEIR algorithm is further evaluated by reconstructing a lung cancer patient 4D-CBCT. Results: Image quality of 4D-CBCT is greatly improved by the SMEIR algorithm in both phantom and patient studies. When all projections are used to reconstruct a 3D-CBCT by FDK, motion-blurring artifacts are present, leading to a 24.4% relative reconstruction error in the NCAT phantom. View aliasing artifacts are present in 4D-CBCT reconstructed by FDK from 20 projections, with a relative error of 32.1%. When total variation minimization is used to reconstruct 4D-CBCT, the relative error is 18.9%. Image quality of 4D-CBCT is substantially improved by using the SMEIR algorithm and relative error is reduced to 7.6%. The maximum error (MaxE) of tumor motion determined from the DVF obtained by demons registration on an FDK-reconstructed 4D-CBCT is 3.0, 2.3, and 7.1 mm along left–right (L-R), anterior–posterior (A-P), and superior–inferior (S-I) directions, respectively. From the DVF obtained by demons registration on 4D-CBCT reconstructed by total variation minimization, the MaxE of tumor motion is reduced to 1.5, 0.5, and 5.5 mm along L-R, A-P, and S-I directions. From the DVF estimated by the SMEIR algorithm, the MaxE of tumor motion is further reduced to 0.8, 0.4, and 1.5 mm along L-R, A-P, and S-I directions, respectively. Conclusions: The proposed SMEIR algorithm is able to estimate a motion model and reconstruct motion-compensated 4D-CBCT. The SMEIR algorithm improves image reconstruction accuracy of 4D-CBCT and tumor motion trajectory estimation accuracy as compared to conventional sequential 4D-CBCT reconstruction and motion estimation.

  15. Single-shot full resolution region-of-interest (ROI) reconstruction in image plane digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Singh, Mandeep; Khare, Kedar

    2018-05-01

    We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem where the cost function to be minimized consists of an L2-norm squared data fitting term and a modified Huber penalty term that are minimized alternately in an adaptive fashion. The technique can provide full pixel resolution complex-valued images of the selected ROI which is not possible to achieve with the commonly used Fourier transform method. The technique can facilitate holographic reconstruction of individual cells of interest from a large field-of-view digital holographic microscopy data. The complementary phase information in addition to the usual absorption information already available in the form of bright field microscopy can make the methodology attractive to the biomedical user community.

  16. Breathing motion compensated reconstruction for C-arm cone beam CT imaging: initial experience based on animal data

    NASA Astrophysics Data System (ADS)

    Schäfer, D.; Lin, M.; Rao, P. P.; Loffroy, R.; Liapi, E.; Noordhoek, N.; Eshuis, P.; Radaelli, A.; Grass, M.; Geschwind, J.-F. H.

    2012-03-01

    C-arm based tomographic 3D imaging is applied in an increasing number of minimally invasive procedures. Due to the limited acquisition speed for a complete projection data set required for tomographic reconstruction, breathing motion is a potential source of artifacts. This is the case for patients who cannot comply with breathing commands (e.g. due to anesthesia). Intra-scan motion estimation and compensation is required. Here, a scheme for projection-based local breathing motion estimation is combined with an anatomy adapted interpolation strategy and subsequent motion compensated filtered back projection. The breathing motion vector is measured as a displacement vector on the projections of a tomographic short scan acquisition using the diaphragm as a landmark. Scaling of the displacement to the acquisition iso-center and anatomy adapted volumetric motion vector field interpolation delivers a 3D motion vector per voxel. Motion compensated filtered back projection incorporates this motion vector field in the image reconstruction process. This approach is applied in animal experiments on a flat panel C-arm system delivering improved image quality (lower artifact levels, improved tumor delineation) in 3D liver tumor imaging.

  17. MO-DE-207A-10: One-Step CT Reconstruction for Metal Artifact Reduction by a Modification of Penalized Weighted Least-Squares (PWLS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J

    Purpose: Metal objects create severe artifacts in kilo-voltage (kV) CT image reconstructions due to the high attenuation coefficients of high-atomic-number objects. Most of the techniques devised to reduce this artifact utilize a two-step approach, which does not reliably yield high-quality reconstructed images. Thus, for accuracy and simplicity, this work presents a one-step reconstruction method based on a modified penalized weighted least-squares (PWLS) technique. Methods: Existing techniques for metal artifact reduction mostly adopt a two-step approach, which conducts an additional reconstruction with projection data modified from the initial reconstruction. This procedure does not consistently perform well due to the uncertainties in manipulating the metal-contaminated projection data by thresholding and linear interpolation. This study proposes a one-step reconstruction process using a new PWLS operation with total-variation (TV) minimization, without manipulating the projection data. PWLS for CT reconstruction has been investigated using a pre-defined weight based on the variance of the projection datum at each detector bin. This works well when reconstructing CT images from metal-free projection data, but it does not appropriately penalize metal-contaminated projection data. The proposed work defines the weight at each projection element under the assumption of a Poisson random variable. This small modification, using element-wise penalization, has a large impact on reducing metal artifacts. For evaluation, the proposed technique was assessed with two noisy, metal-contaminated digital phantoms, against the existing PWLS with TV minimization and the two-step approach. Results: By visual inspection, the proposed PWLS with TV minimization greatly improved metal artifact reduction relative to the other techniques. Numerically, the new approach lowered the normalized root-mean-square error by about 30% and 60% for the two cases, respectively, compared to the two-step method. Conclusion: The new PWLS operation shows promise for improving metal artifact reduction in CT imaging, as well as simplifying the reconstruction procedure.
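
    The element-wise, statistics-motivated weighting idea can be illustrated with a small penalized weighted least-squares sketch (the Poisson-motivated weights, quadratic roughness penalty, and gradient-descent solver below are assumptions for illustration, not the authors' modified PWLS with TV minimization):

      import numpy as np

      def pwls(A, y, counts, beta=0.1, n_iter=200):
          """Penalized weighted least squares for post-log CT data.

          A      : (n_rays, n_pixels) system matrix
          y      : (n_rays,) post-log line integrals
          counts : (n_rays,) detected photon counts; each ray is weighted in
                   proportion to its counts, since the variance of a post-log
                   Poisson measurement is roughly 1 / counts
          A quadratic penalty on circular first differences of the unknowns
          serves as the regularizer.
          """
          w = counts / counts.max()
          n = A.shape[1]
          D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)   # circular differences
          x = np.zeros(n)
          step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 4.0 * beta)
          for _ in range(n_iter):
              grad = A.T @ (w * (A @ x - y)) + beta * (D.T @ (D @ x))
              x = np.clip(x - step * grad, 0, None)       # non-negativity
          return x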

  18. SU-G-BRA-11: Tumor Tracking in An Iterative Volume of Interest Based 4D CBCT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, R; Pan, T; Ahmad, M

    2016-06-15

    Purpose: 4D CBCT can allow evaluation of tumor motion immediately prior to radiation therapy, but suffers from heavy artifacts that limit its ability to track tumors. Various iterative and compressed sensing reconstructions have been proposed to reduce these artifacts, but they are costly in time, and their regularization can degrade the image quality of the bony anatomy used for alignment. We have previously proposed an iterative volume of interest (I4D VOI) method which minimizes reconstruction time and maintains image quality of bony anatomy by focusing a 4D reconstruction within a VOI. The purpose of this study is to test the tumor tracking accuracy of this method compared to existing methods. Methods: Long scan (8–10 mins) CBCT data with corresponding RPM data was collected for 12 lung cancer patients. The full data set was sorted into 8 phases and reconstructed using FDK cone beam reconstruction to serve as a gold standard. The data was reduced in a way that maintains a normal breathing pattern and used to reconstruct 4D images using FDK, low and high regularization TV minimization (λ=2,10), and the proposed I4D VOI method with PTVs used for the VOI. Tumor trajectories were found using rigid registration within the VOI for each reconstruction and compared to the gold standard. Results: The root mean square error (RMSE) values were 2.70mm for FDK, 2.50mm for low regularization TV, 1.48mm for high regularization TV, and 2.34mm for I4D VOI. Streak artifacts in I4D VOI were reduced compared to FDK and images were less blurred than TV reconstructed images. Conclusion: I4D VOI performed at least as well as existing methods in tumor tracking, with the exception of high regularization TV minimization. These results along with the reconstruction time and outside VOI image quality advantages suggest I4D VOI to be an improvement over existing methods. Funding support provided by CPRIT grant RP110562-P2-01.

  19. Distortion outage minimization in Nakagami fading using limited feedback

    NASA Astrophysics Data System (ADS)

    Wang, Chih-Hong; Dey, Subhrakanti

    2011-12-01

    We focus on a decentralized estimation problem via a clustered wireless sensor network measuring a random Gaussian source where the clusterheads amplify and forward their received signals (from the intra-cluster sensors) over orthogonal independent stationary Nakagami fading channels to a remote fusion center that reconstructs an estimate of the original source. The objective of this paper is to design clusterhead transmit power allocation policies to minimize the distortion outage probability at the fusion center, subject to an expected sum transmit power constraint. In the case when full channel state information (CSI) is available at the clusterhead transmitters, the optimization problem can be shown to be convex and is solved exactly. When only rate-limited channel feedback is available, we design a number of computationally efficient sub-optimal power allocation algorithms to solve the associated non-convex optimization problem. We also derive an approximation for the diversity order of the distortion outage probability in the limit when the average transmission power goes to infinity. Numerical results illustrate that the sub-optimal power allocation algorithms perform very well and can close the outage probability gap between the constant power allocation (no CSI) and full CSI-based optimal power allocation with only 3-4 bits of channel feedback.

  20. Minimal entropy reconstructions of thermal images for emissivity correction

    NASA Astrophysics Data System (ADS)

    Allred, Lloyd G.

    1999-03-01

    Low emissivity with corresponding low thermal emission is a problem which has long afflicted infrared thermography. The problem is aggravated by reflected thermal energy which increases as the emissivity decreases, thus reducing the net signal-to-noise ratio, which degrades the resulting temperature reconstructions. Additional errors are introduced from the traditional emissivity-correction approaches, wherein one attempts to correct for emissivity either using thermocouples or using one or more baseline images, collected at known temperatures. These corrections are numerically equivalent to image differencing. Errors in the baseline images are therefore additive, causing the resulting measurement error to either double or triple. The practical application of thermal imagery usually entails coating the objective surface to increase the emissivity to a uniform and repeatable value. While the author recommends that the thermographer still adhere to this practice, he has devised a minimal entropy reconstruction which corrects not only for emissivity variations, but also for variations in sensor response, using the baseline images at known temperatures to correct for these values. The minimal entropy reconstruction is actually based on a modified Hopfield neural network which finds the resulting image which best explains the observed data and baseline data, having minimal entropy change between adjacent pixels. The autocorrelation of temperatures between adjacent pixels is a feature of most close-up thermal images. A surprising result from transient heating data indicates that the resulting corrected thermal images have less measurement error and are closer to the situational truth than the original data.

  1. Online geometric calibration of cone-beam computed tomography for arbitrary imaging objects.

    PubMed

    Meng, Yuanzheng; Gong, Hui; Yang, Xiaoquan

    2013-02-01

    A novel online method based on the symmetry property of the sum of projections (SOP) is proposed to obtain the geometric parameters in cone-beam computed tomography (CBCT). This method requires no calibration phantom and can be used in circular trajectory CBCT with arbitrary cone angles. An objective function is deduced to illustrate the dependence of the symmetry of SOP on geometric parameters, which will converge to its minimum when the geometric parameters achieve their true values. Thus, by minimizing the objective function, we can obtain the geometric parameters for image reconstruction. To validate this method, numerical phantom studies with different noise levels are simulated. The results show that our method is insensitive to noise and can determine the skew (in-plane rotation angle of the detector), the roll (rotation angle around the projection of the rotation axis on the detector), and the rotation axis with high accuracy, while the mid-plane and source-to-detector distance will be obtained with slightly lower accuracy. However, our simulation studies validate that the errors of the latter two parameters introduced by our method will hardly degrade the quality of reconstructed images. The small animal studies show that our method is able to deal with arbitrary imaging objects. In addition, the results of the reconstructed images in different slices demonstrate that our reconstructions achieve image quality comparable to that of some offline methods.

  2. Reconstruction of Vectorial Acoustic Sources in Time-Domain Tomography

    PubMed Central

    Xia, Rongmin; Li, Xu; He, Bin

    2009-01-01

    A new theory is proposed for the reconstruction of curl-free vector field, whose divergence serves as acoustic source. The theory is applied to reconstruct vector acoustic sources from the scalar acoustic signals measured on a surface enclosing the source area. It is shown that, under certain conditions, the scalar acoustic measurements can be vectorized according to the known measurement geometry and subsequently be used to reconstruct the original vector field. Theoretically, this method extends the application domain of the existing acoustic reciprocity principle from a scalar field to a vector field, indicating that the stimulating vectorial source and the transmitted acoustic pressure vector (acoustic pressure vectorized according to certain measurement geometry) are interchangeable. Computer simulation studies were conducted to evaluate the proposed theory, and the numerical results suggest that reconstruction of a vector field using the proposed theory is not sensitive to variation in the detecting distance. The present theory may be applied to magnetoacoustic tomography with magnetic induction (MAT-MI) for reconstructing current distribution from acoustic measurements. A simulation on MAT-MI shows that, compared to existing methods, the present method can give an accurate estimation on the source current distribution and a better conductivity reconstruction. PMID:19211344

  3. Determination of tailored filter sets to create rayfiles including spatial and angular resolved spectral information.

    PubMed

    Rotscholl, Ingo; Trampert, Klaus; Krüger, Udo; Perner, Martin; Schmidt, Franz; Neumann, Cornelius

    2015-11-16

    To simulate and optimize optical designs regarding perceived color and homogeneity in commercial ray tracing software, realistic light source models are needed. Spectral rayfiles provide angularly and spatially varying spectral information. We propose a spectral reconstruction method with a minimum of time-consuming goniophotometric near-field measurements with optical filters for the purpose of creating spectral rayfiles. Our discussion focuses on the selection of the ideal optical filter combination for any arbitrary spectrum out of a given filter set by considering measurement uncertainties with Monte Carlo simulations. We minimize the simulation time by a preselection of all filter combinations, which is based on factorial design.

  4. 40 CFR Table 3 to Subpart Zzzz of... - Subsequent Performance Tests

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... reconstructed 2SLB stationary RICE with a brake horsepower > 500 located at major sources; new or reconstructed 4SLB stationary RICE with a brake horsepower ≥ 250 located at major sources; and new or reconstructed CI stationary RICE with a brake horsepower > 500 located at major sources Reduce CO emissions and not...

  5. Determination of the Core of a Minimal Bacterial Gene Set†

    PubMed Central

    Gil, Rosario; Silva, Francisco J.; Peretó, Juli; Moya, Andrés

    2004-01-01

    The availability of a large number of complete genome sequences raises the question of how many genes are essential for cellular life. Trying to reconstruct the core of the protein-coding gene set for a hypothetical minimal bacterial cell, we have performed a computational comparative analysis of eight bacterial genomes. Six of the analyzed genomes are very small due to a dramatic genome size reduction process, while the other two, corresponding to free-living relatives, are larger. The available data from several systematic experimental approaches to define all the essential genes in some completely sequenced bacterial genomes were also considered, and a reconstruction of a minimal metabolic machinery necessary to sustain life was carried out. The proposed minimal genome contains 206 protein-coding genes with all the genetic information necessary for self-maintenance and reproduction in the presence of a full complement of essential nutrients and in the absence of environmental stress. The main features of such a minimal gene set, as well as the metabolic functions that must be present in the hypothetical minimal cell, are discussed. PMID:15353568

  6. Plenoptic particle image velocimetry with multiple plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2018-07-01

    Plenoptic particle image velocimetry was recently introduced as a viable three-dimensional, three-component velocimetry technique based on light field cameras. One of the main benefits of this technique is its single camera configuration allowing the technique to be applied in facilities with limited optical access. The main drawback of this configuration is decreased accuracy in the out-of-plane dimension. This work presents a solution with the addition of a second plenoptic camera in a stereo-like configuration. A framework for reconstructing volumes with multiple plenoptic cameras is presented, including the volumetric calibration and the reconstruction algorithms: integral refocusing, filtered refocusing, multiplicative refocusing, and MART. It is shown that the addition of a second camera improves the reconstruction quality and removes the ‘cigar’-like elongation associated with the single camera system. In addition, it is found that adding a third camera provides minimal improvement. Further metrics of the reconstruction quality are quantified in terms of reconstruction algorithm, particle density, number of cameras, camera separation angle, voxel size, and the effect of common image noise sources. In addition, a synthetic Gaussian ring vortex is used to compare the accuracy of the single and two camera configurations. It was determined that the addition of a second camera reduces the RMSE velocity error from 1.0 to 0.1 voxels in depth and 0.2 to 0.1 voxels in the lateral spatial directions. Finally, the technique is applied experimentally on a ring vortex and comparisons are drawn from the four presented reconstruction algorithms, where it was found that MART and multiplicative refocusing produced the cleanest vortex structure and had the least shot-to-shot variability. Filtered refocusing is able to produce the desired structure, albeit with more noise and variability, while integral refocusing struggled to produce a coherent vortex ring.

  7. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan

    2017-04-01

    Four dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. However, the challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with conventional analytical algorithms. To address this problem, in this paper, we propose a motion compensated total variation regularization approach which tries to fully explore the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is the 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass, and minimize this cost function using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve the 4D-CBCT image quality.
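
    The combined regularizer described above, 3D spatial total variation plus 1D temporal total variation on the motion-compensated sequence, can be written down compactly; the sketch below (array layout and weights are assumptions for illustration) simply evaluates such a regularizer for a 4D array indexed as (time, z, y, x):

      import numpy as np

      def spatial_temporal_tv(vol4d, w_spatial=1.0, w_temporal=1.0):
          """Evaluate a 3D-spatial + 1D-temporal total-variation regularizer.

          vol4d is a 4D array indexed as (time, z, y, x). The spatial term is the
          isotropic TV of each phase; the temporal term penalizes absolute
          differences between consecutive phases voxel-by-voxel.
          """
          gz = np.diff(vol4d, axis=1)
          gy = np.diff(vol4d, axis=2)
          gx = np.diff(vol4d, axis=3)
          # isotropic spatial TV on the common interior region of the gradients
          spatial = np.sqrt(gz[:, :, :-1, :-1] ** 2
                            + gy[:, :-1, :, :-1] ** 2
                            + gx[:, :-1, :-1, :] ** 2).sum()
          temporal = np.abs(np.diff(vol4d, axis=0)).sum()
          return w_spatial * spatial + w_temporal * temporal

      # Example: a small random 4D-CBCT-like volume with 8 phases
      phases = np.random.rand(8, 16, 32, 32)
      print(spatial_temporal_tv(phases))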

  8. High-speed asynchronous data multiplexer/demultiplexer for high-density digital recorders

    NASA Astrophysics Data System (ADS)

    Berdugo, Albert; Small, Martin B.

    1996-11-01

    Modern High Density Digital Recorders are ideal devices for the storage of large amounts of digital and/or wideband analog data. Ruggedized versions of these recorders are currently available and are supporting many military and commercial flight test applications. However, in certain cases, the storage format becomes very critical, e.g., when a large number of data types are involved, or when channel- to-channel correlation is critical, or when the original data source must be accurately recreated during post mission analysis. A properly designed storage format will not only preserve data quality, but will yield the maximum storage capacity and record time for any given recorder family or data type. This paper describes a multiplex/demultiplex technique that formats multiple high speed data sources into a single, common format for recording. The method is compatible with many popular commercial recorder standards such as DCRsi, VLDS, and DLT. Types of input data typically include PCM, wideband analog data, video, aircraft data buses, avionics, voice, time code, and many others. The described method preserves tight data correlation with minimal data overhead. The described technique supports full reconstruction of the original input signals during data playback. Output data correlation across channels is preserved for all types of data inputs. Simultaneous real- time data recording and reconstruction are also supported.

  9. Time-efficient high-resolution whole-brain three-dimensional macromolecular proton fraction mapping

    PubMed Central

    Yarnykh, Vasily L.

    2015-01-01

    Purpose Macromolecular proton fraction (MPF) mapping is a quantitative MRI method that reconstructs parametric maps of the relative amount of macromolecular protons causing the magnetization transfer (MT) effect and provides a biomarker of myelination in neural tissues. This study aimed to develop a high-resolution whole-brain MPF mapping technique utilizing the minimal possible number of source images for scan time reduction. Methods The described technique is based on replacement of an actually acquired reference image without MT saturation by a synthetic one reconstructed from R1 and proton density maps, thus requiring only three source images. This approach enabled whole-brain three-dimensional MPF mapping with isotropic 1.25×1.25×1.25 mm3 voxel size and scan time of 20 minutes. The synthetic reference method was validated against standard MPF mapping with acquired reference images based on data from 8 healthy subjects. Results Mean MPF values in segmented white and gray matter appeared in close agreement with no significant bias and small within-subject coefficients of variation (<2%). High-resolution MPF maps demonstrated sharp white-gray matter contrast and clear visualization of anatomical details including gray matter structures with high iron content. Conclusions The synthetic reference method improves resolution of MPF mapping and combines accurate MPF measurements with unique neuroanatomical contrast features. PMID:26102097

  10. Magnetoacoustic Tomography with Magnetic Induction: Bioimpedance reconstruction through vector source imaging

    PubMed Central

    Mariappan, Leo; He, Bin

    2013-01-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is a technique proposed to reconstruct the conductivity distribution in biological tissue at ultrasound imaging resolution. A magnetic pulse is used to generate eddy currents in the object, which in the presence of a static magnetic field induce Lorentz-force-based acoustic waves in the medium. These time-resolved acoustic waves are collected with ultrasound transducers and, in the present work, they are used to reconstruct the current source which gives rise to the MAT-MI acoustic signal using vector imaging point spread functions. The reconstructed source is then used to estimate the conductivity distribution of the object. Computer simulations and phantom experiments are performed to demonstrate conductivity reconstruction through vector source imaging in a circular scanning geometry with a limited bandwidth finite size piston transducer. The results demonstrate that the MAT-MI approach is capable of conductivity reconstruction in a physical setting. PMID:23322761

  11. Low-dose X-ray CT reconstruction via dictionary learning.

    PubMed

    Xu, Qiong; Yu, Hengyong; Mou, Xuanqin; Zhang, Lei; Hsieh, Jiang; Wang, Ge

    2012-09-01

    Although diagnostic medical imaging provides enormous benefits in the early detection and accurate diagnosis of various diseases, there are growing concerns about the potential side effects of radiation-induced genetic, cancerous and other diseases. How to reduce radiation dose while maintaining the diagnostic performance is a major challenge in the computed tomography (CT) field. Inspired by the compressive sensing theory, the sparse constraint in terms of total variation (TV) minimization has already led to promising results for low-dose CT reconstruction. Compared to the discrete gradient transform used in the TV method, dictionary learning is proven to be an effective way for sparse representation. On the other hand, it is important to consider the statistical property of projection data in the low-dose CT case. Recently, we have developed a dictionary learning based approach for low-dose X-ray CT. In this paper, we present this method in detail and evaluate it in experiments. In our method, the sparse constraint in terms of a redundant dictionary is incorporated into an objective function in a statistical iterative reconstruction framework. The dictionary can be either predetermined before an image reconstruction task or adaptively defined during the reconstruction process. An alternating minimization scheme is developed to minimize the objective function. Our approach is evaluated with low-dose X-ray projections collected in animal and human CT studies, and the improvement associated with dictionary learning is quantified relative to filtered backprojection and TV-based reconstructions. The results show that the proposed approach might produce better images with lower noise and more detailed structural features in our selected cases. However, there is no proof that this is true for all kinds of structures.
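
    The dictionary-learning building block can be illustrated separately from the statistical CT framework; the sketch below (patch size, number of atoms, and the use of scikit-learn are assumptions for illustration, not the authors' reconstruction pipeline) learns a redundant patch dictionary from an image and re-assembles the image from sparse codes of its patches:

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

      def dictionary_denoise(image, patch_size=(8, 8), n_atoms=128, alpha=1.0):
          """Sparse-code the patches of an image in a learned redundant dictionary
          and re-assemble the image by averaging the overlapping reconstructions."""
          patches = extract_patches_2d(image, patch_size)
          X = patches.reshape(patches.shape[0], -1)
          means = X.mean(axis=1, keepdims=True)
          X = X - means
          dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                             transform_algorithm='omp',
                                             transform_n_nonzero_coefs=5)
          codes = dico.fit(X).transform(X)
          recon = codes @ dico.components_ + means
          return reconstruct_from_patches_2d(recon.reshape(patches.shape), image.shape)

      # Example on a noisy synthetic phantom slice
      img = np.zeros((64, 64))
      img[16:48, 16:48] = 1.0
      noisy = img + 0.1 * np.random.randn(*img.shape)
      denoised = dictionary_denoise(noisy)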

  12. Low-Dose X-ray CT Reconstruction via Dictionary Learning

    PubMed Central

    Xu, Qiong; Zhang, Lei; Hsieh, Jiang; Wang, Ge

    2013-01-01

    Although diagnostic medical imaging provides enormous benefits in the early detection and accurate diagnosis of various diseases, there are growing concerns about the potential side effects of radiation-induced genetic, cancerous and other diseases. How to reduce radiation dose while maintaining the diagnostic performance is a major challenge in the computed tomography (CT) field. Inspired by the compressive sensing theory, the sparse constraint in terms of total variation (TV) minimization has already led to promising results for low-dose CT reconstruction. Compared to the discrete gradient transform used in the TV method, dictionary learning is proven to be an effective way for sparse representation. On the other hand, it is important to consider the statistical property of projection data in the low-dose CT case. Recently, we have developed a dictionary learning based approach for low-dose X-ray CT. In this paper, we present this method in detail and evaluate it in experiments. In our method, the sparse constraint in terms of a redundant dictionary is incorporated into an objective function in a statistical iterative reconstruction framework. The dictionary can be either predetermined before an image reconstruction task or adaptively defined during the reconstruction process. An alternating minimization scheme is developed to minimize the objective function. Our approach is evaluated with low-dose X-ray projections collected in animal and human CT studies, and the improvement associated with dictionary learning is quantified relative to filtered backprojection and TV-based reconstructions. The results show that the proposed approach might produce better images with lower noise and more detailed structural features in our selected cases. However, there is no proof that this is true for all kinds of structures. PMID:22542666

  13. Cardiac C-arm computed tomography using a 3D + time ROI reconstruction method with spatial and temporal regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mory, Cyril, E-mail: cyril.mory@philips.com; Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes; Auvray, Vincent

    2014-02-15

    Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.

  14. Boomerang flap reconstruction for the breast.

    PubMed

    Baumholtz, Michael A; Al-Shunnar, Buthainah M; Dabb, Richard W

    2002-07-01

    The boomerang-shaped latissimus dorsi musculocutaneous flap for breast reconstruction offers a stable platform for breast reconstruction. It allows for maximal aesthetic results with minimal complications. The authors describe a skin paddle to obtain a larger volume than either the traditional elliptical skin paddle or the extended latissimus flap. There are three specific advantages to the boomerang design: large volume, conical shape (often lacking in the traditional skin paddle), and an acceptable donor scar. Thirty-eight flaps were performed. No reconstruction interfered with patient's ongoing oncological regimen. The most common complication was seroma, which is consistent with other latissimus reconstructions.

  15. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to enforce the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and the generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The method is qualitatively and quantitatively evaluated on simulated and real data to validate its accuracy, efficiency, and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
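
    For reference, the generalized p-shrinkage mapping named above is commonly written as the operator sketched below (variable names and the closed form follow the usual p-shrinkage definition from the compressive-sensing literature and should be read as an illustration, not the authors' exact operator):

      import numpy as np

      def p_shrink(x, lam, p):
          """Generalized p-shrinkage mapping, the building block used to solve the
          decoupled subproblems of lp-type regularizers (p <= 1).

          For p = 1 this reduces to ordinary soft thresholding; for p < 1 it
          shrinks large coefficients less aggressively, which better preserves
          edges and strong gradients.
          """
          x = np.asarray(x, dtype=float)
          mag = np.abs(x)
          safe = np.where(mag > 0, mag, 1.0)        # avoid 0**(p-1) warnings
          shrunk = np.maximum(safe - lam ** (2.0 - p) * safe ** (p - 1.0), 0.0)
          shrunk = np.where(mag > 0, shrunk, 0.0)
          return np.sign(x) * shrunk

      # Large values are shrunk only slightly, small values are set to zero.
      print(p_shrink(np.array([-3.0, -0.5, 0.0, 0.5, 3.0]), lam=1.0, p=0.5))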

  16. A Model of Regularization Parameter Determination in Low-Dose X-Ray CT Reconstruction Based on Dictionary Learning.

    PubMed

    Zhang, Cheng; Zhang, Tao; Zheng, Jian; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui

    2015-01-01

    In recent years, X-ray computed tomography (CT) has become widely used to reveal patients' anatomical information. However, the side effect of radiation, relating to genetic or cancerous diseases, has caused great public concern. The problem is how to minimize radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible and effective to obtain a high-quality reconstructed image in undersampled situations. On the other hand, a preliminary attempt at low-dose CT reconstruction based on dictionary learning seems to be another effective choice. But some critical parameters, such as the regularization parameter, cannot be determined directly from the measured datasets. In this paper, we propose a reweighted objective function that contributes to a numerical calculation model of the regularization parameter. A number of experiments demonstrate that this strategy performs well, producing better reconstructed images while saving a large amount of time.

  17. Suture Anchors Fixation in MPFL Reconstruction using a Bioactive Synthetic Ligament

    PubMed Central

    Berruto, Massimo; Ferrua, Paolo; Tradati, Daniele; Uboldi, Francesco; Usellini, Eva; Marelli, Bruno Michele

    2017-01-01

    Medial patellofemoral ligament (MPFL) reconstruction has a key role in patellofemoral instability surgery. Many surgical techniques have been described so far using different types of grafts (autologous, heterologous, or synthetic) and fixation techniques. The technique described here for MPFL reconstruction relies on the use of a biosynthetic graft (LARS Arc Sur Tille, France). Fixation is obtained by means of suture anchors on the patellar side and a resorbable interference screw on the femoral side, locating the insertion point according to Schottle et al. An early passive range of motion (ROM) recovery is fundamental to reduce the risk of postoperative stiffness; partial weight bearing with crutches is allowed until 6 weeks after surgery. In our experience, the use of a biosynthetic graft and suture anchors provides stable fixation, minimizing donor site morbidity and reducing the risk of patellar fracture associated with transosseous tunnels. This technique represents a reliable and reproducible alternative for MPFL reconstruction, thereby minimizing the risk of possible complications. PMID:29270552

  18. Primary urethral reconstruction: the cost minimized approach to the bulbous urethral stricture.

    PubMed

    Rourke, Keith F; Jordan, Gerald H

    2005-04-01

    Treatment for urethral stricture disease often requires a choice between readily available direct vision internal urethrotomy (DVIU) and highly efficacious but more technically complex open urethral reconstruction. Using the short segment bulbous urethral stricture as a model, we determined which strategy is less costly. The costs of DVIU and open urethral reconstruction with stricture excision and primary anastomosis for a 2 cm bulbous urethral stricture were compared using a cost minimization decision analysis model. Clinical probability estimates for the DVIU treatment arm were the risk of bleeding, urinary tract infection and the risk of stricture recurrence. Estimates for the primary urethral reconstruction strategy were the risk of wound complications, complications of exaggerated lithotomy and the risk of treatment failure. Direct third party payer costs were determined in 2002 United States dollars. The model predicted that treatment with DVIU was more costly (17,747 dollars per patient) than immediate open urethral reconstruction (16,444 dollars per patient). This yielded an incremental cost savings of $1,304 per patient, favoring urethral reconstruction. Sensitivity analysis revealed that primary treatment with urethroplasty was economically advantageous within the range of clinically relevant events. Treatment with DVIU became more favorable when the long-term risk of stricture recurrence after DVIU was less than 60%. Treatment for short segment bulbous urethral strictures with primary reconstruction is less costly than treatment with DVIU. From a fiscal standpoint urethral reconstruction should be considered over DVIU in the majority of clinical circumstances.

  19. Minimization of the energy loss of nuclear power plants in case of partial in-core monitoring system failure

    NASA Astrophysics Data System (ADS)

    Zagrebaev, A. M.; Ramazanov, R. N.; Lunegova, E. A.

    2017-01-01

    In this paper we consider the problem of minimizing the energy loss of a nuclear power plant in the case of partial in-core monitoring system failure. The available options are continued reactor operation at reduced power or complete replacement of the failed neutron-measurement channels, which requires shutting down the reactor and keeping a stock of spare detectors. This article examines the reconstruction of the energy release in the core of a nuclear reactor on the basis of the indications of the remaining height sensors. The missing measurement information can be reconstructed by mathematical methods, so that replacement of the failed sensors can be avoided. It is suggested that a set of ‘natural’ functions be constructed from statistical estimates obtained from archival data. The proposed procedure makes it possible to reconstruct the field even with a significant loss of measurement information. Improving the accuracy of the reconstruction of the neutron flux density under partial loss of measurement information minimizes the stock of necessary components and the associated losses.
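
    As an illustration of the ‘natural functions’ idea (not the authors' code; the basis construction and variable names are assumptions), the sketch below learns an empirical basis from archival axial flux profiles and fits its amplitudes to the readings of the sensors that remain in service.

```python
import numpy as np

def empirical_basis(archive, n_modes=3):
    """archive: (n_snapshots, n_positions) matrix of archived axial flux profiles."""
    mean = archive.mean(axis=0)
    # leading right singular vectors of the centered archive act as 'natural' functions
    _, _, vt = np.linalg.svd(archive - mean, full_matrices=False)
    return mean, vt[:n_modes]                  # shapes (n_positions,), (n_modes, n_positions)

def reconstruct(readings, sensor_idx, mean, modes):
    """Least-squares fit of the modal amplitudes to the surviving sensors' readings."""
    A = modes[:, sensor_idx].T                 # (n_sensors, n_modes)
    b = readings - mean[sensor_idx]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + coeffs @ modes               # reconstructed full-height profile
```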

  20. Source reconstruction via the spatiotemporal Kalman filter and LORETA from EEG time series with 32 or fewer electrodes.

    PubMed

    Hamid, Laith; Al Farawn, Ali; Merlet, Isabelle; Japaridze, Natia; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Wendling, Fabrice; Siniatchkin, Michael

    2017-07-01

    The clinical routine of non-invasive electroencephalography (EEG) is usually performed with 8-40 electrodes, especially in long-term monitoring, in infants, or in emergency care. There is a need in clinical and scientific brain imaging to develop inverse solution methods that can reconstruct brain sources from these low-density EEG recordings. In this proof-of-principle paper we investigate the performance of the spatiotemporal Kalman filter (STKF) in EEG source reconstruction with 9, 19 and 32 electrodes. We used simulated EEG data of epileptic spikes generated from lateral frontal and lateral temporal brain sources using state-of-the-art neuronal population models. For validation of source reconstruction, we compared STKF results to the location of the simulated source and to the results of the low-resolution brain electromagnetic tomography (LORETA) standard inverse solution. STKF consistently showed less localization bias compared to LORETA, especially when the number of electrodes was decreased. The results encourage further research into the application of the STKF in source reconstruction of brain activity from low-density EEG recordings.

  1. A promising limited angular computed tomography reconstruction via segmentation based regional enhancement and total variation minimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Wenkun; Zhang, Hanming; Li, Lei

    2016-08-15

    X-ray computed tomography (CT) is a powerful and common inspection technique used for the industrial non-destructive testing. However, large-sized and heavily absorbing objects cause the formation of artifacts because of either the lack of specimen penetration in specific directions or the acquisition of data from only a limited angular range of views. Although the sparse optimization-based methods, such as the total variation (TV) minimization method, can suppress artifacts to some extent, reconstructing the images such that they converge to accurate values remains difficult because of the deficiency in continuous angular data and inconsistency in the projections. To address this problem, we use the idea of regional enhancement of the true values and suppression of the illusory artifacts outside the region to develop an efficient iterative algorithm. This algorithm is based on the combination of regional enhancement of the true values and TV minimization for the limited angular reconstruction. In this algorithm, the segmentation approach is introduced to distinguish the regions of different image knowledge and generate the support mask of the image. A new regularization term, which contains the support knowledge to enhance the true values of the image, is incorporated into the objective function. Then, the proposed optimization model is solved by variable splitting and the alternating direction method efficiently. A compensation approach is also designed to extract useful information from the initial projections and thus reduce false segmentation result and correct the segmentation support and the segmented image. The results obtained from comparing both simulation studies and real CT data set reconstructions indicate that the proposed algorithm generates a more accurate image than do the other reconstruction methods. The experimental results show that this algorithm can produce high-quality reconstructed images for the limited angular reconstruction and suppress the illusory artifacts caused by the deficiency in valid data.
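
    A toy illustration of how a segmentation-derived support mask can enter the objective alongside TV (an assumption-laden sketch, not the published algorithm): the mask adds a quadratic penalty that suppresses values outside the support, while TV and data fidelity act as usual. The paper itself minimizes such a model efficiently by variable splitting and the alternating direction method.

```python
import numpy as np

def tv(x):
    """Anisotropic total variation of a 2-D image."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def objective(x, A, b, mask, lam_tv=0.05, lam_sup=0.1):
    """Data fidelity + TV + quadratic penalty on pixels outside the segmented support.

    A    : system matrix mapping the flattened image to the projection data b
    mask : 1 inside the segmentation-derived support, 0 outside (same shape as x)
    """
    data_term = 0.5 * np.linalg.norm(A @ x.ravel() - b) ** 2
    support_term = 0.5 * lam_sup * np.linalg.norm((1.0 - mask) * x) ** 2
    return data_term + lam_tv * tv(x) + support_term
```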

  2. A promising limited angular computed tomography reconstruction via segmentation based regional enhancement and total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Li, Lei; Wang, Linyuan; Cai, Ailong; Li, Zhongguo; Yan, Bin

    2016-08-01

    X-ray computed tomography (CT) is a powerful and common inspection technique used for the industrial non-destructive testing. However, large-sized and heavily absorbing objects cause the formation of artifacts because of either the lack of specimen penetration in specific directions or the acquisition of data from only a limited angular range of views. Although the sparse optimization-based methods, such as the total variation (TV) minimization method, can suppress artifacts to some extent, reconstructing the images such that they converge to accurate values remains difficult because of the deficiency in continuous angular data and inconsistency in the projections. To address this problem, we use the idea of regional enhancement of the true values and suppression of the illusory artifacts outside the region to develop an efficient iterative algorithm. This algorithm is based on the combination of regional enhancement of the true values and TV minimization for the limited angular reconstruction. In this algorithm, the segmentation approach is introduced to distinguish the regions of different image knowledge and generate the support mask of the image. A new regularization term, which contains the support knowledge to enhance the true values of the image, is incorporated into the objective function. Then, the proposed optimization model is solved by variable splitting and the alternating direction method efficiently. A compensation approach is also designed to extract useful information from the initial projections and thus reduce false segmentation result and correct the segmentation support and the segmented image. The results obtained from comparing both simulation studies and real CT data set reconstructions indicate that the proposed algorithm generates a more accurate image than do the other reconstruction methods. The experimental results show that this algorithm can produce high-quality reconstructed images for the limited angular reconstruction and suppress the illusory artifacts caused by the deficiency in valid data.

  3. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    PubMed

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact in image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets. This article is protected by copyright. All rights reserved.

  4. Image reconstruction

    NASA Astrophysics Data System (ADS)

    Vasilenko, Georgii Ivanovich; Taratorin, Aleksandr Markovich

    Linear, nonlinear, and iterative image-reconstruction (IR) algorithms are reviewed. Theoretical results are presented concerning controllable linear filters, the solution of ill-posed functional minimization problems, and the regularization of iterative IR algorithms. Attention is also given to the problem of superresolution and analytical spectrum continuation, the solution of the phase problem, and the reconstruction of images distorted by turbulence. IR in optical and optical-digital systems is discussed with emphasis on holographic techniques.
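
    As a generic example of the regularized iterative IR schemes surveyed here (not taken from the review; the operator H and step size are assumptions), the Landweber iteration below reconstructs an image from an ill-posed linear model, with early stopping acting as the regularization.

```python
import numpy as np

def landweber(H, y, n_iter=200, relax=None):
    """Iterative reconstruction x_{k+1} = x_k + relax * H^T (y - H x_k) for y = H x + noise."""
    if relax is None:
        relax = 1.0 / np.linalg.norm(H, 2) ** 2   # below 2 / ||H||^2, so the iteration converges
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x += relax * H.T @ (y - H @ x)            # gradient step on the data-fit term
    return x                                      # stopping early regularizes the solution
```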

  5. CT coronary angiography: impact of adapted statistical iterative reconstruction (ASIR) on coronary stenosis and plaque composition analysis.

    PubMed

    Fuchs, Tobias A; Fiechter, Michael; Gebhard, Cathérine; Stehli, Julia; Ghadri, Jelena R; Kazakauskaite, Egle; Herzog, Bernhard A; Husmann, Lars; Gaemperli, Oliver; Kaufmann, Philipp A

    2013-03-01

    To assess the impact of adaptive statistical iterative reconstruction (ASIR) on coronary plaque volume and composition analysis as well as on stenosis quantification in high definition coronary computed tomography angiography (CCTA). We included 50 plaques in 29 consecutive patients who were referred for the assessment of known or suspected coronary artery disease (CAD) with contrast-enhanced CCTA on a 64-slice high definition CT scanner (Discovery HD 750, GE Healthcare). CCTA scans were reconstructed with standard filtered back projection (FBP) with no ASIR (0 %) or with increasing contributions of ASIR, i.e. 20, 40, 60, 80 and 100 % (no FBP). Plaque analysis (volume, components and stenosis degree) was performed using a previously validated automated software. Mean values for minimal diameter and minimal area as well as degree of stenosis did not change significantly using different ASIR reconstructions. There was virtually no impact of reconstruction algorithms on mean plaque volume or plaque composition (e.g. soft, intermediate and calcified component). However, with increasing ASIR contribution, the percentage of plaque volume component between 401 and 500 HU decreased significantly (p < 0.05). Modern image reconstruction algorithms such as ASIR, which has been developed for noise reduction in latest high resolution CCTA scans, can be used reliably without interfering with the plaque analysis and stenosis severity assessment.

  6. 10 Years Later: Lessons Learned from an Academic Multidisciplinary Cosmetic Center

    PubMed Central

    Chen, Jenny T.; Nayar, Harry S.

    2017-01-01

    Background: In 2006, a Centers for Medicare and Medicaid Services-accredited multidisciplinary academic ambulatory surgery center was established with the goal of delivering high-quality, efficient reconstructive and cosmetic services in an academic setting. We review our decade-long experience since its establishment. Methods: Clinical and financial data from 2006 to 2016 are reviewed. All cosmetic procedures, including both minimally invasive and operative cases, are included. Data are compared to nationally published reports. Results: Nearly 3,500 cosmetic surgeries and 10,000 minimally invasive procedures were performed. Compared with national averages, surgical volume in abdominoplasty is high, whereas volumes in rhinoplasty and breast augmentation are low. Regarding trend data, breast augmentation volume has decreased by 25%, whereas minimally invasive procedural volume continues to grow and is comparable with national reports. Similarly, while surgical revenue remains steady, minimally invasive revenue has increased significantly. The majority of surgical cases (70%) are reconstructive in nature and insurance-based. Payer mix is 71% private insurance, 18% Medicare and Medicaid, and 11% self-pay. Despite year-over-year revenue increases, net profit in 2015 was $6,120. Rent and anesthesia costs exceed national averages, and employee salary and wages are the highest expenditure. Conclusion: Although the creation of our academic cosmetic ambulatory surgery center has greatly increased the overall volume of cosmetic surgery performed at the University of Wisconsin, the majority of surgical volume and revenue is reconstructive. As is seen nationwide, minimally invasive cosmetic procedures represent our most rapidly expanding revenue stream. PMID:29062640

  7. Adaptive-weighted Total Variation Minimization for Sparse Data toward Low-dose X-ray Computed Tomography Image Reconstruction

    PubMed Central

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-01-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth X-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing noticeable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously-reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several noticeable gains, in terms of noise-resolution tradeoff plots and full width at half maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621
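
    The adaptive weights can be sketched as below; the exponential form and the scale parameter delta are assumptions following the description above, and the actual AwTV-POCS implementation differs in detail.

```python
import numpy as np

def awtv_weights(img, delta=0.005):
    """Edge-preserving weights: small across strong intensity gradients, near 1 elsewhere."""
    gx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal finite differences
    gy = np.diff(img, axis=0, append=img[-1:, :])   # vertical finite differences
    return np.exp(-(gx / delta) ** 2), np.exp(-(gy / delta) ** 2)

def awtv(img, wx, wy):
    """Adaptive-weighted total variation (anisotropic form)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.sum(wx * np.abs(gx) + wy * np.abs(gy))
```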

  8. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction.

    PubMed

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-12-07

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.

  9. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the vehicle surface can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding corrected acoustic propagation time delays and paths. These corrected time delays and paths, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way, alternative to numerical TR, to reconstruct sound source signals in 3D space in an environment with airflow. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.

  10. Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.

    PubMed

    Ding, Lei; Yuan, Han

    2013-04-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, which were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.

  11. Evaluation of measures to minimize wildlife vehicle collisions and maintain wildlife permeability across highways : Arizona Route 260

    DOT National Transportation Integrated Search

    2007-08-01

    The authors conducted wildlife-highway relationships research from 2002-2006 along a 17-mile stretch of State Route 260 in Arizona which is being reconstructed in five phases with 11 wildlife underpasses and six bridges. Reconstruction phasing allowe...

  12. 40 CFR 63.42 - Program requirements governing construction or reconstruction of major sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... construction or reconstruction of major sources. 63.42 Section 63.42 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES Requirements for Control Technology Determinations for Major Sources in... achievable control technology emission limitation for new sources. [61 FR 68400, Dec. 27, 1996, as amended at...

  13. Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds

    PubMed Central

    Lazar, Aurel A.; Pnevmatikakis, Eftychios A.

    2013-01-01

    We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with a single neuron as well as with a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability. PMID:24077610

  14. Determination of calibration parameters of a VRX CT system using an “Amoeba” algorithm

    PubMed Central

    Jordan, Lawrence M.; DiBianca, Frank A.; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M. Waleed

    2008-01-01

    Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge “clouds” created by the detected x-ray photons, i.e., the “physics limit.” This paper focuses on implementing a technique called “projective compression,” which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm “variable-resolution x-ray” (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown. PMID:19430581
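
    The "Amoeba" (Nelder-Mead simplex) fit can be illustrated with a generic parametric pin-sinogram model; the three-parameter sinusoidal trace below is a hypothetical stand-in, not the actual VRX geometry model.

```python
import numpy as np
from scipy.optimize import minimize

def pin_trace(params, angles):
    """Hypothetical model of the pin's detector coordinate as a function of view angle."""
    center, radius, phase = params
    return center + radius * np.sin(angles + phase)

def fit_calibration(measured_trace, angles, initial_guess):
    """Fit geometric parameters by minimizing the squared deviation of the model from the
    measured pin sinogram, using the Nelder-Mead ("Amoeba") simplex method."""
    cost = lambda p: np.sum((pin_trace(p, angles) - measured_trace) ** 2)
    return minimize(cost, initial_guess, method="Nelder-Mead").x

# usage sketch with synthetic data
angles = np.linspace(0.0, 2.0 * np.pi, 360)
measured = pin_trace([256.0, 80.0, 0.3], angles) + np.random.normal(0.0, 0.05, angles.size)
print(fit_calibration(measured, angles, initial_guess=[250.0, 75.0, 0.0]))
```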

  15. Determination of calibration parameters of a VRX CT system using an "Amoeba" algorithm.

    PubMed

    Jordan, Lawrence M; Dibianca, Frank A; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M Waleed

    2004-01-01

    Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge "clouds" created by the detected x-ray photons, i.e., the "physics limit." This paper focuses on implementing a technique called "projective compression," which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm "variable-resolution x-ray" (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown.

  16. A three-dimensional muscle activity imaging technique for assessing pelvic muscle function

    NASA Astrophysics Data System (ADS)

    Zhang, Yingchun; Wang, Dan; Timm, Gerald W.

    2010-11-01

    A novel multi-channel surface electromyography (EMG)-based three-dimensional muscle activity imaging (MAI) technique has been developed by combining the bioelectrical source reconstruction approach and subject-specific finite element modeling approach. Internal muscle activities are modeled by a current density distribution and estimated from the intra-vaginal surface EMG signals with the aid of a weighted minimum norm estimation algorithm. The MAI technique was employed to minimally invasively reconstruct electrical activity in the pelvic floor muscles and urethral sphincter from multi-channel intra-vaginal surface EMG recordings. A series of computer simulations were conducted to evaluate the performance of the present MAI technique. With appropriate numerical modeling and inverse estimation techniques, we have demonstrated the capability of the MAI technique to accurately reconstruct internal muscle activities from surface EMG recordings. This MAI technique combined with traditional EMG signal analysis techniques is being used to study etiologic factors associated with stress urinary incontinence in women by correlating functional status of muscles characterized from the intra-vaginal surface EMG measurements with the specific pelvic muscle groups that generated these signals. The developed MAI technique described herein holds promise for eliminating the need to place needle electrodes into muscles to obtain accurate EMG recordings in some clinical applications.
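
    The weighted minimum norm estimation step can be sketched with the standard Tikhonov-style closed form below; the lead field L, the diagonal weights, and the regularization parameter are assumptions, not the authors' exact implementation.

```python
import numpy as np

def weighted_minimum_norm(L, y, w, lam=1e-2):
    """Estimate source current densities j from surface EMG measurements y = L j + noise.

    L   : (n_electrodes, n_sources) lead-field matrix from the finite element model
    w   : (n_sources,) positive weights (e.g., depth or column-norm compensation)
    lam : regularization parameter controlling the noise/resolution trade-off
    """
    Winv = np.diag(1.0 / w)                        # inverse of the diagonal weight matrix
    G = L @ Winv @ L.T + lam * np.eye(L.shape[0])  # regularized Gram matrix in sensor space
    return Winv @ L.T @ np.linalg.solve(G, y)      # weighted minimum norm solution
```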

  17. Forward model with space-variant of source size for reconstruction on X-ray radiographic image

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Liu, Jun; Jing, Yue-feng; Xiao, Bo; Wei, Cai-hua; Guan, Yong-hong; Zhang, Xuan

    2018-03-01

    The Forward Imaging Technique is a method to solve the inverse problem of density reconstruction in radiographic imaging. In this paper, we introduce the forward projection equation (IFP model) for the radiographic system with areal source blur and detector blur. Our forward projection equation, based on X-ray tracing, is combined with the Constrained Conjugate Gradient method to form a new method for density reconstruction. We demonstrate the effectiveness of the new technique by reconstructing density distributions from simulated and experimental images. We show that for radiographic systems with source sizes larger than the pixel size, the effect of blur on the density reconstruction is reduced through our method and can be controlled within one or two pixels. The method is also suitable for reconstruction of non-homogeneous objects.

  18. SU-D-12A-07: Optimization of a Moving Blocker System for Cone-Beam Computed Tomography Scatter Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouyang, L; Yan, H; Jia, X

    2014-06-01

    Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaging object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition is quantified by CT number error in comparison to a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy is achieved when the blocker lead strip width is 8 mm and the gap is 48 mm. Conclusions: Accurate scatter estimation can be achieved in a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometry design of the blocker.
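
    The interpolation step (estimating scatter in the unblocked detector columns from the signal measured behind the lead strips) can be sketched with a cubic spline; this stands in for the cubic B-spline interpolation described above, and the strip layout and names are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def estimate_scatter_row(detector_row, blocked_idx):
    """Estimate the scatter profile along one detector row.

    detector_row : measured signal; behind the blocker strips it is (nearly) scatter only
    blocked_idx  : indices of the detector columns shadowed by the lead strips
    """
    blocked_idx = np.sort(np.asarray(blocked_idx))          # spline knots must be increasing
    spline = CubicSpline(blocked_idx, detector_row[blocked_idx])
    return spline(np.arange(detector_row.size))             # scatter estimate everywhere
```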

  19. SU-F-18C-13: Low-Dose X-Ray CT Reconstruction Using a Hybrid First-Order Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, L; Lin, W; Jin, M

    2014-06-15

    Purpose: To develop a novel reconstruction method for X-ray CT that can lead to accurate reconstruction at significantly reduced dose levels combining low X-ray incident intensity and few views of projection data. Methods: The noise nature of the projection data at low X-ray incident intensity was modeled and accounted by the weighted least-squares (WLS) criterion. The total variation (TV) penalty was used to mitigate artifacts caused by few views of data. The first order primal-dual (FOPD) algorithm was used to minimize TV in image domain, which avoided the difficulty of the non-smooth objective function. The TV penalized WLS reconstruction was achieved by alternated FOPD TV minimization and projection onto convex sets (POCS) for data fidelity constraints. The proposed FOPD-POCS method was evaluated using the FORBILD jaw phantom and the real cadaver head CT data. Results: The quantitative measures, root mean square error (RMSE) and contrast-to-noise ratio (CNR), demonstrate the superior denoising capability of WLS over LS-based TV iterative reconstruction. The improvement of RMSE (WLS vs. LS) is 15%∼21% and that of CNR is 17%∼72% when the incident counts per ray are ranged from 1×10⁵ to 1×10³. In addition, the TV regularization can accurately reconstruct images from about 50 views of the jaw phantom. The FOPD-POCS reconstruction reveals more structural details and suffers fewer artifacts in both the phantom and real head images. The FOPD-POCS method also shows fast convergence at low X-ray incident intensity. Conclusion: The new hybrid FOPD-POCS method, based on TV penalized WLS, yields excellent image quality when the incident X-ray intensity is low and the projection views are limited. The reconstruction is computationally efficient since the FOPD minimization of TV is applied only in the image domain. The characteristics of FOPD-POCS can be exploited to significantly reduce radiation dose of X-ray CT without compromising accuracy for diagnosis or treatment planning.

  20. Energy-efficient ECG compression on wireless biosensors via minimal coherence sensing and weighted ℓ₁ minimization reconstruction.

    PubMed

    Zhang, Jun; Gu, Zhenghui; Yu, Zhu Liang; Li, Yuanqing

    2015-03-01

    Low energy consumption is crucial for body area networks (BANs). In BAN-enabled ECG monitoring, continuous monitoring requires the sensor nodes to transmit a huge amount of data to the sink node, which leads to excessive energy consumption. To reduce airtime over energy-hungry wireless links, this paper presents an energy-efficient compressed sensing (CS)-based approach for on-node ECG compression. At first, an algorithm called minimal mutual coherence pursuit is proposed to construct sparse binary measurement matrices, which can be used to encode the ECG signals with superior performance and extremely low complexity. Second, in order to minimize the data rate required for faithful reconstruction, a weighted ℓ1 minimization model is derived by exploring the multisource prior knowledge in the wavelet domain. Experimental results on the MIT-BIH arrhythmia database reveal that the proposed approach can obtain a higher compression ratio than the state-of-the-art CS-based methods. Together with its low encoding complexity, our approach can achieve significant energy savings in both the encoding process and wireless transmission.
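
    A minimal sketch of weighted ℓ1 reconstruction by proximal-gradient iteration (ISTA); this is an illustrative solver only, and the paper's measurement-matrix construction and weighting scheme are not reproduced here.

```python
import numpy as np

def weighted_ista(Phi, y, w, lam=0.01, n_iter=300):
    """Solve min_x 0.5 * ||Phi x - y||^2 + lam * sum_i w_i * |x_i|.

    Phi : (m, n) sensing matrix (e.g., sparse binary), y : (m,) measurements,
    w   : (n,) nonnegative weights encoding prior knowledge about the wavelet coefficients.
    """
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2                  # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x - step * (Phi.T @ (Phi @ x - y))                # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step * w, 0.0)  # weighted soft-threshold
    return x
```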

  1. Impact of reconstruction strategies on system performance measures : maximizing safety and mobility while minimizing life-cycle costs : final report, December 8, 2008.

    DOT National Transportation Integrated Search

    2008-12-08

    The objective of this research is to develop a general methodological framework for planning and : evaluating the effectiveness of highway reconstruction strategies on the systems performance : measures, in particular safety, mobility, and the tot...

  2. Penalized weighted least-squares approach for low-dose x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-03-01

    The noise of a low-dose computed tomography (CT) sinogram approximately follows a Gaussian distribution with nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation coefficient matrix of the data signal indicates a strong signal correlation among neighboring views. Based on the above observations, the Karhunen-Loeve (KL) transform can be used to de-correlate the signal among the neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram can be estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses the Gauss-Seidel iterative calculation to minimize the PWLS objective function in the image domain. We also compared the KL-PWLS with an iterative sinogram smoothing algorithm, which uses the iterated conditional mode calculation to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show a comparable performance of these three PWLS methods in suppressing the noise-induced artifacts and preserving resolution in reconstructed images. Computer simulation concurs with the phantom experiments in terms of noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS noise reduction may have the advantage in computation for low-dose CT imaging, especially for dynamic high-resolution studies.
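
    The KL decorrelation across neighboring views can be sketched as a principal-component transform of the stacked sinogram rows; this illustrates the transform itself only, not the full KL-PWLS pipeline, and the shapes are assumptions.

```python
import numpy as np

def kl_transform(neighboring_views):
    """Decorrelate a stack of neighboring sinogram views.

    neighboring_views : (n_views, n_bins) array of sinogram rows from adjacent view angles.
    Returns the decorrelated KL components plus the basis and mean needed to invert them.
    """
    mean = neighboring_views.mean(axis=1, keepdims=True)
    centered = neighboring_views - mean
    cov = centered @ centered.T / centered.shape[1]   # (n_views, n_views) covariance
    _, basis = np.linalg.eigh(cov)                    # orthonormal KL basis (columns)
    components = basis.T @ centered                   # decorrelated KL components
    return components, basis, mean

# each KL component would then be smoothed under the PWLS objective and transformed back:
# restored_views = basis @ smoothed_components + mean
```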

  3. Minimally Invasive Implantation of HeartWare Assist Device and Simultaneous Tricuspid Valve Reconstruction Through Partial Upper Sternotomy.

    PubMed

    Hillebrand, Julia; Hoffmeier, Andreas; Djie Tiong Tjan, Tonny; Sindermann, Juergen R; Schmidt, Christoph; Martens, Sven; Scherer, Mirela

    2017-05-01

    Left ventricular assist device (LVAD) implantation is a well-established therapy to support patients with end-stage heart failure. However, the operative procedure is associated with severe trauma. Third generation LVADs like the HeartWare assist device (HeartWare, Inc., Framingham, MA, USA) are characterized by enhanced technology despite smaller size. These devices offer new minimally invasive surgical options. Tricuspid regurgitation requiring valve repair is frequent in patients with the need for mechanical circulatory support as it is strongly associated with ischemic and nonischemic cardiomyopathy. We report on HeartWare LVAD implantation and simultaneous tricuspid valve reconstruction through minimally invasive access by partial upper sternotomy to the fifth left intercostal space. Four male patients (mean age 51.72 ± 11.95 years) suffering from chronic heart failure due to dilative (three patients) and ischemic (one patient) cardiomyopathy and also exhibiting concomitant tricuspid valve insufficiency due to annular dilation underwent VAD implantation and tricuspid valve annuloplasty. Extracorporeal circulation was established via the ascending aorta, superior vena cava, and right atrium. In all four cases the LVAD implantation and tricuspid valve repair via partial median sternotomy was successful. During the operative procedure, no conversion to full sternotomy was necessary. One patient needed postoperative re-exploration because of pericardial effusion. No postoperative focal neurologic injury was observed. New generation VADs are advantageous because of the possibility of minimally invasive implantation procedure which can therefore minimize surgical trauma. Concomitant tricuspid valve reconstruction can also be performed simultaneously through partial upper sternotomy. Nevertheless, minimally invasive LVAD implantation is a challenging operative technique. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  4. High Precision Linear And Circular Polarimetry. Sources With Stable Stokes Q,U & V In The Ghz Regime

    NASA Astrophysics Data System (ADS)

    Myserlis, Ioannis; Angelakis, E.; Zensus, J. A.

    2017-10-01

    We present a novel data analysis pipeline for the reconstruction of the linear and circular polarization parameters of radio sources. It includes several correction steps to minimize the effect of instrumental polarization, allowing the detection of linear and circular polarization degrees as low as 0.3 %. The instrumental linear polarization is corrected across the whole telescope beam and significant Stokes Q and U can be recovered even when the recorded signals are severely corrupted. The instrumental circular polarization is corrected with two independent techniques which yield consistent Stokes V results. The accuracy we reach is of the order of 0.1-0.2 % for the polarization degree and 1° for the angle. We used it to recover the polarization of around 150 active galactic nuclei that were monitored monthly between 2010.6 and 2016.3 with the Effelsberg 100-m telescope. We identified sources with stable polarization parameters that can be used as polarization standards. Five sources have stable linear polarization; three are linearly unpolarized; eight have stable polarization angle; and 11 sources have stable circular polarization, four of which with non-zero Stokes V.

  5. DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.

    PubMed

    Chen, Zhuo; Luo, Yi; Mesgarani, Nima

    2017-03-01

    Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and the unknown number of sources in the mixture. We propose a novel deep learning framework for single channel speech separation by creating attractor points in the high dimensional embedding space of the acoustic signals, which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model is different from prior works in that it implements end-to-end training, and it does not depend on the number of sources in the mixture. Two strategies are explored at test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real time. We evaluated our system on the Wall Street Journal dataset and show a 5.49% improvement over the previous state-of-the-art methods.
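
    The test-time attractor formation can be sketched as a K-means style refinement in the embedding space followed by softmax masks; this is an illustration of the idea only, not the published network or training code, and the shapes and initialization are assumptions.

```python
import numpy as np

def attractor_masks(embeddings, n_sources=2, n_iter=20):
    """Form attractors and soft masks from time-frequency embeddings.

    embeddings : (n_tf_bins, emb_dim) embedding of each time-frequency bin.
    Returns (attractors, masks) with masks of shape (n_tf_bins, n_sources).
    """
    rng = np.random.default_rng(0)
    attractors = embeddings[rng.choice(len(embeddings), n_sources, replace=False)]
    for _ in range(n_iter):                                    # simple K-means refinement
        assign = (embeddings @ attractors.T).argmax(axis=1)
        for k in range(n_sources):
            if np.any(assign == k):
                attractors[k] = embeddings[assign == k].mean(axis=0)
    logits = embeddings @ attractors.T                         # similarity of bins to attractors
    logits -= logits.max(axis=1, keepdims=True)                # numerical stability
    masks = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return attractors, masks
```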

  6. Clinical Outcomes and Return to Sports in Patients with Chronic Achilles Tendon Rupture after Minimally Invasive Reconstruction with Semitendinosus Tendon Graft Transfer.

    PubMed

    Usuelli, Federico Giuseppe; D'Ambrosi, Riccardo; Manzi, Luigi; Indino, Cristian; Villafañe, Jorge Hugo; Berjano, Pedro

    2017-12-01

    Objective  The purpose of the study is to evaluate the clinical results and return to sports in patients undergoing reconstruction of the Achilles tendon after minimally invasive reconstruction with semitendinosus tendon graft transfer. Methods  Eight patients underwent surgical reconstruction with a minimally invasive technique and tendon graft augmentation with ipsilateral semitendinosus tendon for chronic Achilles tendon rupture (more than 30 days after the injury and a gap of >6 cm). Patients were evaluated at a minimum follow-up of 24 months after the surgery through the American Orthopaedic Foot and Ankle Society (AOFAS), the Achilles Tendon Total Rupture Scores (ATRS), the Endurance test, the calf circumference of the operated limb, and the contralateral and the eventual return to sports activity performed before the trauma. Results  The mean age at surgery was 50.5 years. Five men and three women underwent the surgery. The average AOFAS was 92, mean Endurance test was 28.1, and the average ATRS was 87. All patients returned to their daily activities, and six out of eight patients have returned to sports activities prior to the accident (two football players, three runners, one tennis player) at a mean of 7.0 (range: 6.7-7.2) months after the surgery. No patient reported complications or reruptures. Conclusion  Our study confirms encouraging results for the treatment of Achilles tendon rupture with a minimally invasive technique with semitendinosus graft augmentation. The technique can be considered safe and allows patients to return to their sports activity. Level of Evidence  Level IV, therapeutic case series.

  7. A feature refinement approach for statistical interior CT reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong

    2016-07-01

    Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criteria of penalized weighed least-square (PWLS-TV). In the implementation of the proposed method, the interior projection extrapolation based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than other conventional methods in suppressing noise, reducing truncated and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.

  8. A feature refinement approach for statistical interior CT reconstruction.

    PubMed

    Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong

    2016-07-21

    Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criteria of penalized weighed least-square (PWLS-TV). In the implementation of the proposed method, the interior projection extrapolation based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than other conventional methods in suppressing noise, reducing truncated and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.

  9. Preoperative planning of left-sided valve surgery with 3D computed tomography reconstruction models: sternotomy or a minimally invasive approach?

    PubMed

    Heuts, Samuel; Maessen, Jos G; Sardari Nia, Peyman

    2016-05-01

    With the emergence of a new concept aimed at individualization of patient care, the focus will shift from whether a minimally invasive procedure is better than conventional treatment, to the question of which patients will benefit most from which technique? The superiority of minimally invasive valve surgery (MIVS) has not yet been proved. We believe that through better patient selection advantages of this technique can become more pronounced. In our current study, we evaluate the feasibility of 3D computed tomography (CT) imaging reconstruction in the preoperative planning of patients referred for MIVS. We retrospectively analysed all consecutive patients who were referred for minimally invasive mitral valve surgery (MIMVS) and minimally invasive aortic valve replacement (MIAVR) to a single surgeon in a tertiary referral centre for MIVS between March 2014 and 2015. Prospective preoperative planning was done for all patients and was based on evaluations by a multidisciplinary heart-team, an echocardiography, conventional CT images and 3D CT reconstruction models. A total of 39 patients were included in our study; 16 for mitral valve surgery (MVS) and 23 patients for aortic valve replacement (AVR). Eleven patients (69%) within the MVS group underwent MIMVS. Five patients (31%) underwent conventional MVS. Findings leading to exclusion for MIMVS were a tortuous or slender femoro-iliac tract, calcification of the aortic bifurcation, aortic elongation and pericardial calcifications. Furthermore, 2 patients had a change of operative strategy based on preoperative planning. Seventeen (74%) patients in the AVR group underwent MIAVR. Six patients (26%) underwent conventional AVR. Indications for conventional AVR instead of MIAVR were an elongated ascending aorta, ascending aortic calcification and ascending aortic dilatation. One patient (6%) in the MIAVR group was converted to a sternotomy due to excessive intraoperative bleeding. Two mortalities were reported during conventional MVS. There were no mortalities reported in the MIMVS, MIAVR or conventional AVR group. Preoperative planning of minimally invasive left-sided valve surgery with 3D CT reconstruction models is a useful and feasible method to determine operative strategy and exclude patients ineligible for a minimally invasive approach, thus potentially preventing complications. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  10. A protocol for generating a high-quality genome-scale metabolic reconstruction.

    PubMed

    Thiele, Ines; Palsson, Bernhard Ø

    2010-01-01

    Network reconstructions are a common denominator in systems biology. Bottom-up metabolic network reconstructions have been developed over the last 10 years. These reconstructions represent structured knowledge bases that abstract pertinent information on the biochemical transformations taking place within specific target organisms. The conversion of a reconstruction into a mathematical format facilitates a myriad of computational biological studies, including evaluation of network content, hypothesis testing and generation, analysis of phenotypic characteristics and metabolic engineering. To date, genome-scale metabolic reconstructions for more than 30 organisms have been published and this number is expected to increase rapidly. However, these reconstructions differ in quality and coverage that may minimize their predictive potential and use as knowledge bases. Here we present a comprehensive protocol describing each step necessary to build a high-quality genome-scale metabolic reconstruction, as well as the common trials and tribulations. Therefore, this protocol provides a helpful manual for all stages of the reconstruction process.
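
    Once a reconstruction has been converted to a stoichiometric (mathematical) format, flux balance analysis is one of the computational studies it enables; the three-reaction toy network below is purely illustrative and is not part of the protocol.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions); steady state requires S v = 0.
# Reactions: R1 uptake -> A, R2 A -> B, R3 B -> biomass (the objective reaction).
S = np.array([[1, -1,  0],    # metabolite A
              [0,  1, -1]])   # metabolite B
bounds = [(0, 10), (0, 10), (0, 10)]   # lower/upper flux bounds for each reaction
c = [0, 0, -1]                         # maximize biomass flux = minimize its negative

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal biomass flux:", -res.fun, "flux distribution:", res.x)
```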

  11. A protocol for generating a high-quality genome-scale metabolic reconstruction

    PubMed Central

    Thiele, Ines; Palsson, Bernhard Ø.

    2011-01-01

    Network reconstructions are a common denominator in systems biology. Bottom-up metabolic network reconstructions have developed over the past 10 years. These reconstructions represent structured knowledge-bases that abstract pertinent information on the biochemical transformations taking place within specific target organisms. The conversion of a reconstruction into a mathematical format facilitates myriad computational biological studies including evaluation of network content, hypothesis testing and generation, analysis of phenotypic characteristics, and metabolic engineering. To date, genome-scale metabolic reconstructions for more than 30 organisms have been published and this number is expected to increase rapidly. However, these reconstructions differ in quality and coverage that may minimize their predictive potential and use as knowledge-bases. Here, we present a comprehensive protocol describing each step necessary to build a high-quality genome-scale metabolic reconstruction as well as common trials and tribulations. Therefore, this protocol provides a helpful manual for all stages of the reconstruction process. PMID:20057383

  12. Reconstruction of Nasal Cleft Deformities Using Expanded Forehead Flaps: A Case Series.

    PubMed

    Ramanathan, Manikandhan; Sneha, Pendem; Parameswaran, Ananthnarayanan; Jayakumar, Naveen; Sailer, Hermann F

    2014-12-01

    Reconstruction of nasal clefts is a challenging task considering the nasal anatomic complexity and the possible association of these clefts with craniofacial defects. The reconstruction of these defects needs extensive amounts of soft tissue, which warrants the use of forehead flaps. Often, the presence of cranial defects and a low hairline compromises the amount of tissue available for reconstruction, warranting tissue expansion. The aim was to evaluate the efficacy of tissue expansion in the reconstruction of congenital nasal clefts. Nine patients with congenital nasal clefts involving multiple subunits were taken up for nasal reconstruction with expanded forehead flaps. The average amount of expansion needed was 200 ml. The reconstruction was performed in 3 stages. Expanded forehead flaps proved to be the best modality for reconstruction, providing the skin cover needed for the ala, columella and dorsum with minimal scarring at the donor site. Expansion of the forehead flap is a viable option for multiple-subunit reconstruction in congenital nasal cleft deformities.

  13. Genome-Scale Reconstruction and Analysis of the Metabolic Network in the Hyperthermophilic Archaeon Sulfolobus Solfataricus

    PubMed Central

    Ulas, Thomas; Riemer, S. Alexander; Zaparty, Melanie; Siebers, Bettina; Schomburg, Dietmar

    2012-01-01

    We describe the reconstruction of a genome-scale metabolic model of the crenarchaeon Sulfolobus solfataricus, a hyperthermoacidophilic microorganism. It grows in terrestrial volcanic hot springs with growth occurring at pH 2–4 (optimum 3.5) and a temperature of 75–80°C (optimum 80°C). The genome of Sulfolobus solfataricus P2 contains 2,992,245 bp on a single circular chromosome and encodes 2,977 proteins and a number of RNAs. The network comprises 718 metabolic and 58 transport/exchange reactions and 705 unique metabolites, based on the annotated genome and available biochemical data. Using the model in conjunction with constraint-based methods, we simulated the metabolic fluxes induced by different environmental and genetic conditions. The predictions were compared to experimental measurements and phenotypes of S. solfataricus. Furthermore, the performance of the network for 35 different carbon sources known for S. solfataricus from the literature was simulated. Comparing the growth on different carbon sources revealed that glycerol is the carbon source with the highest biomass flux per imported carbon atom (75% higher than glucose). Experimental data was also used to fit the model to phenotypic observations. In addition to the commonly known heterotrophic growth of S. solfataricus, the crenarchaeon is also able to grow autotrophically using the hydroxypropionate-hydroxybutyrate cycle for bicarbonate fixation. We integrated this pathway into our model and compared bicarbonate fixation with growth on glucose as sole carbon source. Finally, we tested the robustness of the metabolism with respect to gene deletions using the method of Minimization of Metabolic Adjustment (MOMA), which predicted that 18% of all possible single gene deletions would be lethal for the organism. PMID:22952675

  14. Beard reconstruction: A surgical algorithm.

    PubMed

    Ninkovic, M; Heidekrueger, P I; Ehrl, D; von Spiegel, F; Broer, P N

    2016-06-01

    Facial defects with loss of hair-bearing regions can be caused by trauma, infection, tumor excision, or burn injury. The presented analysis evaluates a series of different surgical approaches with a focus on male beard reconstruction, emphasizing the role of tissue expansion of regional and free flaps. Locoregional and free flap reconstructions were performed in 11 male patients with 14 facial defects affecting the hair-bearing bucco-mandibular or perioral region. In order to minimize donor-site morbidity and obtain large amounts of thin, pliable, hair-bearing tissue, pre-expansion was performed in five of 14 patients. Eight of 14 patients were treated with locoregional flap reconstructions and six with free flap reconstructions. Algorithms regarding pre- and intraoperative decision making are discussed and long-term (mean follow-up 1.5 years) results analyzed. Major complications, including tissue expander infection requiring removal or exchange and partial or full flap loss, occurred in 0% (0/8) of patients with locoregional flaps and in 17% (1/6) of patients undergoing free flap reconstructions. Secondary refinement surgery was performed in 25% (2/8) of locoregional flaps and in 67% (4/6) of free flaps. Both locoregional and distant tissue transfers play a role in beard reconstruction, while pre-expansion remains an invaluable tool. When the presented principles were followed and the significance of aesthetic facial subunits was considered, range of motion, aesthetics, and patient satisfaction improved long term in all our patients while donor-site morbidity was minimized. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  15. Influence of Iterative Reconstruction Algorithms on PET Image Resolution

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on an aluminum (Al) foil substrate, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL algorithm (with and without a quadratic prior), the ordered subsets separable paraboloidal surrogate (OSSPS) algorithm, and the median root prior (MRP) algorithm. OSMAPOSL reconstruction was assessed by using fixed subsets and various iterations, as well as by using various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves by using lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
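
    As a rough illustration of the final estimation step described above, the sketch below (Python; not the authors' code, and the profile, pixel size and function name are hypothetical) computes an MTF curve as the normalized Fourier magnitude of a 1-D line-spread function taken across a reconstructed thin plane source:

      # Minimal sketch: MTF from a 1-D line-spread function (LSF) across a plane source.
      import numpy as np

      def mtf_from_lsf(lsf, pixel_size_mm):
          """Return spatial frequencies (cycles/mm) and the normalized MTF."""
          lsf = np.asarray(lsf, dtype=float)
          lsf = lsf - lsf.min()                 # remove background offset
          lsf = lsf / lsf.sum()                 # normalize area to 1
          otf = np.fft.rfft(lsf)                # one-sided transfer function
          mtf = np.abs(otf) / np.abs(otf[0])    # MTF, normalized so MTF(0) = 1
          freqs = np.fft.rfftfreq(lsf.size, d=pixel_size_mm)
          return freqs, mtf

      # example with a synthetic Gaussian-blurred plane-source profile
      x = np.arange(-32, 32)
      lsf = np.exp(-0.5 * (x / 3.0) ** 2)       # stand-in for a measured profile
      freqs, mtf = mtf_from_lsf(lsf, pixel_size_mm=2.0)
      print(freqs[:5], mtf[:5])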

  16. Interior tomography in microscopic CT with image reconstruction constrained by full field of view scan at low spatial resolution

    NASA Astrophysics Data System (ADS)

    Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang

    2018-04-01

    In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data, resulting in artifacts in the reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can be employed to solve the data truncation problem if the projection data acquired at low resolution are utilized to fill in the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolutions, respectively, are carried out. Using the image reconstructed from sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm and a specimen of mouse femur, were utilized to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significant improvement in the reduction of artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications. The proposed LRICR algorithm outperforms both the multi-resolution scout-reconstruction method and TV minimization based reconstruction for interior tomography in microscopic CT.

  17. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The accuracy and efficiency of the simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
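
    For orientation, the following minimal sketch implements a generalized p-shrinkage mapping of the kind referred to above (following Chartrand's commonly used definition; parameter names are illustrative and the paper's exact operator may differ). For p = 1 it reduces to ordinary soft thresholding:

      # Generalized p-shrinkage mapping (a sketch, not the paper's code).
      import numpy as np

      def p_shrink(x, lam, p):
          """Elementwise p-shrinkage; equals soft thresholding when p = 1."""
          mag = np.abs(x)
          # guard against 0**negative when p < 1
          scale = np.maximum(mag - lam ** (2.0 - p) * np.maximum(mag, 1e-12) ** (p - 1.0), 0.0)
          return np.sign(x) * scale

      x = np.linspace(-3, 3, 7)
      print(p_shrink(x, lam=1.0, p=1.0))   # ordinary soft thresholding
      print(p_shrink(x, lam=1.0, p=0.5))   # sharper shrinkage for p < 1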

  18. All-inside, physeal-sparing anterior cruciate ligament reconstruction does not significantly compromise the physis in skeletally immature athletes: a postoperative physeal magnetic resonance imaging analysis.

    PubMed

    Nawabi, Danyal H; Jones, Kristofer J; Lurie, Brett; Potter, Hollis G; Green, Daniel W; Cordasco, Frank A

    2014-12-01

    Anterior cruciate ligament (ACL) reconstruction in skeletally immature patients can result in growth disturbance due to iatrogenic physeal injury. Multiple physeal-sparing ACL reconstruction techniques have been described; however, few combine the benefits of anatomic reconstruction using sockets without violation of the femoral or tibial physis. To utilize physeal-specific magnetic resonance imaging (MRI) to quantify the zone of physeal injury after all-inside ACL reconstruction in skeletally immature athletes. Case series; Level of evidence, 4. Twenty-three skeletally immature patients (mean chronologic age 12.6 years; range, 10-15 years) were prospectively evaluated after all-inside ACL reconstruction. The mean bone age was 13.2 years. There were 8 females and 15 males. Fifteen patients underwent an all-epiphyseal (AE) ACL reconstruction and 8 patients had a partial transphyseal (PTP) ACL reconstruction, which spared the femoral physis but crossed the tibial physis. At 6 and 12 months postoperatively, MRI using 3-dimensional fat-suppressed spoiled gradient recalled echo sequences and full-length standing radiographs were performed to assess graft survival, growth arrest, physeal violation, angular deformity, and leg length discrepancy. The mean follow-up for this cohort was 18.5 months (range, 12-39 months). Minimal tibial physeal violation was seen in 10 of 15 patients in the AE group and, by definition, all patients in the PTP group. The mean area of tibial physeal disturbance (±SD) was 57.8 ± 52.2 mm² (mean 2.1% of total physeal area) in the AE group compared with 145.1 ± 100.6 mm² (mean 5.4% of total physeal area) in the PTP group (P = .003). Minimal compromise of the femoral physis (1.5%) was observed in 1 case in the PTP group and no cases in the AE group. No cases of growth arrest, articular surface violation, or avascular necrosis were noted on MRI. No postoperative angular deformities or significant leg length discrepancies were observed. The study data suggest that all-inside ACL reconstruction is a safe technique for skeletally immature athletes at short-term follow-up. Physeal-specific MRI reveals minimal growth plate compromise that is significantly lower than published thresholds for growth arrest. © 2014 The Author(s).

  19. Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction.

    PubMed

    Holan, Scott H; Viator, John A

    2008-06-21

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction in applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples.
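
    A minimal sketch of level-independent universal-threshold wavelet denoising is given below, assuming PyWavelets is available and using the ordinary decimated DWT as a stand-in for the MODWT mentioned in the abstract; the wavelet choice, decomposition level and test signal are arbitrary:

      # Universal-threshold wavelet denoising (sketch; DWT used in place of MODWT).
      import numpy as np
      import pywt

      def universal_denoise(signal, wavelet="sym8", level=4):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          # robust noise estimate from the finest detail coefficients (MAD / 0.6745)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          # level-independent universal threshold
          thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
          denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(denoised, wavelet)[: len(signal)]

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 1000)
      clean = np.exp(-((t - 0.5) / 0.02) ** 2)          # toy photoacoustic-like pulse
      noisy = clean + 0.1 * rng.standard_normal(t.size)
      print(np.linalg.norm(universal_denoise(noisy) - clean) < np.linalg.norm(noisy - clean))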

  20. Reconstruction of extended Petri nets from time series data and its application to signal transduction and to gene regulatory networks

    PubMed Central

    2011-01-01

    Background: Network inference methods reconstruct mathematical models of molecular or genetic networks directly from experimental data sets. We have previously reported a mathematical method which is exclusively data-driven, does not involve any heuristic decisions within the reconstruction process, and delivers all possible alternative minimal networks in terms of simple place/transition Petri nets that are consistent with a given discrete time series data set. Results: We fundamentally extended the previously published algorithm to consider catalysis and inhibition of the reactions that occur in the underlying network. The results of the reconstruction algorithm are encoded in the form of an extended Petri net involving control arcs. This allows the consideration of processes involving mass flow and/or regulatory interactions. As a non-trivial test case, the phosphate regulatory network of enterobacteria was reconstructed using in silico-generated time-series data sets on wild-type and in silico mutants. Conclusions: The new exact algorithm reconstructs extended Petri nets from time series data sets by finding all alternative minimal networks that are consistent with the data. It suggested alternative molecular mechanisms for certain reactions in the network. The algorithm is useful for combining data from wild-type and mutant cells and may potentially integrate physiological, biochemical, pharmacological, and genetic data in the form of a single model. PMID:21762503

  1. Simplified projection technique to correct geometric and chromatic lens aberrations using plenoptic imaging.

    PubMed

    Dallaire, Xavier; Thibault, Simon

    2017-04-01

    Plenoptic imaging has been used in the past decade mainly for 3D reconstruction or digital refocusing. It has also been shown that this technology has potential for correcting monochromatic aberrations in a standard optical system. In this paper, we present an algorithm for reconstructing images using a projection technique while correcting the defects present in them; the approach can be applied to chromatic aberrations and to wide-angle optical systems. We show that the impact of noise on the reconstruction procedure is minimal. Trade-offs between the sampling of the optical system needed for characterization and image quality are presented. Examples are shown for aberrations in a classic optical system and for chromatic aberrations. The technique is also applied to a wide-angle optical system with a full field of view of 140° (FFOV 140°). This technique could be used to further simplify or minimize optical systems.

  2. A compressed sensing based approach on Discrete Algebraic Reconstruction Technique.

    PubMed

    Demircan-Tureyen, Ezgi; Kamasak, Mustafa E

    2015-01-01

    Discrete tomography (DT) techniques are capable of computing better results than continuous tomography techniques, even when using a smaller number of projections. The Discrete Algebraic Reconstruction Technique (DART) is an iterative reconstruction method proposed to achieve this goal by exploiting prior knowledge of the gray levels and assuming that the scanned object is composed of a few different densities. In this paper, the DART method is combined with an initial total variation minimization (TvMin) phase to ensure a better initial guess and extended with a segmentation procedure in which the threshold values are estimated from a finite set of candidates to minimize both the projection error and the total variation (TV) simultaneously. The accuracy and the robustness of the algorithm are compared with the original DART in simulation experiments carried out under (1) a limited number of projections, (2) the limited-view problem and (3) noisy projection conditions.

  3. A Model of Regularization Parameter Determination in Low-Dose X-Ray CT Reconstruction Based on Dictionary Learning

    PubMed Central

    Zhang, Cheng; Zhang, Tao; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui

    2015-01-01

    In recent years, X-ray computed tomography (CT) has become widely used to reveal patients' anatomical information. However, the side effects of radiation, related to genetic and cancerous diseases, have caused great public concern. The problem is how to reduce the radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible and effective to obtain a high-quality reconstructed image in undersampled situations. On the other hand, a preliminary attempt at low-dose CT reconstruction based on dictionary learning seems to be another effective choice. However, some critical parameters, such as the regularization parameter, cannot be determined from the detected datasets. In this paper, we propose a reweighted objective function that leads to a numerical calculation model for the regularization parameter. A number of experiments demonstrate that this strategy performs well, with better reconstructed images and savings of a large amount of time. PMID:26550024

  4. Adjuncts to Improve Nasal Reconstruction Results.

    PubMed

    Gordon, Shayna Lee; Hurst, Eva A

    2017-02-01

    The final cosmetic appearance of nasal reconstruction scars is of paramount importance to both the patient and surgeon. Ideal postreconstruction nasal scars are flat and indistinguishable from surrounding skin. Unfortunately, even with meticulous surgical execution, nasal scars can occasionally be suboptimal. Abnormal fibroblast response can lead to hypertrophic nasal scars, and excessive angiogenesis may lead to telangiectasias or an erythematous scar. Imperfect surgical closure or poor postoperative management can lead to surgical outcomes with step-offs, depressions, suture marks, or dyspigmentation. Aesthetically unacceptable nasal scars can cause pruritus, tenderness, pain, sleep disturbance, and anxiety and depression in postsurgical patients. Fortunately, there are several minimally invasive or noninvasive techniques that allow for enhancement and improvement of cosmetic results with minimal risk and associated downtime. This article provides an overview of adjuncts to improve nasal reconstruction with a focus on techniques to be used in the postoperative period. Armed with an understanding of relevant available therapies, skillful surgeons may drastically improve the final cosmesis and outcome of nasal reconstruction scars. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  5. Simultaneous reconstruction of emission activity and attenuation coefficient distribution from TOF data, acquired with external transmission source

    NASA Astrophysics Data System (ADS)

    Panin, V. Y.; Aykac, M.; Casey, M. E.

    2013-06-01

    The simultaneous PET data reconstruction of emission activity and attenuation coefficient distribution is presented, where the attenuation image is constrained by exploiting an external transmission source. Data are acquired in time-of-flight (TOF) mode, allowing in principle for separation of emission and transmission data. Nevertheless, here all data are reconstructed at once, eliminating the need to trace the position of the transmission source in sinogram space. Contamination of emission data by the transmission source and vice versa is naturally modeled. Attenuated emission activity data also provide additional information about object attenuation coefficient values. The algorithm alternates between attenuation and emission activity image updates. We also proposed a method of estimation of spatial scatter distribution from the transmission source by incorporating knowledge about the expected range of attenuation map values. The reconstruction of experimental data from the Siemens mCT scanner suggests that simultaneous reconstruction improves attenuation map image quality, as compared to when data are separated. In the presented example, the attenuation map image noise was reduced and non-uniformity artifacts that occurred due to scatter estimation were suppressed. On the other hand, the use of transmission data stabilizes attenuation coefficient distribution reconstruction from TOF emission data alone. The example of improving emission images by refining a CT-based patient attenuation map is presented, revealing potential benefits of simultaneous CT and PET data reconstruction.

  6. Image restoration by minimizing zero norm of wavelet frame coefficients

    NASA Astrophysics Data System (ADS)

    Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue

    2016-11-01

    In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
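
    The core proximal step behind such algorithms is hard thresholding. The sketch below is a simplified proximal iterative hard-thresholding loop for a generic ℓ0-regularized least-squares problem; it does not reproduce the paper's balanced wavelet-frame model or its extrapolation and line-search steps, and all sizes and weights are illustrative:

      # Simplified proximal iterative hard thresholding for min_x 0.5||Ax - b||^2 + lam*||x||_0.
      import numpy as np

      def hard_threshold(x, t):
          """Proximal map of the l0 penalty: zero out entries with |x| <= t."""
          out = x.copy()
          out[np.abs(out) <= t] = 0.0
          return out

      def piht(A, b, lam, step, n_iter=200):
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - b)                       # gradient of the data term
              x = hard_threshold(x - step * grad, np.sqrt(2.0 * lam * step))
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((60, 128))
      x_true = np.zeros(128); x_true[[5, 40, 90]] = [2.0, -3.0, 1.5]
      x_hat = piht(A, A @ x_true, lam=0.05, step=1.0 / np.linalg.norm(A, 2) ** 2)
      print(np.nonzero(np.abs(x_hat) > 0.5)[0])              # recovered support (approximately)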

  7. Guidelines and enabling objectives for training primary healthcare providers, gynecologists and obstetric and gynecology residents in Female Pelvic Floor Medicine and Reconstructive Surgery.

    PubMed

    Contreras Ortiz, Oscar; Rizk, Diaa Ee; Falconi, Gabriele; Schreiner, Lucas; Gorbea Chávez, Viridiana

    2017-02-01

    For four decades, the training for fellows in Urogynecology has been defined by taking into account the proposals of the relevant international societies. Primary health care providers and general OB/GYN practitioners could not find validated guidelines for the integration of knowledge in pelvic floor dysfunctions. The FIGO Working Group (FWG) in Pelvic Floor Medicine and Reconstructive Surgery has looked for the consensus of international opinion leaders in order to develop a set of minimal requirements of knowledge and skills in this area. This manuscript is divided into three categories of knowledge and skills; these are: to know, to understand, and to perform, in order to offer patients a more holistic health care in this area. The FWG reached consensus on the minimal requirements of knowledge and skills regarding each of the enabling objectives identified for postgraduate obstetrics and gynecology physicians and for residents in obstetrics and gynecology. Our goal is to propose and validate the basic objectives of minimal knowledge in pelvic floor medicine and reconstructive surgery. Neurourol. Urodynam. 36:514-517, 2017. © 2015 Wiley Periodicals, Inc.

  8. The figure-of-eight radix nasi flap for medial canthal defects.

    PubMed

    Seyhan, Tamer

    2010-09-01

    Basal cell carcinomas commonly involve the medial canthal region, and reconstruction of medial canthal defects is a challenging problem in reconstructive surgery. A new axial-pattern flap raised from the radix nasi region has been successfully used in a figure-of-eight manner for medial canthal defects in eight patients. One of the ellipses of the figure of eight is the defect; the other is the radix nasi flap. The radix nasi flap, with a dimension of up to 25 mm, is transposed to the defect based either on the ipsilateral anastomosis of the dorsal nasal artery with the angular artery (AA) or on the connection with its source artery (i.e., the ophthalmic artery) if the AA is damaged. All flaps survived and no tumour recurrence was observed. The donor sites were closed primarily and hidden in the radix nasi crease in all cases. The radix nasi flap in a figure-of-eight fashion is a good alternative for defects of the medial canthal area in terms of attaining a suitable colour and texture and minimal surgical scars. Copyright 2009 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  9. Characterization of spatial and spectral resolution of a rotating prism chromotomographic hyperspectral imager

    NASA Astrophysics Data System (ADS)

    Bostick, Randall L.; Perram, Glen P.; Tuttle, Ronald

    2009-05-01

    The Air Force Institute of Technology (AFIT) has built a rotating prism chromotomographic hyperspectral imager (CTI) with the goal of extending the technology to exploit spatially extended sources with quickly varying (> 10 Hz) phenomenology, such as bomb detonations and muzzle flashes. This technology collects successive frames of 2-D data dispersed at different angles, multiplexing spatial and spectral information that can then be used to reconstruct any arbitrary spectral plane(s). In this paper, the design of the AFIT instrument is described and then tested against a spectral target with near point-source spatial characteristics to measure spectral and spatial resolution. It is shown that, in theory, the spectral and spatial resolution in the 3-D spectral image cube is nearly the same as that of a simple prism spectrograph with the same design. However, error in the knowledge of the prism linear dispersion at the detector array as a function of wavelength and projection angle will degrade resolution without further corrections. With minimal correction for error and use of a simple shift-and-add reconstruction algorithm, the CTI is able to produce a spatial resolution of about 2 mm in the object plane (234 μrad IFOV) and is limited by chromatic aberration. A spectral resolution of less than 1 nm at shorter wavelengths is shown, limited primarily by prism dispersion.
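
    The shift-and-add reconstruction mentioned above can be sketched as follows: each dispersed frame is shifted back along its dispersion direction by the offset corresponding to the wavelength of interest and the frames are averaged. The frame stack, angles and dispersion offset below are hypothetical, and this is not the AFIT code:

      # Shift-and-add reconstruction of one spectral plane (illustrative sketch).
      import numpy as np
      from scipy.ndimage import shift

      def shift_and_add(frames, angles_deg, dispersion_px):
          """frames: (N, H, W) stack; angles_deg: prism angle per frame;
          dispersion_px: dispersion offset (pixels) for the target wavelength."""
          recon = np.zeros_like(frames[0], dtype=float)
          for frame, ang in zip(frames, angles_deg):
              dy = -dispersion_px * np.sin(np.deg2rad(ang))
              dx = -dispersion_px * np.cos(np.deg2rad(ang))
              recon += shift(frame, (dy, dx), order=1, mode="nearest")  # sub-pixel shift
          return recon / len(frames)

      # toy usage with synthetic frames
      frames = np.random.rand(8, 64, 64)
      angles = np.linspace(0, 360, 8, endpoint=False)
      plane = shift_and_add(frames, angles, dispersion_px=5.0)
      print(plane.shape)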

  10. Using Perturbative Least Action to Reconstruct Redshift-Space Distortions

    NASA Astrophysics Data System (ADS)

    Goldberg, David M.

    2001-05-01

    In this paper, we present a redshift-space reconstruction scheme that is analogous to and extends the perturbative least action (PLA) method described by Goldberg & Spergel. We first show that this scheme is effective in reconstructing even nonlinear observations. We then suggest that by varying the cosmology to minimize the quadrupole moment of a reconstructed density field, it may be possible to lower the error bars on the redshift distortion parameter, β, as well as to break the degeneracy between the linear bias parameter, b, and ΩM. Finally, we discuss how PLA might be applied to realistic redshift surveys.

  11. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    PubMed Central

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  12. Alignment theory of parallel-beam computed tomography image reconstruction for elastic-type objects using virtual focusing method.

    PubMed

    Jun, Kyungtaek; Kim, Dongwook

    2018-01-01

    X-ray computed tomography has been studied in various fields. Considerable effort has been focused on reconstructing the projection image set from a rigid-type specimen. However, reconstruction of images projected from an object showing elastic motion has received minimal attention. In this paper, a mathematical solution to reconstructing the projection image set obtained from an object with specific elastic motions-periodically, regularly, and elliptically expanded or contracted specimens-is proposed. To reconstruct the projection image set from expanded or contracted specimens, methods are presented for detection of the sample's motion modes, mathematical rescaling of pixel values, and conversion of the projection angle for a common layer.

  13. SU-D-210-03: Limited-View Multi-Source Quantitative Photoacoustic Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, J; Gao, H

    2015-06-15

    Purpose: This work is to investigate a novel limited-view multi-source acquisition scheme for the direct and simultaneous reconstruction of optical coefficients in quantitative photoacoustic tomography (QPAT), which has potentially improved signal-to-noise ratio and reduced data acquisition time. Methods: Conventional QPAT is often considered in two steps: first to reconstruct the initial acoustic pressure from the full-view ultrasonic data after each optical illumination, and then to quantitatively reconstruct optical coefficients (e.g., absorption and scattering coefficients) from the initial acoustic pressure, using a multi-source or multi-wavelength scheme. Based on the novel limited-view multi-source scheme proposed here, we have to consider the direct reconstruction of optical coefficients from the ultrasonic data, since the initial acoustic pressure can no longer be reconstructed as an intermediate variable due to the incomplete acoustic data in the proposed limited-view scheme. In this work, based on a coupled photo-acoustic forward model combining the diffusion approximation and the wave equation, we develop a limited-memory quasi-Newton method (LBFGS) for image reconstruction that utilizes the adjoint forward problem for fast computation of gradients. Furthermore, tensor framelet sparsity is utilized to improve the image reconstruction, which is solved by the Alternating Direction Method of Multipliers (ADMM). Results: The simulation was performed on a modified Shepp-Logan phantom to validate the feasibility of the proposed limited-view scheme and its corresponding image reconstruction algorithms. Conclusion: A limited-view multi-source QPAT scheme is proposed, i.e., partial-view acoustic data acquisition accompanying each optical illumination, followed by simultaneous rotations of both the optical sources and the ultrasonic detectors for the next optical illumination. Moreover, LBFGS and ADMM algorithms are developed for the direct reconstruction of optical coefficients from the acoustic data. Jing Feng and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
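
    As a schematic of the gradient-based component only, the toy example below uses SciPy's L-BFGS to fit coefficients from indirect data; the coupled diffusion/wave forward model and its adjoint are replaced by a generic linear operator and its transpose, and all sizes and weights are arbitrary assumptions:

      # Toy L-BFGS reconstruction of coefficients x from indirect data b = A x + noise.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      A = rng.standard_normal((80, 40))              # stand-in for the forward model
      x_true = rng.random(40)
      b = A @ x_true + 0.01 * rng.standard_normal(80)
      beta = 1e-3                                    # illustrative regularization weight

      def objective(x):
          r = A @ x - b
          return 0.5 * r @ r + 0.5 * beta * x @ x

      def gradient(x):
          return A.T @ (A @ x - b) + beta * x        # stands in for the adjoint computation

      res = minimize(objective, np.zeros(40), jac=gradient, method="L-BFGS-B")
      print(res.success, np.linalg.norm(res.x - x_true))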

  14. 75 FR 21577 - Rogue River-Siskiyou National Forest, Powers Ranger District, Coos County, OR; Eden Ridge Timber...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-26

    ... on soil, slope and hydrological concerns. New system road construction, reconstruction of... natural succession processes. The residual trees would have less competition for sunlight, water and soil... designed to: Minimize soil impacts (erosion, compaction and/or displacement); Minimize damage to residual...

  15. The Morava E-theories of finite general linear groups

    NASA Astrophysics Data System (ADS)

    Mattafirri, Sara

    The feasibility of producing an image of the radioactivity distribution within a patient or a confined region of space using the information carried by the gamma-rays emitted from the source is investigated. The imaging approach makes use of parameters related to the gamma-rays that undergo Compton scattering within a detection system, it does not involve the use of pin-holes, and it employs gamma-rays of energy ranging from a few hundred keV to MeVs. The energy range of the photons and the absence of pin-holes aim to provide a larger pool of radioisotopes and higher efficiency than other emission imaging modalities, such as single photon emission computed tomography and positron emission tomography, making it possible to investigate a larger pool of functions and smaller radioactivity doses. The observables available to produce the image are the gamma-ray position of interaction and the energy deposition during Compton scattering within the detection systems. Image reconstruction methodologies such as backprojection and the list-mode maximum likelihood expectation maximization algorithm are characterized and applied to produce images of simulated and experimental sources on the basis of the observed parameters. Given the observables and image reconstruction methodologies, imaging systems based on minimizing the variation of the impulse response with position within the field of view are developed. The approach allows imaging of three-dimensional sources when an imaging system which provides a full 4-pi view of the object is used, and imaging of two-dimensional sources when a single block-type detector which provides one view of the object is used. Geometrical resolution of a few millimeters is obtained at a few centimeters from the detection system when employing gamma-rays of energy on the order of a few hundred keV and current state-of-the-art semiconductor detectors; at this level of resolution, detection efficiency is on the order of 10⁻³ at a few centimeters from the detector when a single block detector a few centimeters in size is used. The resolution significantly improves with increasing energy of the photons and it degrades roughly linearly with increasing distance from the detector; larger detection efficiency can be obtained at the expense of resolution or via targeted configurations of the detector. The results pave the way for image reconstruction of practical gamma-ray emitting sources.
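
    For reference, a minimal binned (not list-mode) MLEM update of the kind underlying the reconstruction described above can be written as follows; the system matrix, data and iteration count are toy stand-ins, not the thesis implementation:

      # Binned MLEM: x <- x / sens * A^T (y / (A x)).
      import numpy as np

      def mlem(A, y, n_iter=50, eps=1e-12):
          x = np.ones(A.shape[1])
          sens = A.T @ np.ones(A.shape[0]) + eps      # sensitivity image
          for _ in range(n_iter):
              proj = A @ x + eps                      # forward projection
              x = x / sens * (A.T @ (y / proj))       # multiplicative EM update
          return x

      rng = np.random.default_rng(3)
      A = rng.random((200, 50))                       # toy system matrix
      x_true = rng.random(50)
      y = rng.poisson(A @ x_true * 20).astype(float)  # Poisson count data
      print(mlem(A, y)[:5])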

  16. A Novel Image Compression Algorithm for High Resolution 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2014-06-01

    This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The Compression Algorithm starts with a single level discrete wavelet transform (DWT) for decomposing an image into four sub-bands. The sub-band LL is transformed by DCT yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix while a DWT is applied again to the DC-matrix resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG concerning higher compression rates with equivalent perceived quality and the ability to more accurately reconstruct the 3D models.
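
    A rough sketch of the first decomposition stage described above is shown below, assuming PyWavelets and SciPy are available; the Minimize-Matrix-Size coding step is not reproduced, and applying the DCT to the whole LL sub-band rather than block-wise is a simplification:

      # One-level 2-D DWT, then a DCT of the LL sub-band into DC- and AC-like parts.
      import numpy as np
      import pywt
      from scipy.fft import dctn, idctn

      image = np.random.rand(256, 256)                    # stand-in for a structured-light image
      LL, (LH, HL, HH) = pywt.dwt2(image, "db2")          # single-level DWT
      coeffs = dctn(LL, norm="ortho")                     # DCT of the low-frequency band
      dc_value = coeffs[0, 0]                             # "DC" term
      ac_matrix = coeffs.copy(); ac_matrix[0, 0] = 0.0    # remaining "AC" coefficients
      LL_back = idctn(coeffs, norm="ortho")               # transforms are invertible
      print(np.allclose(LL, LL_back))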

  17. Reconstructing gravitational wave source parameters via direct comparisons to numerical relativity I: Method

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei

    2016-03-01

    In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.

  18. Model Based Iterative Reconstruction for Bright Field Electron Tomography (Postprint)

    DTIC Science & Technology

    2013-02-01

    ...which is based on the iterative coordinate descent (ICD), works by constructing a substitute to the original cost at every point, and minimizing this... Using Beer's law, the projection integral corresponding to the i-th measurement is given by log(λ_D/λ_i). There can be cases in which the dosage λ_D... Inputs: measurements g, initial reconstruction f′, initial dosage d′, fraction of entries to reject R. Outputs: reconstruction f̂ and dosage parameter d̂.

  19. 40 CFR 63.844 - Emission limits for new or reconstructed sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.844 Emission limits for new or reconstructed sources. (a) Potlines. The owner or...

  20. 40 CFR 63.844 - Emission limits for new or reconstructed sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.844 Emission limits for new or reconstructed sources. (a) Potlines. The owner or...

  1. 40 CFR 63.844 - Emission limits for new or reconstructed sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.844 Emission limits for new or reconstructed sources. (a) Potlines. The owner or...

  2. 40 CFR 63.844 - Emission limits for new or reconstructed sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.844 Emission limits for new or reconstructed sources. (a) Potlines. The owner or...

  3. Sensitivity of a Bayesian atmospheric-transport inversion model to spatio-temporal sensor resolution applied to the 2006 North Korean nuclear test

    NASA Astrophysics Data System (ADS)

    Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.

    2017-12-01

    Atmospheric source reconstruction allows for the probabilistic estimate of source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of atmospheric transport of noble gasses are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation, and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. Reconstructed parameters considered here are source location, source timing and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally-efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high frequency observations and less expensive, low frequency observations.
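
    The final inference stage can be illustrated schematically as follows: a fast surrogate stands in for the dispersion model and a random-walk Metropolis sampler draws from the posterior over two source parameters (release time and quantity). The surrogate function, sensor values, noise level and prior bounds below are entirely hypothetical:

      # Schematic Bayesian source inversion with a surrogate forward model.
      import numpy as np

      rng = np.random.default_rng(4)

      def surrogate(params):
          """Hypothetical trained surrogate: (release time, quantity) -> 3 sensor readings."""
          t0, q = params
          return q * np.exp(-0.5 * ((np.array([1.0, 2.0, 3.0]) - t0) ** 2))

      obs = surrogate((1.8, 5.0)) + 0.05 * rng.standard_normal(3)   # synthetic observations
      sigma = 0.05

      def log_post(params):
          if not (0.0 < params[0] < 5.0 and 0.0 < params[1] < 10.0):  # uniform prior bounds
              return -np.inf
          resid = surrogate(params) - obs
          return -0.5 * np.sum(resid ** 2) / sigma ** 2

      samples, x = [], np.array([2.5, 5.0])
      lp = log_post(x)
      for _ in range(20000):
          prop = x + 0.1 * rng.standard_normal(2)                    # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:                    # Metropolis accept/reject
              x, lp = prop, lp_prop
          samples.append(x.copy())
      print(np.mean(samples[5000:], axis=0))                         # posterior mean estimate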

  4. Perforator Peroneal Artery Flap for Tongue Reconstruction.

    PubMed

    Chauhan, Shubhra; Chavre, Sachin; Chandrashekar, Naveen Hedne; B S, Naveen

    2017-03-01

    Reconstruction has evolved a long way from primary closure to flaps. Over time, a better understanding of flap vascularity has led to the development of innovative reconstructive techniques. These flaps can be raised from various parts of the body for reconstruction and have shown minimal donor-site morbidity. We use one such peroneal artery perforator flap for tongue reconstruction, with the advantages of a thin pliable flap, minimal donor-site morbidity and a hidden scar. Our patient, a 57-year-old lady, underwent wide local excision with selective neck dissection. The perforators are marked about 10 and 15 cm inferior to the fibular head using a hand-held Doppler. The leg is positioned to give better exposure during dissection of the flap, and the flap is harvested under a tourniquet with the pressure kept at 350 mm Hg. The perforator is kept at an eccentric location so as to gain pedicle length. The skin incision is placed over the peroneal muscle and deepened to the deep fascia; the dissection is then continued over the muscle and the perforator arising from the lateral septum. The proximal perforator, about 10 cm from the fibular head, is a constant and larger perforator, which is traced up to the peroneal vessel. We could obtain 6 cm of pedicle length. Finally, the flap is islanded on this perforator, the pedicle is ligated and the flap is harvested. Anastomosis was done to the ipsilateral facial vessels. The donor site is closed primarily; in the upper half, a flap 5 cm in width and 8 to 12 cm in length can be harvested without requiring a skin graft. Various local and free flaps have been used for the reconstruction of partial tongue defects, each with its obvious donor-site problems, such as less pliable and inadequate tissue from local flaps, and the sacrifice of an important artery in the radial forearm flap, which serves as the workhorse in the reconstruction of partial tongue defects. The concept of supermicrosurgery was popularized by the Japanese in the 1980s, and the concept of the angiosome proposed by Taylor paved the way for the development of new flaps. True perforator flaps are those in which the source vessel is left undisturbed and the overlying skin flap is raised. Yoshimura proposed that a cutaneous flap could be raised from the peroneal artery (Br J Plast Surg 42:715-718, 1989). Wolff et al. (Plast Reconstr Surg 113:107-113, 2004) first used a perforator-based peroneal artery flap for oral reconstruction. The location of the perforators varies; hence preoperative localisation can be done by ultrasound Doppler, CT angiography or MR angiography. Disadvantages relative to the radial flap include the varying anatomic location of the perforators, the need for imaging and the difficult dissection of delicate vessels through muscle, and hence a learning curve. Our patient had an arterial thrombus within a few hours postoperatively, which was successfully salvaged with immediate re-exploration and re-anastomosis of the artery. Postoperative healing was uneventful and the donor site was closed primarily without the need for a graft. The perforator peroneal flap serves as a useful armamentarium for the reconstruction of moderate-size defects of the tongue, buccal mucosa and floor of mouth, with the advantages of a thin pliable flap, minimal donor-site morbidity and a hidden scar.

  5. Immediate Implant-based Prepectoral Breast Reconstruction Using a Vertical Incision

    PubMed Central

    Lind, Jeffrey G.; Hopkins, Elizabeth G.

    2015-01-01

    Background: Ideally, breast reconstruction is performed at the time of mastectomy in a single stage with minimal scarring. However, postoperative complications with direct-to-implant subpectoral reconstruction remain significant. These include asymmetry, flap necrosis, animation deformity, and discomfort. We report on a series of patients who have undergone immediate single-stage prepectoral, implant-based breast reconstruction with a smooth, adjustable saline implant covered with mesh/acellular dermal matrix for support using a vertical mastectomy incision. This technique, when combined with an adjustable implant, addresses the complications related to subpectoral implant placement of traditional expanders. Our follow-up time, 4.6 years (55 months), shows a low risk of implant loss and elimination of animation deformity while also providing patients with a safe and aesthetically pleasing result. Methods: All patients who underwent immediate implant-based prepectoral breast reconstruction using a vertical mastectomy incision as a single-staged procedure were included. Charts were reviewed retrospectively. Adjustable smooth round saline implants and mesh/acellular dermal matrix were used for fixation in all cases. Results: Thirty-one patients (62 breasts) underwent single-staged implant-based prepectoral breast reconstruction using a vertical mastectomy incision. Postoperative complications occurred in 9 patients, 6 of which were resolved with postoperative intervention while only 2 cases resulted in implant loss. Conclusions: There can be significant morbidity associated with traditional subpectoral implant-based breast reconstruction. As an alternative, the results of this study show that an immediate single-stage prepectoral breast reconstruction with a smooth saline adjustable implant, using a vertical incision, in conjunction with mesh/matrix support can be performed with excellent aesthetic outcomes and minimal complications. PMID:26180713

  6. Inlining 3d Reconstruction, Multi-Source Texture Mapping and Semantic Analysis Using Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Poznanska, A. M.

    2016-06-01

    This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that at the same time are being used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows to be reintegrated into the original 3D models. Tests on real-world data of Heligoland/ Germany comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows and good visual quality when being rendered with GPU-based perspective correction. As part of the process building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information the roof, wall and ground surfaces found get intersected and limited in their extension to form a closed 3D building hull. For texture mapping the hull polygons are projected into each possible input bitmap to find suitable color sources regarding the coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are being copied into a compact texture atlas without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences potential window objects are evaluated against geometric constraints and conditionally grown, fused and filtered morphologically. The output polygons are vectorized and reintegrated into the previously reconstructed buildings by sparsely ray-tracing their vertices. Finally the enhanced 3D models get stored as textured geometry for visualization and semantically annotated "LOD-2.5" CityGML objects for GIS applications.
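
    The local RANSAC-based plane regression step can be sketched as follows; the thresholds, iteration count and synthetic points are illustrative and this is not the authors' implementation:

      # RANSAC plane fit for a (N, 3) point cloud patch.
      import numpy as np

      def ransac_plane(points, n_iter=500, dist_thresh=0.05, rng=np.random.default_rng(5)):
          """Return ((normal, d), inlier_mask) for the best plane n.x + d = 0."""
          best_inliers, best_model = None, None
          for _ in range(n_iter):
              sample = points[rng.choice(len(points), 3, replace=False)]
              normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
              norm = np.linalg.norm(normal)
              if norm < 1e-9:
                  continue                                   # degenerate (collinear) sample
              normal /= norm
              d = -normal @ sample[0]
              dist = np.abs(points @ normal + d)             # point-to-plane distances
              inliers = dist < dist_thresh
              if best_inliers is None or inliers.sum() > best_inliers.sum():
                  best_inliers, best_model = inliers, (normal, d)
          return best_model, best_inliers

      # toy usage: a noisy horizontal "roof" plane plus scattered outliers
      pts = np.column_stack([np.random.rand(500), np.random.rand(500), 0.01 * np.random.randn(500)])
      pts = np.vstack([pts, np.random.rand(100, 3) * 5])
      model, inliers = ransac_plane(pts)
      print(model[0], inliers.sum())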

  7. Long-term Follow-up with AlloDerm in Breast Reconstruction

    PubMed Central

    2013-01-01

    Summary: Little is known about the long-term fate of acellular dermal matrices in breast implant surgery. A 12-year follow-up case with tissue analysis of AlloDerm in revision breast reconstruction reveals retention of graft volume and integration with an organized collagen structure, minimal capsule formation, and little or no indication of inflammation. PMID:25289211

  8. Long-term Follow-up with AlloDerm in Breast Reconstruction.

    PubMed

    Baxter, Richard A

    2013-05-01

    Little is known about the long-term fate of acellular dermal matrices in breast implant surgery. A 12-year follow-up case with tissue analysis of AlloDerm in revision breast reconstruction reveals retention of graft volume and integration with an organized collagen structure, minimal capsule formation, and little or no indication of inflammation.

  9. Study of a MEMS-based Shack-Hartmann wavefront sensor with adjustable pupil sampling for astronomical adaptive optics.

    PubMed

    Baranec, Christoph; Dekany, Richard

    2008-10-01

    We introduce a Shack-Hartmann wavefront sensor for adaptive optics that enables dynamic control of the spatial sampling of an incoming wavefront using a segmented mirror microelectrical mechanical systems (MEMS) device. Unlike a conventional lenslet array, subapertures are defined by either segments or groups of segments of a mirror array, with the ability to change spatial pupil sampling arbitrarily by redefining the segment grouping. Control over the spatial sampling of the wavefront allows for the minimization of wavefront reconstruction error for different intensities of guide source and different atmospheric conditions, which in turn maximizes an adaptive optics system's delivered Strehl ratio. Requirements for the MEMS devices needed in this Shack-Hartmann wavefront sensor are also presented.
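
    To illustrate why the subaperture (pupil sampling) geometry enters the reconstruction error, the sketch below performs a simple zonal least-squares wavefront reconstruction from Shack-Hartmann slope measurements on a small grid; it is not the instrument's actual reconstructor, and the grid size and pitch are arbitrary:

      # Zonal least-squares wavefront reconstruction from x/y slope maps.
      import numpy as np

      def reconstruct_wavefront(sx, sy, pitch=1.0):
          """sx: (n, n-1) x-slopes, sy: (n-1, n) y-slopes (forward differences)."""
          n = sx.shape[0]
          idx = lambda i, j: i * n + j
          rows, rhs = [], []
          for i in range(n):                         # x-slope equations
              for j in range(n - 1):
                  row = np.zeros(n * n)
                  row[idx(i, j + 1)] = 1.0 / pitch
                  row[idx(i, j)] = -1.0 / pitch
                  rows.append(row); rhs.append(sx[i, j])
          for i in range(n - 1):                     # y-slope equations
              for j in range(n):
                  row = np.zeros(n * n)
                  row[idx(i + 1, j)] = 1.0 / pitch
                  row[idx(i, j)] = -1.0 / pitch
                  rows.append(row); rhs.append(sy[i, j])
          G, s = np.array(rows), np.array(rhs)
          phi, *_ = np.linalg.lstsq(G, s, rcond=None)
          phi = phi.reshape(n, n)
          return phi - phi.mean()                    # remove the unconstrained piston term

      # toy check: a tilted wavefront recovered from its finite-difference slopes
      n = 8
      yy, xx = np.mgrid[0:n, 0:n].astype(float)
      phi_true = 0.3 * xx + 0.1 * yy
      phi = reconstruct_wavefront(np.diff(phi_true, axis=1), np.diff(phi_true, axis=0))
      print(np.allclose(phi, phi_true - phi_true.mean(), atol=1e-8))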

  10. Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.

    PubMed

    Gauthier, P-A; Lecomte, P; Berry, A

    2017-04-01

    Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
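
    The difference in induced sparsity can be illustrated with scikit-learn on synthetic data; the matrix below is an arbitrary stand-in for a real plant (loudspeakers-to-microphones) matrix, and the penalty weights are illustrative rather than the paper's values:

      # Lasso vs. elastic-net source-gain estimation on a synthetic reproduction problem.
      import numpy as np
      from sklearn.linear_model import Lasso, ElasticNet

      rng = np.random.default_rng(6)
      G = rng.standard_normal((64, 128))      # "plant": microphones x loudspeakers
      q_true = np.zeros(128); q_true[[10, 11, 12, 80]] = [1.0, 0.8, 0.9, -1.2]
      p = G @ q_true + 0.01 * rng.standard_normal(64)   # target pressures

      lasso = Lasso(alpha=0.05, max_iter=10000).fit(G, p)
      enet = ElasticNet(alpha=0.05, l1_ratio=0.7, max_iter=10000).fit(G, p)  # mixed l1/l2 penalty

      print("lasso active sources:", np.sum(np.abs(lasso.coef_) > 1e-6))
      print("elastic-net active sources:", np.sum(np.abs(enet.coef_) > 1e-6))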

  11. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate— SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ2), and GCV (that does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
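
    The generic ingredient behind such SURE-type tuning can be sketched as a Monte Carlo estimate of the divergence (Jacobian trace) of the reconstruction operator, which avoids forming the Jacobian explicitly. The sketch below applies it to plain soft-threshold denoising under an assumed noise level; it is not the paper's Predicted-/Projected-SURE implementation.

    ```python
    import numpy as np

    def mc_divergence(f, y, eps=1e-3, rng=None):
        """Monte Carlo estimate of div_y f(y) = trace(df/dy) with one Gaussian probe."""
        b = np.random.default_rng(rng).standard_normal(y.shape)
        return b.ravel() @ (f(y + eps * b) - f(y)).ravel() / eps

    def sure(f, y, sigma):
        """Unbiased estimate of E||f(y) - x||^2 for y = x + N(0, sigma^2 I)."""
        r = f(y) - y
        return r @ r - y.size * sigma**2 + 2 * sigma**2 * mc_divergence(f, y)

    # Toy usage: pick the soft-threshold level that minimizes SURE for a sparse signal.
    rng = np.random.default_rng(0)
    x = (rng.random(2000) < 0.05) * 5.0
    sigma = 0.5
    y = x + sigma * rng.standard_normal(2000)
    soft = lambda t: (lambda z: np.sign(z) * np.maximum(np.abs(z) - t, 0.0))
    best_t = min(np.linspace(0.1, 2.0, 20), key=lambda t: sure(soft(t), y, sigma))
    print("SURE-selected threshold:", round(best_t, 2))
    ```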

  12. TH-E-17A-06: Anatomical-Adaptive Compressed Sensing (AACS) Reconstruction for Thoracic 4-Dimensional Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shieh, C; Kipritidis, J; OBrien, R

    2014-06-15

    Purpose: The Feldkamp-Davis-Kress (FDK) algorithm currently used for clinical thoracic 4-dimensional (4D) cone-beam CT (CBCT) reconstruction suffers from noise and streaking artifacts due to projection under-sampling. Compressed sensing theory enables reconstruction of under-sampled datasets via total-variation (TV) minimization, but TV-minimization algorithms such as adaptive-steepest-descent-projection-onto-convex-sets (ASD-POCS) often converge slowly and are prone to over-smoothing anatomical details. These disadvantages can be overcome by incorporating general anatomical knowledge via anatomy segmentation. Based on this concept, we have developed an anatomical-adaptive compressed sensing (AACS) algorithm for thoracic 4D-CBCT reconstruction. Methods: AACS is based on the ASD-POCS framework, where each iteration consists of a TV-minimization step and a data fidelity constraint step. Prior to every AACS iteration, four major thoracic anatomical structures - soft tissue, lungs, bony anatomy, and pulmonary details - were segmented from the updated solution image. Based on the segmentation, an anatomical-adaptive weighting was applied to the TV-minimization step, so that TV-minimization was enhanced at noisy/streaky regions and suppressed at anatomical structures of interest. The image quality and convergence speed of AACS were compared to those of conventional ASD-POCS using an XCAT digital phantom and a patient scan. Results: For the XCAT phantom, the AACS image represented the ground truth better than the ASD-POCS image, giving a higher structural similarity index (0.93 vs. 0.84) and lower absolute difference (1.1×10⁴ vs. 1.4×10⁴). For the patient case, while both algorithms resulted in much less noise and streaking than FDK, the AACS image showed considerably better contrast and sharpness of the vessels, tumor, and fiducial marker than the ASD-POCS image. In addition, AACS converged over 50% faster than ASD-POCS in both cases. Conclusions: The proposed AACS algorithm was shown to reconstruct thoracic 4D-CBCT images more accurately and with faster convergence than ASD-POCS. The superior image quality and rapid convergence make AACS promising for future clinical use.
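
    The anatomy-weighted TV-minimization ingredient can be sketched as below, on a 2D toy with an assumed per-pixel weight map w standing in for the segmentation-derived weighting; the data-fidelity/POCS step of the full ASD-POCS loop is omitted.

    ```python
    import numpy as np

    def weighted_tv_grad(u, w, eps=1e-8):
        """Gradient of sum_ij w_ij * sqrt(dx_ij^2 + dy_ij^2) using forward differences."""
        dx = np.zeros_like(u); dx[:-1, :] = u[1:, :] - u[:-1, :]
        dy = np.zeros_like(u); dy[:, :-1] = u[:, 1:] - u[:, :-1]
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = w * dx / mag, w * dy / mag
        g = -px - py
        g[1:, :] += px[:-1, :]
        g[:, 1:] += py[:, :-1]
        return g

    def tv_descent_step(u, w, step=0.2):
        """One normalized steepest-descent step on the weighted TV term."""
        g = weighted_tv_grad(u, w)
        n = np.linalg.norm(g)
        return u if n == 0 else u - step * g / n

    # Down-weight smoothing (small w) inside a segmented structure, keep it strong elsewhere.
    u = np.random.rand(64, 64)
    w = np.ones_like(u)
    w[16:48, 16:48] = 0.2
    u = tv_descent_step(u, w)
    ```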

  13. A simple derivation and analysis of a helical cone beam tomographic algorithm for long object imaging via a novel definition of region of interest

    NASA Astrophysics Data System (ADS)

    Hu, Jicun; Tam, Kwok; Johnson, Roger H.

    2004-01-01

    We derive and analyse a simple algorithm first proposed by Kudo et al (2001 Proc. 2001 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine (Pacific Grove, CA) pp 7-10) for long object imaging from truncated helical cone beam data via a novel definition of region of interest (ROI). Our approach is based on the theory of short object imaging by Kudo et al (1998 Phys. Med. Biol. 43 2885-909). One of the key findings in their work is that filtering of the truncated projection can be divided into two parts: one, finite in the axial direction, results from ramp filtering the data within the Tam window. The other, infinite in the z direction, results from unbounded filtering of ray sums over PI lines only. We show that for an ROI defined by PI lines emanating from the initial and final source positions on a helical segment, the boundary data which would otherwise contaminate the reconstruction of the ROI can be completely excluded. This novel definition of the ROI leads to a simple algorithm for long object imaging. The overscan of the algorithm is analytically calculated and it is the same as that of the zero boundary method. The reconstructed ROI can be divided into two regions: one is minimally contaminated by the portion outside the ROI, while the other is reconstructed free of contamination. We validate the algorithm with a 3D Shepp-Logan phantom and a disc phantom.

  14. Superresolution Interferometric Imaging with Sparse Modeling Using Total Squared Variation: Application to Imaging the Black Hole Shadow

    NASA Astrophysics Data System (ADS)

    Kuramochi, Kazuki; Akiyama, Kazunori; Ikeda, Shiro; Tazaki, Fumie; Fish, Vincent L.; Pu, Hung-Yi; Asada, Keiichi; Honma, Mareki

    2018-05-01

    We propose a new imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the ℓ1-norm and a new function named total squared variation (TSV) of the brightness distribution. First, we demonstrate that our technique may achieve a superresolution of ∼30% compared with the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of the proposed techniques in greater detail. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is ≲10% of the traditional CLEAN beam size for ℓ1+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that TSV is well matched to the expected physical properties of the astronomical images and the traditional post-processing technique of Gaussian convolution in interferometric imaging may not be required. We also propose a feature-extraction method to detect circular features from the image of a black hole shadow and use it to evaluate the performance of the image reconstruction. With this method and reconstructed images, the EHT can constrain the radius of the black hole shadow with an accuracy of ∼10%–20% in present simulations for Sgr A*, suggesting that the EHT would be able to provide useful independent measurements of the mass of the supermassive black holes in Sgr A* and also another primary target, M87.
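
    The regularized objective described above can be written compactly as a chi-squared data term plus ℓ1 and TSV penalties. The sketch below only evaluates that objective for an assumed linear (sampled-Fourier) operator A and per-visibility noise levels sigma; none of the solver or imaging pipeline is reproduced.

    ```python
    import numpy as np

    def tsv(img):
        """Total squared variation: sum of squared finite differences of the image."""
        return np.sum(np.diff(img, axis=0) ** 2) + np.sum(np.diff(img, axis=1) ** 2)

    def l1_tsv_objective(img, A, vis, sigma, lam_l1, lam_tsv):
        """Chi-squared misfit to sampled visibilities plus l1 and TSV penalties."""
        resid = (A @ img.ravel() - vis) / sigma
        return np.sum(np.abs(resid) ** 2) + lam_l1 * np.sum(np.abs(img)) + lam_tsv * tsv(img)
    ```

    Because TSV penalizes squared finite differences, it is differentiable everywhere, which is one reason it favors smooth, extended emission rather than the piecewise-flat solutions encouraged by standard total variation.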

  15. Temporal resolution and motion artifacts in single-source and dual-source cardiac CT.

    PubMed

    Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas

    2013-03-01

    The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used in the reconstruction process. The concept of assessing temporal resolution by means of the data employed for reconstruction can nicely be extended from single-source to dual-source CT. However, for advanced (possibly nonlinear iterative) reconstruction algorithms the examined approach fails to deliver accurate results. New methods and measures to assess the temporal resolution of CT images need to be developed to be able to accurately compare the performance of such algorithms.
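
    The two data-based measures can be illustrated numerically: given a sampled weighting function w(t) over the acquisition window, the FWHM-TR and the total TR are simple width computations. A toy trapezoidal weighting is assumed below.

    ```python
    import numpy as np

    def fwhm_and_total_width(t, w):
        """FWHM-TR and total TR of a sampled data weighting function w(t)."""
        above = t[w >= 0.5 * w.max()]
        support = t[w > 0]
        return above.max() - above.min(), support.max() - support.min()

    # Trapezoidal weighting over a 0.5 s data window (times in seconds).
    t = np.linspace(0.0, 0.5, 501)
    w = np.clip(np.minimum(t, 0.5 - t) / 0.1, 0.0, 1.0)
    print(fwhm_and_total_width(t, w))   # the FWHM-TR is smaller than the total TR
    ```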

  16. Reconstructing gravitational wave source parameters via direct comparisons to numerical relativity II: Applications

    NASA Astrophysics Data System (ADS)

    O'Shaughnessy, Richard; Lange, Jacob; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark

    2016-03-01

    In this talk, we apply a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. We illustrate how to use only comparisons between synthetic data and these simulations to reconstruct properties of a synthetic candidate source. We demonstrate using selected examples that we can reconstruct posterior distributions obtained by other Bayesian methods with our sparse grid. We describe how followup simulations can corroborate and improve our understanding of a candidate signal.

  17. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
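
    The MAP formulation can be illustrated generically, with an assumed linear forward operator H and a quadratic neighbor-difference prior standing in for the Lorentz-TEM physics; this is a sketch of the estimation idea, not the authors' MBIR code.

    ```python
    import numpy as np

    def map_estimate(H, y, beta=0.1, n_iter=2000):
        """Minimize ||y - H x||^2 + beta * ||D x||^2 by gradient descent (D = 1-D differences)."""
        n = H.shape[1]
        D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]          # simple Gaussian-MRF-style prior operator
        step = 1.0 / (np.linalg.norm(H, 2) ** 2 + beta * np.linalg.norm(D, 2) ** 2)
        x = np.zeros(n)
        for _ in range(n_iter):
            x -= step * (H.T @ (H @ x - y) + beta * (D.T @ (D @ x)))
        return x

    rng = np.random.default_rng(1)
    H = rng.standard_normal((80, 60))                      # assumed linear forward model
    x_true = np.cumsum(0.1 * rng.standard_normal(60))      # smooth ground truth
    y = H @ x_true + 0.05 * rng.standard_normal(80)
    print("reconstruction error:", np.linalg.norm(map_estimate(H, y) - x_true))
    ```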

  18. Artifact reduction in short-scan CBCT by use of optimization-based reconstruction

    PubMed Central

    Zhang, Zheng; Han, Xiao; Pearson, Erik; Pelizzari, Charles; Sidky, Emil Y; Pan, Xiaochuan

    2017-01-01

    There is increasing interest in optimization-based reconstruction in research on, and applications of, cone-beam computed tomography (CBCT) because it has been shown to have the potential to reduce artifacts observed in reconstructions obtained with the Feldkamp–Davis–Kress (FDK) algorithm (or its variants), which is used extensively for image reconstruction in current CBCT applications. In this work, we carried out a study on optimization-based reconstruction for possible reduction of artifacts in FDK reconstruction specifically from short-scan CBCT data. The investigation includes a set of optimization programs such as the image-total-variation (TV)-constrained data-divergence minimization, data-weighting matrices such as the Parker weighting matrix, and objects of practical interest for demonstrating and assessing the degree of artifact reduction. Results of this investigation reveal that appropriately designed optimization-based reconstruction, including the image-TV-constrained reconstruction, can reduce significant artifacts observed in FDK reconstruction in CBCT with a short-scan configuration. PMID:27046218

  19. Investigation of iterative image reconstruction in low-dose breast CT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Yang, Kai; Boone, John M.; Han, Xiao; Sidky, Emil Y.; Pan, Xiaochuan

    2014-06-01

    There is interest in developing computed tomography (CT) dedicated to breast-cancer imaging. Because breast tissues are radiation-sensitive, the total radiation exposure in a breast-CT scan is kept low, often comparable to a typical two-view mammography exam, thus resulting in a challenging low-dose-data-reconstruction problem. In recent years, evidence has been found that suggests that iterative reconstruction may yield images of improved quality from low-dose data. In this work, based upon the constrained image total-variation minimization program and its numerical solver, i.e., the adaptive steepest descent-projection onto the convex set (ASD-POCS), we investigate and evaluate iterative image reconstructions from low-dose breast-CT data of patients, with a focus on identifying and determining key reconstruction parameters, devising surrogate utility metrics for characterizing reconstruction quality, and tailoring the program and ASD-POCS to the specific reconstruction task under consideration. The ASD-POCS reconstructions appear to outperform the corresponding clinical FDK reconstructions, in terms of subjective visualization and surrogate utility metrics.

  20. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction.

    PubMed

    Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc

    2017-11-01

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE PAGES

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...

    2017-07-03

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  2. Penalized Weighted Least-Squares Approach to Sinogram Noise Reduction and Image Reconstruction for Low-Dose X-Ray Computed Tomography

    PubMed Central

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-01-01

    Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by iterative Gauss-Seidel algorithm. Another employs Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively to each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third one models the spatial correlations among image pixels in image domain also by a MRF Gibbs functional and minimizes the PWLS by iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in low contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831
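
    A one-dimensional sketch of the PWLS idea with a Gauss-Seidel solver is given below; the weights are the inverse noise variances and the penalty is a quadratic neighbor difference, so each per-sample update has a closed form. The 2D sinogram model and the KL-transform variant are not reproduced.

    ```python
    import numpy as np

    def pwls_gauss_seidel(y, var, beta=1.0, n_sweeps=50):
        """Minimize sum_i (y_i - s_i)^2 / var_i + beta * sum_i (s_i - s_{i-1})^2 by Gauss-Seidel."""
        w = 1.0 / var                      # noise moments enter as per-sample weights
        s = y.astype(float).copy()
        for _ in range(n_sweeps):
            for i in range(len(s)):
                nbrs = [s[j] for j in (i - 1, i + 1) if 0 <= j < len(s)]
                s[i] = (w[i] * y[i] + beta * sum(nbrs)) / (w[i] + beta * len(nbrs))
        return s

    rng = np.random.default_rng(2)
    clean = 100 * np.sin(np.linspace(0, np.pi, 128))
    var = 1.0 + 0.05 * clean               # signal-dependent variance, as in low-dose data
    y = clean + np.sqrt(var) * rng.standard_normal(128)
    print("mean absolute error after PWLS:", np.abs(pwls_gauss_seidel(y, var) - clean).mean())
    ```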

  3. C-arm based cone-beam CT using a two-concentric-arc source trajectory: system evaluation

    NASA Astrophysics Data System (ADS)

    Zambelli, Joseph; Zhuang, Tingliang; Nett, Brian E.; Riddell, Cyril; Belanger, Barry; Chen, Guang-Hong

    2008-03-01

    The current x-ray source trajectory for C-arm based cone-beam CT is a single arc. Reconstruction from data acquired with this trajectory yields cone-beam artifacts for regions other than the central slice. In this work we present the preliminary evaluation of reconstruction from a source trajectory of two concentric arcs using a flat-panel detector equipped C-arm gantry (GE Healthcare Innova 4100 system, Waukesha, Wisconsin). The reconstruction method employed is a summation of FDK-type reconstructions from the two individual arcs. For the angle between arcs studied here, 30°, this method offers a significant reduction in the visibility of cone-beam artifacts, with the additional advantages of simplicity and ease of implementation due to the fact that it is a direct extension of the reconstruction method currently implemented on commercial systems. Reconstructed images from data acquired from the two arc trajectory are compared to those reconstructed from a single arc trajectory and evaluated in terms of spatial resolution, low contrast resolution, noise, and artifact level.

  4. C-arm based cone-beam CT using a two-concentric-arc source trajectory: system evaluation.

    PubMed

    Zambelli, Joseph; Zhuang, Tingliang; Nett, Brian E; Riddell, Cyril; Belanger, Barry; Chen, Guang-Hong

    2008-01-01

    The current x-ray source trajectory for C-arm based cone-beam CT is a single arc. Reconstruction from data acquired with this trajectory yields cone-beam artifacts for regions other than the central slice. In this work we present the preliminary evaluation of reconstruction from a source trajectory of two concentric arcs using a flat-panel detector equipped C-arm gantry (GE Healthcare Innova 4100 system, Waukesha, Wisconsin). The reconstruction method employed is a summation of FDK-type reconstructions from the two individual arcs. For the angle between arcs studied here, 30°, this method offers a significant reduction in the visibility of cone-beam artifacts, with the additional advantages of simplicity and ease of implementation due to the fact that it is a direct extension of the reconstruction method currently implemented on commercial systems. Reconstructed images from data acquired from the two arc trajectory are compared to those reconstructed from a single arc trajectory and evaluated in terms of spatial resolution, low contrast resolution, noise, and artifact level.

  5. Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets

    DOE PAGES

    Salloum, Maher; Fabian, Nathan D.; Hensinger, David M.; ...

    2017-08-09

    Exascale computing promises quantities of data too large to efficiently store and transfer across networks in order to be able to analyze and visualize the results. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space such as wavelet bases and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm that we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Finally, our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.
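
    The compress-then-reconstruct loop can be illustrated with a much simpler stand-in: random projections of a signal that is sparse in a DCT basis, recovered with plain orthogonal matching pursuit. The tree wavelets and the improved stagewise OMP of the paper are not reproduced here.

    ```python
    import numpy as np
    from scipy.fft import idct

    def omp(A, y, k):
        """Plain orthogonal matching pursuit: greedily grow the support to size k."""
        resid, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ resid))))
            sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            resid = y - A[:, support] @ sol
        coef = np.zeros(A.shape[1])
        coef[support] = sol
        return coef

    rng = np.random.default_rng(3)
    n, m, k = 256, 64, 8                          # signal length, samples, sparsity
    Psi = idct(np.eye(n), axis=0, norm="ortho")   # orthonormal DCT dictionary
    c_true = np.zeros(n)
    c_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x = Psi @ c_true                              # field values at the mesh points
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    y = Phi @ x                                   # "in situ" compressed samples
    x_rec = Psi @ omp(Phi @ Psi, y, k)
    print("relative error:", np.linalg.norm(x_rec - x) / np.linalg.norm(x))
    ```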

  6. A Note on Alternating Minimization Algorithm for the Matrix Completion Problem

    DOE PAGES

    Gamarnik, David; Misra, Sidhant

    2016-06-06

    Here, we consider the problem of reconstructing a low-rank matrix from a subset of its entries and analyze two variants of the so-called alternating minimization algorithm, which has been proposed in the past. We establish that when the underlying matrix has rank one, has positive bounded entries, and the graph underlying the revealed entries has diameter which is logarithmic in the size of the matrix, both algorithms succeed in reconstructing the matrix approximately in polynomial time starting from an arbitrary initialization. We further provide simulation results which suggest that the second variant, which is based on message-passing-type updates, performs significantly better.
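
    For the rank-one case analyzed here, the basic alternating minimization updates have a simple closed form: with v fixed, each u_i is a least-squares ratio over its revealed entries, and symmetrically for v. A small numerical sketch with synthetic data (not the paper's message-passing variant):

    ```python
    import numpy as np

    def altmin_rank1(M_obs, mask, n_iter=50):
        """Alternately solve the per-row least squares for u and v on revealed entries."""
        u = np.ones(M_obs.shape[0])
        v = np.ones(M_obs.shape[1])
        for _ in range(n_iter):
            u = (mask * M_obs) @ v / np.maximum((mask * v**2).sum(axis=1), 1e-12)
            v = (mask * M_obs).T @ u / np.maximum((mask.T * u**2).sum(axis=1), 1e-12)
        return np.outer(u, v)

    rng = np.random.default_rng(4)
    u0 = rng.uniform(1, 2, 30)
    v0 = rng.uniform(1, 2, 40)                    # positive, bounded entries
    M = np.outer(u0, v0)
    mask = rng.random((30, 40)) < 0.3             # revealed entries
    print("max entrywise error:", np.abs(altmin_rank1(M * mask, mask) - M).max())
    ```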

  7. Anesthesia for minimally invasive chest wall reconstructive surgeries: Our experience and review of literature

    PubMed Central

    Shah, Shagun Bhatia; Hariharan, Uma; Bhargava, Ajay Kumar; Darlong, Laleng M.

    2017-01-01

    Minimal access procedures have revolutionized the field of surgery and opened newer challenges for the anesthesiologists. Pectus carinatum or pigeon chest is an uncommon chest wall deformity characterized by a protruding breast bone (sternum) and ribs caused by an overgrowth of the costal cartilages. It can cause a multitude of problems, including severe pain from an intercostal neuropathy, respiratory dysfunction, and psychologic issues from the cosmetic disfigurement. Pulmonary function indices, namely, forced expiratory volume over 1 s, forced vital capacity, vital capacity, and total lung capacity are markedly compromised in pectus excavatum. Earlier, open surgical correction in the form of the Ravitch procedure was followed. Currently, in the era of minimally invasive surgery, Nuss technique (pectus bar procedure) is a promising step in chest wall reconstructive surgery for pectus excavatum. Reverse Nuss is a corrective, minimally invasive surgery for pectus carinatum chest deformity. A tailor-made anesthetic technique for this new procedure has been described here based on the authors’ personal experience and thorough review of literature based on Medline, Embase, and Scopus databases search. PMID:28757834

  8. Quadriceps Tendon Autograft Medial Patellofemoral Ligament Reconstruction.

    PubMed

    Fink, Christian; Steensen, Robert; Gföller, Peter; Lawton, Robert

    2018-06-01

    Critically evaluate the published literature related to quadriceps tendon (QT) medial patellofemoral ligament (MPFL) reconstruction. Hamstring tendon (HT) MPFL reconstruction techniques have been shown to successfully restore patella stability, but complications including patella fracture are reported. Quadriceps tendon (QT) reconstruction techniques with an intact graft pedicle on the patella side have the advantage that patella bone tunnel drilling and fixation are no longer needed, reducing risk of patella fracture. Several QT MPFL reconstruction techniques, including minimally invasive surgical (MIS) approaches, have been published with promising clinical results and fewer complications than with HT techniques. Parallel laboratory studies have shown macroscopic anatomy and biomechanical properties of QT are more similar to native MPFL than hamstring (HS) HT, suggesting QT may more accurately restore native joint kinematics. Quadriceps tendon MPFL reconstruction, via both open and MIS techniques, have promising clinical results and offer valuable alternatives to HS grafts for primary and revision MPFL reconstruction in both children and adults.

  9. Arthroscopic-Assisted Triangular Fibrocartilage Complex Reconstruction.

    PubMed

    Chu-Kay Mak, Michael; Ho, Pak-Cheong

    2017-11-01

    Injury of the triangular fibrocartilage complex (TFCC) is a common cause of ulnar-sided wrist pain. Volar and dorsal radioulnar ligaments and their foveal insertion are the most important stabilizing components of the TFCC. In irreparable tears, anatomic reconstruction of the TFCC aims to restore normal biomechanics and stability of the distal radioulnar joint. We proposed a novel arthroscopic-assisted technique using a palmaris longus tendon graft. Arthroscopic-assisted TFCC reconstruction is a safe and effective approach with outcomes comparable to conventional open reconstruction and may result in a better range of motion from minimizing soft tissue dissection and subsequent scarring. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.

    PubMed

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-16

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
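
    The Richardson-Lucy building block itself can be sketched as below: a generic deconvolution iteration with an assumed non-negative, normalized PSF. The paper's two filtering steps and SIM-specific processing are not shown; scikit-image provides an equivalent routine, skimage.restoration.richardson_lucy.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
        """Standard RL iteration; psf is assumed non-negative and normalized to unit sum."""
        image = np.asarray(image, dtype=float)
        estimate = np.full(image.shape, image.mean())
        psf_flip = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            estimate *= fftconvolve(image / np.maximum(blurred, eps), psf_flip, mode="same")
        return estimate

    # Tiny usage with a Gaussian blur kernel:
    yy, xx = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(xx**2 + yy**2) / 8.0); psf /= psf.sum()
    blurred_img = fftconvolve(np.pad(np.ones((10, 10)), 20), psf, mode="same")
    restored = richardson_lucy(blurred_img, psf)
    ```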

  11. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution

    NASA Astrophysics Data System (ADS)

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-01

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.

  12. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    NASA Astrophysics Data System (ADS)

    Koulouri, Alexandra; Brookes, Mike; Rimpiläinen, Ville

    2017-01-01

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.

  13. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
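
    The ℓ2 minimum-norm approach referred to above reduces to a regularized linear solve. The sketch below uses a random leadfield as a stand-in for a real head model (packages such as MNE-Python compute actual leadfields and noise covariances); the regularization scaling is an assumed heuristic.

    ```python
    import numpy as np

    def minimum_norm(L, b, lam=0.01):
        """l2 minimum-norm estimate: argmin ||b - L j||^2 + reg * ||j||^2."""
        n_sens = L.shape[0]
        gram = L @ L.T
        reg = lam * np.trace(gram) / n_sens        # simple regularization heuristic
        return L.T @ np.linalg.solve(gram + reg * np.eye(n_sens), b)

    rng = np.random.default_rng(5)
    L = rng.standard_normal((200, 4000))           # sensors x candidate source locations
    j_true = np.zeros(4000)
    j_true[1234] = 5.0                             # one active source
    b = L @ j_true + 0.5 * rng.standard_normal(200)
    j_hat = minimum_norm(L, b)
    print("peak of the estimate at source index:", int(np.argmax(np.abs(j_hat))))
    ```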

  14. Discussion of Source Reconstruction Models Using 3D MCG Data

    NASA Astrophysics Data System (ADS)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed the source reconstruction of magnetocardiographic signals generated by the human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model has the better accuracy in performing the source reconstructions, and that 3D MCG data allow finding smaller differences between the different source models.

  15. 40 CFR Table 1 to Subpart Mmmmm of... - Emission Limits

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Each existing, new, or reconstructed loop slitter adhesive use affected source Not use any HAP-based adhesives. 2. Each new or reconstructed flame lamination affected source Reduce HAP emissions by 90 percent...

  16. A 80-Year Long Coral-Based Temperature Reconstruction for the Last Interglacial from Northern Hispaniola

    NASA Astrophysics Data System (ADS)

    DeLong, K. L.; Ouellette, G., Jr.; Goodkin, N.; Martin, E. R.; Rosendahl, D. H.; Taylor, F. W.; WU, C. C.; Shen, C. C.

    2016-12-01

    The Last Interglacial (LIG; 117-128 ka), when sea level was 6 m higher than today, can serve as an analog for future climate scenarios, yet minimal paleoclimatic information exists with seasonal to decadal resolution. The island of Hispaniola is a particularly desirable site for producing sea surface temperature (SST) reconstructions, as it displays significant correlations with SST and precipitation anomalies for much of the tropical and North Atlantic Ocean, and Hispaniola is located in the northern sector of the Atlantic Warm Pool (AWP), a primary moisture source for precipitation in the Americas. Here we present an early LIG (128,626 ±438 (2σ) years) monthly-resolved coral Sr/Ca-SST reconstruction from a well-preserved Siderastrea siderea subfossil coral spanning 80 years from the northern coast of Hispaniola (19.913ºN, 70.925ºW). We compare our LIG SST reconstruction with coral Sr/Ca-SST from three modern coral microatolls of the same species, the longest spanning 80 years, recovered near Port-au-Prince, Haiti (18.479070°N, 72.668659°W) after the 2010 Haiti earthquake, as well as with a 125 ka LIG model simulation spanning 300 years. We find similar mean SST for the LIG (27.4ºC) and modern corals (27.9ºC), consistent with MIS 5e reconstructions in the tropical oceans (27.3-29.6ºC); however, these reconstructions are warmer than the LIG model mean SST for our study site (25.6ºC). Seasonal variability is similar (1.5ºC LIG, 1.0-1.7ºC modern), consistent with the findings of LIG coral reconstructions using the tropical Atlantic coral Diploria strigosa and with climate model simulations suggesting that orbital insolation changes drive LIG seasonality. However, our LIG coral contains decadal variability (1.7-3.1ºC) not evident in the shorter LIG coral reconstructions, modern SST records, or modern coral reconstructions, yet present in the LIG model simulation for our study site. This decadal variability may reflect variations in the northern extent of the AWP on decadal time scales, which may alter trade wind strength, westward moisture transport to the Americas, and precipitation in the Atlantic.

  17. As above, so below? Towards understanding inverse models in BCI

    NASA Astrophysics Data System (ADS)

    Lindgren, Jussi T.

    2018-02-01

    Objective. In brain-computer interfaces (BCI), measurements of the user’s brain activity are classified into commands for the computer. With EEG-based BCIs, the origins of the classified phenomena are often considered to be spatially localized in the cortical volume and mixed in the EEG. We investigate if more accurate BCIs can be obtained by reconstructing the source activities in the volume. Approach. We contrast the physiology-driven source reconstruction with data-driven representations obtained by statistical machine learning. We explain these approaches in a common linear dictionary framework and review the different ways to obtain the dictionary parameters. We consider the effect of source reconstruction on some major difficulties in BCI classification, namely information loss, feature selection and nonstationarity of the EEG. Main results. Our analysis suggests that the approaches differ mainly in their parameter estimation. Physiological source reconstruction may thus be expected to improve BCI accuracy if machine learning is not used or where it produces less optimal parameters. We argue that the considered difficulties of surface EEG classification can remain in the reconstructed volume and that data-driven techniques are still necessary. Finally, we provide some suggestions for comparing approaches. Significance. The present work illustrates the relationships between source reconstruction and machine learning-based approaches for EEG data representation. The provided analysis and discussion should help in understanding, applying, comparing and improving such techniques in the future.

  18. Compartmentalized Low-Rank Recovery for High-Resolution Lipid Unsuppressed MRSI

    PubMed Central

    Bhattacharya, Ipshita; Jacob, Mathews

    2017-01-01

    Purpose To introduce a novel algorithm for the recovery of high-resolution magnetic resonance spectroscopic imaging (MRSI) data with minimal lipid leakage artifacts, from dual-density spiral acquisition. Methods The reconstruction of MRSI data from dual-density spiral data is formulated as a compartmental low-rank recovery problem. The MRSI dataset is modeled as the sum of metabolite and lipid signals, each of which is support limited to the brain and extracranial regions, respectively, in addition to being orthogonal to each other. The reconstruction problem is formulated as an optimization problem, which is solved using iterative reweighted nuclear norm minimization. Results The comparisons of the scheme against dual-resolution reconstruction algorithm on numerical phantom and in vivo datasets demonstrate the ability of the scheme to provide higher spatial resolution and lower lipid leakage artifacts. The experiments demonstrate the ability of the scheme to recover the metabolite maps, from lipid unsuppressed datasets with echo time (TE)=55 ms. Conclusion The proposed reconstruction method and data acquisition strategy provide an efficient way to achieve high-resolution metabolite maps without lipid suppression. This algorithm would be beneficial for fast metabolic mapping and extension to multislice acquisitions. PMID:27851875
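
    A core ingredient of such (re)weighted nuclear-norm schemes is singular value soft-thresholding, the proximal operator of the nuclear norm. The sketch below applies it to a synthetic low-rank-plus-noise matrix and is not the paper's compartmentalized MRSI algorithm.

    ```python
    import numpy as np

    def svt(X, tau):
        """Singular value soft-thresholding: prox of tau * ||X||_* (nuclear norm)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    rng = np.random.default_rng(6)
    low_rank = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 64))   # space x time
    noisy = low_rank + 0.3 * rng.standard_normal((200, 64))
    denoised = svt(noisy, tau=5.0)
    print(np.linalg.norm(denoised - low_rank) < np.linalg.norm(noisy - low_rank))
    ```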

  19. Novel approach for tomographic reconstruction of gas concentration distributions in air: Use of smooth basis functions and simulated annealing

    NASA Astrophysics Data System (ADS)

    Drescher, A. C.; Gadgil, A. J.; Price, P. N.; Nazaroff, W. W.

    Optical remote sensing and iterative computed tomography (CT) can be applied to measure the spatial distribution of gaseous pollutant concentrations. We conducted chamber experiments to test this combination of techniques using an open path Fourier transform infrared spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). Although ART converged to solutions that showed excellent agreement with the measured ray-integral concentrations, the solutions were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was developed that combines (1) the superposition of bivariate Gaussians to represent the concentration distribution and (2) a simulated annealing minimization routine to find the parameters of the Gaussian basis functions that result in the best fit to the ray-integral concentration data. This method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present an analysis of two sets of experimental data that compares the performance of ART and SBFM. We conclude that SBFM is a superior CT reconstruction method for practical indoor and outdoor air monitoring applications.
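
    The SBFM idea can be sketched on a toy geometry: parameterize the concentration map with a single bivariate Gaussian, compute its ray integrals along assumed optical paths, and fit the parameters to the beam data with simulated annealing (here SciPy's dual_annealing). The chamber layout, ray geometry, and single-Gaussian basis are assumptions, not the authors' setup.

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    def ray_integral(params, p0, p1, n_samples=200):
        """Approximate the line integral of one bivariate Gaussian along segment p0 -> p1."""
        amp, x0, y0, s = params
        t = np.linspace(0.0, 1.0, n_samples)
        pts = p0[None, :] + t[:, None] * (p1 - p0)[None, :]
        vals = amp * np.exp(-((pts[:, 0] - x0) ** 2 + (pts[:, 1] - y0) ** 2) / (2 * s**2))
        return vals.mean() * np.linalg.norm(p1 - p0)

    # Synthetic open-path measurements across a 10 m x 10 m chamber (two ray families).
    true_params = np.array([2.0, 6.0, 3.5, 1.2])
    rays = [(np.array([0.0, y]), np.array([10.0, y])) for y in np.linspace(0.5, 9.5, 8)]
    rays += [(np.array([x, 0.0]), np.array([x, 10.0])) for x in np.linspace(0.5, 9.5, 8)]
    data = np.array([ray_integral(true_params, a, b) for a, b in rays])

    def misfit(params):
        model = np.array([ray_integral(params, a, b) for a, b in rays])
        return np.sum((model - data) ** 2)

    bounds = [(0.1, 5.0), (0.0, 10.0), (0.0, 10.0), (0.5, 3.0)]
    result = dual_annealing(misfit, bounds, maxiter=200, seed=0)
    print("fitted parameters:", np.round(result.x, 2), "true:", true_params)
    ```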

  20. Visualizing Volcanic Clouds in the Atmosphere and Their Impact on Air Traffic.

    PubMed

    Gunther, Tobias; Schulze, Maik; Friederici, Anke; Theisel, Holger

    2016-01-01

    Volcanic eruptions are not only hazardous in the direct vicinity of a volcano, but they also affect the climate and air travel over great distances. This article sheds light on the Grímsvötn, Puyehue-Cordón Caulle, and Nabro eruptions in 2011. The results were developed for the 2014 IEEE Scientific Visualization Contest, which centers on the fusion of multiple satellite data modalities to reconstruct and assess the movement of volcanic ash and sulfate aerosol emissions. Using data from three volcanic eruptions that occurred in the span of approximately three weeks, the authors study the agreement of the complementary satellite data, reconstruct sulfate aerosol and volcanic ash clouds, visualize endangered flight routes, minimize occlusion in particle trajectory visualizations, and focus on the main pathways of Nabro's sulfate aerosol into the stratosphere. A video with animations of the reconstructed ash clouds is available at https://youtu.be/D9DvJ5AvZAs.

  1. Paraspinal Transposition Flap for Reconstruction of Sacral Soft Tissue Defects: A Series of 53 Cases from a Single Institute

    PubMed Central

    Chattopadhyay, Debarati; Agarwal, Akhilesh Kumar; Guha, Goutam; Bhattacharya, Nirjhar; Chumbale, Pawan K; Gupta, Souradip; Murmu, Marang Buru

    2014-01-01

    Study Design Case series. Purpose To describe paraspinal transposition flap for coverage of sacral soft tissue defects. Overview of Literature Soft tissue defects in the sacral region pose a major challenge to the reconstructive surgeon. Goals of sacral wound reconstruction are to provide a durable skin and soft tissue cover adequate for even large sacral defects; minimize recurrence; and minimize donor site morbidity. Various musculocutaneous and fasciocutanous flaps have been described in the literature. Methods The flap was applied in 53 patients with sacral soft tissue defects of diverse etiology. Defects ranged in size from small (6 cm×5 cm) to extensive (21 cm×10 cm). The median age of the patients was 58 years (range, 16-78 years). Results There was no flap necrosis. Primary closure of donor sites was possible in all the cases. The median follow up of the patients was 33 months (range, 4-84 months). The aesthetic outcomes were acceptable. There has been no recurrence of pressure sores. Conclusions The authors conclude that paraspinal transposition flap is suitable for reconstruction of large sacral soft tissue defects with minimum morbidity and excellent long term results. PMID:24967044

  2. Attractor reconstruction for non-linear systems: a methodological note

    USGS Publications Warehouse

    Nichols, J.M.; Nichols, J.D.

    2001-01-01

    Attractor reconstruction is an important step in the process of making predictions for non-linear time-series and in the computation of certain invariant quantities used to characterize the dynamics of such series. The utility of computed predictions and invariant quantities is dependent on the accuracy of attractor reconstruction, which in turn is determined by the methods used in the reconstruction process. This paper suggests methods by which the delay and embedding dimension may be selected for a typical delay coordinate reconstruction. A comparison is drawn between the use of the autocorrelation function and mutual information in quantifying the delay. In addition, a false nearest neighbor (FNN) approach is used in minimizing the number of delay vectors needed. Results highlight the need for an accurate reconstruction in the computation of the Lyapunov spectrum and in prediction algorithms.
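
    A minimal delay-coordinate reconstruction along these lines picks the delay from the first zero crossing of the autocorrelation function and stacks lagged copies of the series; the false-nearest-neighbor test for choosing the embedding dimension is omitted in this sketch.

    ```python
    import numpy as np

    def acf_first_zero(x, max_lag=500):
        """Delay candidate: first zero crossing of the autocorrelation function."""
        x = x - x.mean()
        for lag in range(1, max_lag):
            if np.dot(x[:-lag], x[lag:]) <= 0:
                return lag
        return max_lag

    def delay_embed(x, dim, tau):
        """Stack lagged copies of x into delay vectors of dimension dim."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    t = np.linspace(0, 60, 3000)
    x = np.sin(t) + 0.05 * np.random.default_rng(7).standard_normal(t.size)  # observed series
    tau = acf_first_zero(x)
    vectors = delay_embed(x, dim=3, tau=tau)
    print("delay:", tau, "embedded shape:", vectors.shape)
    ```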

  3. Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources.

    PubMed

    Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter

    2016-01-01

    Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. EEG data were generated by simulating multiple cortical sources (2-4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms.

  4. Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources

    PubMed Central

    Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter

    2016-01-01

    Background Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. Methods EEG data were generated by simulating multiple cortical sources (2–4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. Results While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms. PMID:26809000

  5. [The use of open source software in graphic anatomic reconstructions and in biomechanic simulations].

    PubMed

    Ciobanu, O

    2009-01-01

    The objective of this study was to obtain three-dimensional (3D) images and to perform biomechanical simulations starting from DICOM images obtained by computed tomography (CT). Open-source software was used to prepare digitized 2D images of tissue sections and to create 3D reconstructions from the segmented structures. Finally, the 3D images were used in open-source software to perform biomechanical simulations. This study demonstrates the applicability and feasibility of currently available open-source software for 3D reconstruction and biomechanical simulation. The use of open-source software may improve the efficiency of investments in imaging technologies and in CAD/CAM technologies for implant and prosthesis fabrication, which otherwise require expensive specialized software.

  6. 40 CFR Table 1b to Subpart Zzzz of... - Operating Limitations for Existing, New, and Reconstructed Spark Ignition 4SRB Stationary RICE...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., and Reconstructed Spark Ignition 4SRB Stationary RICE >500 HP Located at a Major Source of HAP Emissions and Existing Spark Ignition 4SRB Stationary RICE >500 HP Located at an Area Source of HAP... Limitations for Existing, New, and Reconstructed Spark Ignition 4SRB Stationary RICE >500 HP Located at a...

  7. 40 CFR Table 1b to Subpart Zzzz of... - Operating Limitations for Existing, New, and Reconstructed Spark Ignition 4SRB Stationary RICE...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., New, and Reconstructed Spark Ignition 4SRB Stationary RICE >500 HP Located at a Major Source of HAP Emissions and Existing Spark Ignition 4SRB Stationary RICE >500 HP Located at an Area Source of HAP... Limitations for Existing, New, and Reconstructed Spark Ignition 4SRB Stationary RICE >500 HP Located at a...

  8. 40 CFR Table 1a to Subpart Zzzz of... - Emission Limitations for Existing, New, and Reconstructed Spark Ignition, 4SRB Stationary RICE...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., and Reconstructed Spark Ignition, 4SRB Stationary RICE > 500 HP Located at a Major Source of HAP... Limitations for Existing, New, and Reconstructed Spark Ignition, 4SRB Stationary RICE > 500 HP Located at a... stationary RICE >500 HP located at a major source of HAP emissions: For each . . . You must meet the...

  9. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koulouri, Alexandra; Brookes, Mike

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field. Highlights: • Vector tomography is used to reconstruct electric fields generated by dipole sources. • Inverse solutions are based on longitudinal and transverse line integral measurements. • Transverse line integral measurements are used as a sparsity constraint. • Numerical procedure to approximate the line integrals is described in detail. • Patterns of the studied electric fields are correctly estimated.

  10. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    PubMed

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible as a trivial extension of the method for simple energy models. Then, we present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold that guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is intentionally generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA-interaction prediction are expected to profit even more strongly than "standard" MFE folding. SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.
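
    For readers unfamiliar with fold reconstruction, the short Python sketch below runs a plain Nussinov-style base-pair maximization with an explicit traceback. It is not SparseMFEFold (no energy model, no candidate lists, no trace-arrow garbage collection); it only illustrates the traceback step that space-efficient algorithms must reproduce without keeping the full dynamic-programming matrices.

        def pairs(a, b):
            return {a, b} in ({'A', 'U'}, {'G', 'C'}, {'G', 'U'})

        def nussinov(seq, min_loop=3):
            """Base-pair maximization with traceback to a dot-bracket string."""
            n = len(seq)
            M = [[0]*n for _ in range(n)]
            for span in range(min_loop + 1, n):
                for i in range(n - span):
                    j = i + span
                    best = M[i][j-1]                      # j unpaired
                    for k in range(i, j - min_loop):      # j paired with k
                        if pairs(seq[k], seq[j]):
                            left = M[i][k-1] if k > i else 0
                            best = max(best, left + M[k+1][j-1] + 1)
                    M[i][j] = best
            struct, stack = ['.']*n, [(0, n-1)]
            while stack:
                i, j = stack.pop()
                if i >= j or M[i][j] == 0:
                    continue
                if M[i][j] == M[i][j-1]:
                    stack.append((i, j-1))
                    continue
                for k in range(i, j - min_loop):
                    if pairs(seq[k], seq[j]):
                        left = M[i][k-1] if k > i else 0
                        if left + M[k+1][j-1] + 1 == M[i][j]:
                            struct[k], struct[j] = '(', ')'
                            if k > i:
                                stack.append((i, k-1))
                            stack.append((k+1, j-1))
                            break
            return M[0][n-1], ''.join(struct)

        print(nussinov("GGGAAAUCC"))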

  11. Minimal-scan filtered backpropagation algorithms for diffraction tomography.

    PubMed

    Pan, X; Anastasio, M A

    1999-12-01

    The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.

  12. Nerve stepping stone has minimal impact in aiding regeneration across long acellular nerve allografts.

    PubMed

    Yan, Ying; Hunter, Daniel A; Schellhardt, Lauren; Ee, Xueping; Snyder-Warwick, Alison K; Moore, Amy M; Mackinnon, Susan E; Wood, Matthew D

    2018-02-01

    Acellular nerve allografts (ANAs) yield less consistent favorable outcomes compared with autografts for long gap reconstructions. We evaluated whether a hybrid ANA can improve 6-cm gap reconstruction. Rat sciatic nerve was transected and repaired with either 6-cm hybrid or control ANAs. Hybrid ANAs were generated using a 1-cm cellular isograft between 2.5-cm ANAs, whereas control ANAs had no isograft. Outcomes were assessed by graft gene and marker expression (n = 4; at 4 weeks) and motor recovery and nerve histology (n = 10; at 20 weeks). Hybrid ANAs modified graft gene and marker expression and promoted modest axon regeneration across the 6-cm defect compared with control ANA (P < 0.05), but yielded no muscle recovery. Control ANAs had no appreciable axon regeneration across the 6-cm defect. A hybrid ANA confers minimal motor recovery benefits for regeneration across long gaps. Clinically, the authors will continue to reconstruct long nerve gaps with autografts. Muscle Nerve 57: 260-267, 2018. © 2017 Wiley Periodicals, Inc.

  13. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
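
    The sketch below mirrors steps (1), (2) and (4) of the pipeline described above: blockwise DCT, crude pruning of the AC coefficients, and delta coding of the DC components. The look-up table of probability data, the concurrent binary search, and the arithmetic coding stages are omitted, and the block size and number of retained coefficients are illustrative choices, not the authors' settings.

        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_blocks(img, block=8, keep_ac=21):
            h, w = img.shape
            h -= h % block
            w -= w % block
            dc, ac = [], []
            for r in range(0, h, block):
                for c in range(0, w, block):
                    coeffs = dctn(img[r:r+block, c:c+block].astype(float), norm='ortho')
                    flat = coeffs.flatten()
                    dc.append(flat[0])
                    ac.append(flat[1:1+keep_ac])   # crude low-frequency subset of the AC terms
            dc_delta = np.diff(np.asarray(dc), prepend=0.0)   # differential coding of DC components
            return dc_delta, np.asarray(ac), (h, w, block, keep_ac)

        def decompress_blocks(dc_delta, ac, meta):
            h, w, block, keep_ac = meta
            dc = np.cumsum(dc_delta)
            out = np.zeros((h, w))
            idx = 0
            for r in range(0, h, block):
                for c in range(0, w, block):
                    flat = np.zeros(block*block)
                    flat[0] = dc[idx]
                    flat[1:1+keep_ac] = ac[idx]
                    out[r:r+block, c:c+block] = idctn(flat.reshape(block, block), norm='ortho')
                    idx += 1
            return out

        img = np.random.rand(64, 64)
        rec = decompress_blocks(*compress_blocks(img))
        print("RMSE:", np.sqrt(np.mean((img - rec)**2)))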

  14. Free radial forearm adiposo-fascial flap for inferior maxillectomy defect reconstruction

    PubMed Central

    Thankappan, Krishnakumar; Trivedi, Nirav P.; Sharma, Mohit; Kuriakose, Moni A.; Iyer, Subramania

    2009-01-01

    A free radial forearm fascial flap has been described for intraoral reconstruction. Adiposo-fascial flap harvesting involves a few technical modifications of the conventional radial forearm fascio-cutaneous free flap harvest. We report a case of inferior maxillectomy defect reconstruction in a 42-year-old male using a free radial forearm adiposo-fascial flap, with a good aesthetic and functional outcome and minimal primary and donor site morbidity. The technique of raising the flap and closing the donor site needs to be meticulous in order to achieve a good cosmetic and functional outcome. PMID:19881028

  15. The Boomerang-shaped Pectoralis Major Musculocutaneous Flap for Reconstruction of Circular Defect of Cervical Skin.

    PubMed

    Azuma, Shuchi; Arikawa, Masaki; Miyamoto, Shimpei

    2017-11-01

    We report on a patient with a recurrence of oral cancer involving a cervical lymph node. The patient's postexcision cervical skin defect was nearly circular and about 12 cm in diameter. The defect was successfully reconstructed with a boomerang-shaped pectoralis major musculocutaneous flap whose skin paddle included multiple intercostal perforators of the internal mammary vessels. This flap design is effective for reconstructing an extensive neck skin defect and enables primary closure of the donor site with minimal deformity.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salloum, Maher; Fabian, Nathan D.; Hensinger, David M.

    Exascale computing promises quantities of data too large to efficiently store and transfer across networks in order to be able to analyze and visualize the results. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space such as wavelet bases and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm that we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Finally, our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.
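
    A toy Python sketch of the compressed-sensing idea, not the in situ pipeline: a signal that is sparse in a transform basis is sampled with a random matrix and recovered by a basic orthogonal matching pursuit. A DCT basis stands in for the tree wavelets used on point clouds, and the stagewise OMP of the paper is replaced by plain OMP.

        import numpy as np
        from scipy.fft import idct

        rng = np.random.default_rng(0)
        n, m, k = 256, 80, 8                          # signal length, measurements, sparsity

        coeffs = np.zeros(n)
        coeffs[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        Psi = idct(np.eye(n), norm='ortho', axis=0)   # columns: inverse-DCT basis vectors
        x = Psi @ coeffs                              # signal, sparse in the transform domain

        Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random sampling matrix
        y = Phi @ x                                   # compressed measurements
        A = Phi @ Psi                                 # sensing matrix in the sparse domain

        def omp(A, y, k):
            """Recover a k-sparse coefficient vector from y = A c by greedy selection."""
            residual, support = y.copy(), []
            c = np.zeros(A.shape[1])
            for _ in range(k):
                support.append(int(np.argmax(np.abs(A.T @ residual))))
                sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ sol
            c[support] = sol
            return c

        x_hat = Psi @ omp(A, y, k)
        print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))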

  17. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region growing approach. This extension allows reconstruction of brain structures beyond the cortical surface and facilitates the use of more realistic volumetric head models including more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter for each of the subjects, cortical regions were created and introduced as source priors for MSP inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head model and an extended 4-layered head model including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each of the reconstructions using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation as it allows more complex head models and volumetric source priors to be introduced in future studies. Copyright © 2014 Elsevier Inc. All rights reserved.
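
    A minimal sketch of the region-growing step mentioned above, assuming a boolean gray-matter mask and a seed voxel; the connectivity, stopping rule, and all parameters are illustrative and not taken from the SPM implementation.

        from collections import deque
        import numpy as np

        def grow_region(mask, seed, max_size):
            """Grow a 6-connected region inside a boolean gray-matter mask, starting at seed."""
            region = np.zeros_like(mask, dtype=bool)
            if not mask[seed]:
                return region
            region[seed] = True
            queue, count = deque([seed]), 1
            while queue and count < max_size:
                z, y, x = queue.popleft()
                for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                    nb = (z + dz, y + dy, x + dx)
                    if (all(0 <= nb[i] < mask.shape[i] for i in range(3))
                            and mask[nb] and not region[nb]):
                        region[nb] = True
                        count += 1
                        queue.append(nb)
            return region

        # toy gray-matter mask: a hollow spherical shell; grow one volumetric "patch" inside it
        zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
        r = np.sqrt((zz - 20)**2 + (yy - 20)**2 + (xx - 20)**2)
        gray = (r > 12) & (r < 16)
        patch = grow_region(gray, seed=(20, 20, 34), max_size=300)
        print("patch voxels:", int(patch.sum()))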

  18. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
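
    A hedged Python sketch of an equivalent-sources-style inversion with Tikhonov regularization: monopole sources placed slightly behind the source plane are fitted to hologram pressures, and the regularization parameter is picked here with a one-line discrepancy-principle rule (the study above found the L-curve corner more accurate, but it is longer to code). The geometry, frequency, source strengths, and noise level are invented for the example.

        import numpy as np

        rng = np.random.default_rng(1)
        k = 2*np.pi*1000/343.0                      # wavenumber at 1 kHz in air

        def green(src, rec):
            """Free-space Green's function between each source and receiver point."""
            r = np.linalg.norm(rec[:, None, :] - src[None, :, :], axis=-1)
            return np.exp(-1j*k*r) / (4*np.pi*r)

        # equivalent sources slightly behind the source plane, hologram plane in front
        xs = np.linspace(-0.2, 0.2, 10)
        src = np.array([(x, y, -0.02) for x in xs for y in xs])
        holo = np.array([(x, y, 0.05) for x in xs for y in xs])
        surf = np.array([(x, y, 0.0) for x in xs for y in xs])

        q_true = np.zeros(len(src), complex)
        q_true[55] = 1.0                            # one dominant source strength
        G = green(src, holo)
        noise = 0.01*(rng.normal(size=len(holo)) + 1j*rng.normal(size=len(holo)))
        p_meas = G @ q_true + noise
        delta = np.linalg.norm(noise)               # noise level, known here by construction

        best = None
        for lam in np.logspace(-8, 0, 30):          # Tikhonov parameter sweep
            q = np.linalg.solve(G.conj().T @ G + lam*np.eye(len(src)), G.conj().T @ p_meas)
            res = np.linalg.norm(G @ q - p_meas)
            if best is None or abs(res - delta) < best[0]:   # discrepancy-principle pick
                best = (abs(res - delta), lam, q)

        p_surface = green(src, surf) @ best[2]      # reconstructed pressure on the source plane
        print("chosen lambda:", best[1], "| peak |p| on source plane:", np.abs(p_surface).max())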

  20. Postburn Head and Neck Reconstruction: An Algorithmic Approach.

    PubMed

    Heidekrueger, Paul Immanuel; Broer, Peter Niclas; Tanna, Neil; Ninkovic, Milomir

    2016-01-01

    Optimizing functional and aesthetic outcomes in postburn head and neck reconstruction remains a surgical challenge. Recurrent contractures, impaired range of motion, and disfigurement caused by disruption of the aesthetic subunits of the face can result in poor patient satisfaction and ultimately contribute to social isolation of the patient. In an effort to improve the quality of life of these patients, this study evaluates different surgical approaches with an emphasis on tissue expansion of free and regional flaps. Regional and free-flap reconstruction was performed in 20 patients (26 flaps) with severe postburn head and neck contractures. To minimize donor site morbidity and obtain large amounts of thin and pliable tissue, pre-expansion was performed in all patients treated with locoregional flap reconstructions (12/12) and in 62% (8/14) of patients with free-flap reconstructions. Algorithms regarding pre- and intraoperative decision-making are discussed, and complications between the techniques as well as long-term (mean follow-up 3 years) results are analyzed. Complications, including tissue expander infection requiring removal or exchange and partial or full flap loss, occurred in 25% (3/12) of patients with locoregional and 36% (5/14) of patients receiving free-flap reconstructions. Secondary revision surgery was performed in 33% (4/12) of locoregional flaps and 93% (13/14) of free flaps. Both locoregional and distant tissue transfers have their role in postburn head and neck reconstruction, while pre-expansion remains an invaluable tool. Paying attention to the presented principles and keeping the importance of aesthetic facial subunits in mind, range of motion, aesthetics, and patient satisfaction were improved long term in all our patients, while minimizing donor site morbidity.

  1. Single-view 3D reconstruction of correlated gamma-neutron sources

    DOE PAGES

    Monterial, Mateusz; Marleau, Peter; Pozzi, Sara A.

    2017-01-05

    We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g. Cf-252, Am-Be). This category includes a vast majority of neutron sources important in nuclear threat search, safeguards and non-proliferation. Rather than requiring multiple views of the source, this technique relies on the source's intrinsic property of coincident gamma and neutron emission. As a result, only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction location resolution can perform this reconstruction. Using a neutron double scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma we further constrain the source location in three dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept we applied these reconstruction techniques on measurements taken with the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth with an average radial distance relative resolution of 26%. To demonstrate the technique's potential with an optimized system we simulated the measurement in MCNPX-PoliMi assuming timing resolution of 200 ps (from 2 ns in the current system) and source interaction location resolution of 5 mm (from 3 cm). Furthermore, these simulated improvements in scatter camera performance resulted in radial distance relative resolution decreasing to an average of 11%.
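
    A back-of-envelope Python sketch of the timing constraint described above: once the neutron double scatter yields the incident neutron energy and a cone of possible directions, the measured delay between the correlated gamma and the neutron fixes the source-to-detector distance, because the gamma travels at c and the neutron more slowly. The example energy and delay are illustrative, not values from the MINER measurement.

        import numpy as np

        C = 2.998e8                      # speed of light, m/s
        M_N = 939.565e6                  # neutron rest mass, eV/c^2

        def neutron_speed(E_eV):
            """Classical neutron speed, adequate for the ~MeV energies of fission neutrons."""
            return C * np.sqrt(2.0 * E_eV / M_N)

        def source_distance(delta_t, E_eV):
            """Distance d solving  delta_t = d/v_n - d/c  for a gamma/neutron pair
            emitted simultaneously at the source (e.g. a Cf-252 fission)."""
            v_n = neutron_speed(E_eV)
            return delta_t / (1.0/v_n - 1.0/C)

        # Example: a 2 MeV neutron arriving 23 ns after its correlated gamma
        print(source_distance(23e-9, 2.0e6), "m")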

  3. Effects of reconstructed magnetic field from sparse noisy boundary measurements on localization of active neural source.

    PubMed

    Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin

    2016-01-01

    Localization of an active neural source (ANS) from measurements on the head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method, based on the magnetic field reconstructed from sparse noisy measurements, that enhances ANS localization by suppressing the effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements by formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the full set of measurements provide a smooth reconstructed MFD with reduced unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of BC, parametric equivalent current dipole-based inverse estimation algorithm using reconstruction, and gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are directly used), and it is demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.
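
    A 2D analogue of the field-reconstruction idea, sketched in Python with invented geometry: in a current-free region the potential is harmonic, so noisy boundary samples can be fitted with a truncated harmonic series and the field evaluated anywhere inside with reduced noise. The paper works with the 3D series solution of Laplace's equation; this is only a toy illustration.

        import numpy as np

        rng = np.random.default_rng(5)
        N, n_modes = 80, 6
        theta = np.linspace(0, 2*np.pi, N, endpoint=False)

        def true_potential(x, y, x0=1.6, y0=0.3):
            # potential of a 2D "source" outside the unit disk; harmonic inside the disk
            return np.log(np.hypot(x - x0, y - y0))

        meas = true_potential(np.cos(theta), np.sin(theta)) + 0.02*rng.normal(size=N)

        # design matrix of the truncated interior harmonic series evaluated at r = 1
        cols = [np.ones(N)]
        for n in range(1, n_modes + 1):
            cols += [np.cos(n*theta), np.sin(n*theta)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, meas, rcond=None)

        def reconstructed(r, th):
            val = coef[0]*np.ones(np.shape(th))
            for n in range(1, n_modes + 1):
                val += r**n * (coef[2*n-1]*np.cos(n*th) + coef[2*n]*np.sin(n*th))
            return val

        r_eval, th_eval = 0.5, 1.0
        print(reconstructed(r_eval, th_eval),
              true_potential(r_eval*np.cos(th_eval), r_eval*np.sin(th_eval)))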

  4. Experimental Definition and Validation of Protein Coding Transcripts in Chlamydomonas reinhardtii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kourosh Salehi-Ashtiani; Jason A. Papin

    Algal fuel sources promise unsurpassed yields in a carbon neutral manner that minimizes resource competition between agriculture and fuel crops. Many challenges must be addressed before algal biofuels can be accepted as a component of the fossil fuel replacement strategy. One significant challenge is that the cost of algal fuel production must become competitive with existing fuel alternatives. Algal biofuel production presents the opportunity to fine-tune microbial metabolic machinery for an optimal blend of biomass constituents and desired fuel molecules. Genome-scale model-driven algal metabolic design promises to facilitate both goals by directing the utilization of metabolites in the complex, interconnected metabolic networks to optimize production of the compounds of interest. Using Chlamydomonas reinhardtii as a model, we developed a systems-level methodology bridging metabolic network reconstruction with annotation and experimental verification of enzyme encoding open reading frames. We reconstructed a genome-scale metabolic network for this alga and devised a novel light-modeling approach that enables quantitative growth prediction for a given light source, resolving wavelength and photon flux. We experimentally verified transcripts accounted for in the network and physiologically validated model function through simulation and generation of new experimental growth data, providing high confidence in network contents and predictive applications. The network offers insight into algal metabolism and potential for genetic engineering and efficient light source design, a pioneering resource for studying light-driven metabolism and quantitative systems biology. Our approach to generate a predictive metabolic model integrated with cloned open reading frames provides a cost-effective platform to generate metabolic engineering resources. While the generated resources are specific to algal systems, the approach that we have developed is not specific to algae and can be readily expanded to other microbial systems as well as higher plants and animals.

  5. 40 CFR Table 2b to Subpart Zzzz of... - Operating Limitations for New and Reconstructed 2SLB and Compression Ignition Stationary RICE...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Reconstructed 2SLB and Compression Ignition Stationary RICE >500 HP Located at a Major Source of HAP Emissions, New and Reconstructed 4SLB Stationary RICE ≥250 HP Located at a Major Source of HAP Emissions, Existing Compression Ignition Stationary RICE >500 HP, and Existing 4SLB Stationary RICE >500 HP Located at...

  6. Design and performance of a multi-pinhole collimation device for small animal imaging with clinical SPECT and SPECT-CT scanners

    PubMed Central

    DiFilippo, Frank P.

    2008-01-01

    A multi-pinhole collimation device is developed that uses the gamma camera detectors of a clinical SPECT or SPECT-CT scanner to produce high resolution SPECT images. The device consists of a rotating cylindrical collimator having 22 tungsten pinholes with 0.9 mm diameter apertures and an animal bed inside the collimator that moves linearly to provide helical or ordered-subsets axial sampling. CT images also may be acquired on a SPECT-CT scanner for purposes of image co-registration and SPECT attenuation correction. The device is placed on the patient table of the scanner without attaching to the detectors or scanner gantry. The system geometry is calibrated in-place from point source data and is then used during image reconstruction. The SPECT imaging performance of the device is evaluated with test phantom scans. Spatial resolution from reconstructed point source images is measured to be 0.6 mm full width at half maximum or better. Micro-Derenzo phantom images demonstrate the ability to resolve 0.7 mm diameter rod patterns. The axial slabs of a Micro-Defrise phantom are visualized well. Collimator efficiency exceeds 0.05% at the center of the field of view, and images of a uniform phantom show acceptable uniformity and minimal artifact. The overall simplicity and relatively good imaging performance of the device make it an interesting low-cost alternative to dedicated small animal scanners. PMID:18635899

  7. Design and performance of a multi-pinhole collimation device for small animal imaging with clinical SPECT and SPECT CT scanners

    NASA Astrophysics Data System (ADS)

    Di Filippo, Frank P.

    2008-08-01

    A multi-pinhole collimation device is developed that uses the gamma camera detectors of a clinical SPECT or SPECT-CT scanner to produce high-resolution SPECT images. The device consists of a rotating cylindrical collimator having 22 tungsten pinholes with 0.9 mm diameter apertures and an animal bed inside the collimator that moves linearly to provide helical or ordered-subsets axial sampling. CT images also may be acquired on a SPECT-CT scanner for purposes of image co-registration and SPECT attenuation correction. The device is placed on the patient table of the scanner without attaching to the detectors or scanner gantry. The system geometry is calibrated in-place from point source data and is then used during image reconstruction. The SPECT imaging performance of the device is evaluated with test phantom scans. Spatial resolution from reconstructed point source images is measured to be 0.6 mm full width at half maximum or better. Micro-Derenzo phantom images demonstrate the ability to resolve 0.7 mm diameter rod patterns. The axial slabs of a Micro-Defrise phantom are visualized well. Collimator efficiency exceeds 0.05% at the center of the field of view, and images of a uniform phantom show acceptable uniformity and minimal artifact. The overall simplicity and relatively good imaging performance of the device make it an interesting low-cost alternative to dedicated small animal scanners.

  8. 40 CFR 63.4283 - When do I have to comply with this subpart?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... initial startup of your new or reconstructed affected source is before May 29, 2003, the compliance date is May 29, 2003. (2) If the initial startup of your new or reconstructed affected source occurs after May 29, 2003, the compliance date is the date of initial startup of your affected source. (b) For an...

  9. 40 CFR 63.5425 - When must I start recordkeeping to determine my compliance ratio?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... this section: (1) If the startup of your new or reconstructed affected source is before February 27..., 2002. (2) If the startup of your new or reconstructed affected source is after February 27, 2002, then you must start recordkeeping to determine your compliance ratio upon startup of your affected source...

  10. 40 CFR 63.5425 - When must I start recordkeeping to determine my compliance ratio?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... this section: (1) If the startup of your new or reconstructed affected source is before February 27..., 2002. (2) If the startup of your new or reconstructed affected source is after February 27, 2002, then you must start recordkeeping to determine your compliance ratio upon startup of your affected source...

  11. 40 CFR 63.4283 - When do I have to comply with this subpart?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... initial startup of your new or reconstructed affected source is before May 29, 2003, the compliance date is May 29, 2003. (2) If the initial startup of your new or reconstructed affected source occurs after May 29, 2003, the compliance date is the date of initial startup of your affected source. (b) For an...

  12. 40 CFR 63.4283 - When do I have to comply with this subpart?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... initial startup of your new or reconstructed affected source is before May 29, 2003, the compliance date is May 29, 2003. (2) If the initial startup of your new or reconstructed affected source occurs after May 29, 2003, the compliance date is the date of initial startup of your affected source. (b) For an...

  13. Comparison Study of Three Different Image Reconstruction Algorithms for MAT-MI

    PubMed Central

    Xia, Rongmin; Li, Xu

    2010-01-01

    We report a theoretical study on magnetoacoustic tomography with magnetic induction (MAT-MI). According to the description of the signal generation mechanism using Green's function, the acoustic dipole model was proposed to describe the acoustic source excited by the Lorentz force. Using Green's function, three kinds of reconstruction algorithms based on different models of the acoustic source (potential energy, vectored acoustic pressure, and divergence of the Lorentz force) are derived and compared, and corresponding numerical simulations were conducted. The computer simulation results indicate that the potential energy method and vectored pressure method can directly reconstruct the Lorentz force distribution and give a more accurate reconstruction of electrical conductivity. PMID:19846363

  14. The temporalis muscle flap and temporoparietal fascial flap.

    PubMed

    Lam, Din; Carlson, Eric R

    2014-08-01

    The temporal arterial system provides reliable vascular anatomy for the temporalis muscle flap and temporoparietal fascial flap that can support multiple reconstructive needs of the oral and maxillofacial region. The minimal donor site morbidity and ease of development of these flaps result in their predictable and successful transfer for reconstructive surgery of the oral and maxillofacial region. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Scrotal Reconstruction with Integra Following Necrotizing Fasciitis.

    PubMed

    Dent, Briar L; Dinesh, Anant; Khan, Khuram; Engdahl, Ryan

    2018-01-01

    Scrotal loss from Fournier's gangrene can be a devastating injury with esthetic and functional consequences. Local reconstructive options can be limited by the presence of infection or the loss of neighboring tissue from debridement. Integra™ bilayer matrix wound dressing is a well-established reconstructive modality, but only one report exists of its use in scrotal reconstruction and this was not in the setting of Fournier's gangrene. We report the successful use of Integra and a subsequent split-thickness skin graft for reconstruction of the anterior scrotum and coverage of the exposed testes in a 43-year-old man who developed Group A Streptococcus necrotizing fasciitis of his right lower extremity, groin, and scrotum requiring serial operative debridements. Stable testicular coverage was achieved with closely matched skin and minimal donor-site morbidity. Further study and a larger sample size will be necessary to better understand the advantages and disadvantages of scrotal reconstruction with Integra.

  16. Total variation iterative constraint algorithm for limited-angle tomographic reconstruction of non-piecewise-constant structures

    NASA Astrophysics Data System (ADS)

    Krauze, W.; Makowski, P.; Kujawińska, M.

    2015-06-01

    Standard tomographic algorithms applied to optical limited-angle tomography result in reconstructions that have highly anisotropic resolution, and thus special algorithms are developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint method (TVIC), which enhances the applicability of the TV regularization to non-piecewise constant samples, like biological cells. This approach consists of two parts. First, the TV minimization is used as a strong regularizer to create a sharp-edged image that is converted to a 3D binary mask, which is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition. This includes optical diffraction tomography solvers. The obtained reconstructions show the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers while keeping the smooth internal structures of the object at the same time. A comparison between three different patterns of object illumination arrangement shows a very small impact of the projection acquisition geometry on the image quality.
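
    A hedged 2D sketch of the two-part TVIC idea, not the authors' implementation: a strongly TV-regularized first pass is thresholded into a binary support mask, and the mask is then re-applied as an object-domain constraint inside a plain Landweber/SIRT-like loop. The `forward` and `backward` arguments are placeholders for the projector and backprojector of whatever acquisition geometry is used (e.g. from the ASTRA toolbox), and scikit-image's Chambolle TV denoiser stands in for the TV regularizer.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        def landweber(forward, backward, data, shape, n_iter, step, constraint=None):
            x = np.zeros(shape)
            for _ in range(n_iter):
                x += step * backward(data - forward(x))   # gradient step on data fidelity
                if constraint is not None:
                    x = constraint(x)                     # object-domain constraint
            return x

        def tvic(forward, backward, data, shape, step, thresh=0.1):
            # step should stay below 2/||A||^2 for the Landweber iteration to converge
            first = landweber(forward, backward, data, shape, n_iter=30, step=step)
            mask = denoise_tv_chambolle(first, weight=0.3) > thresh * first.max()
            apply_mask = lambda x: np.clip(x, 0, None) * mask
            return landweber(forward, backward, data, shape, n_iter=60, step=step,
                             constraint=apply_mask)

        # trivial smoke test with an identity "projector"; a real use would plug in a
        # limited-angle projector/backprojector pair here
        truth = np.zeros((64, 64)); truth[20:40, 25:45] = 1.0; truth[28:34, 30:38] = 0.5
        rec = tvic(lambda x: x, lambda r: r, truth, truth.shape, step=0.5)
        print("max error:", np.abs(rec - truth).max())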

  17. Minimal Invasive Linea Alba Reconstruction for the Treatment of Umbilical and Epigastric Hernias with Coexisting Rectus Abdominis Diastasis.

    PubMed

    Köhler, Gernot; Fischer, Ines; Kaltenböck, Richard; Schrittwieser, Rudolf

    2018-04-05

    Patients with umbilical or epigastric hernias benefit from mesh-based repairs, and even more so if a concomitant rectus diastasis (RD) is present. The ideal technique is, however, still under debate. In this study we introduce the minimal invasive linea alba reconstruction (MILAR) with the supraaponeurotic placement of a fully absorbable synthetic mesh. Midline reconstruction with anterior rectus sheath repair and mesh augmentation by an open approach is a well-known surgical technique for ventral hernia repair. Between December 1, 2016, and November 30, 2017, 20 patients with symptomatic umbilical and/or epigastric hernias and coexisting RD underwent a minimally invasive complete reconstruction of the midline through a small access route. The inner part of both incised and medialized anterior rectus sheaths was replaced by a fully absorbable synthetic mesh placed in a supraaponeurotic position. Patients were hospitalized for an average of 4 days and the mean operating time was 79 minutes. The mean hernia defect size was 1.5 cm in diameter and the mean mesh size was recorded as 15.8 cm in length and 5.2 cm in width. Two patients sustained postoperative surgical complications in the form of symptomatic seromas, which were treated successfully by intervention. The early results (mean follow-up period of 5 months) showed no recurrences, and only 1 patient reported occasional pain on exertion but none at rest. MILAR is a modification of the recently published endoscopic linea alba reconstruction restoring the normal anatomy of the abdominal wall. A new linea alba is formed with augmentation of autologous tissue consisting of the plicated anterior rectus sheaths. Supraaponeurotic placement of a fully absorbable synthetic mesh eliminates potential long-term mesh-associated complications. Regarding MILAR, there is no need for endoscopic equipment due to the uniquely designed flexible lighted retractors, meaning one less assistant is required.

  18. Iatrogenic deep musculocutaneous radiation injury following percutaneous coronary intervention.

    PubMed

    Monaco, JoAn L; Bowen, Kanika; Tadros, Peter N; Witt, Peter D

    2003-08-01

    Radiation-induced skin injury has been reported for multiple fluoroscopic procedures. Previous studies have indicated that prolonged fluoroscopic exposure during even a single percutaneous coronary intervention (PCI) may lead to cutaneous radiation injury. We document a novel case of deep muscle damage requiring wide local debridement and muscle flap reconstruction in a 59-year-old man with a large radiation-induced wound to the lower thoracic region following 1 prolonged PCI procedure. The deep muscular iatrogenic injury described in this report may be the source of significant morbidity. Recommendations to reduce radiation-induced damage include careful examination of the skin site before each procedure, minimized fluoroscopy time, utilization of pulse fluoroscopy, employment of radiation filters and collimators, and rotation of the location of the image intensifier.

  19. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
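
    A hedged Python sketch of the second inverse step (field map to susceptibility) using the Tikhonov-regularized inverse filter, one of the three deconvolution options listed above; the split Bregman TV solver preferred in the article is not reproduced. The unit dipole kernel assumes the main field along the first array axis, and all sizes and regularization values are illustrative.

        import numpy as np

        def dipole_kernel(shape):
            """k-space unit dipole kernel D(k) = 1/3 - kz^2/|k|^2, main field along axis 0."""
            kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing='ij')
            k2 = kx**2 + ky**2 + kz**2
            return 1.0/3.0 - np.divide(kz**2, k2, out=np.zeros_like(k2), where=k2 > 0)

        def tikhonov_qsm(fieldmap, lam=1e-2):
            """chi = argmin ||D*chi - fieldmap||^2 + lam*||chi||^2, solved per k-sample."""
            D = dipole_kernel(fieldmap.shape)
            chi_k = D * np.fft.fftn(fieldmap) / (D**2 + lam)
            return np.real(np.fft.ifftn(chi_k))

        # synthetic susceptibility source -> field map -> reconstruction
        shape = (32, 32, 32)
        chi_true = np.zeros(shape); chi_true[12:20, 12:20, 12:20] = 1.0
        field = np.real(np.fft.ifftn(dipole_kernel(shape) * np.fft.fftn(chi_true)))
        chi_rec = tikhonov_qsm(field, lam=1e-3)
        print("spatial correlation:", np.corrcoef(chi_true.ravel(), chi_rec.ravel())[0, 1])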

  20. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective: This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods: The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results: Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions: The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372

  1. Impact of aerosols and adverse atmospheric conditions on the data quality for spectral analysis of the H.E.S.S. telescopes

    NASA Astrophysics Data System (ADS)

    Hahn, J.; de los Reyes, R.; Bernlöhr, K.; Krüger, P.; Lo, Y. T. E.; Chadwick, P. M.; Daniel, M. K.; Deil, C.; Gast, H.; Kosack, K.; Marandon, V.

    2014-02-01

    The Earth's atmosphere is an integral part of the detector in ground-based imaging atmospheric Cherenkov telescope (IACT) experiments and has to be taken into account in the calibration. Atmospheric and hardware-related deviations from simulated conditions can result in the mis-reconstruction of primary particle energies and therefore of source spectra. During the eight years of observations with the High Energy Stereoscopic System (H.E.S.S.) in Namibia, the overall yield in Cherenkov photons has varied strongly with time due to gradual hardware aging, together with adjustments of the hardware components, and natural, as well as anthropogenic, variations of the atmospheric transparency. Here we present robust data selection criteria that minimize these effects over the full data set of the H.E.S.S. experiment and introduce the Cherenkov transparency coefficient as a new atmospheric monitoring quantity. The influence of atmospheric transparency, as quantified by this coefficient, on energy reconstruction and spectral parameters is examined and its correlation with the aerosol optical depth (AOD) of independent MISR satellite measurements and local measurements of atmospheric clarity is investigated.

  2. Pollen and spores of terrestrial plants

    USGS Publications Warehouse

    Bernhardt, Christopher E.; Willard, Debra A.; Shennan, Ian; Long, Antony J.; Horton, Benjamin P.

    2015-01-01

    Pollen and spores are valuable tools in reconstructing past sea level and climate because of their ubiquity, abundance, and durability, as well as the responsiveness of their source vegetation to environmental change (Cronin, 1999; Traverse, 2007; Willard and Bernhardt, 2011). Pollen is found in many sedimentary environments, from freshwater to saltwater, terrestrial to marine. It can be abundant in a minimal amount of sample material, for example half a gram, as concentrations can be as high as four million grains per gram (Traverse, 2007). The abundance of pollen in a sample lends itself to robust statistical analysis for the quantitative reconstruction of environments. The outer cell wall is resistant to decay in sediments and allows palynomorphs (pollen and spores) to record changes in plant communities and sea level over millions of years. These characteristics make pollen and spores a powerful tool to use in sea-level research. This chapter describes the biology of pollen and spores and how they are transported and preserved in sediments. We present a methodology for isolating pollen from sediments and a general language and framework to identify pollen, as well as light micrographs of a selection of common pollen grains. We then discuss their utility in sea-level research.

  3. Reconstruction of flaws from eddy current sensor data with a differential forward model (original title: Reconstruction de defauts a partir de donnees issues de capteurs a courants de foucault avec modele direct differentiel)

    NASA Astrophysics Data System (ADS)

    Trillon, Adrien

    Eddy current tomography can be employed to characterize flaws in metal plates in steam generators of nuclear power plants. Our goal is to evaluate a map of the relative conductivity that represents the flaw. This nonlinear ill-posed problem is difficult to solve and a forward model is needed. First, we studied existing forward models to choose the one best suited to our case. Finite difference and finite element methods proved well suited to our application. We adapted contrast source inversion (CSI) type methods to the chosen model and proposed a new criterion. These methods are based on the minimization of the weighted errors of the model's coupling and observation equations. They allow an error on the equations. Reconstruction quality was found to improve as the error on the coupling equation decreases. We resorted to augmented Lagrangian techniques to constrain the coupling equation and to avoid conditioning problems. In order to overcome the ill-posed character of the problem, prior information was introduced about the shape of the flaw and the values of the relative conductivity. The efficiency of the methods is illustrated with simulated flaws in the 2D case.

  4. Investigating the performance of reconstruction methods used in structured illumination microscopy as a function of the illumination pattern's modulation frequency

    NASA Astrophysics Data System (ADS)

    Shabani, H.; Sánchez-Ortiga, E.; Preza, C.

    2016-03-01

    Surpassing the resolution of optical microscopy defined by the Abbe diffraction limit, while simultaneously achieving optical sectioning, is a challenging problem particularly for live cell imaging of thick samples. Among a few developing techniques, structured illumination microscopy (SIM) addresses this challenge by imposing higher frequency information into the observable frequency band confined by the optical transfer function (OTF) of a conventional microscope, either doubling the spatial resolution or filling the missing cone, depending on the spatial frequency of the pattern, when the patterned illumination is two-dimensional. Standard reconstruction methods for SIM decompose the low and high frequency components from the recorded low-resolution images and then combine them to reach a high-resolution image. In contrast, model-based approaches rely on iterative optimization to minimize the error between estimated and forward images. In this paper, we study the performance of both groups of methods by simulating fluorescence microscopy images from different types of objects (ranging from simulated two-point sources to extended objects). These simulations are used to investigate the methods' effectiveness in restoring objects with various types of power spectrum when the modulation frequency of the patterned illumination is changing from zero to the incoherent cut-off frequency of the imaging system. Our results show that increasing the amount of imposed information by using a higher modulation frequency of the illumination pattern does not always yield a better restoration performance, which was found to depend on the underlying object. Results from model-based restoration show performance improvement, quantified by an up to 62% drop in the mean square error compared to standard reconstruction, with increasing modulation frequency. However, we found cases for which results obtained with standard reconstruction methods do not always follow the same trend.
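
    A hedged 1D sketch of the separation step used by standard (non-model-based) SIM reconstruction: three raw images recorded at pattern phases 0, 2π/3 and 4π/3 are unmixed into the zero- and ±1-order components by inverting a 3x3 phase matrix. OTF blurring, noise, and the subsequent shifting and recombination of the orders are omitted, and all parameters are invented.

        import numpy as np

        n, k0, m = 512, 40, 0.8                      # samples, pattern frequency index, modulation depth
        x = np.arange(n) / n
        obj = np.exp(-((x - 0.5)/0.02)**2) + 0.5*np.exp(-((x - 0.3)/0.01)**2)

        phases = np.array([0.0, 2*np.pi/3, 4*np.pi/3])
        raw = np.array([(1 + m*np.cos(2*np.pi*k0*x + p)) * obj for p in phases])

        # each raw spectrum is a phase-weighted mix of the three object components
        M = np.array([[1.0, (m/2)*np.exp(1j*p), (m/2)*np.exp(-1j*p)] for p in phases])
        components = np.linalg.solve(M, np.fft.fft(raw, axis=1))

        # components[0] is the conventional (zero-order) spectrum; components[1] and
        # components[2] are copies of the object spectrum displaced by +/- k0, i.e. the
        # extra high-frequency information that shifting and recombining restores.
        print([np.round(np.abs(c).max(), 2) for c in components])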

  5. Development and Operation of a High Resolution Positron Emission Tomography System to Perform Metabolic Studies on Small Animals.

    NASA Astrophysics Data System (ADS)

    Hogan, Matthew John

    A positron emission tomography system designed to perform high resolution imaging of small volumes has been characterized. Two large area planar detectors, used to detect the annihilation gamma rays, formed a large aperture stationary positron camera. The detectors were multiwire proportional chambers coupled to high density lead stack converters. Detector efficiency was 8%. The coincidence resolving time was 500 nsec. The maximum system sensitivity was 60 cps/µCi for a solid angle of acceptance of 0.74π sr. The maximum useful coincidence count rate was 1500 cps and was limited by electronic dead time. Image reconstruction was done by performing a 3-dimensional deconvolution using Fourier transform methods. Noise propagation during reconstruction was minimized by choosing a 'minimum norm' reconstructed image. In the stationary detector system (with a limited angle of acceptance for coincident events) statistical uncertainty in the data limited reconstruction in the direction normal to the detector surfaces. Data from a rotated phantom showed that detector rotation will correct this problem. Resolution was 4 mm in planes parallel to the detectors and ~15 mm in the normal direction. Compton scattering of gamma rays within a source distribution was investigated using both simulated and measured data. Attenuation due to scatter was as high as 60%. For small volume imaging the Compton background was identified and an approximate correction was performed. A semiquantitative blood flow measurement to bone in the leg of a cat using the ¹⁸F⁻ ion was performed. The results were comparable to investigations using more conventional techniques. Qualitative scans using ¹⁸F-labelled deoxy-D-glucose to assess brain glucose metabolism in a rhesus monkey were also performed.
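
    A hedged 1D illustration of deconvolution by Fourier methods with a 'minimum norm' stabilization: frequency components where the system response is too weak to invert reliably are set to zero, which is the minimum-norm choice among images consistent with the well-measured components. The point-spread function, noise level, and threshold are invented and unrelated to the camera described above.

        import numpy as np

        n = 256
        x = np.arange(n)
        truth = np.zeros(n); truth[100] = 1.0; truth[140] = 0.7        # two point sources
        psf = np.exp(-0.5*((x - n//2)/4.0)**2); psf /= psf.sum()       # detector response

        H = np.fft.fft(np.fft.ifftshift(psf))
        data = np.real(np.fft.ifft(H*np.fft.fft(truth)))
        data += 0.002*np.random.default_rng(2).normal(size=n)

        D = np.fft.fft(data)
        good = np.abs(H) > 0.05                      # frequencies considered well determined
        recon_k = np.where(good, D/np.where(good, H, 1.0), 0.0)
        recon = np.real(np.fft.ifft(recon_k))
        print("two strongest samples at indices:", np.argsort(recon)[-2:])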

  6. Posterior trunk reconstruction with the dorsal intercostal artery perforator based flap: Clinical experience on 20 consecutive oncological cases.

    PubMed

    Brunetti, Beniamino; Tenna, Stefania; Aveta, Achille; Poccia, Igor; Segreto, Francesco; Cerbone, Vincenzo; Persichetti, Paolo

    2016-10-01

    Few studies in the recent literature have investigated the reliability of the dorsal intercostal artery perforator (DICAP) flap in posterior trunk reconstruction. The purpose of this report is to describe our clinical experience with the use of DICAP flaps in a cohort of oncological patients. Twenty patients underwent posterior trunk reconstruction with DICAP-based flaps. Patients' ages ranged from 45 to 76 years. All defects resulted from skin cancer ablation. Defect sizes ranged from 4 × 4 to 6 × 8 cm. The flaps were mobilized in V-Y or propeller fashion. The flaps were islanded on 1 (12 cases), 2 (6 cases), or 3 (2 cases) perforators. Donor sites were always closed primarily. Eleven V-Y advancement flaps were performed; one of these was converted to a perforator-plus peninsular flap design, which retained an additional source of blood supply from the opposite skin bridge. Nine flaps were mobilized in propeller fashion. Flap dimensions ranged from 4 × 6 to 6 × 14 cm. Mean operative time was 70 min. One V-Y flap developed marginal necrosis that healed with no need for reintervention. All the other flaps survived uneventfully. No other complications were observed at recipient and donor sites. Follow-up ranged from 3 months to 2 years. All the patients were satisfied with the surgical outcome. DICAP-based flaps proved to be a reliable option to resurface posterior trunk defects following oncological resection, allowing like-with-like reconstruction with excellent contour and minimal donor-site morbidity. Microsurgery 36:546-551, 2016. © 2015 Wiley Periodicals, Inc.

  7. SU-D-201-05: Phantom Study to Determine Optimal PET Reconstruction Parameters for PET/MR Imaging of Y-90 Microspheres Following Radioembolization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maughan, N; Conti, M; Parikh, P

    2015-06-15

    Purpose: Imaging Y-90 microspheres with PET/MRI following hepatic radioembolization has the potential for predicting treatment outcome and, in turn, improving patient care. The positron decay branching ratio, however, is very small (32 ppm), yielding images with poor statistics even when therapy doses are used. Our purpose is to find PET reconstruction parameters that maximize the PET recovery coefficients and minimize noise. Methods: An initial 7.5 GBq of Y-90 chloride solution was used to fill an ACR phantom for measurements with a PET/MRI scanner (Siemens Biograph mMR). Four hot cylinders and a warm background activity volume of the phantom were filled with a 10:1 ratio. Phantom attenuation maps were derived from scaled CT images of the phantom and included the MR phased array coil. The phantom was imaged at six time points between 7.5–1.0 GBq total activity over a period of eight days. PET images were reconstructed via OP-OSEM with 21 subsets and varying iteration number (1–5), post-reconstruction filter size (5–10 mm), and either absolute or relative scatter correction. Recovery coefficients, SNR, and noise were measured as well as total activity in the phantom. Results: For the 120 different reconstructions, recovery coefficients ranged from 0.1–0.6 and improved with increasing iteration number and reduced post-reconstruction filter size. SNR, however, improved substantially with lower iteration numbers and larger post-reconstruction filters. From the phantom data, we found that performing 2 iterations, 21 subsets, and applying a 5 mm Gaussian post-reconstruction filter provided optimal recovery coefficients at a moderate noise level for a wide range of activity levels. Conclusion: The choice of reconstruction parameters for Y-90 PET images greatly influences both the accuracy of measurements and image quality. We have found reconstruction parameters that provide optimal recovery coefficients with minimized noise. Future work will include the effects of the body matrix coil and off-center measurements.
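
    A hedged sketch of the figure-of-merit bookkeeping implied above: for each reconstruction in the parameter sweep, a recovery coefficient is taken as the measured hot-to-background concentration ratio divided by the true 10:1 fill ratio, and noise as the coefficient of variation in the background. The `load_reconstruction` callable and the region masks are placeholders, not part of any scanner or vendor API.

        import itertools
        import numpy as np

        def recovery_and_noise(img, hot_mask, bg_mask, true_ratio=10.0):
            bg_mean = img[bg_mask].mean()
            rc = (img[hot_mask].mean() / bg_mean) / true_ratio   # hot:background filled at 10:1
            noise = img[bg_mask].std() / bg_mean
            return rc, noise

        def sweep(load_reconstruction, hot_mask, bg_mask):
            results = []
            for it, fwhm, scatter in itertools.product(range(1, 6), (5.0, 10.0), ('abs', 'rel')):
                img = load_reconstruction(iterations=it, subsets=21,
                                          filter_fwhm_mm=fwhm, scatter=scatter)
                results.append((it, fwhm, scatter, *recovery_and_noise(img, hot_mask, bg_mask)))
            return results

        # smoke test on a synthetic image (a real use would pass an image loader to sweep)
        rng = np.random.default_rng(0)
        img = rng.normal(1.0, 0.1, size=(64, 64, 64)); img[30:34, 30:34, 30:34] = 9.0
        hot = np.zeros(img.shape, bool); hot[30:34, 30:34, 30:34] = True
        print(recovery_and_noise(img, hot, ~hot))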

  8. SU-E-J-246: A Deformation-Field Map Based Liver 4D CBCT Reconstruction Method Using Gold Nanoparticles as Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, W; Zhang, Y; Ren, L

    2014-06-01

    Purpose: To investigate the feasibility of using nanoparticle markers to validate liver tumor motion together with a deformation field map-based four dimensional (4D) cone-beam computed tomography (CBCT) reconstruction method. Methods: A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In this method, each phase of the 4D-CBCT is considered as a deformation of a prior CT volume. The DFM is solved by a motion modeling and free-form deformation (MM-FD) technique, using a data fidelity constraint and the deformation energy minimization. For liver imaging, the contrast of a liver tumor in on-board projections is low. A validation of liver tumor motion using implanted gold nanoparticles, along with the MM-FD deformation technique, is implemented to reconstruct on-board 4D-CBCT liver radiotherapy images. These nanoparticles were placed around the liver tumor to reflect the tumor positions in both CT simulation and on-board image acquisition. When reconstructing each phase of the 4D-CBCT, the migrations of the gold nanoparticles act as a constraint to regularize the deformation field, along with the data fidelity and the energy minimization constraints. In this study, multiple tumor diameters and positions were simulated within the liver for on-board 4D-CBCT imaging. The on-board 4D-CBCT reconstructed by the proposed method was compared with the "ground truth" image. Results: The preliminary data, based on the reconstruction developed for lung radiotherapy, suggest that the advanced reconstruction algorithm including the gold nanoparticle constraint will result in volume percentage differences (VPD) between lesions in images reconstructed by MM-FD and "ground truth" on-board images of 11.5% (± 9.4%) and a center-of-mass shift of 1.3 mm (± 1.3 mm) for liver radiotherapy. Conclusion: The advanced MM-FD technique, enforcing the additional constraints from gold nanoparticles, results in improved accuracy for reconstructing on-board 4D-CBCT of the liver tumor. Varian Medical Systems research grant.

  9. 40 CFR Table 2b to Subpart Zzzz of... - Operating Limitations for New and Reconstructed 2SLB and Compression Ignition Stationary RICE...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Reconstructed 2SLB and Compression Ignition Stationary RICE >500 HP Located at a Major Source of HAP Emissions, Existing Non-Emergency Compression Ignition Stationary RICE >500 HP, and New and Reconstructed 4SLB Burn Stationary RICE ≥250 HP Located at a Major Source of HAP Emissions 2b Table 2b to Subpart ZZZZ of Part 63...

  10. 40 CFR Table 1b to Subpart Zzzz of... - Operating Limitations for Existing, New, and Reconstructed SI 4SRB Stationary RICE >500 HP...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., New, and Reconstructed SI 4SRB Stationary RICE >500 HP Located at a Major Source of HAP Emissions 1b... Limitations for Existing, New, and Reconstructed SI 4SRB Stationary RICE >500 HP Located at a Major Source of... 15 percent O2 and using NSCR; a. maintain your catalyst so that the pressure drop across the catalyst...

  11. 40 CFR Table 1b to Subpart Zzzz of... - Operating Limitations for Existing, New, and Reconstructed SI 4SRB Stationary RICE >500 HP...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., New, and Reconstructed SI 4SRB Stationary RICE >500 HP Located at a Major Source of HAP Emissions 1b... Limitations for Existing, New, and Reconstructed SI 4SRB Stationary RICE >500 HP Located at a Major Source of... 15 percent O2 and using NSCR; a. maintain your catalyst so that the pressure drop across the catalyst...

  12. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.

  13. Heat source reconstruction from noisy temperature fields using a gradient anisotropic diffusion filter

    NASA Astrophysics Data System (ADS)

    Beitone, C.; Balandraud, X.; Delpueyo, D.; Grédiac, M.

    2017-01-01

    This paper presents a post-processing technique for noisy temperature maps based on a gradient anisotropic diffusion (GAD) filter in the context of heat source reconstruction. The aim is to reconstruct heat source maps from temperature maps measured using infrared (IR) thermography. Synthetic temperature fields corrupted by added noise are first considered. The GAD filter, which relies on a diffusion process, is optimized to retrieve as well as possible a heat source concentration in a two-dimensional plate. The influence of the dimensions and the intensity of the heat source concentration are discussed. The results obtained are also compared with two other types of filters: averaging filter and Gaussian derivative filter. The second part of this study presents an application for experimental temperature maps measured with an IR camera. The results demonstrate the relevancy of the GAD filter in extracting heat sources from noisy temperature fields.
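
    The GAD filter referred to above is a Perona-Malik style diffusion whose conductivity drops across strong gradients, so the edges of a heat source concentration are smoothed less than the noise. Below is a minimal sketch on a synthetic temperature map; the parameters and boundary handling are assumptions, not the paper's settings.

    ```python
    # Minimal sketch of a gradient anisotropic diffusion (Perona-Malik type)
    # step applied to a noisy 2-D temperature map before it is differentiated
    # to estimate heat sources. Periodic boundaries via np.roll (assumption).
    import numpy as np

    def anisotropic_diffusion(T, n_iter=50, kappa=0.1, dt=0.2):
        T = T.astype(float).copy()
        for _ in range(n_iter):
            # forward differences to the four neighbours
            dN = np.roll(T, -1, axis=0) - T
            dS = np.roll(T, 1, axis=0) - T
            dE = np.roll(T, -1, axis=1) - T
            dW = np.roll(T, 1, axis=1) - T
            # edge-stopping conductivities: small where gradients are large
            cN, cS, cE, cW = (np.exp(-(d / kappa) ** 2) for d in (dN, dS, dE, dW))
            T += dt * (cN * dN + cS * dS + cE * dE + cW * dW)
        return T

    # toy example: a smooth hot spot corrupted by noise
    rng = np.random.default_rng(1)
    y, x = np.mgrid[0:128, 0:128]
    temp = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 200.0)
    noisy = temp + 0.05 * rng.standard_normal(temp.shape)
    filtered = anisotropic_diffusion(noisy)
    print("residual RMS:", np.sqrt(np.mean((filtered - temp) ** 2)))
    ```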

  14. WE-G-207-04: Non-Local Total-Variation (NLTV) Combined with Reweighted L1-Norm for Compressed Sensing Based CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

    Purpose: Compressed sensing (CS) has been used for CT (4DCT/CBCT) reconstruction with few projections to reduce dose of radiation. Total-variation (TV) in L1-minimization (min.) with local information is the prevalent technique in CS, while it can be prone to noise. To address the problem, this work proposes to apply a new image processing technique, called non-local TV (NLTV), to CS based CT reconstruction, and incorporate reweighted L1-norm into it for more precise reconstruction. Methods: TV minimizes intensity variations by considering two local neighboring voxels, which can be prone to noise, possibly damaging the reconstructed CT image. NLTV, contrarily, utilizes more global information by computing a weight function of the current voxel relative to a surrounding search area. In fact, it might be challenging to obtain an optimal solution due to difficulty in defining the weight function with appropriate parameters. Introducing reweighted L1-min., designed for approximation to ideal L0-min., can reduce the dependence on defining the weight function, therefore improving accuracy of the solution. This work implemented the NLTV combined with reweighted L1-min. by the Split Bregman iterative method. For evaluation, a noisy digital phantom and pelvic CT images are employed to compare the quality of images reconstructed by TV, NLTV and reweighted NLTV. Results: In both cases, conventional and reweighted NLTV outperform TV min. in signal-to-noise ratio (SNR) and root-mean squared errors of the reconstructed images. Relative to conventional NLTV, NLTV with reweighted L1-norm was able to slightly improve SNR, while greatly increasing the contrast between tissues due to the additional iterative reweighting process. Conclusion: NLTV min. can provide more precise compressed sensing based CT image reconstruction by incorporating the reweighted L1-norm, while maintaining greater robustness to the noise effect than TV min.

  15. Atmospheric histories of halocarbons from analysis of Antarctic firn air: Methyl bromide, methyl chloride, chloroform, and dichloromethane

    NASA Astrophysics Data System (ADS)

    Trudinger, C. M.; Etheridge, D. M.; Sturrock, G. A.; Fraser, P. J.; Krummel, P. B.; McCulloch, A.

    2004-11-01

    We reconstruct atmospheric levels of methyl bromide (CH3Br), methyl chloride (CH3Cl), chloroform (CHCl3), and dichloromethane (CH2Cl2) back to before 1940 using measurements of air extracted from firn on Law Dome in Antarctica. The firn air at this site has a relatively narrow age spread, giving high time resolution reconstructions. The CH3Br reconstructions confirm previously measured firn records but with more temporal structure. Our CH3Cl reconstruction is slightly different from previous reconstructions, raising some questions about CH3Cl in the firn. Our reconstructions for CHCl3 and CH2Cl2 are the first published records of concentration prior to direct atmospheric measurements. A two-box atmospheric model is used to investigate the budgets of these gases. Much of the variation in CH3Cl can be explained by biomass burning emissions that increase up to 1980 and then are relatively stable apart from some high burning years such as 1997-1998. The CHCl3 firn reconstruction suggests that the anthropogenic source for CHCl3 is greater than previously thought, with human influence on the soil source a possible important contributor here. The CH2Cl2 firn reconstruction is consistent with industrial emission estimates based on audited sales data but suggests that the ocean source of CH2Cl2 is less than previously estimated.
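
    The two-box atmospheric model used above to interpret the firn records can be written as a pair of coupled balance equations, one per hemisphere, with an interhemispheric exchange term. Below is a schematic sketch; the emissions, lifetime, and exchange time are invented placeholders, not values from the paper.

    ```python
    # Illustrative two-box (NH/SH) atmospheric model: Euler integration of
    # dc/dt = E - c/tau +/- (c_other - c)/tau_exchange for each hemisphere.
    import numpy as np

    def two_box_model(emissions_nh, emissions_sh, lifetime_yr=0.5,
                      exchange_yr=1.0, dt=0.05, years=60):
        n = int(years / dt)
        c_nh = np.zeros(n); c_sh = np.zeros(n)
        for k in range(1, n):
            t = k * dt
            e_nh, e_sh = emissions_nh(t), emissions_sh(t)
            mix = (c_sh[k-1] - c_nh[k-1]) / exchange_yr
            c_nh[k] = c_nh[k-1] + dt * (e_nh - c_nh[k-1] / lifetime_yr + mix)
            c_sh[k] = c_sh[k-1] + dt * (e_sh - c_sh[k-1] / lifetime_yr - mix)
        return c_nh, c_sh

    # e.g. biomass-burning-like emissions that grow for 40 "years" then flatten
    nh = lambda t: 10.0 * min(t, 40.0) / 40.0
    sh = lambda t: 3.0 * min(t, 40.0) / 40.0
    c_nh, c_sh = two_box_model(nh, sh)
    print("final NH/SH concentrations:", c_nh[-1], c_sh[-1])
    ```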

  16. The Boomerang-shaped Pectoralis Major Musculocutaneous Flap for Reconstruction of Circular Defect of Cervical Skin

    PubMed Central

    Azuma, Shuchi; Arikawa, Masaki

    2017-01-01

    Summary: We report on a patient with a recurrence of oral cancer involving a cervical lymph node. The patient’s postexcision cervical skin defect was nearly circular in shape, and the size was about 12 cm in diameter. The defect was successfully reconstructed with a boomerang-shaped pectoralis major musculocutaneous flap whose skin paddle included multiple intercostal perforators of the internal mammary vessels. This flap design is effective for reconstructing an extensive neck skin defect and enables primary closure of the donor site with minimal deformity. PMID:29263975

  17. Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.

    PubMed

    Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha

    2017-03-01

    This paper presents a new accelerated fMRI reconstruction method, namely, the OptShrink LR + S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component has been estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component has been estimated using convex l1 minimization. The performance of the proposed method is compared with the existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.
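
    The reconstruction above splits the data into low-rank plus sparse components. The sketch below illustrates such an L + S split, but it substitutes plain singular-value soft-thresholding for the paper's non-convex OptShrink shrinkage and works on a fully sampled toy matrix, so it shows the decomposition idea rather than the published estimator.

    ```python
    # Toy low-rank + sparse (L + S) split of a space-time matrix via alternating
    # singular-value soft-thresholding and entrywise soft-thresholding.
    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def lr_plus_s(M, lam_l=1.0, lam_s=0.05, n_iter=50):
        L = np.zeros_like(M); S = np.zeros_like(M)
        for _ in range(n_iter):
            # low-rank update: soft-threshold singular values of the residual
            U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U * soft(sv, lam_l)) @ Vt
            # sparse update: entrywise soft-thresholding of what low rank misses
            S = soft(M - L, lam_s)
        return L, S

    rng = np.random.default_rng(2)
    truth_L = np.outer(rng.standard_normal(100), rng.standard_normal(40))   # rank 1
    truth_S = (rng.random((100, 40)) < 0.02) * 5.0                          # sparse spikes
    M = truth_L + truth_S + 0.01 * rng.standard_normal((100, 40))
    L, S = lr_plus_s(M)
    print("rank of L:", np.linalg.matrix_rank(L, tol=1e-3),
          "nonzeros in S:", int((S != 0).sum()))
    ```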

  18. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction algorithms derived from electrical resistivity tomography (ERT) are highly non-linear, sparse, and ill-posed. The inverse problem is much severe, when dealing with 3-D datasets that result in large sized matrices. Conventional gradient based techniques using L2 norm minimization with some sort of regularization can impose smoothness constraint on the solution. Compressed sensing (CS) is relatively new technique that takes the advantage of inherent sparsity in parameter space in one or the other form. If favorable conditions are met, CS was proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open source 3-D resistivity inversion tool using CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007) with CS as the inverse code. Discrete cosine transformation (DCT) function was used to induce model sparsity in orthogonal form. Two CS based algorithms viz., interior point method and two-step IST were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was proven to effectively reconstruct the sub-surface image with less computational cost. This was observed by a general increase in NRMSE from 0.5 in 10 iterations using gradient algorithm to 0.8 in 5 iterations using CS algorithms.

  19. Photogrammetric Analysis of Historical Image Repositories for Virtual Reconstruction in the Field of Digital Humanities

    NASA Astrophysics Data System (ADS)

    Maiwald, F.; Vietze, T.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.

    2017-02-01

    Historical photographs contain high density of information and are of great importance as sources in humanities research. In addition to the semantic indexing of historical images based on metadata, it is also possible to reconstruct geometric information about the depicted objects or the camera position at the time of the recording by employing photogrammetric methods. The approach presented here is intended to investigate (semi-) automated photogrammetric reconstruction methods for heterogeneous collections of historical (city) photographs and photographic documentation for the use in the humanities, urban research and history sciences. From a photogrammetric point of view, these images are mostly digitized photographs. For a photogrammetric evaluation, therefore, the characteristics of scanned analog images with mostly unknown camera geometry, missing or minimal object information and low radiometric and geometric resolution have to be considered. In addition, these photographs have not been created specifically for documentation purposes and so the focus of these images is often not on the object to be evaluated. The image repositories must therefore be subjected to a preprocessing analysis of their photogrammetric usability. Investigations are carried out on the basis of a repository containing historical images of the Kronentor ("crown gate") of the Dresden Zwinger. The initial step was to assess the quality and condition of available images determining their appropriateness for generating three-dimensional point clouds from historical photos using a structure-from-motion evaluation (SfM). Then, the generated point clouds were assessed by comparing them with current measurement data of the same object.

  20. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    PubMed

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix that is of a dimension dictated by the number of measurements, instead of by the number of imaging parameters. This increases the computation speed up to four times per iteration in most of the under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form and it is proven to be equivalent, not only analytically, but also numerically. Equivalent alternative forms for other minimization methods, like Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photo-multiplier tubes noise has enabled the use of the GLS method for reconstructing experimental data and showed a promise for better quantification of target in 3D optical imaging. Use of these new alternative forms becomes effective when the ratio of the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
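
    The efficiency argument above rests on the Sherman-Morrison-Woodbury (push-through) identity, which lets the GLS update be computed with a measurement-space inverse instead of a parameter-space one. The following generic numerical check (not the authors' code; all matrices are random stand-ins) shows the two forms agree.

    ```python
    # Numerical check that the parameter-space update
    #   dx = (J^T W J + lam*L)^-1 J^T W r          (NP x NP inverse)
    # equals the measurement-space form
    #   dx = (lam*L)^-1 J^T (J (lam*L)^-1 J^T + W^-1)^-1 r   (NM x NM inverse),
    # which is what makes under-determined 3-D DOT updates cheap.
    import numpy as np

    rng = np.random.default_rng(3)
    NM, NP = 30, 500                       # few measurements, many parameters
    J = rng.standard_normal((NM, NP))      # Jacobian (sensitivity matrix)
    W = np.diag(rng.random(NM) + 0.5)      # data-weight matrix
    L = np.diag(rng.random(NP) + 0.5)      # parameter-weight / regularization matrix
    lam = 0.1
    r = rng.standard_normal(NM)            # data-model misfit

    # parameter-space form: invert an NP x NP matrix
    dx_param = np.linalg.solve(J.T @ W @ J + lam * L, J.T @ W @ r)

    # measurement-space form: only an NM x NM inverse is needed
    Linv_Jt = np.linalg.solve(lam * L, J.T)
    dx_meas = Linv_Jt @ np.linalg.solve(J @ Linv_Jt + np.linalg.inv(W), r)

    print("max difference between the two forms:", np.abs(dx_param - dx_meas).max())
    ```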

  1. High-quality compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and guided filtering, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a minimization problem. The simulation and experimental results show that our method can obtain high ghost imaging quality in terms of PSNR and visual observation.
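
    A hedged sketch of the projected Landweber step used as the regularization stage of such a scheme follows; the measurement matrix is a generic random stand-in and the guided-filter denoising stage is omitted.

    ```python
    # Projected Landweber iteration: gradient step on ||y - Ax||^2 followed by
    # projection onto the nonnegative orthant.
    import numpy as np

    def projected_landweber(A, y, n_iter=200, step=None):
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from spectral norm
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + step * A.T @ (y - A @ x)         # Landweber gradient step
            x = np.clip(x, 0.0, None)                # projection onto x >= 0
        return x

    rng = np.random.default_rng(4)
    n, m = 256, 96                                   # undersampled: m < n patterns
    truth = np.zeros(n); truth[rng.choice(n, 8, replace=False)] = 1.0
    A = rng.standard_normal((m, n))                  # random speckle patterns
    y = A @ truth
    x_hat = projected_landweber(A, y)
    print("relative reconstruction error:",
          np.linalg.norm(x_hat - truth) / np.linalg.norm(truth))
    ```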

  2. Restoration of frontal contour with methyl methacrylate.

    PubMed

    Schultz, R C

    1979-10-01

    Of the various materials currently available for reconstruction of bony frontal deformities, bone cement (methyl methacrylate) has been judged to be superior in its simplicity, reliability, and aesthetic potential. It is uniquely suited to reconstruction of irregular defects of the forehead. Its biological characteristics, advantages, and hazards are presented along with the techniques of its use. Clinical examples illustrate the results obtained with minimal preparation, surgical time, and morbidity.

  3. Digital tomosynthesis (DTS) with a Circular X-ray tube: Its image reconstruction based on total-variation minimization and the image characteristics

    NASA Astrophysics Data System (ADS)

    Park, Y. O.; Hong, D. K.; Cho, H. S.; Je, U. K.; Oh, J. E.; Lee, M. S.; Kim, H. J.; Lee, S. H.; Jang, W. S.; Cho, H. M.; Choi, S. I.; Koo, Y. S.

    2013-09-01

    In this paper, we introduce an effective imaging system for digital tomosynthesis (DTS) with a circular X-ray tube, the so-called circular-DTS (CDTS) system, and its image reconstruction algorithm based on the total-variation (TV) minimization method for low-dose, high-accuracy X-ray imaging. Here, the X-ray tube is equipped with a series of cathodes distributed around a rotating anode, and the detector remains stationary throughout the image acquisition. We considered a TV-based reconstruction algorithm that exploited the sparsity of the image with substantially high image accuracy. We implemented the algorithm for the CDTS geometry and successfully reconstructed images of high accuracy. The image characteristics were investigated quantitatively by using some figures of merit, including the universal-quality index (UQI) and the depth resolution. For selected tomographic angles of 20, 40, and 60°, the corresponding UQI values in the tomographic view were estimated to be about 0.94, 0.97, and 0.98, and the depth resolutions were about 4.6, 3.1, and 1.2 voxels in full width at half maximum (FWHM), respectively. We expect the proposed method to be applicable to developing a next-generation dental or breast X-ray imaging system.

  4. Application of band-target entropy minimization to infrared emission spectroscopy and the reconstruction of pure component emissivities from thin films and liquid samples.

    PubMed

    Cheng, Shuying; Rajarathnam, D; Meiling, Tan; Garland, Marc

    2006-05-01

    Thermal emission spectral data sets were collected for a thin solid film (parafilm) and a thin liquid film (isopropanol) on the interval of 298-348 K. The measurements were performed using a conventional Fourier transform infrared (FT-IR) spectrometer with external optical bench and in-house-designed emission cell. Both DTGS and MCT detectors were used. The data sets were analyzed with band-target entropy minimization (BTEM), which is a pure component spectral reconstruction program. Pure component emissivities of the parafilm, isopropanol, and thermal background were all recovered without any a priori information. Furthermore, the emissivities were obtained with increased signal-to-noise ratios, and the signals due to absorbance of thermal radiation by gas-phase moisture and CO2 were significantly reduced. As expected, the MCT results displayed better signal-to-noise ratios than the DTGS results, but the latter results were still rather impressive given the low temperatures used in this study. Comparison is made with spectral reconstruction using the orthogonal projection approach-alternating least squares (OPA-ALS) technique. This contribution introduces the primary equation for emission spectral reconstruction using BTEM and discusses some of the unusual characteristics of thermal emission and their impact on the analysis.

  5. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman

    2017-02-01

    Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied as the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied on the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1, absorption coefficient: 0.1 cm-1) and tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
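
    The third step of the pipeline above is an MLEM refinement under a Poisson noise model. The following is a minimal, generic sketch of that multiplicative update; the system matrix, measurements, and starting estimate are random placeholders, not the paper's FMT setup.

    ```python
    # Generic MLEM update for Poisson data: x <- x * (A^T (y / Ax)) / (A^T 1).
    import numpy as np

    def mlem(A, y, x0, n_iter=50, eps=1e-12):
        x = x0.copy()
        sens = A.sum(axis=0)                       # sensitivity image A^T 1
        for _ in range(n_iter):
            proj = A @ x + eps
            x *= (A.T @ (y / proj)) / (sens + eps) # multiplicative Poisson ML update
        return x

    rng = np.random.default_rng(5)
    A = rng.random((120, 300))                     # forward (system) matrix
    truth = np.zeros(300); truth[[40, 41, 200]] = [3.0, 2.0, 5.0]
    y = rng.poisson(A @ truth).astype(float)       # Poisson-noisy measurements
    x0 = np.full(300, y.mean() / A.sum(axis=0).mean())  # flat, positive start
    x_hat = mlem(A, y, x0)
    print("three largest reconstructed voxels:", np.argsort(x_hat)[-3:])
    ```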

  6. Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment

    NASA Astrophysics Data System (ADS)

    Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.

    2007-05-01

    A pixel-level simulation software is described. It is composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF at a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineer's design program (Zemax). It describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelets decomposition and neural network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it on the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response tabulated on a grid of values of wavelength, position on sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization where the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.

  7. Recovering TMS-evoked EEG responses masked by muscle artifacts.

    PubMed

    Mutanen, Tuomas P; Kukkonen, Matleena; Nieminen, Jaakko O; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J

    2016-10-01

    Combined transcranial magnetic stimulation (TMS) and electroencephalography (EEG) often suffers from large muscle artifacts. Muscle artifacts can be removed using signal-space projection (SSP), but this can make the visual interpretation of the remaining EEG data difficult. We suggest using an additional step after SSP that we call source-informed reconstruction (SIR). SSP-SIR substantially improves the signal quality of artifactual TMS-EEG data, causing minimal distortion in the neuronal signal components. In the SSP-SIR approach, we first project out the muscle artifact using SSP. Utilizing an anatomical model and the remaining signal, we estimate an equivalent source distribution in the brain. Finally, we map the obtained source estimate onto the original signal space, again using anatomical information. This approach restores the neuronal signals in the sensor space and interpolates EEG traces onto the completely rejected channels. The introduced algorithm efficiently suppresses TMS-related muscle artifacts in EEG while retaining well the neuronal EEG topographies and signals. With the presented method, we can remove muscle artifacts from TMS-EEG data and recover the underlying brain responses without compromising the readability of the signals of interest. Copyright © 2016 Elsevier Inc. All rights reserved.
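
    The SSP stage described above removes the muscle artifact by projecting the data onto the orthogonal complement of an estimated artifact subspace; the source-informed re-mapping (SIR) step is not reproduced here. A toy sketch of just that projection, with invented channel counts and topographies:

    ```python
    # Signal-space projection: P = I - U U^T, with U an orthonormal basis of
    # the artifact subspace, applied to every time sample of the sensor data.
    import numpy as np

    def ssp_projector(artifact_topographies):
        U, _, _ = np.linalg.svd(artifact_topographies, full_matrices=False)
        return np.eye(U.shape[0]) - U @ U.T

    rng = np.random.default_rng(6)
    n_chan, n_samp = 64, 1000
    brain = 0.5 * rng.standard_normal((n_chan, n_samp))
    artifact_pattern = rng.standard_normal((n_chan, 2))            # two artifact topographies
    artifact = artifact_pattern @ (5.0 * rng.standard_normal((2, n_samp)))
    data = brain + artifact

    P = ssp_projector(artifact_pattern)
    cleaned = P @ data
    print("artifact power before/after projection:",
          float(np.mean(artifact ** 2)), float(np.mean((P @ artifact) ** 2)))
    ```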

  8. Outcome of a graduated minimally invasive facial reanimation in patients with facial paralysis.

    PubMed

    Holtmann, Laura C; Eckstein, Anja; Stähr, Kerstin; Xing, Minzhi; Lang, Stephan; Mattheis, Stefan

    2017-08-01

    Peripheral paralysis of the facial nerve is the most frequent of all cranial nerve disorders. Despite advances in facial surgery, the functional and aesthetic reconstruction of a paralyzed face remains a challenge. Graduated minimally invasive facial reanimation is based on a modular principle. According to the patients' needs, precondition, and expectations, the following modules can be performed: temporalis muscle transposition and facelift, nasal valve suspension, endoscopic brow lift, and eyelid reconstruction. Applying a concept of a graduated minimally invasive facial reanimation may help minimize surgical trauma and reduce morbidity. Twenty patients underwent a graduated minimally invasive facial reanimation. A retrospective chart review was performed with a follow-up examination between 1 and 8 months after surgery. The FACEgram software was used to calculate pre- and postoperative eyelid closure, the level of brows, nasal, and philtral symmetry as well as oral commissure position at rest and oral commissure excursion with smile. As a patient-oriented outcome parameter, the Glasgow Benefit Inventory questionnaire was applied. There was a statistically significant improvement in the postoperative score of eyelid closure, brow asymmetry, nasal asymmetry, philtral asymmetry as well as oral commissure symmetry at rest (p < 0.05). Smile evaluation revealed no significant change of oral commissure excursion. The mean Glasgow Benefit Inventory score indicated substantial improvement in patients' overall quality of life. If a primary facial nerve repair or microneurovascular tissue transfer cannot be applied, graduated minimally invasive facial reanimation is a promising option to restore facial function and symmetry at rest.

  9. Increasing reconstruction quality of diffractive optical elements displayed with LC SLM

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.

    2015-03-01

    Phase liquid crystal (LC) spatial light modulators (SLM) are actively used in various applications. However, the majority of scientific applications require stable phase modulation, which might be hard to achieve with commercially available SLMs due to their consumer origin. The use of a digital voltage addressing scheme leads to temporal phase fluctuations, which results in lower diffraction efficiency and reconstruction quality of displayed diffractive optical elements (DOE). Due to the high periodicity of the fluctuations, it should be possible to use knowledge of these fluctuations during DOE synthesis to minimize the negative effect. We synthesized DOEs using accurately measured phase fluctuations of the phase LC SLM "HoloEye PLUTO VIS" to minimize their negative impact on the reconstruction of displayed DOEs. Synthesis was conducted with the versatile direct search with random trajectory (DSRT) method in the following way. Before DOE synthesis began, the two-dimensional dependence of the SLM phase shift on the addressed signal level and the time from frame start was obtained. Then synthesis begins. First, an initial phase distribution is created. Second, a random trajectory of consecutive processing of all DOE elements is generated. Then the iterative process begins. Each DOE element sequentially has its value changed to one that provides a better value of the objective criterion, e.g. a lower deviation of the reconstructed image from the original one. If the current element value already provides the best objective criterion value, it is left unchanged. After all elements are processed, the iteration repeats until stagnation is reached. It is demonstrated that applying knowledge of the SLM phase fluctuations in DOE synthesis with the DSRT method leads to a noticeable increase in DOE reconstruction quality.
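
    As a rough illustration of the DSRT search loop described above (pixels visited along a random trajectory, each set to the quantized phase level that most improves an objective on the reconstructed far field), here is a toy sketch; the SLM fluctuation model, element count, and objective are simplifications, not the authors' implementation.

    ```python
    # Toy direct search with random trajectory over a small quantized phase DOE.
    import numpy as np

    def dsrt(target, levels=16, sweeps=3, seed=7):
        rng = np.random.default_rng(seed)
        n = target.shape[0]
        phase = rng.integers(0, levels, size=(n, n))          # initial random DOE
        phases = np.exp(2j * np.pi * np.arange(levels) / levels)

        def cost(ph):
            far = np.abs(np.fft.fft2(phases[ph])) ** 2        # far-field intensity
            far /= far.sum()
            return np.sum((far - target) ** 2)                # deviation from target

        best = cost(phase)
        for _ in range(sweeps):
            order = rng.permutation(n * n)                    # random trajectory
            for idx in order:
                i, j = divmod(idx, n)
                keep = phase[i, j]
                for cand in range(levels):                    # try every phase level
                    phase[i, j] = cand
                    c = cost(phase)
                    if c < best:
                        best, keep = c, cand
                phase[i, j] = keep                            # retain the best level
        return phase, best

    target = np.zeros((16, 16)); target[4, 4] = target[11, 11] = 0.5
    doe, final_cost = dsrt(target)
    print("final cost:", final_cost)
    ```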

  10. Tomographic reconstruction of ionospheric electron density during the storm of 5-6 August 2011 using multi-source data

    PubMed Central

    Tang, Jun; Yao, Yibin; Zhang, Liang; Kong, Jian

    2015-01-01

    The insufficiency of data is the essential reason for the ill-posed problem in the computerized ionospheric tomography (CIT) technique. Therefore, a method of integrating multi-source data is proposed. Currently, the multiple satellite navigation systems and various ionospheric observing instruments provide abundant data which can be employed to reconstruct ionospheric electron density (IED). In order to improve the vertical resolution of IED, we reconstruct the IED by integrating ground-based GPS data, occultation data from the LEO satellite, satellite altimetry data from Jason-1 and Jason-2, and ionosonde data. We compared the CIT results with incoherent scatter radar (ISR) observations and found that the multi-source data fusion was effective and reliable for reconstructing electron density, showing its superiority over CIT with GPS data alone. PMID:26266764

  11. SESAME: a software tool for the numerical dosimetric reconstruction of radiological accidents involving external sources and its application to the accident in Chile in December 2005.

    PubMed

    Huet, C; Lemosquet, A; Clairand, I; Rioual, J B; Franck, D; de Carlan, L; Aubineau-Lanièce, I; Bottollier-Depois, J F

    2009-01-01

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. This dose distribution can be assessed by physical dosimetric reconstruction methods. Physical dosimetric reconstruction can be achieved using experimental or numerical techniques. This article presents the laboratory-developed SESAME--Simulation of External Source Accident with MEdical images--tool specific to dosimetric reconstruction of radiological accidents through numerical simulations which combine voxel geometry and the radiation-material interaction MCNP(X) Monte Carlo computer code. The experimental validation of the tool using a photon field and its application to a radiological accident in Chile in December 2005 are also described.

  12. Tomographic reconstruction of ionospheric electron density during the storm of 5-6 August 2011 using multi-source data.

    PubMed

    Tang, Jun; Yao, Yibin; Zhang, Liang; Kong, Jian

    2015-08-12

    The insufficiency of data is the essential reason for the ill-posed problem in the computerized ionospheric tomography (CIT) technique. Therefore, a method of integrating multi-source data is proposed. Currently, the multiple satellite navigation systems and various ionospheric observing instruments provide abundant data which can be employed to reconstruct ionospheric electron density (IED). In order to improve the vertical resolution of IED, we reconstruct the IED by integrating ground-based GPS data, occultation data from the LEO satellite, satellite altimetry data from Jason-1 and Jason-2, and ionosonde data. We compared the CIT results with incoherent scatter radar (ISR) observations and found that the multi-source data fusion was effective and reliable for reconstructing electron density, showing its superiority over CIT with GPS data alone.

  13. Accurate sparse-projection image reconstruction via nonlocal TV regularization.

    PubMed

    Zhang, Yi; Zhang, Weihua; Zhou, Jiliu

    2014-01-01

    Sparse-projection image reconstruction is a useful approach to lower the radiation dose; however, the incompleteness of projection data will cause degeneration of imaging quality. As a typical compressive sensing method, total variation has obtained great attention on this problem. Suffering from the theoretical imperfection, total variation will produce blocky effect on smooth regions and blur edges. To overcome this problem, in this paper, we introduce the nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with new nonlocal total variation norm. The qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Comparing to other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and reserves structure information better.

  14. Tracheal reconstruction with autogenous jejunal microsurgical transfer.

    PubMed

    Jones, R E; Morgan, R F; Marcella, K L; Mills, S E; Kron, I L

    1986-06-01

    Tracheal defects due to stricture formation, tracheomalacia, and neoplasms can present difficult reconstructive problems. Tracheal defects were surgically created in 6 dogs and primarily reconstructed with microsurgical free tissue transfer of autogenous jejunal segments. Primary healing was accomplished in all dogs without severe air leakage or infection. Bronchoscopy demonstrated no substantial secretions or tracheal narrowing. Gross pathological examination of the trachea revealed no evidence of tracheal disruption or infection. Direct measurements revealed no major tracheal narrowing. Microscopic examination demonstrated normal jejunal mucosa with a minimal amount of inflammatory change at the margins of the reconstruction at 6 weeks. Microvascular free tissue transfer of jejunal segments to correct cervical tracheal defects can readily be accomplished with excellent healing and maintenance of the tracheal lumen in dogs.

  15. 40 CFR Table 5 to Subpart Zzzz of... - Initial Compliance With Emission Limitations and Operating Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... demonstrated initial compliance if . . . 1. New or reconstructed non-emergency 2SLB stationary RICE >500 HP located at a major source of HAP, new or reconstructed non-emergency 4SLB stationary RICE ≥250 HP located at a major source of HAP, non-emergency stationary CI RICE >500 HP located at a major source of HAP...

  16. Bayesian probabilistic approach for inverse source determination from limited and noisy chemical or biological sensor concentration measurements

    NASA Astrophysics Data System (ADS)

    Yee, Eugene

    2007-04-01

    Although a great deal of research effort has been focused on the forward prediction of the dispersion of contaminants (e.g., chemical and biological warfare agents) released into the turbulent atmosphere, much less work has been directed toward the inverse prediction of agent source location and strength from the measured concentration, even though the importance of this problem for a number of practical applications is obvious. In general, the inverse problem of source reconstruction is ill-posed and unsolvable without additional information. It is demonstrated that a Bayesian probabilistic inferential framework provides a natural and logically consistent method for source reconstruction from a limited number of noisy concentration data. In particular, the Bayesian approach permits one to incorporate prior knowledge about the source as well as additional information regarding both model and data errors. The latter enables a rigorous determination of the uncertainty in the inference of the source parameters (e.g., spatial location, emission rate, release time, etc.), hence extending the potential of the methodology as a tool for quantitative source reconstruction. A model (or, source-receptor relationship) that relates the source distribution to the concentration data measured by a number of sensors is formulated, and Bayesian probability theory is used to derive the posterior probability density function of the source parameters. A computationally efficient methodology for determination of the likelihood function for the problem, based on an adjoint representation of the source-receptor relationship, is described. Furthermore, we describe the application of efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) for sampling from the posterior distribution of the source parameters, the latter of which is required to undertake the Bayesian computation. The Bayesian inferential methodology for source reconstruction is validated against real dispersion data for two cases involving contaminant dispersion in highly disturbed flows over urban and complex environments where the idealizations of horizontal homogeneity and/or temporal stationarity in the flow cannot be applied to simplify the problem. Furthermore, the methodology is applied to the case of reconstruction of multiple sources.
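
    The record above casts source reconstruction as Bayesian inference over the source parameters and samples the posterior with MCMC. The sketch below illustrates that idea with a deliberately crude, assumed forward model (an isotropic 1/d^2 stand-in rather than the adjoint dispersion model of the paper) and a random-walk Metropolis sampler over location and emission rate.

    ```python
    # Random-walk Metropolis sampling of p(source location, rate | sensor data)
    # under a Gaussian likelihood and a flat prior on a bounded box.
    import numpy as np

    rng = np.random.default_rng(8)

    def forward(src_xy, rate, sensors):
        d2 = np.sum((sensors - src_xy) ** 2, axis=1) + 1.0
        return rate / d2                               # placeholder source-receptor model

    sensors = rng.uniform(0, 10, size=(12, 2))
    true_theta = np.array([6.0, 3.5, 20.0])            # x, y, emission rate
    clean = forward(true_theta[:2], true_theta[2], sensors)
    sigma = 0.05 * clean.std()
    data = clean + sigma * rng.standard_normal(clean.shape)

    def log_post(theta):
        x, y, q = theta
        if not (0 <= x <= 10 and 0 <= y <= 10 and 0 < q < 100):
            return -np.inf                             # outside the flat prior's support
        resid = data - forward(theta[:2], q, sensors)
        return -0.5 * np.sum(resid ** 2) / sigma ** 2

    theta = np.array([5.0, 5.0, 10.0])
    lp = log_post(theta)
    samples = []
    for _ in range(20000):
        prop = theta + rng.normal(scale=[0.3, 0.3, 1.0])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:        # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    samples = np.array(samples[5000:])                 # discard burn-in
    print("posterior mean (x, y, rate):", samples.mean(axis=0))
    ```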

  17. [Minimally invasive reconstruction of the posterolateral corner with simultaneous replacement of the anterior cruciate ligament for complex knee ligament injuries].

    PubMed

    Vega-España, E A; Vilchis-Sámano, H; Ruiz-Mejía, O

    2017-01-01

    To evaluate and describe the results of simultaneous reconstruction of the posterolateral complex (PLC) and the anterior cruciate ligament (ACL) with a minimally invasive technique. ACL and PLC reconstruction was performed in seven patients using the technique described, in the period from March to November 2012. All patients were evaluated six months after the procedure using the IKDC and IKSS subjective tests. Their return to work activities and their level of satisfaction were assessed. Six male patients and one female patient, ranging in age between 26 and 46 years, were evaluated. The injuries were mostly caused by sports-related accidents. All patients were economically active and required an average period of three months of disability. The assessment and outcomes at six months, according to the IKDC scale, were: one patient with IKDC A, four with IKDC B, one with C, and one with D. On the subjective IKSS scale, 80% averaged a knee stability score of over 90 points; one patient scored 100 points and another 70 points.

  18. Use of a Bioactive Scaffold to Stimulate ACL Healing Also Minimizes Post-traumatic Osteoarthritis after Surgery

    PubMed Central

    Murray, Martha M.; Fleming, Braden C.

    2013-01-01

    Background: While ACL reconstruction is the treatment gold standard for ACL injury, it does not reduce the risk of post-traumatic osteoarthritis. Therefore, new treatments that minimize this postoperative complication are of interest. Bio-enhanced ACL repair, in which a bioactive scaffold is used to stimulate healing of an ACL transection, has shown considerable promise in short term studies. The long-term results of this technique and the effects of the bio-enhancement on the articular cartilage have not been previously evaluated in a large animal model. Hypothesis: 1) The structural (tensile) properties of the porcine ACL at 6 and 12 months after injury are similar when treated with bio-enhanced ACL repair, bio-enhanced ACL reconstruction, or conventional ACL reconstruction, and all treatments yield results superior to untreated ACL transection. 2) After one year, macroscopic cartilage damage following bio-enhanced ACL repair is similar to bio-enhanced ACL reconstruction and less than conventional ACL reconstruction and untreated ACL transection. Study Design: Controlled laboratory study (porcine model). Methods: Sixty-two Yucatan mini-pigs underwent ACL transection and randomization to four experimental groups: 1) no treatment, 2) conventional ACL reconstruction, 3) “bio-enhanced” ACL reconstruction using a bioactive scaffold, and 4) “bio-enhanced” ACL repair using a bioactive scaffold. The biomechanical properties of the ligament or graft and macroscopic assessments of the cartilage surfaces were performed after 6 and 12 months of healing. Results: The structural properties (i.e., linear stiffness, yield and maximum loads) of the ligament following bio-enhanced ACL repair were not significantly different from bio-enhanced ACL reconstruction or conventional ACL reconstruction, but were significantly greater than untreated ACL transection after 12 months of healing. Macroscopic cartilage damage after bio-enhanced ACL repair was significantly less than untreated ACL transection and bio-enhanced ACL reconstruction, and there was a strong trend (p=.068) that it was less than conventional ACL reconstruction in the porcine model at 12 months. Conclusions: Bio-enhanced ACL repair produces a ligament that is biomechanically similar to an ACL graft and provides chondroprotection to the joint following ACL surgery. Clinical Relevance: Bio-enhanced ACL repair may provide a new less invasive treatment option that reduces cartilage damage following joint injury. PMID:23857883

  19. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70–0.75. The boundaries of the interval with the most probable correlation values are 0.6–0.9 for the decay time of 240 h and 0.5–0.95 for the decay time of 12 h. The best results of source reconstruction can be expected for the trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint on potential source areas.
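
    Of the three trajectory statistical methods compared above, the PSCF is the simplest to state: for each grid cell, the fraction of back-trajectory endpoint visits that are associated with high receptor concentrations. A toy sketch with synthetic trajectories follows; all quantities are invented placeholders.

    ```python
    # Potential source contribution function on a regular grid:
    # PSCF(i, j) = m_high(i, j) / n_total(i, j).
    import numpy as np

    rng = np.random.default_rng(9)
    n_traj, n_steps, ngrid = 500, 40, 20
    # back-trajectory endpoints on a [0, 1) x [0, 1) domain, one track per arrival
    tracks = np.cumsum(0.02 * rng.standard_normal((n_traj, n_steps, 2)), axis=1) % 1.0
    conc = rng.lognormal(mean=0.0, sigma=0.5, size=n_traj)        # receptor concentrations
    high = conc > np.percentile(conc, 75)                         # "polluted" arrivals

    cells = np.minimum((tracks * ngrid).astype(int), ngrid - 1)
    n_total = np.zeros((ngrid, ngrid))
    m_high = np.zeros((ngrid, ngrid))
    for t in range(n_traj):
        for i, j in cells[t]:
            n_total[i, j] += 1
            if high[t]:
                m_high[i, j] += 1

    pscf = np.where(n_total > 0, m_high / np.maximum(n_total, 1), 0.0)
    print("cells with PSCF > 0.5:", int((pscf > 0.5).sum()))
    ```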

  20. Minimally Invasive Component Separation Results in Fewer Wound-Healing Complications than Open Component Separation for Large Ventral Hernia Repairs

    PubMed Central

    Ghali, Shadi; Turza, Kristin C; Baumann, Donald P; Butler, Charles E

    2014-01-01

    BACKGROUND Minimally invasive component separation (CS) with inlay bioprosthetic mesh (MICSIB) is a recently developed technique for abdominal wall reconstruction that preserves the rectus abdominis perforators and minimizes subcutaneous dead space using limited-access tunneled incisions. We hypothesized that MICSIB would result in better surgical outcomes than would conventional open CS. STUDY DESIGN All consecutive patients who underwent CS (open or minimally invasive) with inlay bioprosthetic mesh for ventral hernia repair from 2005 to 2010 were included in a retrospective analysis of prospectively collected data. Surgical outcomes including wound-healing complications, hernia recurrences, and abdominal bulge/laxity rates were compared between patient groups based on the type of CS repair: MICSIB or open. RESULTS Fifty-seven patients who underwent MICSIB and 50 who underwent open CS were included. The mean follow-ups were 15.2±7.7 months and 20.7±14.3 months, respectively. The mean fascial defect size was significantly larger in the MICSIB group (405.4±193.6 cm2 vs. 273.8±186.8 cm2; p =0.002). The incidences of skin dehiscence (11% vs. 28%; p=0.011), all wound-healing complications (14% vs. 32%; p=0.026), abdominal wall laxity/bulge (4% vs. 14%; p=0.056), and hernia recurrence (4% vs. 8%; p=0.3) were lower in the MICSIB group than in the open CS group. CONCLUSIONS MICSIB resulted in fewer wound-healing complications than did open CS used for complex abdominal wall reconstructions. These findings are likely attributable to the preservation of paramedian skin vascularity and reduction in subcutaneous dead space with MICSIB. MICSIB should be considered for complex abdominal wall reconstructions, particularly in patients at increased risk of wound-healing complications. PMID:22521439

  1. The chimeric mapping problem: algorithmic strategies and performance evaluation on synthetic genomic data.

    PubMed

    Greenberg, D; Istrail, S

    1994-09-01

    The Human Genome Project requires better software for the creation of physical maps of chromosomes. Current mapping techniques involve breaking large segments of DNA into smaller, more-manageable pieces, gathering information on all the small pieces, and then constructing a map of the original large piece from the information about the small pieces. Unfortunately, in the process of breaking up the DNA some information is lost and noise of various types is introduced; in particular, the order of the pieces is not preserved. Thus, the map maker must solve a combinatorial problem in order to reconstruct the map. Good software is indispensable for quick, accurate reconstruction. The reconstruction is complicated by various experimental errors. A major source of difficulty--which seems to be inherent to the recombination technology--is the presence of chimeric DNA clones. It is fairly common for two disjoint DNA pieces to form a chimera, i.e., a fusion of two pieces which appears as a single piece. Attempts to order chimeras will fail unless they are algorithmically divided into their constituent pieces. Despite consensus within the genomic mapping community of the critical importance of correcting chimerism, algorithms for solving the chimeric clone problem have received only passing attention in the literature. Based on a model proposed by Lander (1992a, b), this paper presents the first algorithms for analyzing chimerism. We construct physical maps in the presence of chimerism by creating optimization functions whose minimizations correlate with map quality. Despite the fact that these optimization functions are invariably NP-complete, our algorithms are guaranteed to produce solutions which are close to the optimum. The practical import of using these algorithms depends on the strength of the correlation of the function to the map quality as well as on the accuracy of the approximations. We employ two fundamentally different optimization functions as a means of avoiding biases likely to decorrelate the solutions from the desired map. Experiments on simulated data show that both our algorithm which minimizes the number of chimeric fragments in a solution and our algorithm which minimizes the maximum number of fragments per clone in a solution do, in fact, correlate to high quality solutions. Furthermore, tests on simulated data using parameters set to mimic real experiments show that the algorithms have the potential to find high quality solutions with real data. We plan to test our software against real data from the Whitehead Institute and from Los Alamos Genomic Research Center in the near future.

  2. Method for image reconstruction of moving radionuclide source distribution

    DOEpatents

    Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick

    2012-12-18

    A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.

  3. 40 CFR 63.1346 - Standards for new or reconstructed raw material dryers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 11 2010-07-01 2010-07-01 true Standards for new or reconstructed raw... Industry Emission Standards and Operating Limits § 63.1346 Standards for new or reconstructed raw material dryers. (a) New or reconstructed raw material dryers located at facilities that are major sources can not...

  4. Using Local History, Primary Source Material, and Comparative History to Teach Reconstruction.

    ERIC Educational Resources Information Center

    Adomanis, James F.

    1989-01-01

    Suggests using local history, primary source material, and comparative history to alleviate the boredom most students experience when studying the Reconstruction period of U.S. history. Provides an example of comparative history usage through a discussion of ante-bellum Maryland and the history of Liberia. (KO)

  5. 40 CFR Table 4 to Subpart Eeee of... - Work Practice Standards

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards for Hazardous Air Pollutants: Organic Liquids Distribution (Non-Gasoline) Pt. 63, Subpt. EEEE... at an existing, reconstructed, or new affected source meeting any set of tank capacity and organic..., reconstructed, or new affected source meeting any set of tank capacity and organic HAP vapor pressure criteria...

  6. 40 CFR 63.345 - Provisions for new and reconstructed sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... electroplating, or chromium anodizing); (viii) A description of the air pollution control technique to be used to... National Emission Standards for Chromium Emissions From Hard and Decorative Chromium Electroplating and Chromium Anodizing Tanks § 63.345 Provisions for new and reconstructed sources. (a) This section identifies...

  7. 40 CFR 63.345 - Provisions for new and reconstructed sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... electroplating, or chromium anodizing); (viii) A description of the air pollution control technique to be used to... National Emission Standards for Chromium Emissions From Hard and Decorative Chromium Electroplating and Chromium Anodizing Tanks § 63.345 Provisions for new and reconstructed sources. (a) This section identifies...

  8. 40 CFR 63.345 - Provisions for new and reconstructed sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... completion dates of the construction or reconstruction; (vi) The anticipated date of (initial) startup of the affected source; (vii) The type of process operation to be performed (hard or decorative chromium... startup had not occurred before January 25, 1995, the notification shall be submitted as soon as...

  9. 40 CFR 63.345 - Provisions for new and reconstructed sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... completion dates of the construction or reconstruction; (vi) The anticipated date of (initial) startup of the affected source; (vii) The type of process operation to be performed (hard or decorative chromium... startup had not occurred before January 25, 1995, the notification shall be submitted as soon as...

  10. 40 CFR 63.345 - Provisions for new and reconstructed sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... completion dates of the construction or reconstruction; (vi) The anticipated date of (initial) startup of the affected source; (vii) The type of process operation to be performed (hard or decorative chromium... startup had not occurred before January 25, 1995, the notification shall be submitted as soon as...

  11. In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie

    2015-03-01

    Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality, which enables non-invasive real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol and possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white light surface images and bioluminescent images of a mouse. Then the white light images were applied to an approximate surface model to generate a high quality textured 3D surface reconstruction of the mouse. After that we integrated multi-view luminescent images based on the previous reconstruction, and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved. The distance error between the actual and reconstructed internal source was decreased by 0.184 mm.

  12. Numerical reconstruction of tsunami source using combined seismic, satellite and DART data

    NASA Astrophysics Data System (ADS)

    Krivorotko, Olga; Kabanikhin, Sergey; Marinin, Igor

    2014-05-01

    Recent tsunamis, for instance, in Japan (2011), in Sumatra (2004), and at the Indian coast (2004) showed that a system for producing exact and timely information about tsunamis is of vital importance. Numerical simulation is an effective instrument for providing such information. Bottom relief characteristics and the initial perturbation data (a tsunami source) are required for the direct simulation of tsunamis. The seismic data about the source are usually obtained in a few tens of minutes after an event has occurred (the seismic wave velocity being about five hundred kilometres per minute, while the velocity of tsunami waves is less than twelve kilometres per minute). A difference in the arrival times of seismic and tsunami waves can be used when operationally refining the tsunami source parameters and modelling expected tsunami wave height on the shore. The most suitable physical models related to tsunami simulation are based on the shallow water equations. The problem of identifying the parameters of a tsunami source using additional measurements of a passing wave is called the inverse tsunami problem. We investigate three different inverse problems of determining a tsunami source using three different types of additional data: Deep-ocean Assessment and Reporting of Tsunamis (DART) measurements, satellite wave-form images, and seismic data. These problems are severely ill-posed. We apply regularization techniques, such as Fourier expansion, truncated singular value decomposition, and numerical regularization, to control the degree of ill-posedness. The algorithm for selecting the number of retained singular values of the inverse problem operator, chosen consistently with the error level in the measured data, is described and analyzed. In numerical experiments we used gradient methods (Landweber iteration and the conjugate gradient method) for solving inverse tsunami problems. Gradient methods are based on minimizing the corresponding misfit function. To calculate the gradient of the misfit function, the adjoint problem is solved. Conservative finite-difference schemes for solving the direct and adjoint problems in the shallow water approximation are constructed. Results of numerical experiments of the tsunami source reconstruction are presented and discussed. We show that using a combination of three different types of data allows one to increase the stability and efficiency of tsunami source reconstruction. The non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction), in collaboration with the Informap software development department, developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, coastal zone floods, and risk estimates for coastal constructions at wave run-ups and earthquakes. The special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. This work was supported by the Russian Foundation for Basic Research (project No. 12-01-00773 'Theory and Numerical Methods for Solving Combined Inverse Problems of Mathematical Physics') and the interdisciplinary project of SB RAS 14 'Inverse Problems and Applications: Theory, Algorithms, Software'.
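
    One of the regularization strategies named above is truncated SVD with the truncation level tied to the data error. A generic sketch of that rule on a random ill-conditioned stand-in operator (not the shallow-water forward model) follows.

    ```python
    # Truncated SVD solution of y = A x, keeping singular components until the
    # data misfit drops to the assumed noise level (discrepancy-style rule).
    import numpy as np

    def tsvd_solve(A, y, noise_level):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        coeffs = U.T @ y
        for k in range(1, len(s) + 1):
            x_k = Vt[:k].T @ (coeffs[:k] / s[:k])
            if np.linalg.norm(A @ x_k - y) <= noise_level:   # stop once misfit ~ noise
                return x_k, k
        return x_k, len(s)

    rng = np.random.default_rng(10)
    A = rng.standard_normal((80, 80)) @ np.diag(0.9 ** np.arange(80))  # ill-conditioned
    truth = np.sin(np.linspace(0, 3 * np.pi, 80))
    noise = 0.01 * rng.standard_normal(80)
    y = A @ truth + noise
    x_hat, k = tsvd_solve(A, y, np.linalg.norm(noise))
    rel_err = np.linalg.norm(x_hat - truth) / np.linalg.norm(truth)
    print(f"kept {k} singular values, relative error {rel_err:.3f}")
    ```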

  13. Failed medial patellofemoral ligament reconstruction: Causes and surgical strategies

    PubMed Central

    Sanchis-Alfonso, Vicente; Montesinos-Berry, Erik; Ramirez-Fuentes, Cristina; Leal-Blanquet, Joan; Gelber, Pablo E; Monllau, Joan Carles

    2017-01-01

    Patellar instability is a common clinical problem encountered by orthopedic surgeons specializing in the knee. For patients with chronic lateral patellar instability, the standard surgical approach is to stabilize the patella through a medial patellofemoral ligament (MPFL) reconstruction. Foreseeably, an increasing number of revision surgeries of the reconstructed MPFL will be seen in upcoming years. In this paper, the causes of failed MPFL reconstruction are analyzed: (1) incorrect surgical indication or inappropriate surgical technique/patient selection; (2) a technical error; and (3) an incorrect assessment of the concomitant risk factors for instability. An understanding of the anatomy and biomechanics of the MPFL and cautiousness with the imaging techniques while favoring clinical over radiological findings and the use of common sense to determine the adequate surgical technique for each particular case, are critical to minimizing MPFL surgery failure. Additionally, our approach to dealing with failure after primary MPFL reconstruction is also presented. PMID:28251062

  14. Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2010-01-01

    Dynamic PET imaging performs a sequence of data acquisitions in order to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State-space H-infinity approaches have proved to be robust methods for PET image reconstruction; however, temporal constraints are not considered during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical use because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem in which a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H-infinity filter is developed to significantly improve the computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.
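
    As a rough illustration of how a radioisotope decay model can act as a temporal constraint, the sketch below uses a plain scalar Kalman filter as a simplified stand-in for the paper's steady-state H-infinity filter; the decay constant, frame duration and noise levels are assumed values, not taken from the paper.

      import numpy as np

      # Simplified stand-in for a steady-state robust filter: a per-voxel
      # scalar Kalman filter whose state transition is the radioisotope
      # decay model a_{k+1} = exp(-lambda * dt) * a_k.
      decay_const = np.log(2) / 109.8        # F-18 half-life in minutes (assumed tracer)
      dt = 2.0                               # frame duration in minutes (assumed)
      F = np.exp(-decay_const * dt)          # temporal (decay) constraint
      Q, R = 1e-3, 0.05                      # process / measurement noise (assumed)

      def decay_constrained_filter(frames, a0=0.0, p0=1.0):
          """Recursive activity estimate for one voxel across dynamic frames."""
          a, p = a0, p0
          estimates = []
          for z in frames:
              a, p = F * a, F * p * F + Q          # predict with the decay model
              k = p / (p + R)                      # Kalman gain
              a, p = a + k * (z - a), (1.0 - k) * p  # update with the measured frame
              estimates.append(a)
          return np.array(estimates)

      rng = np.random.default_rng(1)
      true_activity = np.exp(-decay_const * dt * np.arange(20))
      noisy_frames = true_activity + 0.2 * rng.standard_normal(20)
      print(decay_constrained_filter(noisy_frames)[:5])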

  15. Convergence optimization of parametric MLEM reconstruction for estimation of Patlak plot parameters.

    PubMed

    Angelis, Georgios I; Thielemans, Kris; Tziortzi, Andri C; Turkheimer, Federico E; Tsoumpas, Charalampos

    2011-07-01

    In dynamic positron emission tomography, many researchers have attempted to exploit kinetic models within reconstruction such that parametric images are estimated directly from the measurements. This work studies a direct parametric maximum likelihood expectation maximization algorithm applied to [(18)F]DOPA data using a reference-tissue input function. We use a modified version for direct reconstruction with a gradually descending scheme of subsets (i.e., 18-6-1), initialized with the FBP parametric image for faster convergence and higher accuracy. The results, compared with analytic reconstructions, show quantitative robustness (i.e., minimal bias) and clinical reproducibility within six human acquisitions in the region of clinical interest. Bland-Altman plots for all the studies showed sufficient quantitative agreement between the directly reconstructed parametric maps and the indirect FBP (-0.035x + 0.48E-5). Copyright © 2011 Elsevier Ltd. All rights reserved.
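
    For readers unfamiliar with the Patlak model targeted by the direct reconstruction, the following indirect (post-reconstruction) sketch estimates the influx slope Ki from tissue and reference-region time-activity curves; the curves and the equilibration time t* are synthetic assumptions, and the paper itself estimates the parameters directly from projections rather than from reconstructed curves.

      import numpy as np

      def patlak_slope(c_tissue, c_ref, times, t_star):
          """Indirect Patlak estimate: slope (Ki) and intercept from a
          reference-region input function, using frames with t >= t_star."""
          cum_ref = np.concatenate(([0.0], np.cumsum(
              0.5 * (c_ref[1:] + c_ref[:-1]) * np.diff(times))))  # trapezoid integral
          mask = (times >= t_star) & (c_ref > 0)
          x = cum_ref[mask] / c_ref[mask]          # "normalized time"
          y = c_tissue[mask] / c_ref[mask]
          slope, intercept = np.polyfit(x, y, 1)
          return slope, intercept

      # Synthetic curves (illustrative only).
      times = np.linspace(0.5, 90.0, 40)                   # minutes
      c_ref = np.exp(-times / 30.0) + 0.2                  # reference-region curve
      cum = np.concatenate(([0.0], np.cumsum(0.5 * (c_ref[1:] + c_ref[:-1]) * np.diff(times))))
      ki_true = 0.01
      c_tissue = ki_true * cum + 0.05 * c_ref              # irreversible-uptake model
      print(patlak_slope(c_tissue, c_ref, times, t_star=20.0))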

  16. Dynamic Reconstruction Algorithm of Three-Dimensional Temperature Field Measurement by Acoustic Tomography

    PubMed Central

    Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.

    2017-01-01

    Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which consider only the measurement information. A dynamic model for three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed that considers both the acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses the measurement information and the spatial constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better than those of static algorithms such as the least-squares method, the algebraic reconstruction technique and standard Tikhonov regularization. An effective method is thus provided for temperature field reconstruction by acoustic tomography. PMID:28895930
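
    Setting aside the robust-estimation and tunneling details, the core idea of fusing acoustic measurements with the field's dynamic evolution can be sketched as a single regularized least-squares step; the path matrix, evolution model and weights below are toy assumptions, not the paper's model.

      import numpy as np

      def dynamic_step(A, b, x_prev, F, alpha, beta, L):
          """One time step of a fused objective (toy stand-in): minimize
              ||A x - b||^2 + alpha * ||x - F x_prev||^2 + beta * ||L x||^2,
          i.e. measurement fit + dynamic-evolution prior + spatial smoothness."""
          lhs = A.T @ A + alpha * np.eye(A.shape[1]) + beta * (L.T @ L)
          rhs = A.T @ b + alpha * (F @ x_prev)
          return np.linalg.solve(lhs, rhs)

      # Toy sizes: 16 acoustic time-of-flight measurements, 25 unknown temperatures.
      rng = np.random.default_rng(2)
      n, m = 25, 16
      A = rng.random((m, n))                              # acoustic path matrix (toy)
      F = np.eye(n)                                       # assumed evolution model
      L = np.eye(n) - np.roll(np.eye(n), 1, axis=1)       # crude difference operator
      x_prev = np.full(n, 300.0)                          # previous estimate (K)
      x_true = 300.0 + 5.0 * rng.random(n)
      b = A @ x_true + 0.01 * rng.standard_normal(m)
      print(dynamic_step(A, b, x_prev, F, alpha=0.5, beta=0.1, L=L)[:5])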

  17. Spectral CT Reconstruction with Image Sparsity and Spectral Mean

    PubMed Central

    Zhang, Yi; Xi, Yan; Yang, Qingsong; Cong, Wenxiang; Zhou, Jiliu

    2017-01-01

    Photon-counting detectors can acquire x-ray intensity data in different energy bins. The signal-to-noise ratio of the resulting raw data in each energy bin is generally low due to the narrow bin width and quantum noise. To address this problem, here we propose an image reconstruction approach for spectral CT that simultaneously reconstructs the x-ray attenuation coefficients in all energy bins. Because the measured spectral data are highly correlated among the x-ray energy bins, intra-image sparsity and inter-image similarity are important prior knowledge for image reconstruction. Inspired by this observation, the total variation (TV) and spectral mean (SM) measures are combined to improve the quality of the reconstructed images. For this purpose, a linear mapping function is used to minimize image differences between energy bins. The split Bregman technique is applied to perform image reconstruction. Our numerical and experimental results show that the proposed algorithm outperforms competing iterative algorithms in this context. PMID:29034267

  18. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.
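
    The alternating-minimization idea behind blind compressed sensing, alternately updating the sparse coefficients and the unknown dictionary, can be sketched as follows; the ISTA sparse-coding step and the least-squares dictionary update with column normalization are illustrative choices, not the authors' exact algorithm.

      import numpy as np

      def soft_threshold(z, t):
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def blind_alt_min(Y, n_atoms, lam=0.1, outer=20, ista=30):
          """Toy alternating minimization for Y ~ D @ X with unknown dictionary D
          and sparse coefficient matrix X."""
          rng = np.random.default_rng(0)
          m, n = Y.shape
          D = rng.standard_normal((m, n_atoms))
          D /= np.linalg.norm(D, axis=0, keepdims=True)
          X = np.zeros((n_atoms, n))
          for _ in range(outer):
              # Sparse-coding step: ISTA on 0.5*||Y - D X||^2 + lam*||X||_1.
              step = 1.0 / np.linalg.norm(D, 2) ** 2
              for _ in range(ista):
                  X = soft_threshold(X - step * D.T @ (D @ X - Y), step * lam)
              # Dictionary step: least squares, then renormalize the columns.
              D = Y @ np.linalg.pinv(X)
              norms = np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
              D /= norms
              X *= norms.T
          return D, X

      rng = np.random.default_rng(3)
      D_true = rng.standard_normal((20, 30))
      X_true = rng.standard_normal((30, 100)) * (rng.random((30, 100)) < 0.1)
      Y = D_true @ X_true
      D_hat, X_hat = blind_alt_min(Y, n_atoms=30)
      print(np.linalg.norm(Y - D_hat @ X_hat) / np.linalg.norm(Y))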

  19. The possibility of biomasses and coal co-firing in the Czech Republic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juchelkova, D.

    1998-07-01

    The present state of the environment in the Czech Republic is influenced by many factors, one of them being the quality of fuel used in energy sources. The greatest share is taken by coal, burned in low-efficiency, large power station blocks without a significant effort to reduce the emission of pollutants. Several provisions can be used to improve the environment: (1) primary measures--changing the fuel burned, minimizing pollutant formation during the combustion process, reconstructing a significant part of the combustion equipment, and others; or (2) secondary measures--trapping pollutants. Changing the fuel must, however, be done while minimizing outgoing pollutants; it should not burden the surroundings with other undesirable influences and must provide the necessary output. One possibility receiving attention worldwide is the burning of biomass. On its own this solution has a high investment cost (special combustion equipment must be built), but co-firing suitable forms of biomass together with coal directly in the existing combustion equipment has been identified as a possible solution to this problem.

  20. Investigation on the reproduction performance versus acoustic contrast control in sound field synthesis.

    PubMed

    Bai, Mingsian R; Wen, Jheng-Ciang; Hsu, Hoshen; Hua, Yi-Hsin; Hsieh, Yu-Hao

    2014-10-01

    A sound reconstruction system is proposed for audio reproduction with an extended sweet spot and reduced reflections. An equivalent source method (ESM)-based sound field synthesis (SFS) approach, with the aid of dark zone minimization, is adopted in the study. Conventional SFS based on the free-field assumption suffers from synthesis error due to boundary reflections. To tackle the problem, the proposed system utilizes convex optimization to design array filters with both reproduction performance and acoustic contrast taken into consideration. Control points are deployed in the dark zone to minimize the reflections from the walls. Two approaches are employed to constrain the pressure and velocity in the dark zone. Pressure matching error (PME) and acoustic contrast (AC) are used as performance measures in simulations and experiments for a rectangular loudspeaker array. Perceptual Evaluation of Audio Quality (PEAQ) is also used to assess the audio reproduction quality. The results show that the pressure-constrained (PC) method yields better acoustic contrast, but poorer reproduction performance, than the pressure-velocity constrained (PVC) method. A subjective listening test also indicates that the PVC method is the preferred method in a live room.
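
    For intuition, the trade-off between pressure matching in the bright zone and suppression in the dark zone can be written as a closed-form regularized least-squares problem; this is a simplified stand-in for the paper's convex-optimization filter design, and the transfer matrices below are random placeholders.

      import numpy as np

      def pressure_matching_filters(G_bright, G_dark, p_target, mu=1.0, reg=1e-3):
          """Loudspeaker weights q minimizing
              ||G_bright q - p_target||^2 + mu*||G_dark q||^2 + reg*||q||^2,
          i.e. bright-zone pressure matching with a dark-zone penalty."""
          A = (G_bright.conj().T @ G_bright
               + mu * G_dark.conj().T @ G_dark
               + reg * np.eye(G_bright.shape[1]))
          return np.linalg.solve(A, G_bright.conj().T @ p_target)

      # Toy setup: 16 loudspeakers, 24 bright-zone and 24 dark-zone control points.
      rng = np.random.default_rng(4)
      n_src, n_b, n_d = 16, 24, 24
      G_b = rng.standard_normal((n_b, n_src)) + 1j * rng.standard_normal((n_b, n_src))
      G_d = rng.standard_normal((n_d, n_src)) + 1j * rng.standard_normal((n_d, n_src))
      p_des = np.exp(1j * rng.uniform(0, 2 * np.pi, n_b))   # desired bright-zone field
      q = pressure_matching_filters(G_b, G_d, p_des, mu=5.0)
      contrast = np.mean(np.abs(G_b @ q) ** 2) / np.mean(np.abs(G_d @ q) ** 2)
      print(10 * np.log10(contrast))   # acoustic contrast in dB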

  1. Planck 2015 results: XXII. A map of the thermal Sunyaev-Zeldovich effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghanim, N.; Arnaud, M.; Ashdown, M.

    In this article, we have constructed all-sky Compton parameter maps, y-maps, of the thermal Sunyaev-Zeldovich (tSZ) effect by applying specifically tailored component separation algorithms to the 30 to 857 GHz frequency channel maps from the Planck satellite. These reconstructed y-maps are delivered as part of the Planck 2015 release. The y-maps are characterized in terms of noise properties and residual foreground contamination, mainly thermal dust emission at large angular scales, and cosmic infrared background and extragalactic point sources at small angular scales. Specific masks are defined to minimize foreground residuals and systematics. Using these masks, we compute the y-map angular power spectrum and higher order statistics. From these we conclude that the y-map is dominated by tSZ signal in the multipole range, 20 …

  2. Planck 2015 results: XXII. A map of the thermal Sunyaev-Zeldovich effect

    DOE PAGES

    Aghanim, N.; Arnaud, M.; Ashdown, M.; ...

    2016-09-20

    In this article, we have constructed all-sky Compton parameter maps, y-maps, of the thermal Sunyaev-Zeldovich (tSZ) effect by applying specifically tailored component separation algorithms to the 30 to 857 GHz frequency channel maps from the Planck satellite. These reconstructed y-maps are delivered as part of the Planck 2015 release. The y-maps are characterized in terms of noise properties and residual foreground contamination, mainly thermal dust emission at large angular scales, and cosmic infrared background and extragalactic point sources at small angular scales. Specific masks are defined to minimize foreground residuals and systematics. Using these masks, we compute the y-map angular power spectrum and higher order statistics. From these we conclude that the y-map is dominated by tSZ signal in the multipole range, 20 …

  3. 3D reconstruction software comparison for short sequences

    NASA Astrophysics Data System (ADS)

    Strupczewski, Adam; Czupryński, Błażej

    2014-11-01

    Large-scale multiview reconstruction has recently become a very popular area of research. There are many open source tools that can be downloaded and run on a personal computer. However, there are few, if any, comparisons of the available software in terms of accuracy on small datasets that a single user can create. The typical datasets used for testing such software are archeological sites or cities comprising thousands of images. This paper presents a comparison of currently available open source multiview reconstruction software for small datasets. It also compares the open source solutions with a simple structure-from-motion pipeline developed by the authors from scratch using the OpenCV and Eigen libraries.
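
    A minimal two-view structure-from-motion sketch with OpenCV, in the spirit of the pipeline mentioned above, might look as follows; the image files and camera intrinsics are placeholders, and a full pipeline would add more views, outlier filtering and bundle adjustment.

      import numpy as np
      import cv2

      # Placeholder inputs: two overlapping views and an assumed intrinsic matrix.
      img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
      K = np.array([[1200.0, 0, 960.0], [0, 1200.0, 540.0], [0, 0, 1.0]])

      # Feature detection and matching.
      orb = cv2.ORB_create(4000)
      kp1, des1 = orb.detectAndCompute(img1, None)
      kp2, des2 = orb.detectAndCompute(img2, None)
      matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
      pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
      pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

      # Relative pose from the essential matrix, then linear triangulation.
      E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
      _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = K @ np.hstack([R, t])
      pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
      pts3d = (pts4d[:3] / pts4d[3]).T      # sparse point cloud, up to scale
      print(pts3d.shape)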

  4. Filtered-backprojection reconstruction for a cone-beam computed tomography scanner with independent source and detector rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr; Clackdoyle, Rolf; Keuschnigg, Peter

    Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center of rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp-Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to that of the Feldkamp algorithm used in conventional cone-beam CT. The real image of the head phantom exhibited image quality comparable to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner's unique capabilities in IGRT protocols.

  5. The inverse problem in electroencephalography using the bidomain model of electrical activity.

    PubMed

    Lopez Rincon, Alejandro; Shimoda, Shingo

    2016-12-01

    Acquiring information about the distribution of electrical sources in the brain from electroencephalography (EEG) data remains a significant challenge. An accurate solution would provide an understanding of the inner mechanisms of the electrical activity in the brain and information about damaged tissue. In this paper, we present a methodology for reconstructing brain electrical activity from EEG data using the bidomain formulation. The bidomain model considers continuous active neural tissue coupled with a nonlinear cell model. Using this technique, we aim to find the brain sources that give rise to the scalp potential recorded by EEG measurements, taking into account a non-static reconstruction. We simulate electrical sources in the brain volume and compare the reconstruction to the minimum norm estimate (MNE) and low-resolution electromagnetic tomography (LORETA) results. Then, with the EEG dataset from the EEG Motor Movement/Imagery Database of Physiobank, we identify the reaction to visual stimuli by calculating the time between stimulus presentation and the spike in electrical activity. Finally, we compare the activation in the brain with the registered activation using the LinkRbrain platform. Our methodology shows an improved reconstruction of the electrical activity and source localization in comparison with MNE and LORETA. For the Motor Movement/Imagery Database, the reconstruction is consistent with the expected position and time delay generated by the stimuli. Thus, this methodology is a suitable option for continuously reconstructing brain potentials. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
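
    The minimum norm estimate used here as a baseline has a compact closed form, x = L^T (L L^T + lambda I)^(-1) y, mapping scalp potentials y to distributed source amplitudes x through the lead-field matrix L. A sketch with a random placeholder lead field (the real L comes from a head model):

      import numpy as np

      def minimum_norm_estimate(leadfield, y, lam=1e-2):
          """Tikhonov-regularized minimum norm estimate (MNE baseline)."""
          n_sensors = leadfield.shape[0]
          gram = leadfield @ leadfield.T + lam * np.eye(n_sensors)
          return leadfield.T @ np.linalg.solve(gram, y)

      # Placeholder lead field: 64 electrodes, 2000 candidate sources.
      rng = np.random.default_rng(5)
      L = rng.standard_normal((64, 2000))
      x_true = np.zeros(2000)
      x_true[[100, 101, 102]] = 1.0                   # a small focal source patch
      y = L @ x_true + 0.01 * rng.standard_normal(64)
      x_mne = minimum_norm_estimate(L, y, lam=1.0)
      print(np.argsort(np.abs(x_mne))[-5:])           # strongest estimated sources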

  6. Interior region-of-interest reconstruction using a small, nearly piecewise constant subregion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taguchi, Katsuyuki; Xu Jingyan; Srivastava, Somesh

    2011-03-15

    Purpose: To develop a method to reconstruct an interior region-of-interest (ROI) image with sufficient accuracy that uses differentiated backprojection (DBP) projection onto convex sets (POCS) [H. Kudo et al., "Tiny a priori knowledge solves the interior problem in computed tomography", Phys. Med. Biol. 53, 2207-2231 (2008)] and the tiny knowledge that there exists a nearly piecewise constant subregion. Methods: The proposed method first employs filtered backprojection to reconstruct an image on which a tiny region P with a small variation in pixel values is identified inside the ROI. Total variation minimization [H. Yu and G. Wang, "Compressed sensing based interior tomography", Phys. Med. Biol. 54, 2791-2805 (2009); W. Han et al., "A general total variation minimization theorem for compressed sensing based interior tomography", Int. J. Biomed. Imaging 2009, Article 125871 (2009)] is then employed to obtain pixel values in the subregion P, which serve as a priori knowledge in the next step. Finally, DBP-POCS is performed to reconstruct f(x,y) inside the ROI. Clinical data and the reconstructed image obtained by an x-ray computed tomography system (SOMATOM Definition; Siemens Healthcare) were used to validate the proposed method. The detector covers an object with a diameter of approximately 500 mm. The projection data were truncated either moderately, to limit the detector coverage to a 350 mm diameter of the object, or severely, to cover a 199 mm diameter. Images were reconstructed using the proposed method. Results: The proposed method provided ROI images with correct pixel values in all areas except near the edge of the ROI. The coefficient of variation, i.e., the root mean square error divided by the mean pixel value, was less than 2.0% or 4.5% for the moderate and severe truncation cases, respectively, except near the boundary of the ROI. Conclusions: The proposed method allows for reconstructing interior ROI images with sufficient accuracy using the tiny knowledge that there exists a nearly piecewise constant subregion.

  7. Lateral orbital propeller flap technique for reconstruction of the lower eyelid defect.

    PubMed

    Ding, J-P; Chen, B; Yao, J

    2018-05-01

    The lower eyelid, which has a unique anatomy and esthetic importance, is a common site of basal cell carcinoma. The reconstruction of the defect after wide excision of the tumour is a special concern of many plastic surgeons. Achieving the most satisfactory result through a minimally invasive approach is important for patients. We successfully applied the lateral orbital propeller flap for one-stage reconstruction of a large lower eyelid defect after tumour resection. We consider that this flap can achieve better tissue mobilisation, as it provides effective coverage of soft tissue defects, and thus is especially useful for repairing facial defects.

  8. Compressive Sensing via Nonlocal Smoothed Rank Function

    PubMed Central

    Fan, Ya-Ru; Liu, Jun; Zhao, Xi-Le

    2016-01-01

    Compressive sensing (CS) theory asserts that we can reconstruct signals and images with only a small number of samples or measurements. Recent works exploiting the nonlocal similarity have led to better results in various CS studies. To better exploit the nonlocal similarity, in this paper, we propose a non-convex smoothed rank function based model for CS image reconstruction. We also propose an efficient alternating minimization method to solve the proposed model, which reduces a difficult and coupled problem to two tractable subproblems. Experimental results have shown that the proposed method performs better than several existing state-of-the-art CS methods for image reconstruction. PMID:27583683

  9. Reconstruction of reflectance data using an interpolation technique.

    PubMed

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. Results of recovery are evaluated by the mean and maximum color difference values under other sets of standard light sources. The mean and maximum values of the root mean square (RMS) error between the reconstructed and actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as the source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key point that indicates the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and reconstructed reflectance spectra, as well as CIELAB color differences under other light sources, in comparison with those obtained from the standard PCA technique.
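
    The LUT-plus-interpolation idea can be sketched with a piecewise-linear scattered-data interpolator that maps tristimulus triplets to full spectra; the chips, spectra and weighting matrix below are synthetic placeholders, and, as the abstract notes, queries outside the convex hull of the dataset cannot be interpolated.

      import numpy as np
      from scipy.interpolate import LinearNDInterpolator

      # Synthetic "training" chips: XYZ triplets paired with reflectance spectra
      # (placeholders standing in for Munsell / ColorChecker SG measurements).
      rng = np.random.default_rng(6)
      n_chips, n_wavelengths = 400, 31            # e.g. 400-700 nm in 10 nm steps
      reflectances = np.clip(rng.random((n_chips, n_wavelengths)), 0.02, 0.98)
      M = rng.random((3, n_wavelengths))          # toy spectra-to-XYZ weights
      xyz = reflectances @ M.T                    # colorimetric coordinates

      # Build the LUT: piecewise-linear interpolation from XYZ to spectra.
      lut = LinearNDInterpolator(xyz, reflectances)

      # Reconstruct the spectrum of a query colour lying inside the gamut of
      # the training chips (outside the convex hull the result is NaN).
      query_xyz = xyz[:50].mean(axis=0, keepdims=True)
      spectrum = lut(query_xyz)[0]
      print(spectrum.shape, np.isnan(spectrum).any())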

  10. Optimization-Based Image Reconstruction with Artifact Reduction in C-Arm CBCT

    PubMed Central

    Xia, Dan; Langan, David A.; Solomon, Stephen B.; Zhang, Zheng; Chen, Buxin; Lai, Hao; Sidky, Emil Y.; Pan, Xiaochuan

    2016-01-01

    We investigate an optimization-based reconstruction, with an emphasis on image-artifact reduction, from data collected in C-arm cone-beam computed tomography (CBCT) employed in image-guided interventional procedures. In the study, an image to be reconstructed is formulated as a solution to a convex optimization program in which a weighted data divergence is minimized subject to a constraint on the image total variation (TV); a data-derivative fidelity is introduced in the program specifically for effectively suppressing dominant, low-frequency data artifact caused by, e.g., data truncation; and the Chambolle-Pock (CP) algorithm is tailored to reconstruct an image through solving the program. Like any other reconstructions, the optimization-based reconstruction considered depends upon numerous parameters. We elucidate the parameters, illustrate their determination, and demonstrate their impact on the reconstruction. The optimization-based reconstruction, when applied to data collected from swine and patient subjects, yields images with visibly reduced artifacts in contrast to the reference reconstruction, and it also appears to exhibit a high degree of robustness against distinctively different anatomies of imaged subjects and scanning conditions of clinical significance. Knowledge and insights gained in the study may be exploited for aiding in the design of practical reconstructions of truly clinical-application utility. PMID:27694700

  11. Optimization-based image reconstruction with artifact reduction in C-arm CBCT

    NASA Astrophysics Data System (ADS)

    Xia, Dan; Langan, David A.; Solomon, Stephen B.; Zhang, Zheng; Chen, Buxin; Lai, Hao; Sidky, Emil Y.; Pan, Xiaochuan

    2016-10-01

    We investigate an optimization-based reconstruction, with an emphasis on image-artifact reduction, from data collected in C-arm cone-beam computed tomography (CBCT) employed in image-guided interventional procedures. In the study, an image to be reconstructed is formulated as a solution to a convex optimization program in which a weighted data divergence is minimized subject to a constraint on the image total variation (TV); a data-derivative fidelity is introduced in the program specifically for effectively suppressing dominant, low-frequency data artifact caused by, e.g. data truncation; and the Chambolle-Pock (CP) algorithm is tailored to reconstruct an image through solving the program. Like any other reconstructions, the optimization-based reconstruction considered depends upon numerous parameters. We elucidate the parameters, illustrate their determination, and demonstrate their impact on the reconstruction. The optimization-based reconstruction, when applied to data collected from swine and patient subjects, yields images with visibly reduced artifacts in contrast to the reference reconstruction, and it also appears to exhibit a high degree of robustness against distinctively different anatomies of imaged subjects and scanning conditions of clinical significance. Knowledge and insights gained in the study may be exploited for aiding in the design of practical reconstructions of truly clinical-application utility.

  12. A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.

    PubMed

    Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe

    2018-01-01

    Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC) combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or with a blank image. The reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. The 4D iterations were implemented on a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction, with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate clinical use.

  13. Multienergy CT acquisition and reconstruction with a stepped tube potential scan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Le; Xing, Yuxiang, E-mail: xingyx@mail.tsinghua.edu.cn

    Purpose: Based on an energy-dependent property of matter, one may obtain a pseudomonochromatic attenuation map, a material composition image, an electron-density distribution, and an atomic number image using a dual- or multienergy computed tomography (CT) scan. Dual- and multienergy CT scans broaden the potential of x-ray CT imaging, and the development of such systems is very useful in both medical and industrial investigations. In this paper, the authors propose a new dual- and multienergy CT system design (segmental multienergy CT, SegMECT) using an innovative scanning scheme that is conveniently implemented on a conventional single-energy CT system. The two-step-energy dual-energy CT can be regarded as a special case of SegMECT. A special reconstruction method is proposed to support SegMECT. Methods: In SegMECT, the circular trajectory of a CT scan is angularly divided into several arcs. The x-ray source is set to a different tube voltage for each arc of the trajectory. Thus, the authors only need to make a few step changes to the x-ray energy during the scan to complete a multienergy data acquisition. With such a data set, the image reconstruction might suffer from severe limited-angle artifacts if conventional reconstruction methods were used. To solve the problem, they present a new prior-image-based reconstruction technique using a total variance norm of a quotient image constraint. On the one hand, the prior extracts structural information from all of the projection data. On the other hand, the effect of a possibly imprecise intensity level of the prior can be mitigated by minimizing the total variance of the quotient image. Results: The authors present a new scheme for a SegMECT configuration and establish a reconstruction method for such a system. Both a numerical simulation and a practical phantom experiment are conducted to validate the proposed reconstruction method and the effectiveness of the system design. The results demonstrate that the proposed SegMECT can provide both attenuation images and material decomposition images of reasonable image quality. Compared to existing methods, the new system configuration demonstrates advantages in simplicity of implementation, system cost, and dose control. Conclusions: The proposed SegMECT imaging approach has great potential for practical applications. It can be readily realized on a conventional CT system.

  14. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography.

    PubMed

    Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A

    2013-11-01

    Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation: the soft-tissue-equivalent water fraction and the hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). Existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. The approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative logarithm. Following Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem; this transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. Accurate spectrum information about the source-detector system is also necessary; when dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative logarithm. Compared to approaches based on linear forward models and to the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
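
    The nonlinear polychromatic forward model that such approaches invert can be written in a few lines: each measurement is a spectrally weighted sum of exponentials of the water and bone line integrals. The spectrum and attenuation curves below are rough placeholders, not calibrated values, and the sketch shows only the forward direction, not the MAP inversion.

      import numpy as np

      def polychromatic_measurement(spectrum, mu_water, mu_bone, a_water, a_bone):
          """Expected polychromatic transmission for one ray:
              y = sum_E S(E) * exp(-mu_w(E)*a_w - mu_b(E)*a_b),
          where a_w, a_b are the water/bone material line integrals."""
          return np.sum(spectrum * np.exp(-mu_water * a_water - mu_bone * a_bone))

      energies = np.arange(20, 121, 1.0)                          # keV grid
      spectrum = np.exp(-0.5 * ((energies - 60.0) / 20.0) ** 2)   # toy tube spectrum
      spectrum /= spectrum.sum()
      mu_water = 0.5 * (energies / 20.0) ** -2.5 + 0.02           # toy attenuation (1/cm)
      mu_bone = 3.0 * (energies / 20.0) ** -2.8 + 0.05

      # One sample ray through 15 cm of water-equivalent and 1 cm of bone-equivalent material.
      y = polychromatic_measurement(spectrum, mu_water, mu_bone, 15.0, 1.0)
      print(y)
      # The reconstruction inverts this kind of nonlinear model for (a_water, a_bone)
      # jointly over all rays via MAP estimation.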

  15. Local reconstruction in computed tomography of diffraction enhanced imaging

    NASA Astrophysics Data System (ADS)

    Huang, Zhi-Feng; Zhang, Li; Kang, Ke-Jun; Chen, Zhi-Qiang; Zhu, Pei-Ping; Yuan, Qing-Xi; Huang, Wan-Xia

    2007-07-01

    Computed tomography of diffraction enhanced imaging (DEI-CT) based on a synchrotron radiation source has extremely high sensitivity for weakly absorbing low-Z samples in medical and biological fields. The authors propose a modified backprojection filtration (BPF)-type algorithm based on PI-line segments to reconstruct a region of interest from truncated refraction-angle projection data in DEI-CT. The distribution of the refractive index decrement in the sample can be directly estimated from the reconstructed images, as has been demonstrated by experiments at the Beijing Synchrotron Radiation Facility. The algorithm paves the way for local reconstruction of large samples using DEI-CT with a small field of view based on a synchrotron radiation source.

  16. Laparoscopic transverse rectus abdominus flap delay for autogenous breast reconstruction.

    PubMed

    Kaddoura, I L; Khoury, G S

    1998-01-01

    Laparoscopic ligation of the deep and superficial inferior epigastric vessels was done for ten mastectomized patients who elected to have autogenous reconstruction of their breast. All these patients had at least one indication for the delay which included obesity, smoking, or requirement of a large volume of tissue for their reconstruction. The procedure did not add any morbidity or mortality to our patients and was found to be comparable to the "open" delay in preventing partial tissue loss in all but two patients. We describe the use of a minimally invasive procedure to augment the deep superior epigastric pedicled blood supply for the future transverse rectus abdominus flap. We have found in laparoscopic delay a safe, short procedure that is useful in high risk patients who choose the option of autologous breast reconstruction.

  17. Visualization of Topology through Simulation

    NASA Astrophysics Data System (ADS)

    Mulderig, Andrew; Beaucage, Gregory; Vogtt, Karsten; Jiang, Hanqiu

    Complex structures can be decomposed into a minimal topological description coupled with complications of tortuosity. We have found that a stick-figure representation can account for the topological content of any structure and that, coupled with scaling measures of tortuosity, we can reconstruct an object. This deconstruction is native to static small-angle scattering measurements, from which we can obtain quantitative measures of the tortuous structure and the minimal topological structure. For example, a crumpled sheet of paper is composed of a minimal sheet structure and parameters reflecting the extent of crumpling. This quantification yields information that can be used to calculate the hydrodynamic radius, radius of gyration, structural conductive pathway, modulus, and other properties of complex structures. The approach is general and has been applied to a wide range of nanostructures, from crumpled graphene to branched polymers and unfolded proteins and RNA. In this poster we will demonstrate how simple structural simulations can be used to reconstruct from these parameters a 3D representation of the complex structure through a heuristic approach. Several examples will be given from nano-fractal aggregates.

  18. Current methods of diagnosis and management of ureteral injuries.

    PubMed

    Armenakas, N A

    1999-04-01

    A delay in diagnosis is the most important contributory factor in morbidity related to ureteral injury. The difficulty in making the diagnosis can be minimized by maintenance of a high index of suspicion and the timely performance of the appropriate radiographic and intraoperative evaluations. A decision on the timing of repair of the ureteral injury is based on the patient's overall condition, promptness of injury recognition, and proper injury staging. Ideally, when identified promptly, ureteral injuries should be repaired immediately. However, once there has been a delay in diagnosis or in the case of an unstable patient, temporizing measures can be used for urinary diversion. With the availability of simple, minimally invasive techniques to manage urinary extravasation and the absence of any risk of ureteral hemorrhage, ureteral reconstruction can be safely deferred until an opportune time during the recovery period. Successful surgical management requires familiarity with the broad reconstructive armamentarium and meticulous attention to the specific details of each procedure. Through adherence to the diagnostic and therapeutic principles outlined, complications can be minimized and renal preservation can be maximized in patients sustaining ureteral injuries.

  19. 40 CFR 74.46 - Opt-in source permanent shutdown, reconstruction, or change in affected status.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Opt-in source permanent shutdown, reconstruction, or change in affected status. 74.46 Section 74.46 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) SULFUR DIOXIDE OPT-INS Allowance Tracking and Transfer...

  20. 40 CFR 74.46 - Opt-in source permanent shutdown, reconstruction, or change in affected status.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Opt-in source permanent shutdown, reconstruction, or change in affected status. 74.46 Section 74.46 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) SULFUR DIOXIDE OPT-INS Allowance Tracking and Transfer...

  1. Nanoscale imaging with table-top coherent extreme ultraviolet source based on high harmonic generation

    NASA Astrophysics Data System (ADS)

    Ba Dinh, Khuong; Le, Hoang Vu; Hannaford, Peter; Van Dao, Lap

    2017-08-01

    A table-top coherent diffractive imaging experiment on a sample with biological-like characteristics, using a focused narrow-bandwidth high harmonic source around 30 nm, is performed. An approach involving a beam stop and a new reconstruction algorithm to enhance the quality of the reconstructed image is described.

  2. Effective learning strategies for real-time image-guided adaptive control of multiple-source hyperthermia applicators.

    PubMed

    Cheng, Kung-Shan; Dewhirst, Mark W; Stauffer, Paul R; Das, Shiva

    2010-03-01

    This paper investigates the overall theoretical requirements for reducing the time required for the iterative learning of a real-time image-guided adaptive control routine for multiple-source heat applicators, as used in hyperthermia and thermal ablative therapy for cancer. Methods for partial reconstruction of the physical system, with and without model reduction, to find solutions within a clinically practical timeframe were analyzed. A mathematical analysis based on the Fredholm alternative theorem (FAT) was used to compactly analyze the existence and uniqueness of the optimal heating vector under two fundamental situations: (1) noiseless partial reconstruction and (2) noisy partial reconstruction. These results were coupled with a method for further acceleration of the solution using virtual source (VS) model reduction. The matrix approximation theorem (MAT) was used to choose the optimal vectors spanning the reduced-order subspace, in order to reduce the time for system reconstruction and to determine the associated approximation error. Numerical simulations of the adaptive control of hyperthermia using VS were also performed to test the predictions derived from the theoretical analysis. A thigh sarcoma patient model surrounded by a ten-antenna phased-array applicator was used for this purpose. The impacts of convective cooling from blood flow and of a sudden increase in perfusion in muscle and tumor were also simulated. By FAT, partial system reconstruction conducted directly in the full space of the physical variables, such as the phases and magnitudes of the heat sources, cannot guarantee reconstructing the optimal system needed to determine the globally optimal setting of the heat sources. A remedy for this limitation is to conduct the partial reconstruction within a reduced-order subspace spanned by the first few maximum eigenvectors of the true system matrix. By MAT, this VS subspace is the optimal one when the goal is to maximize the average tumor temperature. When more than six sources are present, the number of learning steps required by a nonlinear scheme is theoretically smaller than that of a linear one; however, a finite number of iterative corrections is necessary within each learning step of a nonlinear algorithm. Thus, the actual computational workload of a nonlinear algorithm is not necessarily less than that required by a linear algorithm. Based on the analysis presented herein, obtaining a unique globally optimal heating vector for a multiple-source applicator within the constraints of real-time clinical hyperthermia treatments and thermal ablative therapies appears attainable using partial reconstruction with a minimum-norm least-squares method with supplemental equations. One way to supplement equations is to include a method of model reduction.
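
    The virtual-source model reduction can be illustrated with a minimal sketch: take the leading right singular vectors of a source-to-temperature system matrix as the reduced basis, solve a least-squares heating problem in that subspace, and map back to the physical antenna settings. The matrices, tumor mask and objective below are placeholders, and the sketch ignores the adaptive, image-guided learning loop of the paper.

      import numpy as np

      def virtual_source_basis(system_matrix, n_virtual):
          """Reduced-order 'virtual source' basis: the leading right singular
          vectors of the source-to-temperature system matrix."""
          _, _, Vt = np.linalg.svd(system_matrix, full_matrices=False)
          return Vt[:n_virtual].conj().T          # columns span the VS subspace

      rng = np.random.default_rng(7)
      n_voxels, n_antennas = 500, 10
      A = (rng.standard_normal((n_voxels, n_antennas))
           + 1j * rng.standard_normal((n_voxels, n_antennas)))   # placeholder system matrix
      tumor = rng.random(n_voxels) < 0.1                         # placeholder tumor mask

      V = virtual_source_basis(A, n_virtual=3)                   # 3 virtual sources
      A_reduced = A @ V                                          # reduced forward model

      # Least-squares heating-vector estimate in the reduced subspace, mapped back
      # to the 10 physical antenna drive settings (magnitudes and phases).
      target = tumor.astype(float)                               # crude "heat the tumor" goal
      w_virtual, *_ = np.linalg.lstsq(A_reduced, target, rcond=None)
      w_physical = V @ w_virtual
      print(np.abs(w_physical), np.angle(w_physical))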

  3. Super-Resolution Imagery by Frequency Sweeping.

    DTIC Science & Technology

    1980-08-15

    IMAGE RETRIEVAL: The above considerations of multiwavelength holography have led us to determine a means by which the 3-D Fourier space of the ... it at a distant bright point source. The point source used need not be derived from a laser. In fact, it is preferable for safety purposes to use an LED ... noise and therefore higher reconstructed image quality can be attained by using nonlaser point sources in the reconstruction, such as an LED or miniature

  4. The Economy in Autologous Tissue Transfer: Part 1. The Kiss Flap Technique.

    PubMed

    Zhang, Yi Xin; Hayakawa, Thomas J; Levin, L Scott; Hallock, Geoffrey G; Lazzeri, Davide

    2016-03-01

    All reconstructive microsurgeons realize the need to improve aesthetic and functional donor-site outcomes. A "kiss" flap design concept was developed to increase the surface area of skin flap coverage while minimizing donor-site morbidity. The main goal of the kiss flap technique is to harvest multiple skin paddles that are smaller than those raised with traditional techniques, to minimize donor-site morbidity. These smaller flap components are then sutured to each other, or said to kiss each other side-by-side, to create a large, wide flap. The skin paddles in the kiss technique can be linked to one another by a variety of native intrinsic vascular connections, by additional microanastomoses, or both. This technique can be widely applied to both free and pedicled flaps, and essentially allows for the reconstruction of a large defect while providing easy primary closure of a smaller donor-site defect. According to their origin of blood supply, kiss flaps are classified into three styles and five types. All of the different types of kiss flaps are unique in both flap design and harvest technique. Most kiss flaps are based on common flaps already familiar to the reconstructive surgeon. The basis of the kiss flap design concept is to convert multiple narrow flaps into a single unified flap of the desired greater width. This maximizes the size of the resulting flap and minimizes donor-site morbidity, as a direct linear closure is usually possible. Clinical question/level of evidence: Therapeutic, V.

  5. Joint Chroma Subsampling and Distortion-Minimization-Based Luma Modification for RGB Color Images With Application.

    PubMed

    Chung, Kuo-Liang; Hsu, Tsu-Chun; Huang, Chi-Chao

    2017-10-01

    In this paper, we propose a novel and effective hybrid method, which combines conventional chroma subsampling with distortion-minimization-based luma modification, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image prior to compression. For each 2×2 UV block, 4:2:0 subsampling is applied to determine the subsampled U and V components, U_s and V_s. Based on U_s, V_s, and the corresponding 2×2 original RGB block, a main theorem is provided to determine the ideally modified 2×2 luma block in constant time, such that the color peak signal-to-noise ratio (CPSNR) distortion between the original 2×2 RGB block and the reconstructed 2×2 RGB block is minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to handle digital time delay integration images and Bayer mosaic images, whose Bayer CFA structure is widely used in modern commercial digital cameras. Based on the IMAX, Kodak, and screen content test image sets, the experimental results demonstrate that, in high efficiency video coding, the proposed hybrid method provides substantial quality improvement, in terms of CPSNR, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR, of the reconstructed RGB images when compared with existing chroma subsampling schemes.
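
    Without reproducing the paper's closed-form luma-modification theorem, the pipeline it optimizes can be sketched for a single 2×2 block: average the chroma (4:2:0), reconstruct, and score the result with CPSNR against the original RGB block. The BT.601-style conversion matrices and the plain chroma averaging are assumptions; the paper replaces the untouched luma below with an optimally modified one.

      import numpy as np

      # BT.601-style RGB <-> YUV matrices (assumed; the paper's pipeline may differ).
      RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                          [-0.147, -0.289,  0.436],
                          [ 0.615, -0.515, -0.100]])
      YUV2RGB = np.linalg.inv(RGB2YUV)

      def cpsnr(orig, rec):
          mse = np.mean((orig - rec) ** 2)
          return 10 * np.log10(255.0 ** 2 / mse)

      # One 2x2 RGB block (values in 0..255).
      block_rgb = np.array([[[200,  30,  40], [190,  35,  50]],
                            [[ 60, 120, 200], [ 70, 110, 190]]], dtype=float)
      yuv = block_rgb @ RGB2YUV.T                       # per-pixel YUV

      # 4:2:0 chroma subsampling: one (U_s, V_s) pair for the whole 2x2 block.
      u_s, v_s = yuv[..., 1].mean(), yuv[..., 2].mean()
      yuv_rec = yuv.copy()
      yuv_rec[..., 1], yuv_rec[..., 2] = u_s, v_s       # luma kept unmodified here
      block_rec = np.clip(yuv_rec @ YUV2RGB.T, 0, 255)
      print(round(cpsnr(block_rgb, block_rec), 2), "dB")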

  6. Robotics-based synthesis of human motion.

    PubMed

    Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S

    2009-01-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for the dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model that follows the captured marker trajectories in real time. The operational space control and real-time simulation provide the human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  7. Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lu, Jian

    Technologies for capturing panoramic (360 degree) three-dimensional information in a real environment have many applications in fields such as virtual and augmented reality, security, and robot navigation. In this study, we examine an acquisition device constructed from a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama with spatial resolution equal to that of the original images acquired with the regular camera, and we also estimate a dense panoramic depth map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained with the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.

  8. Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures

    NASA Astrophysics Data System (ADS)

    Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

    2010-05-01

    3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The correspondence problem is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D position of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On the one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that the contact enlarging lens corrects astigmatism, that spherical and coma aberrations are reduced by changing the aperture size, and that eye refractive errors are suppressed by adjusting the camera focus during image acquisition. An evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.

  9. Investigation into image quality difference between total variation and nonlinear sparsifying transform based compressed sensing

    NASA Astrophysics Data System (ADS)

    Dong, Jian; Kudo, Hiroyuki

    2017-03-01

    Compressed sensing (CS) is attracting growing interest in sparse-view computed tomography (CT) image reconstruction. The most standard approach in CS is total variation (TV) minimization. However, images reconstructed by TV usually suffer from distortions, especially in reconstructions of practical CT images, in the form of patchy artifacts, improperly serrated edges and loss of image texture. Most existing CS approaches, including TV, achieve image quality improvement by applying linear transforms to the object image, but linear transforms usually fail to take discontinuities such as edges and image textures into account, which is considered to be the key reason for the image distortions. Discussions of nonlinear-filter-based image processing have a long history, and it is well established that nonlinear filters yield better results than linear filters in image processing tasks such as denoising. The median root prior was first utilized by Alenius as a nonlinear transform in CT image reconstruction, with significant gains. Subsequently, Zhang developed a nonlocal-means-based CS approach. It is gradually becoming clear that nonlinear-transform-based CS is superior to linear-transform-based CS in improving image quality; however, to the best of our knowledge, this has not been clearly concluded in any previous paper. In this work, we investigated the image quality differences between conventional TV minimization and nonlinear sparsifying transform based CS, as well as the image quality differences among different nonlinear sparsifying transform based CS approaches, in sparse-view CT image reconstruction. Additionally, we accelerated the implementation of the nonlinear sparsifying transform based CS algorithm.

  10. Heat source reconstruction from noisy temperature fields using an optimised derivative Gaussian filter

    NASA Astrophysics Data System (ADS)

    Delpueyo, D.; Balandraud, X.; Grédiac, M.

    2013-09-01

    The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key issues which are closely related here, because the temperature fields being processed are unavoidably noisy. We focus here only on the diffusion term because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised in order to best reconstruct the heat source fields. The influence of both the size and the level of a localised heat source is discussed. The results obtained are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin aluminium alloy plate. Heat sources are generated with an electric heating patch glued on the specimen surface. Heat source fields reconstructed from the measured temperature fields are compared with the imposed heat sources. The results obtained illustrate the relevance of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
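
    The diffusion-term estimate described above, a convolution of the temperature field with second derivatives of a Gaussian, maps directly onto scipy's gaussian_laplace filter. A synthetic single-frame sketch follows; the grid, noise level, filter width and material constant are placeholder assumptions, and the rho*c*dtheta/dt term of the full heat equation is omitted.

      import numpy as np
      from scipy.ndimage import gaussian_laplace

      # Synthetic noisy temperature field on a thin plate (placeholder values).
      nx, ny, dx = 200, 200, 1e-3                      # grid and pixel size (m)
      x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx)
      theta = 2.0 * np.exp(-((x - 0.1) ** 2 + (y - 0.1) ** 2) / (2 * 0.02 ** 2))
      rng = np.random.default_rng(8)
      theta_noisy = theta + 0.02 * rng.standard_normal(theta.shape)   # IR camera noise

      # Laplacian via convolution with second derivatives of a Gaussian
      # (the derivative Gaussian filter; sigma is the tuning parameter).
      sigma_px = 6
      lap = gaussian_laplace(theta_noisy, sigma=sigma_px) / dx ** 2

      # Quasi-static heat source estimate: s ~ -k * laplacian(theta),
      # with k a placeholder aluminium-like conductivity in W/(m*K).
      k = 160.0
      source = -k * lap
      print(source.shape, float(source.max()))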

  11. An iterative algorithm for soft tissue reconstruction from truncated flat panel projections

    NASA Astrophysics Data System (ADS)

    Langan, D.; Claus, B.; Edic, P.; Vaillant, R.; De Man, B.; Basu, S.; Iatrou, M.

    2006-03-01

    The capabilities of flat panel interventional x-ray systems continue to expand, enabling a broader array of medical applications to be performed in a minimally invasive manner. Although CT provides pre-operative 3D information, there is a need for 3D imaging of low-contrast soft tissue during interventions in a number of areas including neurology, cardiac electro-physiology, and oncology. Unlike CT systems, interventional angiographic x-ray systems provide real-time large field of view 2D imaging, patient access, and flexible gantry positioning enabling interventional procedures. However, relative to CT, these C-arm flat panel systems have additional technical challenges in 3D soft tissue imaging including slower rotation speed, gantry vibration, reduced lateral patient field of view (FOV), and increased scatter. The reduced patient FOV often results in significant data truncation. Reconstruction of truncated (incomplete) data is known as an "interior problem", and it is mathematically impossible to obtain an exact reconstruction. Nevertheless, it remains an important problem in C-arm 3D imaging to generate a 3D reconstruction that is representative of the object being imaged with minimal artifacts. In this work we investigate the application of an iterative Maximum Likelihood Transmission (MLTR) algorithm to truncated data. We also consider truncated data with limited views for cardiac imaging, where the views are gated by the electrocardiogram (ECG) to combat motion artifacts.

  12. Multiple-digit resurfacing using a thin latissimus dorsi perforator flap.

    PubMed

    Kim, Sang Wha; Lee, Ho Jun; Kim, Jeong Tae; Kim, Youn Hwan

    2014-01-01

    Traumatic digit defects of high complexity and with inadequate local tissue represent challenging surgical problems. Recently, perforator flaps have been proposed for reconstructing large defects of the hand because of their thinness and pliability and minimal donor site morbidity. Here, we illustrate the use of thin latissimus dorsi perforator flaps for resurfacing multiple defects of distal digits. We describe the cases of seven patients with large defects, including digits, circumferential defects and multiple-digit defects, who underwent reconstruction with thin latissimus dorsi perforator flaps between January 2008 and March 2012. Single-digit resurfacing procedures were excluded. The mean age was 56.3 years and the mean flap size was 160.4 cm². All the flaps survived completely. Two patients had minor complications including partial flap loss and scar contracture. The mean follow-up period was 11.7 months. The ideal flap for digit resurfacing should be thin and amenable to moulding, have a long pedicle for microanastomosis and have minimal donor site morbidity. Thin flaps can be harvested by excluding the deep adipose layer, and their high pliability enables resurfacing without multiple debulking procedures. The latissimus dorsi perforator flap may be the best flap for reconstructing complex defects of the digits, such as large, multiple-digit or circumferential defects, which require complete wrapping of volar and dorsal surfaces.

  13. Template-Based 3D Reconstruction of Non-rigid Deformable Object from Monocular Video

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Peng, Xiaodong; Zhou, Wugen; Liu, Bo; Gerndt, Andreas

    2018-06-01

    In this paper, we propose a template-based 3D surface reconstruction system for non-rigid deformable objects from a monocular video sequence. First, we generate a semi-dense template of the target object with a structure-from-motion method applied to a video subsequence; this video can be captured by a rigidly moving camera viewing the static target object, or by a static camera observing the rigidly moving target object. Then, taking the reference template mesh as input and building on the framework of classical template-based methods, we solve an energy minimization problem that establishes the correspondence between the template and every frame, yielding a time-varying mesh that represents the deformation of the object. The energy combines a photometric cost, temporal and spatial smoothness costs, and an as-rigid-as-possible cost that permits elastic deformation. The paper presents a simple and controllable way to generate the semi-dense template for complex objects. In addition, we use an efficient iterative Schur-based linear solver for the energy minimization problem. The experimental evaluation presents qualitative reconstruction results for deforming objects on real sequences. Compared with results obtained using other templates as input, reconstructions based on our template are more accurate and detailed in certain regions. The experiments also show that the linear solver we use is more efficient than a traditional conjugate-gradient-based solver.

  14. Single photon emission computed tomography-guided Cerenkov luminescence tomography

    NASA Astrophysics Data System (ADS)

    Hu, Zhenhua; Chen, Xueli; Liang, Jimin; Qu, Xiaochao; Chen, Duofang; Yang, Weidong; Wang, Jing; Cao, Feng; Tian, Jie

    2012-07-01

    Cerenkov luminescence tomography (CLT) has become a valuable tool for preclinical imaging because of its ability to reconstruct the three-dimensional distribution and activity of radiopharmaceuticals. However, it is still far from a mature technology and suffers from relatively low spatial resolution due to the ill-posed inverse problem of tomographic reconstruction. In this paper, we present a single photon emission computed tomography (SPECT)-guided reconstruction method for CLT, in which a priori information on the permissible source region (PSR) from SPECT imaging results is incorporated to effectively reduce the ill-posedness of the inverse reconstruction problem. The performance of the method was first validated with experimental reconstructions of an adult athymic nude mouse implanted with a Na131I radioactive source and of another adult athymic nude mouse that received an intravenous tail injection of Na131I. A tissue-mimicking phantom experiment was then conducted to illustrate the ability of the proposed method to resolve double sources. Compared with the traditional PSR strategy, in which the PSR is determined from the surface flux distribution, the proposed method obtained much more accurate and encouraging localization and resolution results. Preliminary results showed that the proposed SPECT-guided reconstruction method is insensitive to the choice of regularization method and can ignore tissue heterogeneity, thereby avoiding the organ segmentation procedure.

  15. Impact of sentinel lymph node biopsy on immediate breast reconstruction after mastectomy.

    PubMed

    Wood, Benjamin C; David, Lisa R; Defranzo, Anthony J; Stewart, John H; Shen, Perry; Geisinger, Kim R; Marks, Malcolm W; Levine, Edward A

    2009-07-01

    Traditionally, sentinel lymph node biopsy (SLNB) is performed at the time of mastectomy and reconstruction. However, several groups have advocated SLNB as a separate outpatient procedure before mastectomy, when immediate reconstruction is planned, to allow for complete pathologic evaluation. The purpose of this study was to determine the impact of intraoperative analysis of SLNB on the reconstructive plan when performed at the same time as definitive surgery. A retrospective review was conducted of all mastectomy cases performed at a single institution between September 1998 and November 2007. Of the 747 mastectomy cases reviewed, SLNB was conducted in 344 cases, and there was immediate breast reconstruction in 193 of those cases. There were 27 (7.8%) false-negative and three (0.9%) false-positive intraoperative analyses of SLNB. Touch preparation analysis from the SLNB changed the reconstructive plan in four (2.1%) cases. In our experience, SLNB can be performed at the time of mastectomy with minimal impact on the reconstructive plan. A staged approach incurs significant additional expense, delays the initiation of systemic therapy and increases the likelihood of procedure-related morbidity; therefore, SLNB should not be performed as a separate procedure before definitive surgery with immediate breast reconstruction.

  16. 360° Fourier transform profilometry in surface reconstruction for fluorescence molecular tomography.

    PubMed

    Shi, Bi'er; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-05-01

    Fluorescence molecular tomography (FMT) is an emerging tool in the observation of diseases. A fast and accurate surface reconstruction of the experimental object is needed as a boundary constraint for FMT reconstruction. In this paper, an automatic, noncontact, 3-D surface reconstruction method named 360° Fourier transform profilometry (FTP) is proposed to reconstruct 3-D surface profiles for the FMT system. This method can reconstruct 360° integrated surface profiles utilizing the single-frame FTP at different angles. Results show that the relative mean error of the surface reconstruction of this method is less than 1.4% in phantom experiments, and no more than 2.9% in mouse experiments in vivo. Compared with the Radon transform method, the proposed method reduces the computation time by more than 90% with a minimal error increase. Finally, a combined 360° FTP/FMT experiment is conducted on a nude mouse. Not only can the 360° FTP system operate with the FMT system simultaneously, but it can also help to monitor the status of the animals. Moreover, the 360° FTP system is independent of the FMT system and can reconstruct the surface on its own.
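
    The single-frame FTP step can be illustrated with a 1-D sketch: the fringe signal is Fourier transformed, one carrier sideband is isolated, and the inverse transform yields the wrapped phase, which is then unwrapped. The carrier frequency, bandpass width and signal below are invented values, and the height-from-phase conversion of the paper is not shown.

        import numpy as np

        def ftp_phase(fringe, carrier_bin, half_width):
            """Single-frame FTP: isolate one sideband around the carrier and
            return the unwrapped phase of the inverse transform."""
            spectrum = np.fft.fft(fringe)
            band = np.zeros_like(spectrum)
            sl = slice(carrier_bin - half_width, carrier_bin + half_width + 1)
            band[sl] = spectrum[sl]                    # keep only the +carrier sideband
            analytic = np.fft.ifft(band)
            return np.unwrap(np.angle(analytic))

        n = np.arange(1024)
        phase_true = 0.8 * np.sin(2 * np.pi * n / 512)              # deformation phase
        fringe = 1 + 0.5 * np.cos(2 * np.pi * 32 * n / 1024 + phase_true)
        phase = ftp_phase(fringe, carrier_bin=32, half_width=16)
        # Subtracting the carrier ramp 2*pi*32*n/1024 recovers phase_true up to a constant.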

  17. Compressive sensing of electrocardiogram signals by promoting sparsity on the second-order difference and by using dictionary learning.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2014-04-01

    A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for the enhancement of its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference of the signal, referred to as the lp(2d) pseudo-norm. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein a signal-reconstruction step and a dictionary-update step are repeated until a convergence criterion is satisfied. The signal-reconstruction step is implemented using the proposed signal reconstruction algorithm, and the dictionary-update step is implemented using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian learning based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
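
    To make the sparsity prior concrete, the sketch below reconstructs a signal from random measurements by plain gradient descent on a least-squares data term plus an epsilon-smoothed lp penalty applied to the second-order difference. It mirrors only the lp(2d) idea described above, not the sequential conjugate-gradient solver or the dictionary-learning step; all sizes, constants and the measurement matrix are illustrative.

        import numpy as np

        def second_diff_matrix(n):
            """Second-order difference operator D2 of shape (n-2, n)."""
            D2 = np.zeros((n - 2, n))
            for i in range(n - 2):
                D2[i, i:i + 3] = [1.0, -2.0, 1.0]
            return D2

        def reconstruct(y, Phi, p=0.5, lam=0.05, eps=1e-4, step=1e-3, iters=2000):
            """Minimise 0.5*||Phi x - y||^2 + lam * sum ((D2 x)^2 + eps)^(p/2) by gradient descent."""
            D2 = second_diff_matrix(Phi.shape[1])
            x = Phi.T @ y                                  # simple initial guess
            for _ in range(iters):
                d = D2 @ x
                g_sparse = D2.T @ (p * d * (d**2 + eps) ** (p / 2 - 1))
                x -= step * (Phi.T @ (Phi @ x - y) + lam * g_sparse)
            return x

        rng = np.random.default_rng(2)
        n, m = 256, 96
        t = np.linspace(0, 1, n)
        signal = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)   # smooth surrogate signal
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)
        x_hat = reconstruct(Phi @ signal, Phi)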

  18. Imaging diffusive media using time-independent and time-harmonic sources: dependence of image quality on imaging algorithms, target volume, weight matrix, and view angles

    NASA Astrophysics Data System (ADS)

    Chang, Jenghwa; Aronson, Raphael; Graber, Harry L.; Barbour, Randall L.

    1995-05-01

    We present results examining the dependence of image quality for imaging in dense scattering media as influenced by the choice of parameters pertaining to the physical measurement and by factors influencing the efficiency of the computation. The former includes the density of the weight matrix as affected by the target volume, view angle, and source condition. The latter includes the density of the weight matrix and the type of algorithm used. These were examined by solving a one-step linear perturbation equation derived from the transport equation using three different algorithms: POCS, CGD, and SART with constraints. The above were explored by evaluating four different 3D cylindrical phantom media: a homogeneous medium, a medium containing a single black rod on the axis, one containing a single black rod parallel to the axis, and one containing thirteen black rods arrayed in the shape of an 'X'. Solutions to the forward problem were computed using Monte Carlo methods for an impulse source, from which time-independent and time-harmonic detector responses were calculated. The influence of target volume on image quality and computational efficiency was studied by computing solutions for three types of reconstructions: 1) 3D reconstruction, which considered each voxel individually, 2) 2D reconstruction, which assumed that symmetry along the cylinder axis was known a priori, 3) 2D limited reconstruction, which assumed that only those voxels in the plane of the detectors contribute information to the detector readings. The effect of view angle was explored by comparing computed images obtained from a single source, whose position was varied, as well as for the type of tomographic measurement scheme used (i.e., radial scan versus transaxial scan). The former condition was also examined for the dependence of the above on the choice of source condition [i.e., cw (2D reconstructions) versus time-harmonic (2D limited reconstructions) source]. The efficiency of the computational effort was explored, principally, by conducting a weight matrix 'threshold titration' study. This involved computing the ratio of each matrix element to the maximum element of its row and setting the element to zero if the ratio was less than a preselected threshold. The results showed that all three types of reconstructions provided good image quality. The 3D reconstruction outperformed the other two reconstructions. The time required for 2D and 2D limited reconstruction is much less (< 10%) than that for the 3D reconstruction. The 'threshold titration' study shows that artifacts were present when the threshold was 5% or higher, and no significant differences in image quality were observed when the thresholds were less than 1%, in which case 38% (21,849 of 57,600) of the total weight elements were set to zero. Restricting the view angle produced degradation in image quality, but, in all cases, clearly recognizable images were obtained.
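
    The 'threshold titration' described above reduces to a small array operation: each weight-matrix element is compared with the maximum of its row and zeroed when the ratio falls below a chosen threshold. A sketch with made-up dimensions and a random stand-in matrix:

        import numpy as np

        def threshold_titrate(W, threshold):
            """Zero weight-matrix entries smaller than `threshold` times their row maximum."""
            row_max = np.abs(W).max(axis=1, keepdims=True)
            mask = np.abs(W) >= threshold * row_max
            return W * mask, 1.0 - mask.mean()          # sparsified matrix, fraction zeroed

        rng = np.random.default_rng(3)
        W = np.abs(rng.standard_normal((240, 240)))     # illustrative weight matrix
        W_sparse, zeroed = threshold_titrate(W, threshold=0.01)
        print(f"{zeroed:.1%} of the weights set to zero")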

  19. Workflows and the Role of Images for Virtual 3D Reconstruction of No Longer Extant Historic Objects

    NASA Astrophysics Data System (ADS)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance over the last decade as tools for the research and visualization of no longer extant historic objects. Within such reconstruction processes, visual media plays several important roles: as the most important source, especially for the reconstruction of no longer extant objects; as a tool for communication and cooperation within the production process; and as a means of communicating and visualizing results. While there is much discourse about theoretical issues of depiction as a source and as a visualization outcome of such projects, there is no systematic, empirically grounded research on the importance of depiction during a 3D reconstruction process. Moreover, from a methodological perspective, it is necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and by challenges specific to historic topics. The research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from the social sciences to gain a grounded view of how production processes take place in practice and which functions and roles images play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the humanities was completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects; only a small number focused on structures that no longer, or never, existed physically. That type of project seems especially interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination, the authors applied a qualitative content analysis to a sample of 26 previously published project reports to identify strategies and types, and conducted three case studies of 3D reconstruction projects to evaluate how such projects evolve. The research showed that reconstructions of no longer existing historic structures are most commonly used for presentation or research purposes involving large buildings or city models. Additionally, they are often realized by interdisciplinary workgroups that use images as the most important source for reconstruction, as well as an important medium for communication and quality control during the reconstruction process.

  20. Influence of the Pixel Sizes of Reference Computed Tomography on Single-photon Emission Computed Tomography Image Reconstruction Using Conjugate-gradient Algorithm.

    PubMed

    Okuda, Kyohei; Sakimoto, Shota; Fujii, Susumu; Ida, Tomonobu; Moriyama, Shigeru

    The use of the computed tomography (CT) coordinate system as the frame-of-reference for single-photon emission computed tomography (SPECT) reconstruction is one of the advanced characteristics of the xSPECT reconstruction system. The aim of this study was to reveal the influence of this high-resolution frame-of-reference on xSPECT reconstruction. A 99mTc line-source phantom and a National Electrical Manufacturers Association (NEMA) image quality phantom were scanned using the SPECT/CT system. xSPECT reconstructions were performed with reference CT images of different display field-of-view (DFOV) and pixel sizes. The pixel size of the reconstructed xSPECT images remained close to 2.4 mm, the size at which the projection data were originally acquired, even when the reference CT resolution was varied. The full width at half maximum (FWHM) of the line source, the absolute recovery coefficient, and the background variability of the image quality phantom were independent of the DFOV size of the reference CT images. The results of this study revealed that the image quality of the reconstructed xSPECT images is not influenced by the resolution of the frame-of-reference used for SPECT reconstruction.

  1. Smartphone based scalable reverse engineering by digital image correlation

    NASA Astrophysics Data System (ADS)

    Vidvans, Amey; Basu, Saurabh

    2018-03-01

    There is a need for scalable, open-source 3D reconstruction systems for reverse engineering, because most commercially available reconstruction systems are capital and resource intensive. To address this, a novel reconstruction technique is proposed. The technique involves digital image correlation based characterization of surface speeds, followed by normalization with respect to the angular speed during rigid-body rotational motion of the specimen. A proof of concept is demonstrated and validated using simulation and empirical characterization. Towards this, smartphone imaging and inexpensive off-the-shelf components are used, along with parts fabricated additively from poly-lactic acid polymer on a standard 3D printer. Some sources of error in this reconstruction methodology are discussed. It is seen that high surface curvature reduces the accuracy of reconstruction; the reasons are traced to the nature of the correlation function. The theoretically achievable resolution of smartphone-based 3D reconstruction by digital image correlation is derived.
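
    The normalization step mentioned above boils down to the rigid-body relation v = ωr: once digital image correlation provides a surface speed, dividing by the angular speed gives the local radius. The snippet below shows only that relation on invented numbers; the correlation-based speed estimation itself is not reproduced, and the pixel size, frame rate and rotation rate are hypothetical.

        import numpy as np

        def radii_from_surface_speed(displacement_px, pixel_size_mm, frame_rate_hz, rpm):
            """Local surface radius from DIC frame-to-frame displacements,
            normalised by the rigid-body angular speed (v = omega * r)."""
            speed_mm_s = displacement_px * pixel_size_mm * frame_rate_hz
            omega = 2.0 * np.pi * rpm / 60.0            # rad/s
            return speed_mm_s / omega

        # Illustrative values: three tracked points, 0.02 mm/pixel, 30 fps, 12 rpm turntable
        r_mm = radii_from_surface_speed(np.array([3.1, 2.4, 1.8]), 0.02, 30.0, 12.0)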

  2. 3D Reconstruction of a Fluvial Sediment Slug from Source to Sink: reach-scale modeling of the Dart River, NZ

    NASA Astrophysics Data System (ADS)

    Brasington, J.; Cook, S.; Cox, S.; James, J.; Lehane, N.; McColl, S. T.; Quincey, D. J.; Williams, R. D.

    2014-12-01

    Following heavy rainfall on 4/1/14, a debris flow at Slip Stream (44.59 S 168.34 E) introduced >10⁶ m³ of sediment to the Dart River valley floor in the NZ Southern Alps. Runout over an existing fan dammed the Dart River, causing a sudden drop in discharge downstream. This broad dam was breached quickly; however, the temporary loss of conveyance impounded a 3 km lake with a volume of 6 × 10⁶ m³ and depths that exceed 10 m. Quantifying the impact of this large sediment pulse on the Dart River is urgently needed to assess potential sedimentation downstream and will also provide an ideal vehicle to test theories of bed wave migration in large, extensively braided rivers. Recent advances in geomatics offer the opportunity to study these impacts directly through the production of high-resolution DEMs. These 3D snapshots can then be compared through time to quantify the morphodynamic response of the channel as it adjusts to the change in sediment supply. In this study we describe the methods and results of a novel survey strategy designed to capture the complex morphology of the Dart River along a remote 40 km reach, from the upstream landslide source to its distal sediment sink in Lake Wakatipu. The scale of this system presents major logistical and methodological challenges, and hitherto would conventionally have been addressed with airborne laser scanning, bringing with it significant deployment constraints and costs. By contrast, we present sub-metre 3D reconstructions of the system (Figure 1), derived from highly redundant aerial photography shot with a non-metric camera from a helicopter survey that extended over an 80 km² area. Structure-from-Motion photogrammetry was used to simultaneously solve for camera position and pose and to derive a 3D point cloud from over 4000 images. Reconstructions were found to exhibit significant systematic error resulting from the implicit estimation of the internal camera orientation parameters, and we show how these effects can be minimized by optimizing the lens calibration before and after scene reconstruction using both external constraints and refined camera models. An analysis of DEM uncertainty, undertaken through comparison with long-range TLS data, demonstrates the potential for this low-cost survey strategy to generate models superior to conventional laser swath mapping even over large areas.

  3. 40 CFR 63.5425 - When must I start recordkeeping to determine my compliance ratio?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) and (2) of this section: (1) If the startup of your new or reconstructed affected source is before... February 27, 2002. (2) If the startup of your new or reconstructed affected source is after February 27, 2002, then you must start recordkeeping to determine your compliance ratio upon startup of your...

  4. 40 CFR 63.5425 - When must I start recordkeeping to determine my compliance ratio?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) and (2) of this section: (1) If the startup of your new or reconstructed affected source is before... February 27, 2002. (2) If the startup of your new or reconstructed affected source is after February 27, 2002, then you must start recordkeeping to determine your compliance ratio upon startup of your...

  5. 40 CFR 63.5425 - When must I start recordkeeping to determine my compliance ratio?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) and (2) of this section: (1) If the startup of your new or reconstructed affected source is before... February 27, 2002. (2) If the startup of your new or reconstructed affected source is after February 27, 2002, then you must start recordkeeping to determine your compliance ratio upon startup of your...

  6. 40 CFR 63.9495 - When do I have to comply with this subpart?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... October 18, 2005. (b) If you have a new or reconstructed solvent mixer and its initial startup date is... initial startup. (c) If your friction materials manufacturing facility is an area source that increases... reconstructed sources upon startup or no later than October 18, 2002, whichever is later. (2) For any portion of...

  7. Source Plane Reconstruction of the Bright Lensed Galaxy RCSGA 032727-132609

    NASA Technical Reports Server (NTRS)

    Sharon, Keren; Gladders, Michael D.; Rigby, Jane R.; Wuyts, Eva; Koester, Benjamin P.; Bayliss, Matthew B.; Barrientos, L. Felipe

    2011-01-01

    We present new HST/WFC3 imaging data of RCS2 032727-132609, a bright lensed galaxy at z=1.7 that is magnified and stretched by the lensing cluster RCS2 032727-132623. Using this new high-resolution imaging, we modify our previous lens model (which was based on ground-based data) to fully understand the lensing geometry, and use it to reconstruct the lensed galaxy in the source plane. This giant arc represents a unique opportunity to peer into 100-pc scale structures in a high redshift galaxy. This new source reconstruction will be crucial for a future analysis of the spatially-resolved rest-UV and rest-optical spectra of the brightest parts of the arc.

  8. Q-Space Truncation and Sampling in Diffusion Spectrum Imaging

    PubMed Central

    Tian, Qiyuan; Rokem, Ariel; Folkerth, Rebecca D.; Nummenmaa, Aapo; Fan, Qiuyun; Edlow, Brian L.; McNab, Jennifer A.

    2015-01-01

    Purpose To characterize the q-space truncation and sampling on the spin-displacement probability density function (PDF) in diffusion spectrum imaging (DSI). Methods DSI data were acquired using the MGH-USC connectome scanner (Gmax=300mT/m) with bmax=30,000s/mm2, 17×17×17, 15×15×15 and 11×11×11 grids in ex vivo human brains and bmax=10,000s/mm2, 11×11×11 grid in vivo. An additional in vivo scan using bmax=7,000s/mm2, 11×11×11 grid was performed with a derated gradient strength of 40mT/m. PDFs and orientation distribution functions (ODFs) were reconstructed with different q-space filtering and PDF integration lengths, and from down-sampled data by factors of two and three. Results Both ex vivo and in vivo data showed Gibbs ringing in PDFs, which becomes the main source of artifact in the subsequently reconstructed ODFs. For down-sampled data, PDFs interfere with the first replicas or their ringing, leading to obscured orientations in ODFs. Conclusion The minimum required q-space sampling density corresponds to a field-of-view approximately equal to twice the mean displacement distance (MDD) of the tissue. The 11×11×11 grid is suitable for both ex vivo and in vivo DSI experiments. To minimize the effects of Gibbs ringing, ODFs should be reconstructed from unfiltered q-space data with the integration length over the PDF constrained to around the MDD. PMID:26762670

  9. Mycobacterium fortuitum Infection following Reconstructive Breast Surgery: Differentiation from Classically Described Red Breast Syndrome.

    PubMed

    Cicilioni, Orlando J; Foles, Van Brandon; Sieger, Barry; Musselman, Kelly

    2013-10-01

    Red breast syndrome (RBS) has been described as an erythema that may be associated with 2-stage prosthetic reconstructive breast surgery using biologic mesh. RBS is differentiated from infectious cellulitis through absence of fever and laboratory abnormalities and usually has a self-limiting course. There have been no clinical reports on etiology, risk factors, or management of RBS. This report describes patient data that raise the need to rule out mycobacterial infection when RBS is being considered as a diagnosis. We present 6 cases of Mycobacterium fortuitum infection occurring after prosthetic breast reconstruction performed with a human-derived acellular dermal matrix, including the timing and course of erythema, laboratory results, treatments used, and long-term outcomes. We also describe the differential diagnoses of RBS in the context of these cases, including emergence of acid-fast bacilli and diagnostic and treatment considerations. Exact two-tailed 95% confidence intervals based on the F-distribution are provided with estimates of the incidence rates of infection. The 6 cases presented here do not fit the typical description of RBS and were caused by mycobacterium infection. Statistical evaluation of the estimated incidence rate of M. fortuitum infection in a patient thought to have RBS, which occurred 100% of the time in this series, revealed a 95% confidence interval of 54.1-100%. When presented with possible RBS, surgeons must rule out cellulitis, culture for acid-fast bacilli such as mycobacterium species, and then determine the best course of treatment. Patient counseling regarding potential household sources of infection is warranted to minimize postoperative infection risk.

  10. Material Characterization of Field-Cast Connection Grouts : TechBrief

    DOT National Transportation Integrated Search

    2013-01-01

    There is a growing need for durable and resilient highway bridge construction/reconstruction systems that facilitate rapid completion of onsite activities, thus minimizing intrusion on the traveling public. Modular components can provide high-quality,...

  11. Local government pavement research, development, and implementation organization in several states.

    DOT National Transportation Integrated Search

    2017-04-01

    California's local governments face a growing backlog of projects and need new approaches to reduce the costs of pavement preservation, maintenance, rehabilitation, and reconstruction while also minimizing environmental impacts. The majority of...

  12. 76 FR 31005 - Information Collection Available for Public Comments and Recommendations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-27

    ... of debt obligations issued to finance or refinance the construction or reconstruction of vessels. In..., ways to minimize this burden, and ways to enhance the quality, utility, and clarity of the information...

  13. Refraction corrected calibration for aquatic locomotion research: application of Snell's law improves spatial accuracy.

    PubMed

    Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L

    2015-07-07

    Images of underwater objects are distorted by refraction at the water-glass-air interfaces, and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transform (DLT) algorithm, which does not model refraction explicitly, to reconstruct position information. Here we present a refraction-corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating the 3D reconstruction error, the difference between the actual and reconstructed positions of a marker. We found that the reconstruction error is small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and the errors are essentially insensitive to camera position and orientation and to the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras to reconstruct the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust; it allows cameras to be oriented at large angles of incidence and facilitates the development of accurate tracking algorithms to quantify aquatic manoeuvres.
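
    A refraction-corrected ray tracer ultimately rests on the vector form of Snell's law at each flat interface. The helper below refracts a unit direction vector at a planar interface with a given normal; it is a generic sketch rather than the authors' RCRT implementation, and the refractive indices, normal and example ray are illustrative.

        import numpy as np

        def refract(d, n, n1, n2):
            """Refract unit direction d at a planar interface with unit normal n
            (oriented against the incoming ray), going from index n1 to n2.
            Returns the refracted unit direction, or None for total internal reflection."""
            d = d / np.linalg.norm(d)
            n = n / np.linalg.norm(n)
            cos_i = -np.dot(d, n)
            if cos_i < 0:                      # flip the normal so it opposes the ray
                n, cos_i = -n, -cos_i
            eta = n1 / n2
            k = 1.0 - eta**2 * (1.0 - cos_i**2)
            if k < 0:
                return None                    # total internal reflection
            return eta * d + (eta * cos_i - np.sqrt(k)) * n

        # Air -> glass -> water stack, ray hitting the first interface at 30 degrees
        d_air = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
        normal = np.array([0.0, 0.0, 1.0])
        d_glass = refract(d_air, normal, 1.000, 1.523)
        d_water = refract(d_glass, normal, 1.523, 1.333)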

  14. Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction

    PubMed Central

    Jian, Y; Planeta, B; Carson, R E

    2016-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254

  15. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    NASA Astrophysics Data System (ADS)

    Jian, Y.; Planeta, B.; Carson, R. E.

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.

  16. B-Scan Based Acoustic Source Reconstruction for Magnetoacoustic Tomography with Magnetic Induction (MAT-MI)

    PubMed Central

    Mariappan, Leo; Li, Xu; He, Bin

    2011-01-01

    We present in this study an acoustic source reconstruction method using a focused transducer with B-mode imaging for magnetoacoustic tomography with magnetic induction (MAT-MI). MAT-MI is an imaging modality proposed for non-invasive conductivity imaging with high spatial resolution. In MAT-MI, acoustic sources are generated in a conductive object by placing it in a static and a time-varying magnetic field. The acoustic waves from these sources propagate in all directions and are collected with transducers placed around the object. The collected signal is then used to reconstruct the acoustic source distribution and to further estimate the electrical conductivity distribution of the object. A flat piston transducer acting as a point receiver has been used in previous MAT-MI systems to collect acoustic signals. In the present study we propose to use a B-mode scan scheme with a focused transducer, which gives a signal gain in its focal region and improves the MAT-MI signal quality. A simulation protocol that can take into account different transducer designs and scan schemes for MAT-MI imaging is developed and used in our evaluation of different MAT-MI system designs. It is shown in our computer simulations that, as compared to the previous approach, the MAT-MI system using B-scan with a focused transducer allows MAT-MI imaging at a closer distance and has improved system sensitivity. In addition, the B-scan imaging technique allows reconstruction of the MAT-MI acoustic sources with a discrete number of scanning locations, which greatly increases the applicability of the MAT-MI approach, especially when a continuous acoustic window is not available in real clinical applications. We have also conducted phantom experiments to evaluate the proposed method, and the reconstructed image shows good agreement with the target phantom. PMID:21097372

  17. DEEP WIDEBAND SINGLE POINTINGS AND MOSAICS IN RADIO INTERFEROMETRY: HOW ACCURATELY DO WE RECONSTRUCT INTENSITIES AND SPECTRAL INDICES OF FAINT SOURCES?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.

  18. Novel fusion for hybrid optical/microcomputed tomography imaging based on natural light surface reconstruction and iterated closest point

    NASA Astrophysics Data System (ADS)

    Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo

    2014-02-01

    Mathematically, optical molecular imaging modalities including bioluminescence tomography (BLT), fluorescence molecular tomography (FMT) and Cerenkov luminescence tomography (CLT) are concerned with a similar inverse source problem. They all involve reconstructing the 3D location of single or multiple internal luminescent/fluorescent sources from the 3D surface flux distribution. To achieve that, an accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve the fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and iterated closest point (ICP) was presented. It consists of an octree structure, an exact visual hull extracted by marching cubes, and ICP. Unlike conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from an unlimited number of multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error at preset markers was improved by 0.3 and 0.2 pixels with the new method, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.
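
    For readers unfamiliar with the iterated-closest-point step, the sketch below is a bare-bones point-to-point ICP: nearest neighbours via a k-d tree, followed by an SVD-based rigid alignment (Kabsch), iterated a fixed number of times. It is a generic illustration rather than the registration pipeline of the paper, and the point clouds are synthetic.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:           # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, dst_c - R @ src_c

        def icp(src, dst, iters=30):
            """Point-to-point ICP aligning src to dst."""
            tree = cKDTree(dst)
            cur = src.copy()
            for _ in range(iters):
                _, idx = tree.query(cur)                  # closest dst point for each src point
                R, t = best_rigid_transform(cur, dst[idx])
                cur = cur @ R.T + t
            return cur

        rng = np.random.default_rng(4)
        surface = rng.uniform(-1, 1, size=(500, 3))        # stand-in surface points
        angle = np.radians(10)
        Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
        moved = surface @ Rz.T + np.array([0.05, -0.02, 0.01])
        aligned = icp(moved, surface)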

  19. Optimized x-ray source scanning trajectories for iterative reconstruction in high cone-angle tomography

    NASA Astrophysics Data System (ADS)

    Kingston, Andrew M.; Myers, Glenn R.; Latham, Shane J.; Li, Heyang; Veldkamp, Jan P.; Sheppard, Adrian P.

    2016-10-01

    With GPU computing becoming mainstream, iterative tomographic reconstruction (IR) is becoming a computationally viable alternative to traditional single-shot analytical methods such as filtered back-projection. IR liberates one from the continuous X-ray source trajectories required for analytical reconstruction. We present a family of novel X-ray source trajectories for large-angle CBCT. These discrete (sparsely sampled) trajectories optimally fill the space of possible source locations by maximising the degree of mutually independent information. They satisfy a discrete equivalent of Tuy's sufficiency condition and allow high cone-angle (high-flux) tomography. The highly isotropic nature of the trajectory has several advantages: (1) The average source distance is approximately constant throughout the reconstruction volume, thus avoiding the differential-magnification artefacts that plague high cone-angle helical computed tomography; (2) Reduced streaking artifacts due to e.g. X-ray beam-hardening; (3) Misalignment and component motion manifest as blur in the tomogram rather than as double edges, which is easier to correct automatically; (4) An approximately shift-invariant point-spread function, which enables filtering as a pre-conditioner to speed IR convergence. We describe these space-filling trajectories and demonstrate their above-mentioned properties compared with traditional helical trajectories.
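
    The specific space-filling trajectories of the paper are not given in this record. As a generic stand-in for the idea of an approximately isotropic, discrete set of source positions, one common construction is the Fibonacci lattice on a sphere; the sketch below shows only that construction, with an arbitrary number of points and radius, and should not be read as the authors' trajectory design.

        import numpy as np

        def fibonacci_sphere(n_points, radius=1.0):
            """Approximately uniform points on a sphere via the Fibonacci lattice."""
            i = np.arange(n_points)
            golden = (1 + np.sqrt(5)) / 2
            z = 1 - 2 * (i + 0.5) / n_points            # uniform spacing in z
            phi = 2 * np.pi * i / golden                # golden-angle spacing in azimuth
            r_xy = np.sqrt(1 - z**2)
            return radius * np.stack([r_xy * np.cos(phi), r_xy * np.sin(phi), z], axis=1)

        source_positions = fibonacci_sphere(500, radius=120.0)   # e.g. 500 source points at 120 mm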

  20. A 3D reconstruction algorithm for magneto-acoustic tomography with magnetic induction based on ultrasound transducer characteristics.

    PubMed

    Ma, Ren; Zhou, Xiaoqing; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-12-21

    In this study we present a three-dimensional (3D) reconstruction algorithm for magneto-acoustic tomography with magnetic induction (MAT-MI) based on the characteristics of the ultrasound transducer. The algorithm is investigated to solve the blur problem of the MAT-MI acoustic source image, which is caused by the ultrasound transducer and the scanning geometry. First, we established a transducer model matrix using measured data from the real transducer. With reference to the S-L model used in the computed tomography algorithm, a 3D phantom model of electrical conductivity is set up. Both sphere scanning and cylinder scanning geometries are adopted in the computer simulation. Then, using finite element analysis, the distribution of the eddy current and the acoustic source as well as the acoustic pressure can be obtained with the transducer model matrix. Next, using singular value decomposition, the inverse of the transducer model matrix and the corresponding reconstruction algorithm are worked out. The acoustic source and the conductivity images are reconstructed using the proposed algorithm. Comparisons between an ideal point transducer and the realistic transducer are made to evaluate the algorithms. Finally, an experiment is performed using a graphite phantom. We found that images of the acoustic source reconstructed using the proposed algorithm match the target better than those obtained with the previous algorithm; the correlation coefficient is 98.49% for the sphere scanning geometry and 94.96% for the cylinder scanning geometry. Comparison between the ideal point transducer and the realistic transducer shows that the correlation coefficients are 90.2% in the sphere scanning geometry and 86.35% in the cylinder scanning geometry. The reconstruction of the graphite phantom experiment also shows a higher resolution using the proposed algorithm. We conclude that the proposed reconstruction algorithm, which considers the characteristics of the transducer, can markedly improve the resolution of the reconstructed image. This study can be applied to analyse the effect of the position of the transducer and the scanning geometry on imaging. It may provide a more precise method to reconstruct the conductivity distribution in MAT-MI.
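
    The singular-value-decomposition step described above amounts, in its simplest reading, to forming a truncated pseudo-inverse of the transducer model matrix. A generic sketch with an arbitrary truncation level follows; the matrix and data here are random stand-ins, whereas the paper's model matrix is built from measured transducer data.

        import numpy as np

        def truncated_pinv(A, rel_tol=1e-2):
            """Pseudo-inverse of A keeping only singular values above rel_tol * s_max."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            keep = s > rel_tol * s[0]
            return (Vt[keep].T / s[keep]) @ U[:, keep].T

        rng = np.random.default_rng(5)
        A = rng.standard_normal((300, 200))       # stand-in transducer model matrix
        b = rng.standard_normal(300)              # stand-in measured acoustic pressures
        source_estimate = truncated_pinv(A) @ b   # regularised acoustic source estimate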

  1. Probing the genome-scale metabolic landscape of Bordetella pertussis, the causative agent of whooping cough.

    PubMed

    Branco Dos Santos, Filipe; Olivier, Brett G; Boele, Joost; Smessaert, Vincent; De Rop, Philippe; Krumpochova, Petra; Klau, Gunnar W; Giera, Martin; Dehottay, Philippe; Teusink, Bas; Goffin, Philippe

    2017-08-25

    Whooping cough is a highly contagious respiratory disease caused by Bordetella pertussis. Despite vaccination, its incidence has been rising alarmingly, and yet the physiology of B. pertussis remains poorly understood. We combined genome-scale metabolic reconstruction, a novel optimization algorithm and experimental data to probe the full metabolic potential of this pathogen, using strain Tohama I as a reference. Experimental validation showed that B. pertussis secretes a significant proportion of nitrogen as arginine and purine nucleosides, which may contribute to modulation of the host response. We also found that B. pertussis can be unexpectedly versatile, being able to metabolize many compounds while displaying minimal nutrient requirements. It can grow without cysteine, using inorganic sulfur sources such as thiosulfate, and it can grow on organic acids such as citrate or lactate as sole carbon sources, providing in vivo demonstration that its TCA cycle is functional. Although the metabolic reconstruction of eight additional strains indicates that the structural genes underlying this metabolic flexibility are widespread, experimental validation suggests a role of strain-specific regulatory mechanisms in shaping metabolic capabilities. Among five alternative strains tested, three were shown to grow on substrate combinations requiring a functional TCA cycle, but only one could use thiosulfate. Finally, the metabolic model was used to rationally design growth media with over two-fold improvements in pertussis toxin production. This study thus provides novel insights into B. pertussis physiology, and highlights the potential, but also the limitations, of models based solely on metabolic gene content. IMPORTANCE The metabolic capabilities of Bordetella pertussis, the causative agent of whooping cough, were investigated from a systems-level perspective. We constructed a comprehensive genome-scale metabolic model for B. pertussis and challenged its predictions experimentally. This systems approach shed light on new potential host-microbe interactions, and allowed us to rationally design novel growth media with over two-fold improvements in pertussis toxin production. Most importantly, we also uncovered the potential for metabolic flexibility of B. pertussis (significantly larger range of substrates than previously alleged; novel active pathways allowing growth in minimal, nearly mineral nutrient combinations where only the carbon source must be organic), although our results also highlight the importance of strain-specific regulatory determinants in shaping metabolic capabilities. Deciphering the underlying regulatory mechanisms appears crucial for a comprehensive understanding of B. pertussis's lifestyle and the epidemiology of whooping cough. The contribution of metabolic models in this context will require the extension of the genome-scale metabolic model to integrate this regulatory dimension.
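
    Genome-scale metabolic reconstructions of this kind are commonly interrogated with flux balance analysis: a linear programme that maximises a growth or product flux subject to steady-state mass balance S v = 0 and flux bounds. The toy network, bounds and reaction names below are invented purely to show the shape of that computation and have nothing to do with the actual B. pertussis model or the paper's optimization algorithm.

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: uptake -> A, A -> B, B -> biomass (3 reactions, 2 metabolites A and B)
        S = np.array([[ 1, -1,  0],    # metabolite A balance
                      [ 0,  1, -1]])   # metabolite B balance
        bounds = [(0, 10), (0, 1000), (0, 1000)]     # uptake capped at 10 flux units

        # Maximise flux through the "biomass" reaction (index 2) at steady state S v = 0
        c = np.zeros(3); c[2] = -1.0                 # linprog minimises, so negate the objective
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print("optimal biomass flux:", -res.fun, "flux vector:", res.x)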

  2. Probing the Genome-Scale Metabolic Landscape of Bordetella pertussis, the Causative Agent of Whooping Cough

    PubMed Central

    Olivier, Brett G.; Boele, Joost; Smessaert, Vincent; De Rop, Philippe; Krumpochova, Petra; Klau, Gunnar W.; Giera, Martin; Dehottay, Philippe; Goffin, Philippe

    2017-01-01

    ABSTRACT Whooping cough is a highly contagious respiratory disease caused by Bordetella pertussis. Despite widespread vaccination, its incidence has been rising alarmingly, and yet, the physiology of B. pertussis remains poorly understood. We combined genome-scale metabolic reconstruction, a novel optimization algorithm, and experimental data to probe the full metabolic potential of this pathogen, using B. pertussis strain Tohama I as a reference. Experimental validation showed that B. pertussis secretes a significant proportion of nitrogen as arginine and purine nucleosides, which may contribute to modulation of the host response. We also found that B. pertussis can be unexpectedly versatile, being able to metabolize many compounds while displaying minimal nutrient requirements. It can grow without cysteine, using inorganic sulfur sources, such as thiosulfate, and it can grow on organic acids, such as citrate or lactate, as sole carbon sources, providing in vivo demonstration that its tricarboxylic acid (TCA) cycle is functional. Although the metabolic reconstruction of eight additional strains indicates that the structural genes underlying this metabolic flexibility are widespread, experimental validation suggests a role of strain-specific regulatory mechanisms in shaping metabolic capabilities. Among five alternative strains tested, three strains were shown to grow on substrate combinations requiring a functional TCA cycle, but only one strain could use thiosulfate. Finally, the metabolic model was used to rationally design growth media with >2-fold improvements in pertussis toxin production. This study thus provides novel insights into B. pertussis physiology and highlights the potential, but also the limitations, of models based solely on metabolic gene content. IMPORTANCE The metabolic capabilities of Bordetella pertussis, the causative agent of whooping cough, were investigated from a systems-level perspective. We constructed a comprehensive genome-scale metabolic model for B. pertussis and challenged its predictions experimentally. This systems approach shed light on new potential host-microbe interactions and allowed us to rationally design novel growth media with >2-fold improvements in pertussis toxin production. Most importantly, we also uncovered the potential for metabolic flexibility of B. pertussis (significantly larger range of substrates than previously alleged; novel active pathways allowing growth in minimal, nearly mineral nutrient combinations where only the carbon source must be organic), although our results also highlight the importance of strain-specific regulatory determinants in shaping metabolic capabilities. Deciphering the underlying regulatory mechanisms appears to be crucial for a comprehensive understanding of B. pertussis's lifestyle and the epidemiology of whooping cough. The contribution of metabolic models in this context will require the extension of the genome-scale metabolic model to integrate this regulatory dimension. PMID:28842544

  3. Multi-threaded Event Processing with DANA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Lawrence; Elliott Wolin

    2007-05-14

    The C++ data analysis framework DANA has been written to support the next generation of Nuclear Physics experiments at Jefferson Lab, commensurate with the anticipated 12 GeV upgrade. The DANA framework was designed to allow multi-threaded event processing with a minimal impact on developers of reconstruction software. This document describes how DANA implements multi-threaded event processing and compares it to simply running multiple instances of a program. Also presented are relative reconstruction rates for Pentium 4, Xeon, and Opteron based machines.

  4. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
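
    The record does not spell out the BARISTA iterations, but the family it accelerates can be illustrated with a generic iterative soft-thresholding scheme with momentum (FISTA) for an L1-regularised least-squares problem. The system matrix below is a random stand-in rather than a SENSE encoding operator, the single Lipschitz constant is taken as the largest squared singular value (the very quantity BARISTA improves upon), and all sizes and parameters are illustrative.

        import numpy as np

        def soft_threshold(x, tau):
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def fista(A, y, lam, iters=200):
            """FISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data term
            x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
            for _ in range(iters):
                x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
                t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
                z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
                x, t = x_new, t_new
            return x

        rng = np.random.default_rng(6)
        A = rng.standard_normal((128, 256)) / np.sqrt(128)
        x_true = np.zeros(256); x_true[rng.choice(256, 10, replace=False)] = rng.standard_normal(10)
        x_hat = fista(A, A @ x_true + 0.01 * rng.standard_normal(128), lam=0.02)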

  5. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  6. Determining biosonar images using sparse representations.

    PubMed

    Fontaine, Bertrand; Peremans, Herbert

    2009-05-01

    Echolocating bats are thought to be able to create an image of their environment by emitting pulses and analyzing the reflected echoes. In this paper, the theory of sparse representations and its more recent further development into compressed sensing are applied to this biosonar image formation task. Considering the target image representation as sparse allows formulation of this inverse problem as a convex optimization problem for which well defined and efficient solution methods have been established. The resulting technique, referred to as L1-minimization, is applied to simulated data to analyze its performance relative to delay accuracy and delay resolution experiments. This method performs comparably to the coherent receiver for the delay accuracy experiments, is quite robust to noise, and can reconstruct complex target impulse responses as generated by many closely spaced reflectors with different reflection strengths. This same technique, in addition to reconstructing biosonar target images, can be used to simultaneously localize these complex targets by interpreting location cues induced by the bat's head related transfer function. Finally, a tentative explanation is proposed for specific bat behavioral experiments in terms of the properties of target images as reconstructed by the L1-minimization method.

  7. Joint reconstruction of dynamic PET activity and kinetic parametric images using total variation constrained dictionary sparse coding

    NASA Astrophysics Data System (ADS)

    Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng

    2017-05-01

    Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information about radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates dictionary sparse coding (DSC) into a total variation minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic Monte Carlo-generated data and real patient data have been conducted, and the results are very promising.
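
    The full framework couples a PET forward model with kinetic dictionaries and total variation; as a much-reduced sketch of the alternating direction method of multipliers (ADMM) machinery mentioned above, the code below solves a single dictionary sparse-coding subproblem (a lasso) by ADMM. The dictionary, signal and penalty are synthetic placeholders rather than compartmental-model quantities.

      import numpy as np

      def admm_lasso(D, s, lam, rho=1.0, n_iter=300):
          # ADMM for the sparse-coding problem min_c 0.5*||D c - s||^2 + lam*||c||_1.
          n = D.shape[1]
          DtD, Dts = D.T @ D, D.T @ s
          L = np.linalg.cholesky(DtD + rho * np.eye(n))   # factor once, reuse every iteration
          c = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
          for _ in range(n_iter):
              rhs = Dts + rho * (z - u)
              c = np.linalg.solve(L.T, np.linalg.solve(L, rhs))           # c-update
              z = np.sign(c + u) * np.maximum(np.abs(c + u) - lam / rho, 0.0)  # soft threshold
              u = u + c - z                                                # dual update
          return z

      rng = np.random.default_rng(2)
      D = rng.standard_normal((100, 300))              # stand-in for a kinetic-basis dictionary
      c_true = np.zeros(300); c_true[rng.choice(300, 5, replace=False)] = 2.0
      c_hat = admm_lasso(D, D @ c_true, lam=0.1)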

  8. Application of Digital Diagnosis and Treatment Technique in Benign Mandibular Diseases.

    PubMed

    Ju, Rui; Zeng, Wei; Lian, Xiaotian; Chen, Gang; Yin, Huaqiang; Tang, Wei

    2018-05-01

    To explore the feasibility of preoperative planning for the treatment of benign mandibular lesions (BML) using digital technologies such as three-dimensional (3D) reconstruction, measurement, visualization, and image contrast, together with the design of a neural positioning protection template (NPPT) and 3D printing technology, in BML diagnosis and treatment. The 3D models of BML and inferior alveolar nerves (IAN) of 10 BML patients were reconstructed based on their digital imaging and communications in medicine (DICOM) data using MIMICS16.0 software. The models were used to visualize the lesions, to perform lesion-nerve contrast measurements, and, once the operative modality was determined, to guide the design of a personalized NPPT and the osteotomy, in order to achieve an accurate, minimally invasive operation with shortened intraoperative time. Intraoperative application of the NPPT accurately located the lesions and their extent and assisted the osteotomy. The measurement results were consistent with those of preoperative reconstruction and measurement. The BML were curetted completely without damage to the IAN. The 10 BML patients had no numbness or other discomfort in the lower lip or mandibular teeth after the operation. The digital diagnosis and treatment technology is an effective method for functional treatment of BML patients, and its application could achieve personalized, minimally invasive and precise treatment and save intraoperative time.

  9. Mandibular Tissue Engineering: Past, Present, Future.

    PubMed

    Konopnicki, Sandra; Troulis, Maria J

    2015-12-01

    Almost 2 decades ago, the senior author's (M.T.J.) first article was with our mentor, Dr Leonard B. Kaban, a review article titled "Distraction Osteogenesis: Past, Present, Future." In 1998, many thought it would be impossible to have a remotely activated, small, curvilinear distractor that could be placed using endoscopic techniques. Currently, a U.S. patent for a curvilinear automated device and endoscopic techniques for minimally invasive access for jaw reconstruction exist. With minimally invasive access for jaw reconstruction, the burden to decrease donor site morbidity has increased. Distraction osteogenesis (DO) is an in vivo form of tissue engineering. The DO technique eliminates a donor site, is less invasive, requires a shorter operative time than usual procedures, and can be used for multiple reconstruction applications. Tissue engineering could further reduce morbidity and cost and increase treatment availability. The purpose of the present report was to review our experience with tissue engineering of bone: the past, present, and our vision for the future. The present report serves as a tribute to our mentor and acknowledges Dr Kaban for his incessant tutelage, guidance, wisdom, and boundless vision. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  10. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem, whose minimum reconstruction frequency hinges on the size of the array and whose maximum frequency depends on the spacing between the microphones. For the sake of enlarging the frequency range of reconstruction and reducing the cost of an acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed, the only assumption being that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adapts to practical acoustical measurement scenarios thanks to the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is next illustrated with an industrial case.

  11. 40 CFR 60.706 - Reconstruction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Volatile Organic Compound Emissions From Synthetic Organic Chemical Manufacturing Industry (SOCMI) Reactor Processes § 60.706 Reconstruction. (a) For...

  12. 40 CFR 60.706 - Reconstruction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Volatile Organic Compound Emissions From Synthetic Organic Chemical Manufacturing Industry (SOCMI) Reactor Processes § 60.706 Reconstruction. (a) For...

  13. 40 CFR Table 1b to Subpart Zzzz of... - Operating Limitations for Existing, New, and Reconstructed Spark Ignition, 4SRB Stationary RICE...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., New, and Reconstructed Spark Ignition, 4SRB Stationary RICE >500 HP Located at a Major Source of HAP... Limitations for Existing, New, and Reconstructed Spark Ignition, 4SRB Stationary RICE >500 HP Located at a... following operating emission limitations for existing, new and reconstructed 4SRB stationary RICE >500 HP...

  14. 40 CFR Table 1a to Subpart Zzzz of... - Emission Limitations for Existing, New, and Reconstructed Spark Ignition, 4SRB Stationary RICE...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., and Reconstructed Spark Ignition, 4SRB Stationary RICE >500 HP Located at a Major Source of HAP... Limitations for Existing, New, and Reconstructed Spark Ignition, 4SRB Stationary RICE >500 HP Located at a... emission limitations for existing, new and reconstructed 4SRB stationary RICE at 100 percent load plus or...

  15. Ghost imaging with bucket detection and point detection

    NASA Astrophysics Data System (ADS)

    Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao

    2018-04-01

    We experimentally investigate ghost imaging with bucket detection and point detection in which three types of illuminating sources are applied: (a) pseudo-thermal light source; (b) amplitude modulated true thermal light source; (c) amplitude modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the use of a bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light is better in the bucket detector case than in the point detector case. Our theoretical analysis shows that this difference is due to the first-order transverse coherence of the illuminating source.
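
    For context, a ghost image is conventionally reconstructed by correlating the bucket (or point) detector signal with the known illumination patterns; the minimal sketch below does this for simulated pseudo-thermal speckle, with the object, pattern statistics and frame count all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(3)
      n_frames, h, w = 5000, 32, 32
      obj = np.zeros((h, w)); obj[10:22, 14:18] = 1.0      # simple rectangular transmissive object

      # Pseudo-thermal illumination: an independent speckle pattern for each frame.
      speckle = rng.exponential(scale=1.0, size=(n_frames, h, w))
      bucket = (speckle * obj).sum(axis=(1, 2))            # bucket detector: total transmitted light

      # Second-order correlation: G(x, y) = <B * S(x, y)> - <B><S(x, y)>
      ghost = (bucket[:, None, None] * speckle).mean(axis=0) \
              - bucket.mean() * speckle.mean(axis=0)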

  16. Streaming video-based 3D reconstruction method compatible with existing monoscopic and stereoscopic endoscopy systems

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul

    2012-06-01

    Compared to open surgery, minimally invasive surgery offers reduced trauma and faster recovery. However, the lack of a direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.

  17. Penile reconstruction with bilateral superficial circumflex iliac artery perforator (SCIP) flaps.

    PubMed

    Koshima, Isao; Nanba, Yuzaburo; Nagai, Atsushi; Nakatsuka, Mikiya; Sato, Toshiki; Kuroda, Shigetosi

    2006-04-01

    The free radial forearm flap is a very common material for penile reconstruction. Its major problems are donor-site morbidity with a large depressed scar after skin grafting, urethral fistula due to insufficiency of the suture line for the urethra, and the need for microvascular anastomosis. A new method using combined bilateral island SCIP flaps for the urethra and penis is developed for gender identity disorder (GID) patients. The advantages of this method are minimal donor-site morbidity with a concealed donor scar, and the possibility of one-stage reconstruction of a longer urethra, 22 cm in length, without insufficiency, even for GID female-to-male patients. A disadvantage is poor sensory recovery.

  18. Temporal evolution of the Green's function reconstruction in the seismic coda

    NASA Astrophysics Data System (ADS)

    Clerc, V.; Roux, P.; Campillo, M.

    2013-12-01

    In the presence of multiple scattering, the wavefield evolves towards an equipartitioned state, equivalent to ambient noise. CAMPILLO and PAUL (2003) reconstructed the surface wave part of the Green's function between three pairs of stations in Mexico. The data indicate that the time asymmetry between the causal and acausal parts of the Green's function is less pronounced when the correlation is performed in the later windows of the coda. These results on the correlation of diffuse waves provide another perspective on the reconstruction of the Green's function, one which is independent of the source distribution and which suggests that, if the time of observation is long enough, a single source could be sufficient. The paper by ROUX et al. (2005) provides a theoretical framework for the reconstruction of the Green's function in a homogeneous medium. In a multiple scattering medium with a single source, scatterers behave as secondary sources according to the Huygens principle. Coda waves are relevant to multiple scattering, a regime which can be approximated by diffusion for long lapse times. We express the temporal evolution of the correlation function between two receivers as a function of the secondary sources. We are able to predict the effect of the persistence of the net flux of energy observed by CAMPILLO and PAUL (2003) in numerical simulations. This method is also effective for retrieving the scattering mean free path. We perform a partial reconstruction of the Green's function in a strongly scattering medium in numerical simulations. The prediction of the flux asymmetry allows us to identify the parts of the coda that provide the same information as ambient noise cross-correlation.
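
    The standard numerical experiment behind such studies cross-correlates the wavefields recorded at two receivers and stacks over sources or coda windows, so that the stack peaks near the inter-receiver travel time. The sketch below is a deliberately crude 1D version in which propagation is modelled as a pure time delay; the geometry, wave speed and source statistics are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      fs, n_samp, n_src = 100.0, 2048, 200      # sampling rate (Hz), trace length, noise sources
      c = 1000.0                                # wave speed (m/s)
      xA, xB = 0.0, 250.0                       # two receivers on a line (m)

      corr = np.zeros(2 * n_samp - 1)
      for _ in range(n_src):
          xs = rng.uniform(-5000.0, 5000.0)     # random source position
          src = rng.standard_normal(n_samp)     # broadband noise source signal
          dA = int(round(abs(xs - xA) / c * fs))
          dB = int(round(abs(xs - xB) / c * fs))
          uA = np.roll(src, dA)                 # crude "propagation": pure time delay
          uB = np.roll(src, dB)
          corr += np.correlate(uA, uB, mode="full")

      lags = np.arange(-n_samp + 1, n_samp) / fs
      # The stacked correlation peaks near +/- |xA - xB| / c, the inter-receiver travel time.
      print("expected travel time:", abs(xA - xB) / c, "s;",
            "peak at", lags[np.argmax(np.abs(corr))], "s")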

  19. Binary encoding of multiplexed images in mixed noise.

    PubMed

    Lalush, David S

    2008-09-01

    Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
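
    The central quantity in such an analysis is how the decoding step propagates measurement noise with constant and proportional components into the decoded image. The sketch below computes this per-element decoded variance for a cyclic binary matrix built from a length-7 maximal-length sequence (a construction commonly used for Hadamard S-matrices, treated here simply as one example of a binary coding matrix) and compares it with the one-source-at-a-time identity matrix; the noise levels a and b are arbitrary assumptions.

      import numpy as np

      def decoded_noise_variance(W, x, a, b):
          # Per-element variance of the decoded image when the coded measurements
          # m = W @ x carry independent noise with variance a + b * m
          # (constant plus proportional components) and decoding uses W^{-1}.
          Winv = np.linalg.inv(W)
          meas = W @ x
          return (Winv ** 2) @ (a + b * meas)

      seq = np.array([1, 1, 1, 0, 1, 0, 0])                 # length-7 m-sequence
      W_smat = np.array([np.roll(seq, k) for k in range(7)])
      W_single = np.eye(7)                                  # one-source-at-a-time reference

      x = np.full(7, 10.0)                                  # uniform test signal
      for name, W in [("multiplexed", W_smat), ("single-source", W_single)]:
          print(name, decoded_noise_variance(W, x, a=1.0, b=0.1).mean())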

  20. Defining an essence of structure determining residue contacts in proteins.

    PubMed

    Sathyapriya, R; Duarte, Jose M; Stehr, Henning; Filippis, Ioannis; Lappe, Michael

    2009-12-01

    The network of native non-covalent residue contacts determines the three-dimensional structure of a protein. However, not all contacts are of equal structural significance, and little knowledge exists about a minimal, yet sufficient, subset required to define the global features of a protein. Characterisation of this "structural essence" has remained elusive so far: no algorithmic strategy has been devised to-date that could outperform a random selection in terms of 3D reconstruction accuracy (measured as the Ca RMSD). It is not only of theoretical interest (i.e., for design of advanced statistical potentials) to identify the number and nature of essential native contacts-such a subset of spatial constraints is very useful in a number of novel experimental methods (like EPR) which rely heavily on constraint-based protein modelling. To derive accurate three-dimensional models from distance constraints, we implemented a reconstruction pipeline using distance geometry. We selected a test-set of 12 protein structures from the four major SCOP fold classes and performed our reconstruction analysis. As a reference set, series of random subsets (ranging from 10% to 90% of native contacts) are generated for each protein, and the reconstruction accuracy is computed for each subset. We have developed a rational strategy, termed "cone-peeling" that combines sequence features and network descriptors to select minimal subsets that outperform the reference sets. We present, for the first time, a rational strategy to derive a structural essence of residue contacts and provide an estimate of the size of this minimal subset. Our algorithm computes sparse subsets capable of determining the tertiary structure at approximately 4.8 Å Ca RMSD with as little as 8% of the native contacts (Ca-Ca and Cb-Cb). At the same time, a randomly chosen subset of native contacts needs about twice as many contacts to reach the same level of accuracy. This "structural essence" opens new avenues in the fields of structure prediction, empirical potentials and docking.

  1. Defining an Essence of Structure Determining Residue Contacts in Proteins

    PubMed Central

    Sathyapriya, R.; Duarte, Jose M.; Stehr, Henning; Filippis, Ioannis; Lappe, Michael

    2009-01-01

    The network of native non-covalent residue contacts determines the three-dimensional structure of a protein. However, not all contacts are of equal structural significance, and little knowledge exists about a minimal, yet sufficient, subset required to define the global features of a protein. Characterisation of this “structural essence” has remained elusive so far: no algorithmic strategy has been devised to-date that could outperform a random selection in terms of 3D reconstruction accuracy (measured as the Ca RMSD). It is not only of theoretical interest (i.e., for design of advanced statistical potentials) to identify the number and nature of essential native contacts—such a subset of spatial constraints is very useful in a number of novel experimental methods (like EPR) which rely heavily on constraint-based protein modelling. To derive accurate three-dimensional models from distance constraints, we implemented a reconstruction pipeline using distance geometry. We selected a test-set of 12 protein structures from the four major SCOP fold classes and performed our reconstruction analysis. As a reference set, series of random subsets (ranging from 10% to 90% of native contacts) are generated for each protein, and the reconstruction accuracy is computed for each subset. We have developed a rational strategy, termed “cone-peeling” that combines sequence features and network descriptors to select minimal subsets that outperform the reference sets. We present, for the first time, a rational strategy to derive a structural essence of residue contacts and provide an estimate of the size of this minimal subset. Our algorithm computes sparse subsets capable of determining the tertiary structure at approximately 4.8 Å Ca RMSD with as little as 8% of the native contacts (Ca-Ca and Cb-Cb). At the same time, a randomly chosen subset of native contacts needs about twice as many contacts to reach the same level of accuracy. This “structural essence” opens new avenues in the fields of structure prediction, empirical potentials and docking. PMID:19997489
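
    The reconstruction pipeline described above relies on distance geometry over a sparse set of contacts; as a highly simplified stand-in that conveys the core idea (pairwise distances fix a structure up to rigid motion), the sketch below recovers 3D coordinates from a complete distance matrix by classical multidimensional scaling. The toy coordinates are random placeholders, not protein Ca positions.

      import numpy as np

      def classical_mds(D, dim=3):
          # Recover coordinates (up to rotation/translation/reflection) from a
          # complete matrix of pairwise distances D via classical MDS.
          n = D.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
          B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
          w, V = np.linalg.eigh(B)
          idx = np.argsort(w)[::-1][:dim]              # top `dim` eigenpairs
          return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

      rng = np.random.default_rng(5)
      coords = rng.standard_normal((50, 3))            # toy point cloud
      D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
      rec = classical_mds(D)
      # After optimal superposition the RMSD to `coords` is ~0: a full set of
      # distances determines the structure up to a rigid motion and mirror image.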

  2. Horizontal ridge reconstruction of the anterior maxilla using customized allogeneic bone blocks with a minimally invasive technique - a case series.

    PubMed

    Venet, Laurent; Perriat, Michel; Mangano, Francesco Guido; Fortin, Thomas

    2017-12-08

    Different surgical procedures have been proposed to achieve horizontal ridge reconstruction of the anterior maxilla; all these procedures, however, require bone replacement materials to be adapted to the bone defect at the time of implantation, resulting in complex and time-consuming procedures. The purpose of this study was to describe how to use a 3D printed hardcopy model of the maxilla to prepare customized milled bone blocks, to be adapted on the bone defect areas using a minimally invasive subperiosteal tunneling technique. Cone beam computed tomography (CBCT) images of the atrophic maxilla of six patients were acquired and modified into 3D reconstruction models. Data were transferred to a 3D printer and solid models were fabricated using autoclavable nylon polyamide. Before the surgery, freeze-dried cortico-cancellous blocks were manually milled and adapted on the 3D printed hardcopy models of the maxillary bone, in order to obtain customized allogeneic bone blocks. In total, eleven onlay customized allogeneic bone grafts were prepared and implanted in 6 patients, using a minimally invasive subperiosteal tunneling technique. The scaffolds closely matched the shape of the defects: this reduced the operation time and contributed to good healing. The patients did not demonstrate adverse events such as inflammation, dehiscence or flap re-opening during the recovery period; however, one patient experienced scaffold resorption, which was likely caused by uncontrolled motion of the removable provisional prosthesis. Following a 6 month healing period, CBCT was used to assess graft integration, which was followed by insertion of implants into the augmented areas. Prosthetic restorations were placed 4 months later. These observations suggest that customized bone allografts can be successfully used for horizontal ridge reconstruction of the anterior maxilla: patients demonstrated reduced morbidity and decreased total surgery time. Further studies on a larger sample of patients, with histologic evaluation and longer follow-up are needed to confirm the present observations.

  3. Priori mask guided image reconstruction (p-MGIR) for ultra-low dose cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Kahler, Darren L.; Liu, Chihray; Lu, Bo

    2015-11-01

    Recently, the compressed sensing (CS) based iterative reconstruction method has received attention because of its ability to reconstruct cone beam computed tomography (CBCT) images with good quality using sparsely sampled or noisy projections, thus enabling dose reduction. However, some challenges remain. In particular, there is always a tradeoff between image resolution and noise/streak artifact reduction based on the amount of regularization weighting that is applied uniformly across the CBCT volume. The purpose of this study is to develop a novel low-dose CBCT reconstruction algorithm framework called priori mask guided image reconstruction (p-MGIR) that allows reconstruction of high-quality low-dose CBCT images while preserving the image resolution. In p-MGIR, the unknown CBCT volume was mathematically modeled as a combination of two regions: (1) where anatomical structures are complex, and (2) where intensities are relatively uniform. The priori mask, which is the key concept of the p-MGIR algorithm, was defined as the matrix that distinguishes between the two separate CBCT regions where the resolution needs to be preserved and where streak or noise needs to be suppressed. We then alternately updated each part of the image by solving two sub-minimization problems iteratively, where one minimization was focused on preserving the edge information of the first part while the other concentrated on the removal of noise/artifacts from the latter part. To evaluate the performance of the p-MGIR algorithm, a numerical head-and-neck phantom, a Catphan 600 physical phantom, and a clinical head-and-neck cancer case were used for analysis. The results were compared with the standard Feldkamp-Davis-Kress as well as conventional CS-based algorithms. Examination of the p-MGIR algorithm showed that high-quality low-dose CBCT images can be reconstructed without compromising the image resolution. For both the phantom and the patient cases, p-MGIR is able to achieve a clinically-reasonable image with 60 projections. Therefore, a clinically-viable, high-resolution head-and-neck CBCT image can be obtained while cutting the dose by 83%. Moreover, the image quality obtained using p-MGIR is better than the quality obtained using other algorithms. In this work, we propose a novel low-dose CBCT reconstruction algorithm called p-MGIR. It can potentially be used as a CBCT reconstruction algorithm for low-dose scans.

  4. Variational stereo imaging of oceanic waves with statistical constraints.

    PubMed

    Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise

    2013-11-01

    An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.

  5. Non-Rigid Structure Estimation in Trajectory Space from Monocular Vision

    PubMed Central

    Wang, Yaming; Tong, Lingling; Jiang, Mingfeng; Zheng, Junbao

    2015-01-01

    In this paper, the problem of non-rigid structure estimation in trajectory space from monocular vision is investigated. Similar to the Point Trajectory Approach (PTA), the structure matrix is calculated with a factorization method, based on characteristic points’ trajectories described by a predefined Discrete Cosine Transform (DCT) basis. To further optimize the non-rigid structure estimation from monocular vision, a rank minimization problem for the structure matrix is formulated by introducing a basic low-rank condition. Moreover, the Accelerated Proximal Gradient (APG) algorithm is proposed to solve the rank minimization problem, and the initial structure matrix calculated by the PTA method is thereby refined. The APG algorithm converges to good solutions quickly and noticeably reduces the reconstruction error. The reconstruction results on real image sequences indicate that the proposed approach runs reliably and effectively improves the accuracy of non-rigid structure estimation from monocular vision. PMID:26473863
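
    APG-type solvers for such rank minimization problems typically relax rank to the nuclear norm, whose proximal step is singular value thresholding; a minimal sketch of that step follows. The low-rank test matrix, noise level and threshold are illustrative placeholders, not the paper's trajectory-space setup.

      import numpy as np

      def svt(X, tau):
          # Singular value thresholding: proximal operator of tau * nuclear norm,
          # the core step of APG-style low-rank matrix recovery.
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      rng = np.random.default_rng(6)
      low_rank = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 60))
      noisy = low_rank + 0.1 * rng.standard_normal((40, 60))
      denoised = svt(noisy, tau=2.0)
      print("rank before/after:", np.linalg.matrix_rank(noisy), np.linalg.matrix_rank(denoised))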

  6. Reconstruction of a nonminimal coupling theory with scale-invariant power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, Taotao, E-mail: qiutt@ntu.edu.tw

    2012-06-01

    A nonminimal coupling single scalar field theory, when transformed from Jordan frame to Einstein frame, can act like a minimal coupling one. Making use of this property, we investigate how a nonminimal coupling theory with scale-invariant power spectrum could be reconstructed from its minimal coupling counterpart, which can be applied in the early universe. Thanks to the coupling to gravity, the equation of state of our universe for a scale-invariant power spectrum can be relaxed, and the relation between the parameters in the action can be obtained. This approach also provides a means to address the Big-Bang puzzles and anisotropy problem in the nonminimal coupling model within Jordan frame. Due to the equivalence between the two frames, one may be able to find models that are free of the horizon, flatness, singularity as well as anisotropy problems.

  7. Inverse transport calculations in optical imaging with subspace optimization algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu

    2014-09-15

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
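
    A hedged sketch of the splitting idea described above: the component of the unknown lying in the span of the leading right singular vectors (the well-conditioned, "low-frequency" part) can be recovered analytically from the data, leaving the remaining component to an iterative minimization. The operator here is a random matrix, not a radiative transport model, and the truncation rank is an arbitrary choice.

      import numpy as np

      def split_recover(A, y, k):
          # Analytically recover the component of x lying in the span of the top-k
          # right singular vectors of A; the remaining component would be left to
          # an iterative minimization restricted to the complementary subspace.
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          coeff = (U[:, :k].T @ y) / s[:k]             # stable: divides by large singular values
          x_low = Vt[:k].T @ coeff
          return x_low, Vt[k:]                         # remainder searched within span(Vt[k:])

      rng = np.random.default_rng(7)
      A = rng.standard_normal((80, 120))
      x_true = rng.standard_normal(120)
      x_low, residual_basis = split_recover(A, A @ x_true, k=40)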

  8. 40 CFR 63.1345 - Emissions limits for affected sources other than kilns; in-line kiln/raw mills; clinker coolers...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... other than kilns; in-line kiln/raw mills; clinker coolers; new and reconstructed raw material dryers; and raw and finish mills, and open clinker piles. 63.1345 Section 63.1345 Protection of Environment... for affected sources other than kilns; in-line kiln/raw mills; clinker coolers; new and reconstructed...

  9. BoneSource hydroxyapatite cement: a novel biomaterial for craniofacial skeletal tissue engineering and reconstruction.

    PubMed

    Friedman, C D; Costantino, P D; Takagi, S; Chow, L C

    1998-01-01

    BoneSource-hydroxyapatite cement is a new self-setting calcium phosphate cement biomaterial. Its unique and innovative physical chemistry coupled with enhanced biocompatibility make it useful for craniofacial skeletal reconstruction. The general properties and clinical use guidelines are reviewed. The biomaterial and surgical applications offer insight into improved outcomes and potential new uses for hydroxyapatite cement systems.

  10. Kernel temporal enhancement approach for LORETA source reconstruction using EEG data.

    PubMed

    Torres-Valencia, Cristian A; Santamaria, M Claudia Joana; Alvarez, Mauricio A

    2016-08-01

    Reconstruction of brain sources from magnetoencephalography and electroencephalography (M/EEG) data is a well-known problem in the neuroengineering field. An inverse problem must be solved, and several methods have been proposed. Low Resolution Electromagnetic Tomography (LORETA) and its variations, standardized LORETA (sLORETA) and standardized weighted LORETA (swLORETA), solve the inverse problem following a non-parametric approach, that is, by setting dipoles throughout the whole brain domain and estimating them from the M/EEG data under some spatial priors. Errors in the reconstruction of sources arise due to the low spatial resolution of the LORETA framework and the influence of noise in the observed data. In this work, a kernel temporal enhancement (kTE) is proposed as a preprocessing stage of the data that, in combination with the swLORETA method, improves the source reconstruction. The results are quantified in terms of three dipole localization error metrics, and the swLORETA + kTE strategy obtained the best results across different signal-to-noise ratios (SNR) in simulations of random dipoles from synthetic EEG data.

  11. Allogeneic versus autologous derived cell sources for use in engineered bone-ligament-bone grafts in sheep anterior cruciate ligament repair.

    PubMed

    Mahalingam, Vasudevan D; Behbahani-Nejad, Nilofar; Horine, Storm V; Olsen, Tyler J; Smietana, Michael J; Wojtys, Edward M; Wellik, Deneen M; Arruda, Ellen M; Larkin, Lisa M

    2015-03-01

    The use of autografts versus allografts for anterior cruciate ligament (ACL) reconstruction is controversial. The current popular options for ACL reconstruction are patellar tendon or hamstring autografts, yet advances in allograft technologies have made allogeneic grafts a favorable option for repair tissue. Despite this, the mismatched biomechanical properties and risk of osteoarthritis resulting from the current graft technologies have prompted the investigation of new tissue sources for ACL reconstruction. Previous work by our lab has demonstrated that tissue-engineered bone-ligament-bone (BLB) constructs generated from an allogeneic cell source develop structural and functional properties similar to those of native ACL and vascular and neural structures that exceed those of autologous patellar tendon grafts. In this study, we investigated the effectiveness of our tissue-engineered ligament constructs fabricated from autologous versus allogeneic cell sources. Our preliminary results demonstrate that 6 months postimplantation, our tissue-engineered auto- and allogeneic BLB grafts show similar histological and mechanical outcomes indicating that the autologous grafts are a viable option for ACL reconstruction. These data indicate that our tissue-engineered autologous ligament graft could be used in clinical situations where immune rejection and disease transmission may preclude allograft use.

  12. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    NASA Astrophysics Data System (ADS)

    Guo, J.; Bücherl, T.; Zou, Y.; Guo, Z.

    2011-09-01

    Investigations on the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be made up of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, realized and verified with both simulated and measured projection data. The feasibility of improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.
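
    The abstract does not spell out the iterative algebraic algorithm, so purely as a generic reference the classical Kaczmarz/ART update that underlies most algebraic reconstruction schemes is sketched below on a toy system; the random matrix stands in for a projection geometry and the relaxation factor is an arbitrary choice.

      import numpy as np

      def art(A, p, n_sweeps=50, relax=0.5):
          # Algebraic reconstruction technique (Kaczmarz): cycle through the
          # projection equations A[i] @ x = p[i], correcting x along each row.
          x = np.zeros(A.shape[1])
          row_norm2 = (A ** 2).sum(axis=1)
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):
                  if row_norm2[i] == 0:
                      continue
                  x += relax * (p[i] - A[i] @ x) / row_norm2[i] * A[i]
          return x

      rng = np.random.default_rng(8)
      A = rng.random((200, 100))        # stand-in for a system/projection matrix
      x_true = rng.random(100)
      x_rec = art(A, A @ x_true)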

  13. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

    Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
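
    The Monte-Carlo ingredient in such SURE schemes is a randomized estimate of the divergence (trace of the Jacobian) of the reconstruction operator, obtained from two evaluations of the black-box algorithm. A minimal real-valued sketch under an i.i.d. Gaussian noise assumption is shown below, used here to rank thresholds of a toy soft-thresholding "reconstructor"; it omits the complex-valued and k-space weighting aspects of the paper.

      import numpy as np

      def monte_carlo_sure(f, y, sigma, eps=1e-3, rng=None):
          # Monte-Carlo SURE estimate of the per-sample MSE of an estimator f applied
          # to y = x + n, n ~ N(0, sigma^2 I).  The divergence term is estimated with
          # a single random probe b, so only evaluations of f are required.
          rng = np.random.default_rng() if rng is None else rng
          b = rng.standard_normal(y.shape)
          div = b @ (f(y + eps * b) - f(y)) / eps     # randomized trace of the Jacobian
          n = y.size
          return (np.linalg.norm(f(y) - y) ** 2) / n - sigma ** 2 + 2 * sigma ** 2 * div / n

      # Example: compare SURE values for several thresholds of a soft-thresholding estimator.
      rng = np.random.default_rng(9)
      x = np.zeros(1000); x[:50] = 5.0
      sigma = 1.0
      y = x + sigma * rng.standard_normal(1000)
      for t in (0.5, 1.0, 2.0, 3.0):
          f = lambda z, t=t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
          print(t, monte_carlo_sure(f, y, sigma, rng=rng))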

  14. Adaptive tight frame based medical image reconstruction: a proof-of-concept study for computed tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Weifeng; Cai, Jian-Feng; Gao, Hao

    2013-12-01

    A popular approach to medical image reconstruction has been through sparsity regularization, assuming the targeted image can be well approximated by sparse coefficients under some properly designed system. The wavelet tight frame is such a widely used system due to its capability for sparsely approximating piecewise-smooth functions, such as medical images. However, using a fixed system may not always be optimal for reconstructing a variety of diversified images. Recently, the method based on an adaptive over-complete dictionary that is specific to the structures of the targeted images has demonstrated its superiority for image processing. This work develops an adaptive wavelet tight frame method for image reconstruction. The proposed scheme first constructs the adaptive wavelet tight frame that is task specific, and then reconstructs the image of interest by solving an l1-regularized minimization problem using the constructed adaptive tight frame system. The proof-of-concept study is performed for computed tomography (CT), and the simulation results suggest that the adaptive tight frame method improves the reconstructed CT image quality over the traditional tight frame method.

  15. Combined flaps based on the superficial temporal vascular system for reconstruction of facial defects.

    PubMed

    Zhou, Renpeng; Wang, Chen; Qian, Yunliang; Wang, Danru

    2015-09-01

    Facial defects are multicomponent deficiencies rather than simple soft-tissue defects. Based on different branches of the superficial temporal vascular system, various tissue components can be obtained to reconstruct facial defects individually. From January 2004 to December 2013, 31 patients underwent reconstruction of facial defects with composite flaps based on the superficial temporal vascular system. Twenty cases of nasal defects were repaired with skin and cartilage components, six cases of facial defects were treated with double island flaps of the skin and fascia, three patients underwent eyebrow and lower eyelid reconstruction with hairy and hairless flaps simultaneously, and two patients underwent soft-tissue repair with auricular combined flaps and cranial bone grafts. All flaps survived completely. Donor-site morbidity was minimal, with donor sites closed primarily. Donor areas healed with acceptable cosmetic results. The final outcome was satisfactory. Combined flaps based on the superficial temporal vascular system are a useful and versatile option in facial soft-tissue reconstruction. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  16. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    PubMed

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integration of half-Fourier reconstruction into iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint delivers superior performance both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables use of greater acceleration in 3D MR imaging.

  17. The Economics of Prepectoral Breast Reconstruction.

    PubMed

    Glasberg, Scot Bradley

    2017-12-01

    Over the last several years, the world of breast reconstruction has seen a dramatic shift in focus toward discussing and applying the placement of tissue expanders and implants back into the prepectoral space. Although this technique failed during the early advent of breast reconstruction, newer technologies such as advances in fat grafting, improved acellular dermal matrices, better methods of assessing breast flap viability, and enhanced implants appear to have set the stage for the resurgence and positive early results seen with this technique. The main clinical benefits of a switch to prepectoral breast reconstruction appear to be less associated pain, a lower incidence of animation deformities and their associated symptoms, and presumably better aesthetics. Early data suggest that the results are extremely promising, and early adopters have attempted to define the ideal patients for prepectoral breast reconstruction. As with any new operative procedure, an assessment of finances and costs is crucial to its successful implementation. Although current data are minimal, this article attempts to build the fundamentals of an economic model that displays the potential savings from prepectoral breast reconstruction.

  18. Sampling limits for electron tomography with sparsity-exploiting reconstructions.

    PubMed

    Jiang, Yi; Padgett, Elliot; Hovden, Robert; Muller, David A

    2018-03-01

    Electron tomography (ET) has become a standard technique for 3D characterization of materials at the nano-scale. Traditional reconstruction algorithms such as weighted back projection suffer from disruptive artifacts with insufficient projections. Popularized by compressed sensing, sparsity-exploiting algorithms have been applied to experimental ET data and show promise for improving reconstruction quality or reducing the total beam dose applied to a specimen. Nevertheless, theoretical bounds for these methods have been less explored in the context of ET applications. Here, we perform numerical simulations to investigate the performance of ℓ1-norm and total-variation (TV) minimization under various imaging conditions. From 36,100 different simulated structures, our results show that specimens with more complex structures generally require more projections for exact reconstruction. However, once sufficient data are acquired, dividing the beam dose over more projections provides no improvement, analogous to the traditional dose-fraction theorem. Moreover, a limited tilt range of ±75° or less can result in distorting artifacts in sparsity-exploiting reconstructions. The influence of optimization parameters on reconstructions is also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Method and apparatus for sensor fusion

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar (Inventor); Shaw, Scott (Inventor); Defigueiredo, Rui J. P. (Inventor)

    1991-01-01

    A method and apparatus for fusion of data from optical and radar sensors by an error minimization procedure are presented. The method was applied to the problem of shape reconstruction of an unknown surface at a distance. The method involves deriving an incomplete surface model from an optical sensor. The unknown characteristics of the surface are represented by some parameter. The correct value of the parameter is computed by iteratively generating theoretical predictions of the radar cross sections (RCS) of the surface, comparing the predicted and the observed values of the RCS, and improving the surface model from the results of the comparison. Theoretical RCS may be computed from the surface model in several ways. One RCS prediction technique is the method of moments. The method of moments can be applied to an unknown surface only if some shape information is available from an independent source. The optical image provides the independent information.
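
    As a schematic illustration of the error-minimization loop described above, the sketch below fits a single shape parameter by minimizing the squared mismatch between predicted and "observed" radar cross sections. The forward model predicted_rcs and the observation setup are entirely hypothetical placeholders, not the method-of-moments computation referenced in the record.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def predicted_rcs(param, angles):
          # Hypothetical forward model: RCS as a function of one shape parameter.
          return param * np.cos(angles) ** 2 + 0.1

      angles = np.linspace(0.0, np.pi / 3, 8)
      observed = predicted_rcs(2.5, angles)             # pretend radar observations

      fit = minimize_scalar(lambda p: np.sum((predicted_rcs(p, angles) - observed) ** 2),
                            bounds=(0.0, 10.0), method="bounded")
      print("estimated shape parameter:", fit.x)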

  20. Evolving Techniques for Mitral Valve Reconstruction

    PubMed Central

    Galloway, Aubrey C.; Grossi, Eugene A.; Bizekis, Costas S.; Ribakove, Greg; Ursomanno, Patricia; Delianides, Julie; Baumann, F. Gregory; Spencer, Frank C.; Colvin, Stephen B.

    2002-01-01

    Objective To analyze the effectiveness of new techniques of mitral valve reconstruction (MVR) that have evolved over the last decade, such as aggressive anterior leaflet repair and minimally invasive surgery using an endoaortic balloon occluder. Summary Background Data MVR via conventional sternotomy has been an established treatment for mitral insufficiency for over 20 years, primarily for the treatment of patients with posterior leaflet prolapse. Methods Between June 1980 and June 2001, 1,195 consecutive patients had MVR with ring annuloplasty. Conventional sternotomy was used in 843 patients, minimally invasive surgery in 352 (since June 1996). Anterior leaflet repair was performed in 374 patients, with increasing use over the last 10 years. Follow-up was 100% complete (mean 4.6 years, range 0.5–20.5). Results Hospital mortality was 4.7% overall and 1.4% for isolated MVR (1.1% for minimally invasive surgery vs. 1.6% for conventional sternotomy;P = .4). Multivariate analysis showed the factors predictive of increased operative risk to be age, NYHA functional class, concomitant procedures, and previous cardiac surgery. The 5-year results for freedom from cardiac death, reoperation, and valve-related complications among the 782 patients with degenerative etiology are, respectively, as follows (P > .05 for all end points): for anterior leaflet repair, 93%, 94%, 90%; for no anterior leaflet repair, 91%, 92%, 91%; for minimally invasive surgery, 97%, 89%, 93%; and for conventional sternotomy, 93%, 94%, 90%. Conclusions These findings indicate that late results of MVR after minimally invasive surgery and after anterior leaflet repair are equivalent to those achievable with conventional sternotomy and posterior leaflet repair. These options significantly expand the range of patients suitable for mitral valve repair surgery and give further evidence to support wider use of minimally invasive techniques. PMID:12192315

  1. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
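
    The FOCUSS-style re-weighting at the heart of such shrinking schemes can be summarized in a few lines: each iteration solves a weighted minimum-norm problem whose weights come from the previous estimate, progressively focusing energy onto few sources. The sketch below uses a random placeholder "lead field" rather than a real EEG forward model, and omits the sLORETA standardization, source-space shrinking and temporal steps of SSLOFO.

      import numpy as np

      def focuss(L, y, n_iter=20, lam=1e-6):
          # Re-weighted minimum-norm (FOCUSS-like) iteration for y = L @ x with sparse x.
          # Weights W_k = diag(|x_k|^(1/2)) concentrate energy onto a few sources.
          m, n = L.shape
          x = np.ones(n)
          for _ in range(n_iter):
              W = np.diag(np.sqrt(np.abs(x)))
              LW = L @ W
              x = W @ LW.T @ np.linalg.solve(LW @ LW.T + lam * np.eye(m), y)
          return x

      rng = np.random.default_rng(10)
      L = rng.standard_normal((32, 200))        # placeholder "lead field"
      x_true = np.zeros(200); x_true[[20, 120]] = [1.0, -2.0]
      x_hat = focuss(L, L @ x_true)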

  2. A model for filtered backprojection reconstruction artifacts due to time-varying attenuation values in perfusion C-arm CT.

    PubMed

    Fieselmann, Andreas; Dennerlein, Frank; Deuerling-Zheng, Yu; Boese, Jan; Fahrig, Rebecca; Hornegger, Joachim

    2011-06-21

    Filtered backprojection is the basis for many CT reconstruction tasks. It assumes constant attenuation values of the object during the acquisition of the projection data. Reconstruction artifacts can arise if this assumption is violated. For example, contrast flow in perfusion imaging with C-arm CT systems, which have acquisition times of several seconds per C-arm rotation, can cause this violation. In this paper, we derived and validated a novel spatio-temporal model to describe these kinds of artifacts. The model separates the temporal dynamics due to contrast flow from the scan and reconstruction parameters. We introduced derivative-weighted point spread functions to describe the spatial spread of the artifacts. The model allows prediction of reconstruction artifacts for given temporal dynamics of the attenuation values. Furthermore, it can be used to systematically investigate the influence of different reconstruction parameters on the artifacts. We have shown that with optimized redundancy weighting function parameters the spatial spread of the artifacts around a typical arterial vessel can be reduced by about 70%. Finally, an inversion of our model could be used as the basis for novel dynamic reconstruction algorithms that further minimize these artifacts.

  3. Computer-aided design and rapid prototyping-assisted contouring of costal cartilage graft for facial reconstructive surgery.

    PubMed

    Lee, Shu Jin; Lee, Heow Pueh; Tse, Kwong Ming; Cheong, Ee Cherk; Lim, Siak Piang

    2012-06-01

    Complex 3-D defects of the facial skeleton are difficult to reconstruct with freehand carving of autogenous bone grafts. Onlay bone grafts are hard to carve and are associated with imprecise graft-bone interface contact and bony resorption. Autologous cartilage is well established in ear reconstruction as it is easy to carve and is associated with minimal resorption. In the present study, we aimed to reconstruct the hypoplastic orbitozygomatic region in a patient with left hemifacial microsomia using computer-aided design and rapid prototyping to facilitate costal cartilage carving and grafting. A three-step process of (1) 3-D reconstruction of the computed tomographic image, (2) mirroring the facial skeleton, and (3) modeling and rapid prototyping of the left orbitozygomaticomalar region and reconstruction template was performed. The template aided in donor site selection and extracorporeal contouring of the rib cartilage graft to allow for an accurate fit of the graft to the bony model prior to final fixation in the patient. We are able to refine the existing computer-aided design and rapid prototyping methods to allow for extracorporeal contouring of grafts and present rib cartilage as a good alternative to bone for autologous reconstruction.

  4. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated as a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms due to computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. Using l1 norms on the data and regularization terms in EIT image reconstruction addresses both the reconstruction of sharp edges and the handling of measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting, but also provides high contrast resolution at organ boundaries.

  5. A-Frame free Vascularized Fibular Graft and Femoral Lengthening for Osteosarcoma Pediatric Patients.

    PubMed

    Cashin, Megan; Coombs, Christopher; Torode, Ian

    2018-02-01

    Pediatric limb reconstruction after resection of a malignant tumor presents specific challenges. Multiple surgical techniques have been used to treat these patients. This paper describes a staged surgical technique for the reconstruction of large distal femoral defects due to tumor resection in skeletally immature patients. Three pediatric patients with osteosarcoma of the distal femur underwent staged reconstruction. Neoadjuvant chemotherapy was followed by en bloc tumor resection and immediate reconstruction of the distal femoral defect with a vascularized free fibular autograft utilizing a unique A-frame construct combined with intramedullary nail fixation. The second stage was a planned gradual lengthening of the healed construct, over a custom-made magnetically driven expandable intramedullary nail. All patients achieved bony union and satisfactory length with minimal complications. The patients all returned to full, unlimited physical activities. The early results confirm that the described technique is a safe and reliable procedure for the reconstruction of large femoral defects in pediatric patients with osteosarcoma. Level IV-therapeutic.

  6. Minimizing donor-site morbidity following bilateral pedicled TRAM breast reconstruction with the double mesh fold over technique.

    PubMed

    Bharti, Gaurav; Groves, Leslie; Sanger, Claire; Thompson, James; David, Lisa; Marks, Malcolm

    2013-05-01

    Transverse rectus abdominus muscle flaps (TRAM) can result in significant abdominal wall donor-site morbidity. We present our experience with bilateral pedicle TRAM breast reconstruction using a double-layered polypropylene mesh fold over technique to repair the rectus fascia. A retrospective study was performed that included patients with bilateral pedicle TRAM breast reconstruction and abdominal reconstruction using a double-layered polypropylene mesh fold over technique. Thirty-five patients met the study criteria with a mean age of 49 years old and mean follow-up of 7.4 years. There were no instances of abdominal hernia and only 2 cases (5.7%) of abdominal bulge. Other abdominal complications included partial umbilical necrosis (14.3%), seroma (11.4%), partial wound dehiscence (8.6%), abdominal weakness (5.7%), abdominal laxity (2.9%), and hematoma (2.9%). The TRAM flap is a reliable option for bilateral autologous breast reconstruction. Using the double mesh repair of the abdominal wall can reduce instances of an abdominal bulge and hernia.

  7. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX)

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-01-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning. PMID:26217710

  8. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX).

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-06-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 - Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning.

  9. A pseudo-discrete algebraic reconstruction technique (PDART) prior image-based suppression of high density artifacts in computed tomography

    NASA Astrophysics Data System (ADS)

    Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong

    2016-12-01

    We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by the total-variation minimization (TVM) algorithm, as a realization of a fully iterative approach, were also utilized as intermediate images. Simulation and real experimental results show that PDART drastically accelerates the reconstruction of prior images of acceptable quality. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher-quality images than a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.

  10. Temporal and spectral imaging with micro-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Samuel M.; Johnson, G. Allan; Badea, Cristian T.

    2012-08-15

    Purpose: Micro-CT is widely used for small animal imaging in preclinical studies of cardiopulmonary disease, but further development is needed to improve spatial resolution, temporal resolution, and material contrast. We present a technique for visualizing the changing distribution of iodine in the cardiac cycle with dual source micro-CT. Methods: The approach entails a retrospectively gated dual energy scan with optimized filters and voltages, and a series of computational operations to reconstruct the data. Projection interpolation and five-dimensional bilateral filtration (three spatial dimensions + time + energy) are used to reduce noise and artifacts associated with retrospective gating. We reconstruct separate volumes corresponding to different cardiac phases and apply a linear transformation to decompose these volumes into components representing concentrations of water and iodine. Since the resulting material images are still compromised by noise, we improve their quality in an iterative process that minimizes the discrepancy between the original acquired projections and the projections predicted by the reconstructed volumes. The values in the voxels of each of the reconstructed volumes represent the coefficients of linear combinations of basis functions over time and energy. We have implemented the reconstruction algorithm on a graphics processing unit (GPU) with CUDA. We tested the utility of the technique in simulations and applied the technique in an in vivo scan of a C57BL/6 mouse injected with blood pool contrast agent at a dose of 0.01 ml/g body weight. Postreconstruction, at each cardiac phase in the iodine images, we segmented the left ventricle and computed its volume. Using the maximum and minimum volumes in the left ventricle, we calculated the stroke volume, the ejection fraction, and the cardiac output. Results: Our proposed method produces five-dimensional volumetric images that distinguish different materials at different points in time, and can be used to segment regions containing iodinated blood and compute measures of cardiac function. Conclusions: We believe this combined spectral and temporal imaging technique will be useful for future studies of cardiopulmonary disease in small animals.
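
    The two-material decomposition step described above amounts to solving a small linear system per voxel. The sketch below is not the authors' GPU/CUDA code; it assumes known effective attenuation coefficients for water and iodine at the two energies, and the numerical values are placeholders.

        import numpy as np

        # Assumed 2x2 basis matrix: rows = energies (low, high), columns = materials
        # (water, iodine). The coefficients are illustrative placeholders.
        M = np.array([[0.20, 4.9],
                      [0.18, 2.1]])

        def decompose(mu_low, mu_high):
            """Map per-voxel attenuation at two energies to water/iodine densities."""
            mu = np.stack([mu_low.ravel(), mu_high.ravel()])   # shape (2, Nvox)
            densities = np.linalg.solve(M, mu)                 # shape (2, Nvox)
            water = densities[0].reshape(mu_low.shape)
            iodine = densities[1].reshape(mu_low.shape)
            return water, iodine

        # Toy usage on random attenuation maps.
        water_img, iodine_img = decompose(np.random.rand(64, 64), np.random.rand(64, 64))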

  11. Weight-matrix structured regularization provides optimal generalized least-squares estimate in diffuse optical tomography.

    PubMed

    Yalavarthy, Phaneendra K; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2007-06-01

    Diffuse optical tomography (DOT) involves estimation of tissue optical properties using noninvasive boundary measurements. The image reconstruction procedure is a nonlinear, ill-posed, and ill-determined problem, so overcoming these difficulties requires regularization of the solution. While the methods developed for solving the DOT image reconstruction problem have a long history, there is little direct evidence on the optimal regularization methods, or on a common theoretical framework for techniques that use least-squares (LS) minimization. A generalized least-squares (GLS) method is discussed here, which incorporates the variances and covariances among the individual data points and the optical properties in the image into a structured weight matrix. It is shown that most of the least-squares techniques applied in DOT can be considered as special cases of this more generalized LS approach. The performance of three minimization techniques using the same implementation scheme is compared using test problems with increasing noise level and increasing complexity within the imaging field. Techniques that use spatial-prior information as constraints can also be incorporated into the GLS formalism. It is also illustrated that inclusion of spatial priors reduces the image error by at least a factor of 2. The improvement of GLS minimization is even more apparent when the noise level in the data is high (as high as 10%), indicating that the benefits of this approach are important for reconstruction of data in a routine setting where the data variance can be known based upon the signal-to-noise properties of the instruments.
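
    As a rough illustration of the generalized least-squares idea, one Gauss-Newton-type update with structured weight matrices can be sketched as below. The forward model, Jacobian, weight matrices, and regularization parameter are placeholders, not the authors' DOT implementation.

        import numpy as np

        def gls_update(x, y, forward, jacobian, W_data, W_param, lam):
            """One generalized least-squares update step.

            Linearizes the objective
                (y - F(x))^T W_data (y - F(x)) + lam * x^T W_param x
            around the current estimate x and solves for the update.
            """
            J = jacobian(x)                        # sensitivity (Jacobian) matrix
            r = y - forward(x)                     # data residual
            H = J.T @ W_data @ J + lam * W_param   # GLS Hessian approximation
            dx = np.linalg.solve(H, J.T @ W_data @ r - lam * W_param @ x)
            return x + dx

        # Toy usage with a linear forward model (so the Jacobian is constant).
        A = np.random.rand(20, 10)
        x_true = np.random.rand(10)
        y = A @ x_true
        x1 = gls_update(np.zeros(10), y, lambda x: A @ x, lambda x: A,
                        np.eye(20), np.eye(10), lam=1e-3)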

  12. Spatio-temporal Reconstruction of Neural Sources Using Indirect Dominant Mode Rejection.

    PubMed

    Jafadideh, Alireza Talesh; Asl, Babak Mohammadzadeh

    2018-04-27

    Adaptive minimum-variance-based beamformers (MVB) have been successfully applied to magnetoencephalogram (MEG) and electroencephalogram (EEG) data to localize brain activity. However, the performance of these beamformers degrades when correlated or interference sources are present. To overcome this problem, we propose applying the indirect dominant mode rejection (iDMR) beamformer to brain source localization. By modifying the measurement covariance matrix, this method makes MVB applicable to source localization in the presence of correlated and interference sources. Numerical results on both EEG and MEG data demonstrate that the presented approach accurately reconstructs the time courses of active sources and localizes those sources with high spatial resolution. In addition, results on real AEF data show the good performance of iDMR in empirical situations. Hence, iDMR can be reliably used for brain source localization, especially when correlated and interference sources are present.
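
    For reference, the standard minimum-variance (LCMV-type) beamformer weights that iDMR builds on can be sketched as follows; the iDMR modification of the covariance matrix itself is not reproduced here, and the lead field, data, and diagonal loading are toy assumptions.

        import numpy as np

        def mv_beamformer_weights(C, L, reg=1e-6):
            """Minimum-variance beamformer weights for one candidate source location.

            C : (n_channels, n_channels) measurement covariance matrix
            L : (n_channels, n_orientations) lead field of the candidate source
            """
            n = C.shape[0]
            Ci = np.linalg.inv(C + reg * np.trace(C) / n * np.eye(n))  # loaded inverse
            W = Ci @ L @ np.linalg.inv(L.T @ Ci @ L)   # unit-gain constraint W^T L = I
            return W

        # Toy example: 32 channels, single fixed-orientation source.
        rng = np.random.default_rng(0)
        data = rng.standard_normal((32, 1000))
        C = np.cov(data)
        L = rng.standard_normal((32, 1))
        W = mv_beamformer_weights(C, L)
        source_time_course = W.T @ data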

  13. Mars Science Laboratory Entry, Descent, and Landing Trajectory and Atmosphere Reconstruction

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy D.

    2013-01-01

    On August 5, 2012, the Mars Science Laboratory entry vehicle successfully entered the Martian atmosphere and landed the Curiosity rover on its surface. A Kalman filter approach has been implemented to reconstruct the entry, descent, and landing trajectory based on all available data. The data sources considered in the Kalman filtering approach include the inertial measurement unit accelerations and angular rates, the terrain descent sensor, the measured landing site, orbit determination solutions for the initial conditions, and a new set of instrumentation for planetary entry reconstruction consisting of forebody pressure sensors, known as the Mars Entry Atmospheric Data System. These pressure measurements are unique for planetary entry, descent, and landing reconstruction as they enable a reconstruction of the freestream atmospheric conditions without any prior assumptions being made on the vehicle aerodynamics. Moreover, the processing of these pressure measurements in the Kalman filter approach enables the identification of atmospheric winds, which has not been accomplished in past planetary entry reconstructions. This separation of atmosphere and aerodynamics allows for aerodynamic model reconciliation and uncertainty quantification, which directly impacts future missions. This paper describes the mathematical formulation of the Kalman filtering approach, a summary of data sources and preprocessing activities, and results of the reconstruction.
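
    The Kalman filtering approach can be illustrated with a generic linear predict/update cycle. The state, models, and noise covariances below are placeholders for a toy constant-velocity problem, not the MSL entry, descent, and landing reconstruction filter.

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            """One predict/update cycle of a linear Kalman filter."""
            # Predict
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Update with measurement z
            S = H @ P_pred @ H.T + R                 # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # Toy 1D constant-velocity example (position measured, velocity inferred).
        dt = 0.1
        F = np.array([[1.0, dt], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        Q = 1e-4 * np.eye(2)
        R = np.array([[1e-2]])
        x, P = np.zeros(2), np.eye(2)
        for k in range(100):
            z = np.array([0.5 * k * dt + 0.1 * np.random.randn()])
            x, P = kalman_step(x, P, z, F, H, Q, R)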

  14. Equilibrium reconstruction with 3D eddy currents in the Lithium Tokamak eXperiment

    DOE PAGES

    Hansen, C.; Boyle, D. P.; Schmitt, J. C.; ...

    2017-04-18

    Axisymmetric free-boundary equilibrium reconstructions of tokamak plasmas in the Lithium Tokamak eXperiment (LTX) are performed using the PSI-Tri equilibrium code. Reconstructions in LTX are complicated by the presence of long-lived non-axisymmetric eddy currents generated by a vacuum vessel and first wall structures. To account for this effect, reconstructions are performed with additional toroidal current sources in these conducting regions. The eddy current sources are fixed in their poloidal distributions, but their magnitude is adjusted as part of the full reconstruction. Eddy distributions are computed by toroidally averaging currents, generated by coupling to vacuum field coils, from a simplified 3D filament model of important conducting structures. The full 3D eddy current fields are also used to enable the inclusion of local magnetic field measurements, which have strong 3D eddy current pick-up, as reconstruction constraints. Using this method, equilibrium reconstruction yields good agreement with all available diagnostic signals. Here, an accompanying field perturbation produced by 3D eddy currents on the plasma surface with a primarily n = 2, m = 1 character is also predicted for these equilibria.

  15. The Reconstruction Toolkit (RTK), an open-source cone-beam CT reconstruction toolkit based on the Insight Toolkit (ITK)

    NASA Astrophysics Data System (ADS)

    Rit, S.; Vila Oliva, M.; Brousmiche, S.; Labarbe, R.; Sarrut, D.; Sharp, G. C.

    2014-03-01

    We propose the Reconstruction Toolkit (RTK, http://www.openrtk.org), an open-source toolkit for fast cone-beam CT reconstruction, based on the Insight Toolkit (ITK) and using GPU code extracted from Plastimatch. RTK is developed by an open consortium (see affiliations) under the non-contaminating Apache 2.0 license. The quality of the platform is checked daily with regression tests in partnership with Kitware, the company supporting ITK. Several features are already available: Elekta, Varian and IBA inputs, multi-threaded Feldkamp-Davis-Kress reconstruction on CPU and GPU, Parker short scan weighting, multi-threaded CPU and GPU forward projectors, etc. Each feature is either accessible through command line tools or C++ classes that can be included in independent software. A MIDAS community has been opened to share CatPhan datasets of several vendors (Elekta, Varian and IBA). RTK will be used in the upcoming cone-beam CT scanner developed by IBA for proton therapy rooms. Many features are under development: new input format support, iterative reconstruction, hybrid Monte Carlo / deterministic CBCT simulation, etc. RTK has been built to freely share tomographic reconstruction developments between researchers and is open for new contributions.

  16. Increasing signal-to-noise ratio of swept-source optical coherence tomography by oversampling in k-space

    NASA Astrophysics Data System (ADS)

    Nagib, Karim; Mezgebo, Biniyam; Thakur, Rahul; Fernando, Namal; Kordi, Behzad; Sherif, Sherif

    2018-03-01

    Optical coherence tomography systems suffer from noise that can reduce the ability to interpret reconstructed images correctly. We describe a method to increase the signal-to-noise ratio of swept-source optical coherence tomography (SS-OCT) using oversampling in k-space. This oversampling introduces information redundancy into the measured interferogram, which can be used to reduce white noise in the reconstructed A-scan. We applied our novel scaled nonuniform discrete Fourier transform to oversampled SS-OCT interferograms to reconstruct images of a salamander egg. The peak signal-to-noise ratio (PSNR) between images reconstructed from interferograms sampled at 250 MS/s and 50 MS/s demonstrates that this oversampling increased the signal-to-noise ratio by 25.22 dB.
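
    The PSNR figure quoted above can be computed with a short, generic sketch (not the authors' code); the peak value here is taken from the reference image, which is one common convention.

        import numpy as np

        def psnr(reference, test):
            """Peak signal-to-noise ratio in dB between two images of equal shape."""
            mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
            peak = np.max(np.abs(reference))
            return 10.0 * np.log10(peak ** 2 / mse)

        # Toy usage: compare a noisy copy against the original.
        img = np.random.rand(128, 128)
        noisy = img + 0.01 * np.random.randn(128, 128)
        print(f"PSNR = {psnr(img, noisy):.2f} dB")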

  17. SPECT reconstruction with nonuniform attenuation from highly under-sampled projection data

    NASA Astrophysics Data System (ADS)

    Li, Cuifen; Wen, Junhai; Zhang, Kangping; Shi, Donghao; Dong, Haixiang; Li, Wenxiao; Liang, Zhengrong

    2012-03-01

    Single photon emission computed tomography (SPECT) is an important nuclear medicine imaging technique that has been used in clinical diagnosis. The SPECT image reflects not only anatomical structure but also functional activity of the human body, so diseases can be detected earlier. In SPECT, the reconstruction is based on the measurement of gamma photons emitted by the radiotracer. The number of gamma photons detected is proportional to the dose of radiopharmaceutical, but the dose is limited for patient safety. There is an upper limit on the number of gamma photons that can be detected per unit time, so it takes a long time to acquire SPECT projection data. Sometimes only highly under-sampled projection data can be obtained because of limits on scanning time or imaging hardware. How to reconstruct an image from highly under-sampled projection data is therefore an interesting problem. One method is to minimize the total variation (TV) of the reconstructed image during the iterative reconstruction. In this work, we developed an OSEM-TV SPECT reconstruction algorithm, which can reconstruct the image from highly under-sampled projection data with non-uniform attenuation. Simulation results demonstrate that the OSEM-TV algorithm performs well in SPECT reconstruction with non-uniform attenuation.
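
    A minimal sketch of an OSEM update interleaved with a total-variation smoothing step is given below, using a dense toy system matrix. Subset selection, attenuation modeling, the crude finite-difference TV gradient, and the step size are simplifying assumptions, not the authors' algorithm.

        import numpy as np

        def tv_gradient(img, eps=1e-8):
            """Crude finite-difference gradient of a smoothed total-variation term."""
            gx = np.diff(img, axis=0, append=img[-1:, :])
            gy = np.diff(img, axis=1, append=img[:, -1:])
            norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
            px, py = gx / norm, gy / norm
            # Negative divergence of the normalized gradient field.
            return -((px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1)))

        def osem_tv(A, y, shape, n_iter=10, n_subsets=4, tv_step=0.02):
            """Toy OSEM reconstruction with a TV descent step after each iteration."""
            x = np.ones(shape)
            subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
            for _ in range(n_iter):
                for idx in subsets:                          # ordered-subsets EM updates
                    As, ys = A[idx], y[idx]
                    ratio = ys / (As @ x.ravel() + 1e-12)
                    num = (As.T @ ratio).reshape(shape)
                    den = (As.T @ np.ones(len(idx)) + 1e-12).reshape(shape)
                    x = x * num / den
                x = np.maximum(x - tv_step * tv_gradient(x), 0.0)   # TV regularization
            return x

        # Toy usage with a random matrix standing in for the SPECT system model.
        shape = (16, 16)
        A = np.random.rand(200, shape[0] * shape[1])
        x_true = np.zeros(shape)
        x_true[5:10, 6:12] = 1.0
        y = A @ x_true.ravel()
        x_rec = osem_tv(A, y, shape)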

  18. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.

    PubMed

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-21

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.
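
    With an orthogonal dictionary, the sparse-coding step reduces to a transform-and-threshold operation and the dictionary update has a closed form (an orthogonal Procrustes solution). The sketch below shows that alternation on toy patches; the sparsity level and patch sizes are assumptions, and the k-space data-consistency step of the paper is omitted.

        import numpy as np

        def hard_threshold(Z, k):
            """Keep the k largest-magnitude coefficients per patch (column)."""
            out = np.zeros_like(Z)
            idx = np.argsort(-np.abs(Z), axis=0)[:k]
            cols = np.arange(Z.shape[1])
            out[idx, cols] = Z[idx, cols]
            return out

        def orthogonal_dictionary_learning(X, n_iter=20, sparsity=4):
            """Alternate sparse coding and orthogonal dictionary updates on patches X.

            X : (patch_size, n_patches) matrix of vectorized image patches.
            """
            D = np.eye(X.shape[0])                     # start from the identity dictionary
            for _ in range(n_iter):
                Z = hard_threshold(D.T @ X, sparsity)  # sparse coding (D orthogonal)
                U, _, Vt = np.linalg.svd(X @ Z.T)      # Procrustes update: D = U V^T
                D = U @ Vt
            return D, Z

        # Toy usage on random 8x8 patches.
        X = np.random.rand(64, 500)
        D, Z = orthogonal_dictionary_learning(X)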

  19. An augmented Lagrangian trust region method for inclusion boundary reconstruction using ultrasound/electrical dual-modality tomography

    NASA Astrophysics Data System (ADS)

    Liang, Guanghui; Ren, Shangjie; Dong, Feng

    2018-07-01

    Ultrasound/electrical dual-modality tomography utilizes the complementarity of ultrasound reflection tomography (URT) and electrical impedance tomography (EIT) to improve the speed and accuracy of image reconstruction. Because it is non-invasive, radiation-free, and low-cost, ultrasound/electrical dual-modality tomography has attracted much attention in the field of dual-modality imaging and has many potential applications in industrial and biomedical imaging. However, the data fusion of URT and EIT is difficult due to their different theoretical foundations and measurement principles. The most commonly used data fusion strategy in ultrasound/electrical dual-modality tomography is to incorporate the structural information extracted from URT into the EIT image reconstruction process through a pixel-based constraint. Due to the inherent non-linearity and ill-posedness of EIT, images reconstructed with this strategy suffer from low resolution, especially at the boundaries of the observed inclusions. To improve this, an augmented Lagrangian trust region method is proposed to directly reconstruct the shapes of the inclusions from the ultrasound/electrical dual-modality measurements. In the proposed method, the shape of the target inclusion is parameterized by a radial shape model whose coefficients are used as the shape parameters. Then, the dual-modality shape inversion problem is formulated as an energy minimization problem in which the energy function derived from EIT is constrained by an ultrasound measurement model through an equality constraint. Finally, the optimal shape parameters associated with the optimal inclusion shape guesses are determined by minimizing the constrained cost function using the augmented Lagrangian trust region method. To evaluate the proposed method, numerical tests are carried out. Compared with single-modality EIT, the proposed dual-modality inclusion boundary reconstruction method has higher accuracy and is more robust to measurement noise.

  20. Optimal secondary source position in exterior spherical acoustical holophony

    NASA Astrophysics Data System (ADS)

    Pasqual, A. M.; Martin, V.

    2012-02-01

    Exterior spherical acoustical holophony is a branch of spatial audio reproduction that deals with the rendering of a given free-field radiation pattern (the primary field) by using a compact spherical loudspeaker array (the secondary source). More precisely, the primary field is known on a spherical surface surrounding the primary and secondary sources and, since the acoustic fields are described in spherical coordinates, they are naturally subjected to spherical harmonic analysis. Moreover, the inverse problem of deriving optimal driving signals from a known primary field is ill-posed because the secondary source cannot radiate high-order spherical harmonics efficiently, especially in the low-frequency range. As a consequence, a standard least-squares solution will overload the transducers if the primary field contains such harmonics. Here, this is avoided by discarding the strongly decaying spherical waves, which are identified through inspection of the radiation efficiency curves of the secondary source. However, such an unavoidable regularization procedure increases the least-squares error, which also depends on the position of the secondary source. This paper deals with the above-mentioned questions in the context of far-field directivity reproduction at low and medium frequencies. In particular, an optimal secondary source position is sought, which leads to the lowest reproduction error in the least-squares sense without overloading the transducers. In order to address this issue, a regularization quality factor is introduced to evaluate the amount of regularization required. It is shown that the optimal position significantly improves the holophonic reconstruction and maximizes the regularization quality factor (i.e., minimizes the amount of regularization), which is the main general contribution of this paper. Therefore, this factor can also be used as a cost function to obtain the optimal secondary source position.
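
    Discarding the most weakly radiating (strongly decaying) modes before the least-squares inversion is, in spirit, a truncated-SVD solution. A generic sketch follows; the transfer matrix and the truncation criterion are placeholders rather than the paper's spherical-harmonic formulation.

        import numpy as np

        def truncated_ls(G, p_target, rel_threshold=1e-2):
            """Least-squares driving signals with weakly contributing modes discarded.

            G        : (n_field_points, n_drivers) secondary-source transfer matrix
            p_target : (n_field_points,) primary field sampled on the measurement sphere
            """
            U, s, Vt = np.linalg.svd(G, full_matrices=False)
            keep = s > rel_threshold * s[0]          # drop strongly decaying modes
            s_inv = np.where(keep, 1.0 / s, 0.0)
            q = Vt.conj().T @ (s_inv * (U.conj().T @ p_target))  # regularized pseudo-inverse
            return q

        # Toy usage with a random complex transfer matrix.
        G = np.random.rand(64, 12) + 1j * np.random.rand(64, 12)
        p = np.random.rand(64) + 1j * np.random.rand(64)
        q = truncated_ls(G, p)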

  1. SU-E-J-153: Reconstructing 4D Cone Beam CT Images for Clinical QA of Lung SABR Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaudry, J; Bergman, A; British Columbia Cancer Agency, Vancouver, BC

    Purpose: To verify that the planned Primary Target Volume (PTV) and Internal Gross Tumor Volume (IGTV) fully enclose a moving lung tumor volume as visualized on a pre-SABR treatment verification 4D Cone Beam CT. Methods: Daily 3DCBCT image sets were acquired immediately prior to treatment for 10 SABR lung patients using the on-board imaging system integrated into a Varian TrueBeam (v1.6: no 4DCBCT module available). Respiratory information was acquired during the scan using the Varian RPM system. The CBCT projections were sorted into 8 bins offline, both by breathing phase and amplitude, using in-house software. An iterative algorithm based on total variation minimization, implemented in the open source reconstruction toolkit (RTK), was used to reconstruct the binned projections into 4DCBCT images. The relative tumor motion was quantified by tracking the centroid of the tumor volume from each 4DCBCT image. Following CT-CBCT registration, the planning CT volumes were compared to the location of the CBCT tumor volume as it moves along its breathing trajectory. An overlap metric quantified the ability of the planned PTV and IGTV to contain the tumor volume at treatment. Results: The 4DCBCT reconstructed images visibly show the tumor motion. The mean overlap between the planned PTV (IGTV) and the 4DCBCT tumor volumes was 100% (94%), with an uncertainty of 5% from the 4DCBCT tumor volume contours. Examination of the tumor motion and overlap metric verify that the IGTV drawn at the planning stage is a good representation of the tumor location at treatment. Conclusion: It is difficult to compare GTV volumes from a 4DCBCT and a planning CT due to image quality differences. However, it was possible to conclude the GTV remained within the PTV 100% of the time, thus giving the treatment staff confidence that SABR lung treatments are being delivered accurately.

  2. EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome.

    PubMed

    Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice

    2015-01-01

    The brain is a large-scale complex network often referred to as the "connectome". Exploring the dynamic behavior of the connectome is a challenging issue as both excellent time and space resolution are required. In this context Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data is still missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.), and allowing for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) Basic steps in preprocessing M/EEG signals, ii) the solution of the inverse problem to localize / reconstruct the cortical sources, iii) the computation of functional connectivity among signals collected at surface electrodes or/and time courses of reconstructed sources and iv) the computation of the network measures based on graph theory analysis. EEGNET is the only tool that combines M/EEG functional connectivity analysis and the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/.

  3. EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome

    PubMed Central

    Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice

    2015-01-01

    The brain is a large-scale complex network often referred to as the “connectome”. Exploring the dynamic behavior of the connectome is a challenging issue as both excellent time and space resolution are required. In this context Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data is still missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.), and allowing for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) Basic steps in preprocessing M/EEG signals, ii) the solution of the inverse problem to localize / reconstruct the cortical sources, iii) the computation of functional connectivity among signals collected at surface electrodes or/and time courses of reconstructed sources and iv) the computation of the network measures based on graph theory analysis. EEGNET is the only tool that combines M/EEG functional connectivity analysis and the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/. PMID:26379232

  4. Reconstructing El Niño Southern Oscillation using data from ships' logbooks, 1815-1854. Part I: methodology and evaluation

    NASA Astrophysics Data System (ADS)

    Barrett, Hannah G.; Jones, Julie M.; Bigg, Grant R.

    2018-02-01

    The meteorological information found within ships' logbooks is a unique and fascinating source of data for historical climatology. This study uses wind observations from logbooks covering the period 1815 to 1854 to reconstruct an index of El Niño Southern Oscillation (ENSO) for boreal winter (DJF). Statistically-based reconstructions of the Southern Oscillation Index (SOI) are obtained using two methods: principal component regression (PCR) and composite-plus-scale (CPS). Calibration and validation are carried out over the modern period 1979-2014, assessing the relationship between re-gridded seasonal ERA-Interim reanalysis wind data and the instrumental SOI. The reconstruction skill of both the PCR and CPS methods is found to be high with reduction of error skill scores of 0.80 and 0.75, respectively. The relationships derived during the fitting period are then applied to the logbook wind data to reconstruct the historical SOI. We develop a new method to assess the sensitivity of the reconstructions to using a limited number of observations per season and find that the CPS method performs better than PCR with a limited number of observations. A difference in the distribution of wind force terms used by British and Dutch ships is found, and its impact on the reconstruction assessed. The logbook reconstructions agree well with a previous SOI reconstructed from Jakarta rain day counts, 1830-1850, adding robustness to our reconstructions. Comparisons to additional documentary and proxy data sources are provided in a companion paper.
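
    Principal component regression of the kind used for this SOI calibration can be sketched generically as below; the gridded wind predictors, the number of retained components, and the calibration length are placeholders, not the study's setup.

        import numpy as np

        def pcr_fit(X, y, n_components):
            """Fit a principal component regression: regress y on the leading PCs of X."""
            X_mean = X.mean(axis=0)
            Xc = X - X_mean
            U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
            V = Vt[:n_components].T            # leading loading patterns (EOFs)
            scores = Xc @ V                    # PC time series over the calibration period
            beta, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
            return X_mean, y.mean(), V, beta

        def pcr_predict(X_new, model):
            X_mean, y_mean, V, beta = model
            return y_mean + (X_new - X_mean) @ V @ beta

        # Toy usage: 36 calibration seasons, 50 gridded wind predictors.
        X = np.random.rand(36, 50)
        y = np.random.rand(36)
        model = pcr_fit(X, y, n_components=5)
        soi_reconstruction = pcr_predict(np.random.rand(10, 50), model)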

  5. Single image super-resolution based on compressive sensing and improved TV minimization sparse recovery

    NASA Astrophysics Data System (ADS)

    Vishnukumar, S.; Wilscy, M.

    2017-12-01

    In this paper, we propose a single image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, the low-resolution (LR) image is treated as a compressed version of the high-resolution (HR) image. The method has two phases: dictionary training and sparse recovery. The K-Singular Value Decomposition (K-SVD) method is used for dictionary training, and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training, thereby exploiting the structural self-similarity inherent in the LR image. In the sparse recovery phase, the sparse representation coefficients of the LR image patches with respect to the trained dictionary are derived using the improved TV minimization method. The HR image can then be reconstructed as a linear combination of the dictionary atoms weighted by the sparse coefficients. The experimental results show that the proposed method gives better results quantitatively as well as qualitatively on both natural and remote sensing images. The reconstructed images have better visual quality since edges and other sharp details are preserved.

  6. Clitoral Epidermal Inclusion Cyst Resection With Intraoperative Sensory Nerve Mapping Technique.

    PubMed

    Wu, Cindy; Damitz, Lynn; Karrat, Kimberly M; Mintz, Alice; Zolnoun, Denniz

    2016-01-01

    Despite the ever increasing popularity of labial and clitoral surgeries, the best practices and long-term effects of reconstructive procedures in these regions remain unknown. This is particularly noteworthy because the presentation of nerve-related symptoms may be delayed up to a year. Despite the convention that these surgical procedures are low risk, little is known about the best practices that may reduce the postoperative complications as a result of these reconstructive surgeries. We describe a preoperative sensory mapping technique in the context of a symptomatic inclusion cyst in the clitoral region. This technique delineates anatomical and functional regions innervated by the dorsal clitoral nerve while minimizing the vascular watershed area in the midline. A prototypical case of a patient with a clitoral mass is discussed with clinical history and surgical approach. Prior to surgical excision, the dorsal clitoral nerve distribution was mapped in order to avoid a surgical incision in this sensual zone. In our practice, preoperative sensory mapping is a clinically useful planning tool that requires minimal instrumentation and no additional operating time. Sensory mapping allows identification of the functional zone innervated by the dorsal clitoral nerve, which can aid in minimizing damage to the area.

  7. Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ

    PubMed Central

    Müller, Marcel; Mönkemöller, Viola; Hennig, Simon; Hübner, Wolfgang; Huser, Thomas

    2016-01-01

    Super-resolved structured illumination microscopy (SR-SIM) is an important tool for fluorescence microscopy. SR-SIM microscopes perform multiple image acquisitions with varying illumination patterns, and reconstruct them to a super-resolved image. In its most frequent, linear implementation, SR-SIM doubles the spatial resolution. The reconstruction is performed numerically on the acquired wide-field image data, and thus relies on a software implementation of specific SR-SIM image reconstruction algorithms. We present fairSIM, an easy-to-use plugin that provides SR-SIM reconstructions for a wide range of SR-SIM platforms directly within ImageJ. For research groups developing their own implementations of super-resolution structured illumination microscopy, fairSIM takes away the hurdle of generating yet another implementation of the reconstruction algorithm. For users of commercial microscopes, it offers an additional, in-depth analysis option for their data independent of specific operating systems. As a modular, open-source solution, fairSIM can easily be adapted, automated and extended as the field of SR-SIM progresses. PMID:26996201

  8. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    PubMed

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  9. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    NASA Astrophysics Data System (ADS)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  10. Time-stretch microscopy based on time-wavelength sequence reconstruction from wideband incoherent source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chi, E-mail: chizheung@gmail.com; Xu, Yiqing; Wei, Xiaoming

    2014-07-28

    Time-stretch microscopy has emerged as an ultrafast optical imaging concept offering an unprecedented combination of imaging speed and sensitivity. However, a dedicated wideband and coherent optical pulse source with high shot-to-shot stability has been required for time-wavelength mapping—the enabling process for ultrahigh-speed wavelength-encoded image retrieval. From a practical point of view, methods that relax the stringent source requirements (e.g., temporal stability and coherence) of time-stretch microscopy are thus of great value. In this paper, we demonstrated time-stretch microscopy by reconstructing the time-wavelength mapping sequence from a wideband incoherent source. Utilizing the time-lens focusing mechanism mediated by a narrow-band pulse source, this approach allows generation of a wideband incoherent source, with the spectral efficiency enhanced by a factor of 18. As a proof-of-principle demonstration, time-stretch imaging with a scan rate in the MHz range and diffraction-limited resolution is achieved based on the wideband incoherent source. We note that the concept of time-wavelength sequence reconstruction from a wideband incoherent source can also be generalized to other high-speed optical real-time measurements, where wavelength acts as the information carrier.

  11. Evaluation of Tizian overlays by means of a swept source optical coherence tomography system

    NASA Astrophysics Data System (ADS)

    Marcauteanu, Corina; Sinescu, Cosmin; Negrutiu, Meda Lavinia; Stoica, Eniko Tunde; Topala, Florin; Duma, Virgil Florin; Bradu, Adrian; Podoleanu, Adrian Gh.

    2016-03-01

    Teeth affected by pathologic attrition can be restored by a minimally invasive approach using Tizian overlays. In this study we demonstrate the advantages of a fast swept source (SS) OCT system in the evaluation of Tizian overlays placed in an environment characterized by high occlusal forces. Twelve maxillary first premolars were extracted and prepared for overlays. The Tizian overlays were subjected to 3000 alternating cycles of thermo-cycling (from -10°C to +50°C) and to mechanical occlusal overloads (at 800 N). A fast SS OCT system was used to evaluate the Tizian overlays before and after the mechanical and thermal straining. The SS (Axsun Technologies, Billerica, MA) has a central wavelength of 1060 nm, a sweeping range of 106 nm (quoted at 10 dB) and a 100 kHz line rate. The depth resolution of the system, measured experimentally in air, was 10 μm. The imaging system offers high spatial resolution of around 10 μm in both the transverse and longitudinal directions, high sensitivity, and the ability to acquire entire three-dimensional (3D) volume reconstructions in as little as 2.5 s. Once the full dataset was acquired, high-resolution en-face projections could be rendered. Using these, the overlay (i.e., cement)-abutment tooth interfaces were observed both on B-scans/two-dimensional (2D) sections and in the 3D reconstructions. Several open interfaces could be detected with the system. The fast SS OCT system thus proves useful in the evaluation of zirconia-reinforced composite overlays placed in an environment characterized by high occlusal forces.

  12. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5x5x2 in3 each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24x2.5x3 in3, called the detection array (DA). The CA array acts as both a coded aperture mask and scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, the coded aperture, Compton, and hybrid imaging algorithms that were developed are described along with their performance. It is shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) measurements, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of the image reconstruction algorithms at various speeds and distances are presented, as well as the localization capability.

  13. Update on orbital reconstruction.

    PubMed

    Chen, Chien-Tzung; Chen, Yu-Ray

    2010-08-01

    Orbital trauma is common and frequently complicated by ocular injuries. The recent literature on orbital fracture is analyzed with emphasis on epidemiological data assessment, surgical timing, method of approach and reconstruction materials. Computed tomographic (CT) scan has become a routine evaluation tool for orbital trauma, and mobile CT can be applied intraoperatively if necessary. Concomitant serious ocular injury should be carefully evaluated preoperatively. Patients presenting with nonresolving oculocardiac reflex, 'white-eyed' blowout fracture, or diplopia with a positive forced duction test and CT evidence of orbital tissue entrapment require early surgical repair. Otherwise, enophthalmos can be corrected by late surgery with a similar outcome to early surgery. The use of an endoscope-assisted approach for orbital reconstruction continues to grow, offering an alternative method. Advances in alloplastic materials have improved surgical outcome and shortened operating time. In this review of modern orbital reconstruction, several controversial issues such as surgical indication, surgical timing, method of approach and choice of reconstruction material are discussed. Preoperative fine-cut CT image and thorough ophthalmologic examination are key elements to determine surgical indications. The choice of surgical approach and reconstruction materials much depends on the surgeon's experience and the reconstruction area. Prefabricated alloplastic implants together with image software and stereolithographic models are significant advances that help to more accurately reconstruct the traumatized orbit. The recent evolution of orbit reconstruction improves functional and aesthetic results and minimizes surgical complications.

  14. Iterative CT reconstruction using coordinate descent with ordered subsets of data

    NASA Astrophysics Data System (ADS)

    Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.

    2016-04-01

    Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.

  15. Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography

    NASA Astrophysics Data System (ADS)

    Chu, Pan; Lei, Jing

    2017-11-01

    Electrical capacitance tomography (ECT) is regarded as a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and an efficient numerical method that improves the precision of the reconstructed images is important for practical measurements. Building on the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the ECT inverse problem into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for minimizing the proposed loss function. Numerical experiments show that the proposed inversion method not only reconstructs the fine structures of the imaging targets but also improves robustness.
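
    For orientation, the baseline Tikhonov-regularized step that such split Bregman-inspired schemes build on can be written as a single linear solve. The sensitivity matrix and regularization parameter below are toy placeholders, and the robust/low-rank terms of the paper are not included.

        import numpy as np

        def tikhonov_reconstruction(S, c, lam=1e-2):
            """Standard Tikhonov-regularized ECT image estimate.

            Solves min_g ||S g - c||^2 + lam * ||g||^2 for the permittivity image g,
            given a linearized sensitivity matrix S and capacitance data c.
            """
            n = S.shape[1]
            return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ c)

        # Toy usage: 66 capacitance measurements, 16x16 pixel image.
        S = np.random.rand(66, 256)
        c = np.random.rand(66)
        g = tikhonov_reconstruction(S, c)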

  16. Percutaneous treatment of calculi in reconstructed bladder.

    PubMed

    Paez, Edgar; Reay, Emma; Murthy, L N S; Pickard, Robert S; Thomas, David J

    2007-03-01

    To report our results with percutaneous removal of calculi from reconstructed bladders. Twelve patients with reconstructed bladders who underwent endoscopic cystolithotomy were identified from our departmental database, and a retrospective review of case notes and imaging was performed. Access was gained via an ultrasound-guided new tract in 9 patients (75%). An old suprapubic tract site was used in two patients, and the Mitrofanoff stoma was the route of access in one patient. The procedure was successful, with stone clearance achieved in all 12 cases. No major complications were observed. At a median follow-up of 24 months, stone recurrence was observed in 5 patients (42%), 4 of whom underwent repeat procedures. Follow-up showed no change in continence in the patient with a Mitrofanoff stoma. Percutaneous cystolithotomy is a safe and effective minimally invasive option for removal of stones in a reconstructed bladder. We recommend endoscopic removal as the treatment of choice in these patients.

  17. Ankle Arthroscopic Reconstruction of Lateral Ligaments (Ankle Anti-ROLL)

    PubMed Central

    Takao, Masato; Glazebrook, Mark; Stone, James; Guillo, Stéphane

    2015-01-01

    Ankle instability is a condition that often requires surgery to stabilize the ankle joint that will improve pain and function if nonoperative treatments fail. Ankle stabilization surgery may be performed as a repair in which the native existing anterior talofibular ligament or calcaneofibular ligament (or both) is imbricated or reattached. Alternatively, when native ankle ligaments are insufficient for repair, a reconstruction of the ligaments may be performed in which an autologous or allograft tendon is used to reconstruct the anterior talofibular ligament or calcaneofibular ligament (or both). Currently, ankle stabilization surgery is most commonly performed through an open incision, but arthroscopic ankle stabilization using repair techniques has been described and is being used more often. We present our technique for anatomic ankle arthroscopic reconstruction of the lateral ligaments (anti-ROLL) performed in an all–inside-out manner that is likely safe for patients and minimally invasive. PMID:26900560

  18. 40 CFR Table 2a to Subpart Zzzz of... - Emission Limitations for New and Reconstructed 2SLB and Compression Ignition Stationary RICE >500...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Reconstructed 2SLB and Compression Ignition Stationary RICE >500 HP and New and Reconstructed 4SLB Stationary RICE ≥250 HP Located at a Major Source of HAP Emissions 2a Table 2a to Subpart ZZZZ of Part 63... 2SLB and Compression Ignition Stationary RICE >500 HP and New and Reconstructed 4SLB Stationary RICE...

  19. 40 CFR Table 2a to Subpart Zzzz of... - Emission Limitations for New and Reconstructed 2SLB and Compression Ignition Stationary RICE >500...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Reconstructed 2SLB and Compression Ignition Stationary RICE >500 HP and New and Reconstructed 4SLB Stationary RICE ≥250 HP Located at a Major Source of HAP Emissions 2a Table 2a to Subpart ZZZZ of Part 63... 2SLB and Compression Ignition Stationary RICE >500 HP and New and Reconstructed 4SLB Stationary RICE...

  20. A Case Series of Rapid Prototyping and Intraoperative Imaging in Orbital Reconstruction

    PubMed Central

    Lim, Christopher G.T.; Campbell, Duncan I.; Cook, Nicholas; Erasmus, Jason

    2014-01-01

    In Christchurch Hospital, rapid prototyping (RP) and intraoperative imaging are the standard of care in orbital trauma and have been used since February 2013. RP allows the fabrication of a dimensionally accurate and cost-effective anatomical model to visualize complex anatomical structures. This assists diagnosis, planning, and preoperative implant adaptation for orbital reconstruction. Intraoperative imaging involves a computed tomography scan during surgery to evaluate surgical implants and restored anatomy and allows the clinician to correct errors in implant positioning that may occur during the same procedure. This article aims to demonstrate the potential clinical and cost-saving benefits when both these technologies are used in orbital reconstruction, which minimizes the need for revision surgery. PMID:26000080

  1. A case series of rapid prototyping and intraoperative imaging in orbital reconstruction.

    PubMed

    Lim, Christopher G T; Campbell, Duncan I; Cook, Nicholas; Erasmus, Jason

    2015-06-01

    In Christchurch Hospital, rapid prototyping (RP) and intraoperative imaging are the standard of care in orbital trauma and have been used since February 2013. RP allows the fabrication of a dimensionally accurate and cost-effective anatomical model to visualize complex anatomical structures. This assists diagnosis, planning, and preoperative implant adaptation for orbital reconstruction. Intraoperative imaging involves a computed tomography scan during surgery to evaluate surgical implants and restored anatomy and allows the clinician to correct errors in implant positioning that may occur during the same procedure. This article aims to demonstrate the potential clinical and cost-saving benefits when both these technologies are used in orbital reconstruction, which minimizes the need for revision surgery.

  2. Bound-preserving Legendre-WENO finite volume schemes using nonlinear mapping

    NASA Astrophysics Data System (ADS)

    Smith, Timothy; Pantano, Carlos

    2017-11-01

    We present a new method to enforce field bounds in high-order Legendre-WENO finite volume schemes. The strategy consists of reconstructing each field through an intermediate mapping, which by design satisfies realizability constraints. Determination of the coefficients of the polynomial reconstruction involves nonlinear equations that are solved using Newton's method. The selection between the original or mapped reconstruction is implemented dynamically to minimize computational cost. The method has also been generalized to fields that exhibit interdependencies, requiring multi-dimensional mappings. Further, the method does not depend on the existence of a numerical flux function. We will discuss details of the proposed scheme and show results for systems in conservation and non-conservation form. This work was funded by the NSF under Grant DMS 1318161.

  3. Intraoral technique for locking reconstruction plate fixation using an implant handpiece with adapted drills.

    PubMed

    Haas, Orion Luiz; Scolari, Neimar; Meirelles, Lucas da Silva; Becker, Otávio Emmel; Melo, Marcelo Fernandes Santos; Viegas, Vinícius Nery; de Oliveira, Rogério Belle

    2016-09-01

    Locking reconstruction plates are used in the treatment of jaw trauma and diseases if there is a need for surgical resection and to prevent pathologic fracture after tumor excision. Fixation is typically performed using an extraoral approach. This article describes a technique for the intraoral fixation of locking reconstruction plates that uses prototyping to model the plate before the procedure as well as an implant handpiece with adapted drills for bone drilling and the insertion of screws into relatively inaccessible areas. Intraoral fixation not only prevents nerve damage and facial scarring but also minimizes the plate's risk of extraoral exposure and reduces surgical morbidity.

  4. Uncut Roux-en-Y reconstruction after totally laparoscopic distal gastrectomy with D2 lymph node dissection for early stage gastric cancer.

    PubMed

    Huang, Hua; Long, Ziwen; Xuan, Yi

    2016-01-01

    Roux Stasis Syndrome is a well-known complication after Roux-en-Y reconstruction. The uncut Roux-en-Y technique preserves unidirectional intestinal myoelectrical activity and diminishes Roux Stasis Syndrome. A 61-year-old woman with moderately differentiated adenocarcinoma of the antrum, diagnosed by gastroscopy and histological examination, underwent totally laparoscopic distal gastrectomy (TLDG) with D2 lymph node dissection and uncut Roux-en-Y reconstruction (URYR). The operation took 190 min, with blood loss of about 40 mL. The patient recovered well postoperatively and was discharged from the hospital on the 7th day. TLDG with intracorporeal uncut Roux-en-Y gastrojejunostomy using laparoscopic linear staplers was safe, feasible, and minimally invasive.

  5. Use of Raman microscopy and band-target entropy minimization analysis to identify dyes in a commercial stamp. Implications for authentication and counterfeit detection.

    PubMed

    Widjaja, Effendi; Garland, Marc

    2008-02-01

    Raman microscopy was used in mapping mode to collect more than 1000 spectra in a 100 μm x 100 μm area from a commercial stamp. Band-target entropy minimization (BTEM) was then employed to unmix the mixture spectra in order to extract the pure component spectra of the samples. Three pure component spectral patterns with good signal-to-noise ratios were recovered, and their spatial distributions were determined. The three pure component spectral patterns were then identified as copper phthalocyanine blue, calcite-like material, and yellow organic dye material by comparison to known spectral libraries. The present investigation, consisting of (1) advanced curve resolution (blind-source separation) followed by (2) spectral database matching, readily suggests extensions to authenticity and counterfeit studies of other types of commercial objects. The presence or absence of specific observable components forms the basis for assessment. The present spectral analysis (BTEM) is applicable to highly overlapping spectral information. Since a priori information such as the number of components present and spectral libraries is not needed in BTEM, and since minor signals arising from trace components can be reconstructed, this analysis offers a robust approach to a wide variety of material problems involving authenticity and counterfeit issues.

  6. Exposure Reconstruction: A Framework of Advancing Exposure Assessment

    EPA Science Inventory

    The U.S. Environmental Protection Agency’s (EPA) primary goal for environmental protection is to eliminate or minimize the exposure of humans and ecosystems to potential contaminants. With the number of environmental contaminants increasing annually – more than 2000 new chemical...

  7. Cervical apron flap reconstruction: a technique for second-stage revision.

    PubMed

    Spiro, R H; Chaglassian, T A

    1979-08-01

    A technique for second-stage revision of a cervical apron flap is described. Food particle retention and pocketing in hair-bearing recesses can be minimized by accurately trimming and contouring the flap to fit smoothly into the oral defect.

  8. Reconstruction of lightning channel geometry by localizing thunder sources

    NASA Astrophysics Data System (ADS)

    Bodhika, J. A. P.; Dharmarathna, W. G. D.; Fernando, Mahendra; Cooray, Vernon

    2013-09-01

    Thunder is generated as a result of a shock wave created by sudden expansion of air in the lightning channel due to high temperature variations. Even though the highest amplitudes of thunder signatures are generated at the return stroke stage, thunder signals generated at other events, such as preliminary breakdown pulses, can also have amplitudes large enough to be recorded with a sensitive system. In this study, we attempted to reconstruct the lightning channel geometry of cloud and ground flashes by locating the temporal and spatial variations of thunder sources. Six lightning flashes were reconstructed using the recorded thunder signatures. Possible effects due to atmospheric conditions were neglected. Numerical calculations suggest that the time resolution of the recorded signal and a 10 m/s error in the speed of sound lead to 2% and 3% errors, respectively, in the calculated coordinates. Reconstructed channel geometries for cloud and ground flashes agreed with the visual observations. Results suggest that the lightning channel can be successfully reconstructed using this technique.
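
    A minimal sketch of the ranging idea described above, assuming the flash time is known (e.g. from an electric-field record), a constant speed of sound, and an illustrative sensor layout; this is not the authors' implementation.

        # Locate one thunder source from acoustic arrival times at several sensors.
        # Assumptions: known flash time, uniform speed of sound, atmospheric effects neglected.
        import numpy as np
        from scipy.optimize import least_squares

        C_SOUND = 343.0  # m/s, assumed constant

        def locate_source(sensor_xyz, t_arrival, t_flash, x0=(0.0, 0.0, 1000.0)):
            """Least-squares fit of a single thunder-source position (x, y, z)."""
            ranges = C_SOUND * (np.asarray(t_arrival) - t_flash)  # one range per sensor

            def residuals(p):
                return np.linalg.norm(sensor_xyz - p, axis=1) - ranges

            return least_squares(residuals, x0).x

        # Synthetic check: four microphones a few hundred metres apart
        sensors = np.array([[0, 0, 0], [300, 0, 0], [0, 300, 0], [300, 300, 10]], float)
        true_src = np.array([150.0, 120.0, 2000.0])
        t_arr = np.linalg.norm(sensors - true_src, axis=1) / C_SOUND  # noiseless arrival times
        print(locate_source(sensors, t_arr, t_flash=0.0))  # recovers ~true_src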

  9. Autogenous Bone Reconstruction of Large Secondary Skull Defects.

    PubMed

    Fearon, Jeffrey A; Griner, Devan; Ditthakasem, Kanlaya; Herbert, Morley

    2017-02-01

    The authors sought to ascertain the upper limits of secondary skull defect size amenable to autogenous reconstructions and to examine outcomes of a surgical series. Published data for autogenous and alloplastic skull reconstructions were also examined to explore associations that might guide treatment. A retrospective review of autogenously reconstructed secondary skull defects was undertaken. A structured literature review was also performed to assess potential differences in reported outcomes between autogenous bone and synthetic alloplastic skull reconstructions. Weighted risks were calculated for statistical testing. Ninety-six patients underwent autogenous skull reconstruction for an average defect size of 93 cm² (range, 4 to 506 cm²) at a mean age of 12.9 years. The mean operative time was 3.4 hours, 2 percent required allogeneic blood transfusions, and the average length of stay was less than 3 days. The mean length of follow-up was 28 months. There were no postoperative infections requiring surgery, but one patient underwent secondary grafting for partial bone resorption. An analysis of 34 studies revealed that complications, infections, and reoperations were more commonly reported with alloplastic than with autogenous reconstructions (relative risk, 1.57, 4.8, and 1.48, respectively). Autogenous reconstructions are feasible, with minimal associated morbidity, for patients with skull defect sizes as large as 500 cm². A structured literature review suggests that autogenous bone reconstructions are associated with lower reported infection, complication, and reoperation rates compared with synthetic alloplasts. Based on these findings, surgeons might consider using autogenous reconstructions even for larger skull defects. Therapeutic, IV.

  10. Diffraction based method to reconstruct the spectrum of the Thomson scattering x-ray source

    NASA Astrophysics Data System (ADS)

    Chi, Zhijun; Yan, Lixin; Zhang, Zhen; Zhou, Zheng; Zheng, Lianmin; Wang, Dong; Tian, Qili; Wang, Wei; Nie, Zan; Zhang, Jie; Du, Yingchao; Hua, Jianfei; Shi, Jiaru; Pai, Chihao; Lu, Wei; Huang, Wenhui; Chen, Huaibi; Tang, Chuanxiang

    2017-04-01

    As Thomson scattering x-ray sources based on the collision of intense laser and relativistic electrons have drawn much attention in various scientific fields, there is an increasing demand for the effective methods to reconstruct the spectrum information of the ultra-short and high-intensity x-ray pulses. In this paper, a precise spectrum measurement method for the Thomson scattering x-ray sources was proposed with the diffraction of a Highly Oriented Pyrolytic Graphite (HOPG) crystal and was demonstrated at the Tsinghua Thomson scattering X-ray source. The x-ray pulse is diffracted by a 15 mm (L) × 15 mm (H) × 1 mm (D) HOPG crystal with 1° mosaic spread. By analyzing the diffraction pattern, both x-ray peak energies and energy spectral bandwidths at different polar angles can be reconstructed, which agree well with the theoretical value and simulation. The higher integral reflectivity of the HOPG crystal makes this method possible for single-shot measurement.

  11. Diffraction based method to reconstruct the spectrum of the Thomson scattering x-ray source.

    PubMed

    Chi, Zhijun; Yan, Lixin; Zhang, Zhen; Zhou, Zheng; Zheng, Lianmin; Wang, Dong; Tian, Qili; Wang, Wei; Nie, Zan; Zhang, Jie; Du, Yingchao; Hua, Jianfei; Shi, Jiaru; Pai, Chihao; Lu, Wei; Huang, Wenhui; Chen, Huaibi; Tang, Chuanxiang

    2017-04-01

    As Thomson scattering x-ray sources based on the collision of intense laser and relativistic electrons have drawn much attention in various scientific fields, there is an increasing demand for the effective methods to reconstruct the spectrum information of the ultra-short and high-intensity x-ray pulses. In this paper, a precise spectrum measurement method for the Thomson scattering x-ray sources was proposed with the diffraction of a Highly Oriented Pyrolytic Graphite (HOPG) crystal and was demonstrated at the Tsinghua Thomson scattering X-ray source. The x-ray pulse is diffracted by a 15 mm (L) × 15 mm (H) × 1 mm (D) HOPG crystal with 1° mosaic spread. By analyzing the diffraction pattern, both x-ray peak energies and energy spectral bandwidths at different polar angles can be reconstructed, which agree well with the theoretical value and simulation. The higher integral reflectivity of the HOPG crystal makes this method possible for single-shot measurement.

  12. Technique of nonvascularized toe phalangeal transfer and distraction lengthening in the treatment of multiple digit symbrachydactyly.

    PubMed

    Netscher, David T; Lewis, Eric V

    2008-06-01

    A combination of nonvascularized multiple toe phalangeal transfers, web space deepening, and distraction lengthening may provide excellent function in the child born with the oligodactylous type of symbrachydactyly. These techniques may reconstruct multiple digits, maintaining a wide and stable grip span with good prehension to the thumb. We detail the techniques of each of these 3 stages in reconstruction and describe appropriate patient selection. Potential complications are discussed. However, with strict attention to technical details, these complications can be minimized.

  13. Endless cold: a seasonal reconstruction of temperature and precipitation in the Burgundian Low Countries during the 15th century based on documentary evidence

    NASA Astrophysics Data System (ADS)

    Camenisch, C.

    2015-03-01

    This paper applies the methods of historical climatology to present a climate reconstruction for the area of the Burgundian Low Countries during the 15th century. The results are based on documentary evidence that has been handled very carefully, especially with regard to the distinction between contemporary and non-contemporary sources. Approximately 3000 written records deriving from about 100 different sources were examined and converted into seasonal seven-degree indices for temperature and precipitation. For the Late Middle Ages only a few climate reconstructions exist. There are even fewer reconstructions which include winter and autumn temperature or precipitation at all. This paper therefore constitutes a useful contribution to the understanding of climate and weather conditions in the less well researched but highly interesting 15th century.

  14. Adaptive zooming in X-ray computed tomography.

    PubMed

    Dabravolski, Andrei; Batenburg, Kees Joost; Sijbers, Jan

    2014-01-01

    In computed tomography (CT), the source-detector system commonly rotates around the object in a circular trajectory. Such a trajectory does not allow the detector to be exploited fully when scanning elongated objects. The aim is to increase the spatial resolution of the reconstructed image by optimal zooming during scanning. A new approach is proposed in which the full width of the detector is exploited for every projection angle. This approach is based on the use of prior information about the object's convex hull to move the source as close as possible to the object while avoiding truncation of the projections. Experiments show that the proposed approach can significantly improve reconstruction quality, producing reconstructions with smaller errors and revealing more details in the object. The proposed approach can lead to more accurate reconstructions and increased spatial resolution in the object compared with the conventional circular trajectory.

  15. Validation of a Sensor-Driven Modeling Paradigm for Multiple Source Reconstruction with FFT-07 Data

    DTIC Science & Technology

    2009-05-01

    operational warning and reporting (information) systems that combine automated data acquisition, analysis, source reconstruction, display and distribution of ... report and to incorporate this operational capability into the integrative multiscale urban modeling system implemented in the computational ...

  16. Use of the 3D surgical modelling technique with open-source software for mandibular fibula free flap reconstruction and its surgical guides.

    PubMed

    Ganry, L; Hersant, B; Quilichini, J; Leyder, P; Meningaud, J P

    2017-06-01

    Tridimensional (3D) surgical modelling is a necessary step to create 3D-printed surgical tools, and expensive professional software is generally needed. Open-source software packages are functional, reliable, regularly updated, may be downloaded for free, and can be used to produce 3D models. Few surgical teams have used free solutions to master 3D surgical modelling for reconstructive surgery with osseous free flaps. We describe an open-source 3D surgical modelling protocol to perform a fast and nearly free mandibular reconstruction with a microvascular fibula free flap and its surgical guides, with no need for engineering support. Four successive specialised open-source software packages were used to perform our 3D modelling: OsiriX®, Meshlab®, Netfabb® and Blender®. Digital Imaging and Communications in Medicine (DICOM) data on the patient's skull and fibula, obtained with a computerised tomography (CT) scan, were needed. The 3D models of the reconstructed mandible and its surgical guides were created. This new strategy may improve surgical management in Oral and Craniomaxillofacial surgery. Further clinical studies are needed to demonstrate the feasibility, reproducibility, transfer of know-how and benefits of this technique. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  17. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T; Cooper, Benjamin J; Kuncic, Zdenka; Keall, Paul J

    2015-01-01

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp-Davis-Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating the general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan, and was compared to FDK, ASD-POCS, and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS, and did not suffer from residual noise/streaking and motion blur migrated from the prior image as in PICCS. AAIR was also found to be more computationally efficient than both ASD-POCS and PICCS, with a reduction in computation time of over 50% compared to ASD-POCS. The use of anatomy segmentation was, for the first time, demonstrated to significantly improve image quality and computational efficiency for thoracic 4D CBCT reconstruction. Further developments are required to facilitate AAIR for practical use. PMID:25565244
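
    A minimal sketch of the adaptive-regularization idea only: one total-variation descent step whose smoothing strength is damped inside a binary anatomy mask. The 2-D setting, function names and damping factor are illustrative assumptions, not the AAIR implementation.

        # One anatomy-weighted TV-smoothing step for an image estimate.
        import numpy as np

        def tv_gradient(img, eps=1e-8):
            """Gradient of a smoothed isotropic total variation for a 2-D image."""
            gx = np.diff(img, axis=0, append=img[-1:, :])
            gy = np.diff(img, axis=1, append=img[:, -1:])
            norm = np.sqrt(gx**2 + gy**2 + eps)
            div_x = np.diff(gx / norm, axis=0, prepend=(gx / norm)[:1, :])
            div_y = np.diff(gy / norm, axis=1, prepend=(gy / norm)[:1, :])
            return -(div_x + div_y)

        def adaptive_tv_step(img, anatomy_mask, step=0.1, suppress=0.2):
            """TV-descent step; smoothing is damped where anatomy_mask is True."""
            weight = np.where(anatomy_mask, suppress, 1.0)
            return img - step * weight * tv_gradient(img)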

  18. Deep Wideband Single Pointings and Mosaics in Radio Interferometry: How Accurately Do We Reconstruct Intensities and Spectral Indices of Faint Sources?

    NASA Astrophysics Data System (ADS)

    Rau, U.; Bhatnagar, S.; Owen, F. N.

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
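
    For context, a two-band spectral index is simply alpha = log(S2/S1) / log(nu2/nu1); the sketch below (illustrative only, not the MT-MFS algorithm, which fits alpha during deconvolution) computes it with first-order error propagation.

        # Two-band spectral index with propagated 1-sigma uncertainty.
        import numpy as np

        def spectral_index(s1, s2, nu1, nu2, sigma1=0.0, sigma2=0.0):
            """Return alpha (S ~ nu**alpha) and its first-order error."""
            alpha = np.log(s2 / s1) / np.log(nu2 / nu1)
            sigma_alpha = np.hypot(sigma1 / s1, sigma2 / s2) / abs(np.log(nu2 / nu1))
            return alpha, sigma_alpha

        # Example: 1.0 mJy at 1.5 GHz, 0.7 mJy at 6 GHz, 2 uJy rms noise in each band
        print(spectral_index(1.0e-3, 0.7e-3, 1.5e9, 6.0e9, sigma1=2e-6, sigma2=2e-6))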

  19. Joint L1 and Total Variation Regularization for Fluorescence Molecular Tomography

    PubMed Central

    Dutta, Joyita; Ahn, Sangtae; Li, Changqing; Cherry, Simon R.; Leahy, Richard M.

    2012-01-01

    Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in vivo in small animals. Owing to the high degree of absorption and scattering of light through tissue, the FMT inverse problem is inherently ill-conditioned, making image reconstruction highly susceptible to the effects of noise and numerical errors. Appropriate priors or penalties are needed to facilitate reconstruction and to restrict the search space to a specific solution set. Typically, fluorescent probes are locally concentrated within specific areas of interest (e.g., inside tumors). The commonly used L2 norm penalty generates the minimum energy solution, which tends to be spread out in space. Instead, we present here an approach involving a combination of the L1 and total variation norm penalties, the former to suppress spurious background signals and enforce sparsity and the latter to preserve local smoothness and piecewise constancy in the reconstructed images. We have developed a surrogate-based optimization method for minimizing the joint penalties. The method was validated using both simulated and experimental data obtained from a mouse-shaped phantom mimicking tissue optical properties and containing two embedded fluorescent sources. Fluorescence data were collected using a 3D FMT setup that uses an EMCCD camera for image acquisition and a conical mirror for full-surface viewing. A range of performance metrics were utilized to evaluate our simulation results and to compare our method with the L1, L2, and total variation norm penalty based approaches. The experimental results were assessed using Dice similarity coefficients computed after co-registration with a CT image of the phantom. PMID:22390906
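
    A minimal sketch of the joint penalty described above, written as F(x) = ||Ax - y||^2 + alpha*||x||_1 + beta*TV(x) and evaluated for a candidate 2-D fluorophore image. The forward matrix A, the weights and the names are assumptions, and the authors' surrogate-based minimizer is not reproduced.

        # Evaluate the joint L1 + total-variation penalized objective for FMT.
        import numpy as np

        def tv(img):
            """Isotropic total variation of a 2-D image (interior pixels)."""
            gx = np.diff(img, axis=0)
            gy = np.diff(img, axis=1)
            return np.sum(np.sqrt(gx[:, :-1] ** 2 + gy[:-1, :] ** 2))

        def joint_objective(A, y, x_img, alpha, beta):
            """Data fit + alpha * L1 sparsity + beta * TV smoothness."""
            x = x_img.ravel()
            return np.sum((A @ x - y) ** 2) + alpha * np.abs(x).sum() + beta * tv(x_img)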

  20. Functional Performance Among Active Female Soccer Players After Unilateral Primary Anterior Cruciate Ligament Reconstruction Compared With Knee-Healthy Controls.

    PubMed

    Fältström, Anne; Hägglund, Martin; Kvist, Joanna

    2017-02-01

    Good functional performance with limb symmetry is believed to be important to minimize the risk of injury after a return to pivoting and contact sports after anterior cruciate ligament reconstruction (ACLR). This study aimed to investigate any side-to-side limb differences in functional performance and movement asymmetries in female soccer players with a primary unilateral anterior cruciate ligament (ACL)-reconstructed knee and to compare these players with knee-healthy controls from the same soccer teams. Cross-sectional study; Level of evidence, 3. This study included 77 active female soccer players at a median of 18 months after ACLR (interquartile range [IQR], 14.5 months; range, 7-39 months) and 77 knee-healthy female soccer players. The mean age was 20.1 ± 2.3 years for players with an ACL-reconstructed knee and 19.5 ± 2.2 years for controls. We used a battery of tests to assess postural control (Star Excursion Balance Test) and hop performance (1-legged hop for distance, 5-jump test, and side hop). Movement asymmetries in the lower limbs and trunk were assessed with the drop vertical jump and the tuck jump using 2-dimensional analyses. The reconstructed and uninvolved limbs did not differ in any of the tests. In the 5-jump test, players with an ACL-reconstructed knee performed worse than controls (mean 8.75 ± 1.05 m vs 9.09 ± 0.89 m; P = .034). On the drop vertical jump test, the ACL-reconstructed limb had significantly less knee valgus motion in the frontal plane (median 0.028 m [IQR, 0.049 m] vs 0.045 m [IQR, 0.043 m]; P = .004) and a lower probability of a high knee abduction moment (pKAM) (median 69.2% [IQR, 44.4%] vs 79.8% [IQR, 44.8%]; P = .043) compared with the control players' matched limb (for leg dominance). Results showed that 9% to 49% of players in both groups performed outside recommended guidelines on the different tests. Only 14 players with an ACL-reconstructed knee (18%) and 15 controls (19%) had results that met the recommended guidelines for all 5 tests ( P = .837). The reconstructed and uninvolved limbs did not differ, and players with an ACL-reconstructed knee and controls differed only minimally on the functional performance tests, indicating similar function. It is worth noting that many players with an ACL-reconstructed knee and controls had movement asymmetries and a high pKAM pattern, which have previously been associated with an increased risk for both primary and secondary ACL injury in female athletes.

  1. Localization and spectral isolation of special nuclear material using stochastic image reconstruction

    NASA Astrophysics Data System (ADS)

    Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Clarke, S. D.; Pozzi, S. A.

    2017-01-01

    In this work we present a technique for isolating the gamma-ray and neutron energy spectra from multiple radioactive sources localized in an image. Image reconstruction algorithms for radiation scatter cameras typically focus on improving image quality. However, with scatter cameras being developed for non-proliferation applications, there is a need for not only source localization but also source identification. This work outlines a modified stochastic origin ensembles algorithm that provides localized spectra for all pixels in the image. We demonstrated the technique by performing three experiments with a dual-particle imager that measured various gamma-ray and neutron sources simultaneously. We showed that we could isolate the peaks from 22Na and 137Cs and that the energy resolution is maintained in the isolated spectra. To evaluate the spectral isolation of neutrons, a 252Cf source and a PuBe source were measured simultaneously and the reconstruction showed that the isolated PuBe spectrum had a higher average energy and a greater fraction of neutrons at higher energies than the 252Cf. Finally, spectrum isolation was used for an experiment with weapons grade plutonium, 252Cf, and AmBe. The resulting neutron and gamma-ray spectra showed the expected characteristics that could then be used to identify the sources.

  2. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    NASA Astrophysics Data System (ADS)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
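
    A minimal sketch of the shrinkage-denoising ingredient only, using the plain 2-D wavelet transform from PyWavelets as a stand-in for the wavelet-packet shrinkage named above; the wavelet, level and threshold are illustrative assumptions, and the gradient-based Douglas-Rachford data-fit step is not shown.

        # Soft-threshold the detail coefficients of an image estimate (denoising step).
        import numpy as np
        import pywt

        def wavelet_shrink(img, wavelet="db4", level=3, tau=0.05):
            """Shrink detail coefficients toward zero; approximation band is kept."""
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            out = [coeffs[0]]  # approximation coefficients untouched
            for detail in coeffs[1:]:
                out.append(tuple(pywt.threshold(d, tau, mode="soft") for d in detail))
            return pywt.waverec2(out, wavelet)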

  3. Quantifying the impact of immediate reconstruction in postmastectomy radiation: a large, dose-volume histogram-based analysis.

    PubMed

    Ohri, Nisha; Cordeiro, Peter G; Keam, Jennifer; Ballangrud, Ase; Shi, Weiji; Zhang, Zhigang; Nerbun, Claire T; Woch, Katherine M; Stein, Nicholas F; Zhou, Ying; McCormick, Beryl; Powell, Simon N; Ho, Alice Y

    2012-10-01

    To assess the impact of immediate breast reconstruction on postmastectomy radiation (PMRT) using dose-volume histogram (DVH) data. Two hundred forty-seven women underwent PMRT at our center, 196 with implant reconstruction and 51 without reconstruction. Patients with reconstruction were treated with tangential photons, and patients without reconstruction were treated with en-face electron fields and customized bolus. Twenty percent of patients received internal mammary node (IMN) treatment. The DVH data were compared between groups. Ipsilateral lung parameters included V20 (% volume receiving 20 Gy), V40 (% volume receiving 40 Gy), mean dose, and maximum dose. Heart parameters included V25 (% volume receiving 25 Gy), mean dose, and maximum dose. IMN coverage was assessed when applicable. Chest wall coverage was assessed in patients with reconstruction. Propensity-matched analysis adjusted for potential confounders of laterality and IMN treatment. Reconstruction was associated with lower lung V20, mean dose, and maximum dose compared with no reconstruction (all P<.0001). These associations persisted on propensity-matched analysis (all P<.0001). Heart doses were similar between groups (P=NS). Ninety percent of patients with reconstruction had excellent chest wall coverage (D95 >98%). IMN coverage was superior in patients with reconstruction (D95 >92.0 vs 75.7%, P<.001). IMN treatment significantly increased lung and heart parameters in patients with reconstruction (all P<.05) but minimally affected those without reconstruction (all P>.05). Among IMN-treated patients, only lower lung V20 in those without reconstruction persisted (P=.022), and mean and maximum heart doses were higher than in patients without reconstruction (P=.006, P=.015, respectively). Implant reconstruction does not compromise the technical quality of PMRT when the IMNs are untreated. Treatment technique, not reconstruction, is the primary determinant of target coverage and normal tissue doses. Published by Elsevier Inc.
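
    For reference, the lung and heart parameters quoted above reduce to simple dose-volume computations over a structure mask; the sketch below is a generic illustration with assumed array names, not the planning-system code used in the study.

        # Dose-volume histogram metrics for one structure on a 3-D dose grid.
        import numpy as np

        def dvh_metrics(dose_gy, structure_mask, v_threshold_gy=20.0):
            """Return (Vx as % of structure volume, mean dose, max dose)."""
            d = dose_gy[structure_mask]  # doses of voxels inside the structure
            vx = 100.0 * np.count_nonzero(d >= v_threshold_gy) / d.size
            return vx, d.mean(), d.max()

        # Example (hypothetical arrays): lung V20, mean and max dose
        # v20, mean_dose, max_dose = dvh_metrics(dose, lung_mask, 20.0)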

  4. Simultaneous reconstruction of faults and slip fields: regularized inversion with Bayesian uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.
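
    A minimal sketch of the deterministic building block behind this kind of inversion: for a fixed candidate fault geometry with Green's function matrix G, a regularized slip estimate minimizes ||Gm - d||^2 + lam*||Lm||^2. The Bayesian sampling and the fault-geometry search described above are not reproduced, and all names are assumptions.

        # Tikhonov-regularized slip inversion on a fixed candidate fault.
        import numpy as np

        def regularized_slip(G, d, L, lam):
            """Minimize ||G m - d||^2 + lam * ||L m||^2 via the normal equations."""
            A = G.T @ G + lam * (L.T @ L)
            return np.linalg.solve(A, G.T @ d)

        # Usage (hypothetical): m_hat = regularized_slip(G, surface_displacements, L, 1e-2)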

  5. Time-jittered marine seismic data acquisition via compressed sensing and sparsity-promoting wavefield reconstruction

    NASA Astrophysics Data System (ADS)

    Wason, H.; Herrmann, F. J.; Kumar, R.

    2016-12-01

    Current efforts towards dense shot (or receiver) sampling and full azimuthal coverage to produce high-resolution images have led to the deployment of multiple source vessels (or streamers) across marine survey areas. Densely sampled marine seismic data acquisition, however, is expensive, and hence necessitates the adoption of sampling schemes that save acquisition costs and time. Compressed sensing is a sampling paradigm that aims to reconstruct a signal--one that is sparse or compressible in some transform domain--from relatively fewer measurements than required by the Nyquist sampling criterion. Leveraging ideas from the field of compressed sensing, we show how marine seismic acquisition can be set up as a compressed sensing problem. A step ahead from multi-source seismic acquisition is simultaneous source acquisition--an emerging technology that is stimulating both geophysical research and commercial efforts--where multiple source arrays/vessels fire shots simultaneously, resulting in better coverage in marine surveys. Following the design principles of compressed sensing, we propose a pragmatic simultaneous, time-jittered, time-compressed marine acquisition scheme where single or multiple source vessels sail across an ocean-bottom array firing airguns at jittered times and source locations, resulting in better spatial sampling and faster acquisition. Our acquisition is low cost since our measurements are subsampled. Simultaneous source acquisition generates data with overlapping shot records, which need to be separated for further processing. The choice of jittered sampling strongly affects the quality with which conventional, densely sampled seismic data can be reconstructed from the jittered data, and we demonstrate successful recovery by sparsity promotion. In contrast to random (sub)sampling, acquisition via jittered (sub)sampling helps in controlling the maximum gap size, which is a practical requirement of wavefield reconstruction with localized sparsifying transforms. We illustrate our results with simulations of simultaneous time-jittered marine acquisition for 2D and 3D ocean-bottom cable surveys.
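
    A minimal sketch of jittered (as opposed to purely random) subsampling of shot positions, which keeps the maximum acquisition gap bounded as noted above; the grid sizes and names are illustrative, and the sparsity-promoting recovery step is not shown.

        # Jittered subsampling of a regular grid of shot positions.
        import numpy as np

        def jittered_indices(n_total, n_keep, seed=0):
            """Keep n_keep of n_total positions, one random pick per regular bin."""
            rng = np.random.default_rng(seed)
            bin_size = n_total / n_keep
            starts = np.arange(n_keep) * bin_size
            picks = np.floor(starts + rng.uniform(0.0, bin_size, size=n_keep)).astype(int)
            return np.clip(picks, 0, n_total - 1)

        idx = jittered_indices(1000, 250)
        print(len(idx), np.diff(idx).max())  # the maximum gap stays below ~2 bins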

  6. The Herschel-ATLAS: magnifications and physical sizes of 500-μm-selected strongly lensed galaxies

    NASA Astrophysics Data System (ADS)

    Enia, A.; Negrello, M.; Gurwell, M.; Dye, S.; Rodighiero, G.; Massardi, M.; De Zotti, G.; Franceschini, A.; Cooray, A.; van der Werf, P.; Birkinshaw, M.; Michałowski, M. J.; Oteo, I.

    2018-04-01

    We perform lens modelling and source reconstruction of Sub-millimetre Array (SMA) data for a sample of 12 strongly lensed galaxies selected at 500 μm in the Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS). A previous analysis of the same data set used a single Sérsic profile to model the light distribution of each background galaxy. Here we model the source brightness distribution with an adaptive pixel scale scheme, extended to work in the Fourier visibility space of interferometry. We also present new SMA observations for seven other candidate lensed galaxies from the H-ATLAS sample. Our derived lens model parameters are in general consistent with previous findings. However, our estimated magnification factors, ranging from 3 to 10, are lower. The discrepancies are observed in particular where the reconstructed source hints at the presence of multiple knots of emission. We define an effective radius of the reconstructed sources based on the area in the source plane where emission is detected above 5σ. We also fit the reconstructed source surface brightness with an elliptical Gaussian model. We derive a median value reff ~ 1.77 kpc and a median Gaussian full width at half-maximum ~1.47 kpc. After correction for magnification, our sources have intrinsic star formation rates (SFR) ~ 900-3500 M⊙ yr⁻¹, resulting in a median SFR surface density ΣSFR ~ 132 M⊙ yr⁻¹ kpc⁻² (or ~218 M⊙ yr⁻¹ kpc⁻² for the Gaussian fit). This is consistent with that observed for other star-forming galaxies at similar redshifts, and is significantly below the Eddington limit for a radiation pressure regulated starburst.

  7. Comparison of 3D reconstruction of mandible for pre-operative planning using commercial and open-source software

    NASA Astrophysics Data System (ADS)

    Abdullah, Johari Yap; Omar, Marzuki; Pritam, Helmi Mohd Hadi; Husein, Adam; Rajion, Zainul Ahmad

    2016-12-01

    3D printing of the mandible is important for pre-operative planning and diagnostic purposes, as well as for education and training. Currently, the processing of CT data is routinely performed with commercial software, which increases the cost of operation and patient management for a small clinical setting. Usage of open-source software as an alternative to commercial software for 3D reconstruction of the mandible from CT data is scarce. The aim of this study is to compare two methods of 3D reconstruction of the mandible using commercial Materialise Mimics software and open-source Medical Imaging Interaction Toolkit (MITK) software. Head CT images with a slice thickness of 1 mm and a matrix of 512 × 512 pixels each were retrieved from the server located at the Radiology Department of Hospital Universiti Sains Malaysia. The CT data were analysed and the 3D models of the mandible were reconstructed using both commercial Materialise Mimics and open-source MITK software. Both virtual 3D models were saved in STL format and exported to 3matic and MeshLab software for morphometric and image analyses. Both models were compared using the Wilcoxon signed rank test and the Hausdorff distance. No significant differences were obtained between the 3D models of the mandible produced using Mimics and MITK software. The 3D model of the mandible produced using the open-source MITK software is comparable to that produced with the commercial Mimics software. Therefore, open-source software could be used in a clinical setting for pre-operative planning to minimise the operational cost.
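
    A minimal sketch of the symmetric Hausdorff distance used above to compare two 3D models, applied to vertex clouds sampled from the exported STL meshes; the mesh-loading step is not shown and the names are assumptions.

        # Symmetric Hausdorff distance between two point clouds (e.g. STL vertices).
        import numpy as np
        from scipy.spatial import cKDTree

        def hausdorff(points_a, points_b):
            """Largest nearest-neighbour distance in either direction."""
            d_ab = cKDTree(points_b).query(points_a)[0].max()
            d_ba = cKDTree(points_a).query(points_b)[0].max()
            return max(d_ab, d_ba)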

  8. Magnetoacoustic tomography with magnetic induction for high-resolution bioimpedance imaging through vector source reconstruction under the static field of MRI magnet.

    PubMed

    Mariappan, Leo; Hu, Gang; He, Bin

    2014-02-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on the acoustic measurements of Lorentz force induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object which is used to estimate the object conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width half maximum of the imaging point spread function is calculated to estimate of the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼ 1.5 mm spatial resolution corresponding to the imaging system frequency of 500 kHz ultrasound. In addition, the experiment results suggest that MAT-MI under high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and the imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction.

  9. Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2012-10-01

    In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on a simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection that is necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence identification of the intersection of three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares with that of images reconstructed from the same data using a standard iterative method based on a single cone. Results show that the FWHM for the point source reconstructed with TCR was 20-30% higher than that obtained from the standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.

  10. 42 CFR 82.13 - What sources of information may be used for dose reconstructions?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... OCCUPATIONAL SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER... from health research on DOE worker populations; (c) Interviews and records provided by claimants; (d...

  11. 40 CFR 63.1158 - Emission standards for new or reconstructed sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Process Facilities and Hydrochloric Acid Regeneration Plants § 63.1158 Emission standards for new or... percent. (b) Hydrochloric acid regeneration plants. (1) No owner or operator of a new or reconstructed...

  12. 40 CFR 63.1158 - Emission standards for new or reconstructed sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Process Facilities and Hydrochloric Acid Regeneration Plants § 63.1158 Emission standards for new or... percent. (b) Hydrochloric acid regeneration plants. (1) No owner or operator of a new or reconstructed...

  13. 40 CFR 63.1158 - Emission standards for new or reconstructed sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Process Facilities and Hydrochloric Acid Regeneration Plants § 63.1158 Emission standards for new or... percent. (b) Hydrochloric acid regeneration plants. (1) No owner or operator of a new or reconstructed...

  14. 40 CFR 63.1158 - Emission standards for new or reconstructed sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Process Facilities and Hydrochloric Acid Regeneration Plants § 63.1158 Emission standards for new or... percent. (b) Hydrochloric acid regeneration plants. (1) No owner or operator of a new or reconstructed...

  15. Joint reconstruction of PET-MRI by exploiting structural similarity

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Matthias J.; Thielemans, Kris; Pizarro, Luis; Atkinson, David; Ourselin, Sébastien; Hutton, Brian F.; Arridge, Simon R.

    2015-01-01

    Recent advances in technology have enabled the combination of positron emission tomography (PET) with magnetic resonance imaging (MRI). These PET-MRI scanners simultaneously acquire functional PET and anatomical or functional MRI data. As function and anatomy are not independent of one another the images to be reconstructed are likely to have shared structures. We aim to exploit this inherent structural similarity by reconstructing from both modalities in a joint reconstruction framework. The structural similarity between two modalities can be modelled in two different ways: edges are more likely to be at similar positions and/or to have similar orientations. We analyse the diffusion process generated by minimizing priors that encapsulate these different models. It turns out that the class of parallel level set priors always corresponds to anisotropic diffusion which is sometimes forward and sometimes backward diffusion. We perform numerical experiments where we jointly reconstruct from blurred Radon data with Poisson noise (PET) and under-sampled Fourier data with Gaussian noise (MRI). Our results show that both modalities benefit from each other in areas of shared edge information. The joint reconstructions have less artefacts and sharper edges compared to separate reconstructions and the ℓ2-error can be reduced in all of the considered cases of under-sampling.

  16. Evaluating the effect of increased pitch, iterative reconstruction and dual source CT on dose reduction and image quality.

    PubMed

    Gariani, Joanna; Martin, Steve P; Botsikas, Diomidis; Becker, Christoph D; Montet, Xavier

    2018-06-14

    The aim was to compare the radiation dose and image quality of thoracoabdominal scans obtained with a high-pitch protocol (pitch 3.2) and iterative reconstruction (Sinogram Affirmed Iterative Reconstruction, SAFIRE) with those of standard-pitch scans reconstructed with filtered back projection (FBP) on a dual-source CT. 114 CT scans (Somatom Definition Flash, Siemens Healthineers, Erlangen, Germany) were performed: 39 thoracic, 54 thoracoabdominal and 21 abdominal scans. Three protocols were analysed: a pitch of 1 reconstructed with FBP, a pitch of 3.2 reconstructed with SAFIRE, and a pitch of 3.2 with stellar detectors reconstructed with SAFIRE. Objective and subjective image analyses were performed, and dose differences between the protocols were compared. Comparing scans with a pitch of 1 reconstructed with FBP to high-pitch scans with a pitch of 3.2 reconstructed with SAFIRE, the volume CT dose index was reduced by 75% for thoracic scans, 64% for thoracoabdominal scans and 67% for abdominal scans. There was a further reduction after the implementation of stellar detectors, reflected in a 36% reduction of the dose-length product for thoracic scans. This was not to the detriment of image quality: contrast-to-noise ratio, signal-to-noise ratio and the qualitative image analysis revealed superior image quality in the high-pitch protocols. The combination of a high-pitch protocol with iterative reconstruction allows significant dose reduction in routine chest and abdominal scans whilst maintaining or improving diagnostic image quality, with a further reduction in thoracic scans with stellar detectors. Advances in knowledge: High-pitch imaging with iterative reconstruction is a tool that can be used to reduce dose without sacrificing image quality.

  17. SU-C-303-04: Evaluation of On- and Off-Line Bioluminescence Tomography System for Focal Irradiation Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B; Wang, K; Reyes, J

    Purpose: We have developed offline and on-board bioluminescence tomography (BLT) systems for the small animal radiation research platform (SARRP) for radiation guidance of soft tissue targets. We investigated the effectiveness of offline BLT guidance. Methods: CBCT is equipped on both the offline BLT system and the SARRP, which are 10 ft apart. To evaluate the setup error during animal transport between the two systems, we implanted a luminescence source in the abdomen of anesthetized mice. Five mice were studied. After CBCT was acquired on both systems, source centers and correlation coefficients were calculated. CBCT was also used to generate the object mesh for BLT reconstruction. To assess target localization, we compared the localization of the luminescence source based on (1) on-board SARRP BLT and CBCT, (2) offline BLT and CBCT, and (3) offline BLT and SARRP CBCT. The third comparison examines whether an offline BLT system can be used to guide radiation when there is minimal target contrast in CBCT. Results: Our CBCT results show the offset of the light source center can be maintained within 0.2 mm during animal transport. The center of mass (CoM) of the light source reconstructed by the offline BLT has an offset of 1.0 ± 0.4 mm from the ‘true’ CoM as derived from the SARRP CBCT. The results compare well with the offset of 1.0 ± 0.2 mm using on-line BLT. Conclusion: With CBCT information provided by the SARRP and effective animal immobilization during transport, these findings support the use of offline BLT in close vicinity for accurate soft tissue target localization for irradiation. However, the disadvantage of the off-line system is reduced efficiency, as care is required to maintain stable animal transport. We envisage a dual-use system where the on-board arrangement allows convenient access to CBCT and avoids disturbance of animal setup. The off-line capability would support standalone longitudinal imaging studies. The work is supported by NIH R01CA158100 and Xstrahl Ltd. Drs. John Wong and Iulian Iordachita receive royalty payment from a licensing agreement between Xstrahl Ltd and Johns Hopkins University. John Wong also has a consultant agreement with Xstrahl Ltd.

  18. Weighted spline based integration for reconstruction of freeform wavefront.

    PubMed

    Pant, Kamal K; Burada, Dali R; Bichra, Mohamed; Ghosh, Amitava; Khan, Gufran S; Sinzinger, Stefan; Shakher, Chandra

    2018-02-10

    In the present work, a spline-based integration technique for the reconstruction of a freeform wavefront from the slope data has been implemented. The slope data of a freeform surface contain noise due to their machining process and that introduces reconstruction error. We have proposed a weighted cubic spline based least square integration method (WCSLI) for the faithful reconstruction of a wavefront from noisy slope data. In the proposed method, the measured slope data are fitted into a piecewise polynomial. The fitted coefficients are determined by using a smoothing cubic spline fitting method. The smoothing parameter locally assigns relative weight to the fitted slope data. The fitted slope data are then integrated using the standard least squares technique to reconstruct the freeform wavefront. Simulation studies show the improved result using the proposed technique as compared to the existing cubic spline-based integration (CSLI) and the Southwell methods. The proposed reconstruction method has been experimentally implemented to a subaperture stitching-based measurement of a freeform wavefront using a scanning Shack-Hartmann sensor. The boundary artifacts are minimal in WCSLI which improves the subaperture stitching accuracy and demonstrates an improved Shack-Hartmann sensor for freeform metrology application.
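
    A minimal sketch of the zonal least-squares integration step (Southwell-style) that both WCSLI and CSLI build on: it integrates x- and y-slope maps into a wavefront on the same grid. The weighting by a smoothing cubic spline described above is not reproduced, and the names and dense solver are illustrative assumptions.

        # Least-squares integration of slope maps into a wavefront (piston removed).
        import numpy as np

        def integrate_slopes(sx, sy, dx=1.0):
            ny, nx = sx.shape
            n = ny * nx
            idx = np.arange(n).reshape(ny, nx)
            rows, cols, vals, rhs = [], [], [], []

            def add_eq(i_from, i_to, slope):
                # One finite-difference equation: W[i_to] - W[i_from] = slope * dx
                k = len(rhs)
                rows += [k, k]; cols += [i_to, i_from]; vals += [1.0, -1.0]
                rhs.append(slope * dx)

            for j in range(ny):                  # differences along the x (column) direction
                for i in range(nx - 1):
                    add_eq(idx[j, i], idx[j, i + 1], 0.5 * (sx[j, i] + sx[j, i + 1]))
            for j in range(ny - 1):              # differences along the y (row) direction
                for i in range(nx):
                    add_eq(idx[j, i], idx[j + 1, i], 0.5 * (sy[j, i] + sy[j + 1, i]))

            A = np.zeros((len(rhs), n))
            A[rows, cols] = vals
            w, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
            return (w - w.mean()).reshape(ny, nx)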

  19. X-ray dose reduction in abdominal computed tomography using advanced iterative reconstruction algorithms.

    PubMed

    Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua

    2014-01-01

    This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At the identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9% respectively when compared with FBP. Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively.

  20. Polyethylene Ear Reconstruction: A State-of-the-Art Surgical Journey.

    PubMed

    Reinisch, John; Tahiri, Youssef

    2018-02-01

    The use of a porous high-density polyethylene implant for ear reconstruction is gradually gaining acceptance because it allows for a pleasing ear reconstruction in young children before they enter school. In response to this growing interest, the authors decided to write an article clarifying in detail all the steps of this challenging procedure. In this article, the authors also answer all the common questions that surgeons have when they come to observe the operation, or when they go back to their respective practices and start performing this procedure. The authors describe in detail the operative steps that allow for a successful ear reconstruction using porous high-density polyethylene. The key parts of this operation are to meticulously harvest a well-vascularized superficial temporoparietal fascia flap and to use appropriate color-matched skin grafts. This method allows for a pleasing ear reconstruction with excellent definition, projection, symmetry, and long-term viability. The use of porous high-density polyethylene with a thin superficial temporoparietal fascia flap coverage is the authors' preferred method of ear reconstruction because it can be performed at an earlier age, in a single stage, as an outpatient procedure, and with minimal discomfort and psychological trauma for the patients and parents.

  1. Prostate Brachytherapy Seed Reconstruction with Gaussian Blurring and Optimal Coverage Cost

    PubMed Central

    Lee, Junghoon; Liu, Xiaofeng; Jain, Ameet K.; Song, Danny Y.; Burdette, E. Clif; Prince, Jerry L.; Fichtinger, Gabor

    2009-01-01

    Intraoperative dosimetry in prostate brachytherapy requires localization of the implanted radioactive seeds. A tomosynthesis-based seed reconstruction method is proposed. A three-dimensional volume is reconstructed from Gaussian-blurred projection images and candidate seed locations are computed from the reconstructed volume. A false positive seed removal process, formulated as an optimal coverage problem, iteratively removes “ghost” seeds that are created by tomosynthesis reconstruction. In an effort to minimize pose errors that are common in conventional C-arms, initial pose parameter estimates are iteratively corrected by using the detected candidate seeds as fiducials, which automatically “focuses” the collected images and improves successive reconstructed volumes. Simulation results imply that the implanted seed locations can be estimated with a detection rate of ≥ 97.9% and ≥ 99.3% from three and four images, respectively, when the C-arm is calibrated and the pose of the C-arm is known. The algorithm was also validated on phantom data sets successfully localizing the implanted seeds from four or five images. In a Phase-1 clinical trial, we were able to localize the implanted seeds from five intraoperative fluoroscopy images with 98.8% (STD=1.6) overall detection rate. PMID:19605321

  2. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai

    2016-03-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The proposed selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).

  3. Interstitial devices for treating deep seated tumors

    NASA Astrophysics Data System (ADS)

    Lafon, Cyril; Cathignol, Dominique; Prat, Frédéric; Melodelima, David; Salomir, Rares; Theillère, Yves; Chapelon, Jean-Yves

    2006-05-01

    Techniques using intracavitary or interstitial applicators have been proposed because extracorporeal HIFU techniques are not always suitable for deep-seated tumors: bones or gaseous pockets may be located in the intervening tissue. The objective is to bring the ultrasound source as close as possible to the target through natural routes in order to minimize the effects of attenuation and phase aberration along the ultrasound pathway. Under these circumstances, it becomes possible to use a higher frequency, thus increasing the ultrasonic absorption coefficient and resulting in more efficient heating of the treatment region. In contrast to extracorporeal applicators, the design of interstitial probes imposes additional constraints on size and ergonomics. The goal of this paper is to present the range of miniature interstitial applicators we developed at INSERM for various applications. The sources are rotating plane water-cooled transducers that operate at a frequency between 3 and 10 MHz depending on the desired therapeutic depth. The choice of a plane transducer rather than a divergent source extends the therapeutic depth and enhances the angular selectivity of the treatment. The rotating single-element flat transducer can also be replaced by a cylindrical array that electronically rotates a reconstructed plane wave. When extended zones of coagulation are required, original therapeutic modalities combining cavitation and thermal effects are used. These methods favor in-depth heating by increasing the acoustic attenuation away from the transducer through the presence of bubbles. When combined with modern imaging modalities, these minimally invasive therapeutic devices offer very promising options for cancer treatment. For example, two versions of an image-guided esophageal applicator have been designed: one uses a retractable ultrasound mini probe for positioning of the applicator, while the other is MRI compatible and offers on-line monitoring of temperature. Beyond these engineering considerations, our clinical experience demonstrates that following interstitial routes for applying HIFU is an interesting therapeutic option when targeted sites cannot be reached from outside the patient.

  4. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2017-01-01

    Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
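
    As a rough illustration of two of the ingredients named above, soft-thresholding and Nesterov-type momentum, the sketch below applies FISTA to a generic l1-regularized least-squares problem. It is not the paper's TDM-STF/OSTR implementation; the problem, names, and parameter values are assumptions for illustration only.

    ```python
    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def fista(A, b, lam, n_iter=200):
        """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1.

        The t-update below is the same momentum trick the paper layers on top
        of its TDM-STF iteration, shown here on a simpler problem."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        z, t = x.copy(), 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ z - b)
            x_new = soft_threshold(z - grad / L, lam / L)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
            x, t = x_new, t_new
        return x

    # Toy usage: recover a sparse vector from underdetermined measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200); x_true[rng.choice(200, 10, replace=False)] = 1.0
    x_hat = fista(A, A @ x_true, lam=0.1)
    ```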

  5. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA

    NASA Astrophysics Data System (ADS)

    Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.

    2015-07-01

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method with an equilibrium calculation for the highly elongated and triangular plasma shape of JT-60SA. The CCS, on which both the Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. In JT-60SA, the accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters, such as the number of unknown parameters and the shape of the surface. It is found that the optimum number of unknown parameters and the size of the CCS that minimize errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the achievable reconstruction errors in plasma shape and in the locations of strike points are within the target ranges for JT-60SA.
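
    The boundary-integral machinery of the CCS method is beyond the scope of an abstract, but its core numerical step, fitting unknown surface values to external magnetic measurements, reduces to a regularized linear inverse problem. The sketch below assumes a generic response matrix G and Tikhonov regularization; it is a schematic of that step only, not the actual CCS formulation.

    ```python
    import numpy as np

    def fit_surface_unknowns(G, measurements, alpha=1e-3):
        """Tikhonov-regularized least squares: argmin_c ||G c - m||^2 + alpha*||c||^2.

        G would map the unknown Dirichlet/Neumann values on the hypothetical
        surface to the magnetic sensors; in the real method it is built from
        boundary-integral Green's functions, not reproduced here."""
        n = G.shape[1]
        return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ measurements)

    # Toy demonstration: increasing the number of unknowns beyond what the
    # sensors can support makes the normal equations ill-conditioned, which is
    # one reason an optimum number of unknown parameters exists in practice.
    rng = np.random.default_rng(0)
    for n_unknowns in (4, 8, 16, 32):
        G = rng.standard_normal((24, n_unknowns))      # 24 hypothetical sensors
        print(n_unknowns, np.linalg.cond(G.T @ G))
    ```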

  6. Sport-specific outcomes after anterior cruciate ligament reconstruction.

    PubMed

    Warner, Stephen J; Smith, Matthew V; Wright, Rick W; Matava, Matthew J; Brophy, Robert H

    2011-08-01

    Although anterior cruciate ligament (ACL) reconstruction has been studied extensively in the literature, sport-specific outcomes have not been well-documented. The purpose of this systematic review was to assess sport-specific outcomes after ACL reconstruction in the literature. We performed a systematic review of the literature to identify studies reporting sport-specific outcomes after primary ACL reconstruction. Included studies were required to have reported standardized outcomes after primary ACL reconstruction for a single sport or comparing between different sports. In total 8 studies conformed to all inclusion criteria: 2 Level II studies, 1 Level III study, and 5 Level IV case series. Only 1 study reported comparisons of standardized outcomes between different sports, whereas 7 studies reported standardized outcomes in a single sport. Return to activity was the most common sport-specific outcome reported and varied from 19% (soccer) to 100% (bicycling and rugby), although the methods of measuring this outcome differed. Whereas return to activity after ACL reconstruction appears more likely for bicycling and jogging than for cutting and pivoting sports such as soccer and football, the literature on sport-specific outcomes from ACL reconstruction is limited with minimal data. Further studies are needed to report sport-specific outcomes and return to play after ACL reconstruction. Level IV, systematic review of Level II, III, and IV studies. Copyright © 2011 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  7. Files synchronization from a large number of insertions and deletions

    NASA Astrophysics Data System (ADS)

    Ellappan, Vijayan; Kumari, Savera

    2017-11-01

    Synchronization between different versions of files is becoming a major issue for many applications. To make such applications more efficient, an economical algorithm is developed from the previously used "File Loading Algorithm". We extend this algorithm in three ways: first, it handles non-binary files; second, a backup is generated for uploaded files; and lastly, each file is synchronized under insertions and deletions. A user can reconstruct a file from an earlier version while minimizing the error, and interactive communication is provided without disturbance. The drawback of the previous system is overcome by using synchronization, in which multiple copies of each file/record are created and stored in a backup database and can be efficiently restored in case of any unwanted deletion or loss of data. That is, we introduce a protocol that user B may use to reconstruct file X from file Y with a suitably low probability of error. Synchronization algorithms find numerous areas of use, including data storage, file sharing, source code control systems, and cloud applications. For example, cloud storage services such as Dropbox synchronize local copies with cloud backups each time users make changes to local versions. Similarly, synchronization tools are necessary in mobile devices. Specialized synchronization algorithms are used for video and sound editing. Synchronization tools are also capable of performing data duplication.
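
    As a hedged illustration of the kind of delta-based reconstruction described here, the sketch below implements a fixed-block signature/delta exchange in which user B rebuilds file X from file Y plus a small delta. Real synchronizers such as rsync add a rolling checksum so matches survive insertions and deletions at arbitrary offsets; the block size, function names, and protocol shape are illustrative, not the authors' algorithm.

    ```python
    import hashlib

    BLOCK = 4096  # illustrative block size

    def block_signatures(old_data: bytes):
        """Receiver side: hashes of fixed-size blocks of its copy (file Y)."""
        return {hashlib.sha1(old_data[i:i + BLOCK]).hexdigest(): i
                for i in range(0, len(old_data), BLOCK)}

    def make_delta(new_data: bytes, signatures):
        """Sender side: describe file X as COPY-from-Y and LITERAL instructions."""
        delta, i = [], 0
        while i < len(new_data):
            block = new_data[i:i + BLOCK]
            key = hashlib.sha1(block).hexdigest()
            if key in signatures:
                delta.append(('copy', signatures[key], len(block)))
            else:
                delta.append(('literal', block))   # unmatched data sent verbatim
            i += BLOCK
        return delta

    def apply_delta(old_data: bytes, delta):
        """Receiver side: rebuild X from Y plus the delta; the error probability
        is limited only by hash collisions."""
        out = bytearray()
        for op in delta:
            if op[0] == 'copy':
                _, offset, length = op
                out += old_data[offset:offset + length]
            else:
                out += op[1]
        return bytes(out)

    # Usage: an insertion in the middle of the file is transferred as a delta.
    old = b'a' * 10000
    new = old[:5000] + b'inserted text' + old[5000:]
    assert apply_delta(old, make_delta(new, block_signatures(old))) == new
    ```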

  8. A high-throughput system for high-quality tomographic reconstruction of large datasets at Diamond Light Source

    PubMed Central

    Atwood, Robert C.; Bodey, Andrew J.; Price, Stephen W. T.; Basham, Mark; Drakopoulos, Michael

    2015-01-01

    Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an ‘orthogonal’ fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and ‘facility-independent’: it can run on standard cluster infrastructure at any institution. PMID:25939626
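
    Savu's actual plugin API is not reproduced here, but the idea of a modular pipeline whose stages hand data back and forth between projection and sinogram space can be sketched generically. Every name below (run_pipeline, dark_flat_correction, to_sinograms) is invented for illustration.

    ```python
    from typing import Callable, Dict, List
    import numpy as np

    Plugin = Callable[[Dict[str, np.ndarray]], Dict[str, np.ndarray]]

    def run_pipeline(data: Dict[str, np.ndarray], plugins: List[Plugin]):
        """Apply each processing module in turn to a shared data dictionary."""
        for plugin in plugins:
            data = plugin(data)
        return data

    def dark_flat_correction(data):
        # Placeholder artefact-correction step working in projection space.
        corrected = (data['projections'] - data['dark']) / np.maximum(data['flat'] - data['dark'], 1e-6)
        return {**data, 'projections': corrected}

    def to_sinograms(data):
        # Switch from projection order (angle, row, column) to sinogram order (row, angle, column).
        return {**data, 'sinograms': np.transpose(data['projections'], (1, 0, 2))}

    # Usage with toy data shaped like a small tomography scan.
    rng = np.random.default_rng(0)
    raw = {'projections': rng.random((180, 4, 8)),
           'dark': np.zeros((4, 8)),
           'flat': np.ones((4, 8))}
    result = run_pipeline(raw, [dark_flat_correction, to_sinograms])
    ```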

  9. Elastic cartilage reconstruction by transplantation of cultured hyaline cartilage-derived chondrocytes.

    PubMed

    Mizuno, M; Takebe, T; Kobayashi, S; Kimura, S; Masutani, M; Lee, S; Jo, Y H; Lee, J I; Taniguchi, H

    2014-05-01

    Current surgical intervention for craniofacial defects caused by injuries or abnormalities uses reconstructive materials such as autologous cartilage grafts. Transplantation of autologous tissues, however, imposes significant invasiveness on patients, and many efforts have been made to establish an alternative graft. Recently, we and others have shown the potential use of reconstructed elastic cartilage from ear-derived chondrocytes or progenitors, with their unique elastic properties. Here, we examined the differentiation potential of canine joint cartilage-derived chondrocytes into elastic cartilage, with the aim of expanding the available cell sources to hyaline cartilage. Articular chondrocytes were isolated from canine joints, cultivated, and compared with auricular chondrocytes with respect to proliferation rates, gene expression, extracellular matrix production, and cartilage reconstruction capability after transplantation. Canine articular chondrocytes proliferated less robustly than auricular chondrocytes, but there was no significant difference in the amount of sulfated glycosaminoglycan produced by redifferentiated chondrocytes. Furthermore, in vitro expanded and redifferentiated articular chondrocytes were shown to reconstruct, on transplantation, elastic cartilage with histologic characteristics distinct from hyaline cartilage. Taken together, cultured hyaline cartilage-derived chondrocytes are a possible cell source for elastic cartilage reconstruction. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.

  10. Typology of historical sources and the reconstruction of long-term historical changes of riverine fish: a case study of the Austrian Danube and northern Russian rivers

    PubMed Central

    Haidvogl, Gertrud; Lajus, Dmitry; Pont, Didier; Schmid, Martin; Jungwirth, Mathias; Lajus, Julia

    2014-01-01

    Historical data are widely used in river ecology to define reference conditions or to investigate the evolution of aquatic systems. Most studies rely on printed documents from the 19th century, thus missing pre-industrial states and human impacts. This article discusses historical sources that can be used to reconstruct the development of riverine fish communities from the Late Middle Ages until the mid-20th century. Based on the studies of the Austrian Danube and northern Russian rivers, we propose a classification scheme of printed and archival sources and describe their fish ecological contents. Five types of sources were identified using the origin of sources as the first criterion: (i) early scientific surveys, (ii) fishery sources, (iii) fish trading sources, (iv) fish consumption sources and (v) cultural representations of fish. Except for early scientific surveys, all these sources were produced within economic and administrative contexts. They did not aim to report about historical fish communities, but do contain information about commercial fish and their exploitation. All historical data need further analysis for a fish ecological interpretation. Three case studies from the investigated Austrian and Russian rivers demonstrate the use of different source types and underline the necessity for a combination of different sources and a methodology combining different disciplinary approaches. Using a large variety of historical sources to reconstruct the development of past fish ecological conditions can support future river management by going beyond the usual approach of static historical reference conditions. PMID:25284959

  11. Improved dynamic MRI reconstruction by exploiting sparsity and rank-deficiency.

    PubMed

    Majumdar, Angshul

    2013-06-01

    In this paper, we address the problem of dynamic MRI reconstruction from partially sampled K-space data. Our work is motivated by previous studies in this area that proposed exploiting the spatiotemporal correlation of the dynamic MRI sequence by posing the reconstruction problem as a least squares minimization regularized by sparsity and low-rank penalties. Ideally the sparsity and low-rank penalties should be represented by the l(0)-norm and the rank of a matrix; however, both are NP-hard to optimize. The previous studies used the convex l(1)-norm as a surrogate for the l(0)-norm and the non-convex Schatten-q norm (0
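
    A minimal sketch of the sparsity-plus-low-rank idea follows, using an l1 proximal step and singular value thresholding inside a proximal-gradient loop. It assumes sampling directly in image space rather than k-space for brevity, applies the two proximal operators sequentially (a common heuristic rather than the exact proximal map of the combined penalty), and is not the algorithm proposed in the paper.

    ```python
    import numpy as np

    def svt(X, tau):
        # Singular value thresholding: proximal operator of tau*||X||_* (nuclear norm).
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def soft(X, tau):
        # Proximal operator of tau*||X||_1.
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def sparse_lowrank_recover(mask, y, lam_sparse=0.01, lam_rank=0.1, n_iter=200):
        """Recover a (pixels x frames) Casorati matrix X from masked samples y,
        penalizing both the l1 norm (sparsity) and the nuclear norm (low rank)."""
        X = np.zeros_like(y)
        for _ in range(n_iter):
            X -= mask * (mask * X - y)    # gradient step; Lipschitz constant is 1 for a 0/1 mask
            X = soft(X, lam_sparse)
            X = svt(X, lam_rank)
        return X

    # Toy usage: a rank-1 "dynamic" object sampled at 40% of its entries.
    rng = np.random.default_rng(0)
    truth = np.outer(rng.random(256), np.sin(np.linspace(0, 6, 40)))
    mask = (rng.random(truth.shape) < 0.4).astype(float)
    X_hat = sparse_lowrank_recover(mask, mask * truth)
    ```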

  12. Impact of the Femoral Head Position on Moment Arms in Total Hip Arthroplasty: A Parametric Finite Element Study.

    PubMed

    Rüdiger, Hannes A; Parvex, Valérie; Terrier, Alexandre

    2016-03-01

    Although the importance of accurate femoral reconstruction to achieve a good functional outcome is well documented, quantitative data on the effects of a displacement of the femoral center of rotation on moment arms are scarce. The purpose of this study was to calculate moment arms after nonanatomical femoral reconstruction. Finite element models of 15 patients including the pelvis, the femur, and the gluteal muscles were developed. Moment arms were calculated within the native anatomy and compared to distinct displacement of the femoral center of rotation (leg lengthening of 10 mm, loss of femoral offset of 20%, anteversion ±10°, and fixed anteversion at 15°). Calculations were performed within the range of motion observed during a normal gait cycle. Although with all evaluated displacements of the femoral center of rotation, the abductor moment arm remained positive, some fibers initially contributing to extension became antagonists (flexors) and vice versa. A loss of 20% of femoral offset led to an average decrease of 15% of abductor moment. Femoral lengthening and changes in femoral anteversion (±10°, fixed at 15°) led to minimal changes in abductor moment arms (maximum change of 5%). Native femoral anteversion correlated with the changes in moment arms induced by the 5 variations of reconstruction. Accurate reconstruction of offset is important to maintaining abductor moment arms, while changes of femoral rotation had minimal effects. Patients with larger native femoral anteversion appear to be more susceptible to femoral head displacements. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Application of blind source separation to real-time dissolution dynamic nuclear polarization.

    PubMed

    Hilty, Christian; Ragavan, Mukundan

    2015-01-20

    The use of a blind source separation (BSS) algorithm is demonstrated for the analysis of time series of nuclear magnetic resonance (NMR) spectra. This type of data is obtained commonly from experiments where analytes are hyperpolarized using dissolution dynamic nuclear polarization (D-DNP), in both in vivo and in vitro contexts. High signal gains in D-DNP enable rapid measurement of data sets characterizing the time evolution of chemical or metabolic processes. BSS is based on an algorithm that can be applied to separate the different components contributing to the NMR signal and determine the time dependence of the signals from these components. This algorithm requires minimal prior knowledge of the data; notably, no reference spectra need to be provided, and it can therefore be applied rapidly. In a time-resolved measurement of the enzymatic conversion of hyperpolarized oxaloacetate to malate, the two signal components are separated into computed source spectra that closely resemble the spectra of the individual compounds. An improvement in the signal-to-noise ratio of the computed source spectra is found compared to the original spectra, presumably resulting from each signal being present more than once in the time series. The reconstruction of the original spectra yields the time evolution of the contributions from the two sources, which also corresponds closely to the time evolution of integrated signal intensities from the original spectra. BSS may therefore be an approach for the efficient identification of components and estimation of kinetics in D-DNP experiments that can be applied at a high level of automation.
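
    The abstract does not specify which BSS algorithm is used; as one generic stand-in, the sketch below separates a stack of non-negative spectra into source spectra and their time courses with multiplicative-update non-negative matrix factorization. The toy two-peak data and all names are illustrative only.

    ```python
    import numpy as np

    def nmf_bss(spectra, n_sources=2, n_iter=500, seed=0):
        """Factor a (time x frequency) matrix V into W (time courses) and
        H (source spectra) using the standard Lee-Seung multiplicative updates
        for the Frobenius-norm objective ||V - W H||^2."""
        rng = np.random.default_rng(seed)
        V = np.maximum(spectra, 0.0)                 # magnitude spectra, non-negative
        W = rng.random((V.shape[0], n_sources))
        H = rng.random((n_sources, V.shape[1]))
        eps = 1e-9
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Toy usage: one peak decays while a second grows, mimicking a conversion.
    freq = np.linspace(0.0, 1.0, 300)
    peak_a = np.exp(-((freq - 0.3) / 0.01) ** 2)
    peak_b = np.exp(-((freq - 0.7) / 0.01) ** 2)
    t = np.linspace(0.0, 1.0, 30)[:, None]
    spectra = np.exp(-3 * t) * peak_a + (1 - np.exp(-3 * t)) * peak_b
    time_courses, source_spectra = nmf_bss(spectra)
    ```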

  14. MR images from fewer data

    NASA Astrophysics Data System (ADS)

    Vafadar, Bahareh; Bones, Philip J.

    2012-10-01

    There is a strong motivation to reduce the amount of acquired data necessary to reconstruct clinically useful MR images, since less data means faster acquisition sequences, less time for the patient to remain motionless in the scanner, and better time resolution for observing temporal changes within the body. We recently introduced an improvement in image quality for reconstructing parallel MR images by incorporating a data ordering step with compressed sensing (CS) in an algorithm named `PECS'. That method requires a prior estimate of the image to be available. We are extending the algorithm to explore ways of utilizing the data ordering step without requiring a prior estimate. The method presented here first reconstructs an initial image x1 by compressed sensing (with sparsity enhanced by SVD), then derives a data ordering R'1 from x1, which ranks the voxels of x1 according to their value. A second reconstruction is then performed which incorporates minimization of the first (l1) norm of the estimate after ordering by R'1, resulting in a new reconstruction x2. Preliminary results are encouraging.
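
    The role of the data ordering step can be seen in a small sketch: ordering voxels by a (possibly imperfect) estimate makes the reordered signal nearly monotonic, so the l1 norm of its finite differences collapses, which is what makes the ordered representation highly compressible. The exact PECS objective is not reproduced; the functions below are illustrative only.

    ```python
    import numpy as np

    def ordering_from(estimate):
        """Permutation that sorts the voxels of an initial/prior estimate."""
        return np.argsort(estimate.ravel())

    def l1_of_differences(x, order=None):
        """l1 norm of the finite differences of x, optionally after reordering."""
        v = x.ravel() if order is None else x.ravel()[order]
        return np.abs(np.diff(v)).sum()

    # A noisy version of the image still yields an ordering under which the
    # true image is far more compressible than in its natural voxel order.
    rng = np.random.default_rng(1)
    image = rng.random((32, 32))
    order = ordering_from(image + 0.01 * rng.standard_normal(image.shape))
    print(l1_of_differences(image), l1_of_differences(image, order))
    ```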

  15. Dual-dermal-barrier fashion flaps for the treatment of sacral pressure sores.

    PubMed

    Hsiao, Yen-Chang; Chuang, Shiow-Shuh

    2015-02-01

    The sacral region is one of the most vulnerable sites for the development of pressure sores. Even when surgical reconstruction is performed, there is a high chance of recurrence. Therefore, the concept of dual-dermal-barrier fashion flaps for sacral pressure sore reconstruction was proposed. From September 2007 to June 2010, nine patients with grade IV sacral pressure sores were enrolled. Four patients received bilateral myocutaneous V-Y flaps, four patients received bilateral fasciocutaneous V-Y flaps, and one patient received bilateral rotation-advancement flaps for sacral pressure sore reconstruction. The flaps were designed based on the perforators of the superior gluteal artery in one patient's reconstructive procedure. All flap designs were based on the dual-dermal-barrier fashion. The mean follow-up time was 16 months (range = 12-25). No recurrence was noted. Only one patient had a complication of mild dehiscence at the middle suture line, occurring 2 weeks after the reconstructive surgery. The dual-dermal-barrier fashion flaps are easily duplicated and versatile. The study has shown minimal morbidity and a reasonable outcome.

  16. Reconstruction of phonon relaxation times from systems featuring interfaces with unknown properties

    NASA Astrophysics Data System (ADS)

    Forghani, Mojtaba; Hadjiconstantinou, Nicolas G.

    2018-05-01

    We present a method for reconstructing the phonon relaxation-time function τ_ω = τ(ω) (including polarization) and the associated phonon free-path distribution from thermal spectroscopy data for systems featuring interfaces with unknown properties. Our method does not rely on the effective thermal-conductivity approximation or a particular physical model of the interface behavior. The reconstruction is formulated as an optimization problem in which the relaxation times are determined as functions of frequency by minimizing the discrepancy between the experimentally measured temperature profiles and solutions of the Boltzmann transport equation for the same system. Interface properties such as transmissivities are included as unknowns in the optimization; however, because for the thermal spectroscopy problems considered here the reconstruction is not very sensitive to the interface properties, the transmissivities are only approximately reconstructed and can be considered as byproducts of the calculation, whose primary objective is the accurate determination of the relaxation times. The proposed method is validated using synthetic experimental data obtained from Monte Carlo solutions of the Boltzmann transport equation. The method is shown to remain robust in the presence of uncertainty (noise) in the measurement.
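
    A schematic of the optimization loop is sketched below with a toy forward model standing in for the Boltzmann transport solver, using scipy.optimize.least_squares. The exponential-decay surrogate, the grids, and the single transmissivity parameter are assumptions made only so the sketch runs; note that the fitted transmissivity and relaxation times are partly degenerate, loosely mirroring the observation that interface properties are only approximately recovered.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    times = np.linspace(0.0, 1.0e-8, 50)        # observation times (s), illustrative

    def forward_model(log_tau, transmissivity):
        """Placeholder for the Boltzmann-transport solve: a transmissivity-scaled
        sum of exponential decays stands in for the computed temperature profile."""
        tau = np.exp(log_tau)
        return transmissivity * np.exp(-times[:, None] / tau[None, :]).sum(axis=1)

    # Synthetic "measurement" generated from a known relaxation-time spectrum.
    true_log_tau = np.log(np.linspace(0.5e-9, 3.0e-9, 6))
    measured = forward_model(true_log_tau, 0.6)

    def residuals(params):
        # Unknowns: log relaxation times on a frequency grid, plus one transmissivity.
        return forward_model(params[:-1], params[-1]) - measured

    start = np.concatenate([np.log(1.5e-9) * np.ones(6), [0.5]])
    fit = least_squares(residuals, start)
    log_tau_hat, transmissivity_hat = fit.x[:-1], fit.x[-1]
    ```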

  17. Venous Graft for Full-thickness Palpebral Reconstruction

    PubMed Central

    Sanna, Marco Pietro Giuseppe; Maxia, Sara; Esposito, Salvatore; Di Giulio, Stefano; Sartore, Leonardo

    2015-01-01

    Summary: Full-thickness palpebral reconstruction is a challenge for most surgeons. The complex structures composing the eyelid must be reconstructed with care for both functional and cosmetic reasons. Different methods for reconstructing either the anterior or the posterior lamella, based on grafts or flaps, can be found in the literature. Most patients involved in this kind of surgery are elderly. It is important to use simple and fast procedures to minimize the length of the operation and its complications. In our department, we used to reconstruct the anterior lamella by means of a Tenzel or a Mustardé flap, whereas for the posterior lamella we previously utilized a chondromucosal graft harvested from the nasal septum. Thus, these procedures required general anesthesia and a long operative time. We started using a vein graft for the posterior lamella. In this article, we present a series of 9 patients who underwent complex palpebral reconstruction for oncological reasons. In 5 patients (group A), we reconstructed the tarsoconjunctival layer with a chondromucosal graft, whereas in 4 patients (group B), we used a propulsive vein graft. The follow-up was from 10 to 20 months. Patient satisfaction was high, and we had no relapse in the series. In group A, we had more complications, including ectropion and septal perforations, whereas in group B, the operation was faster and we noted minor complications. In conclusion, the use of a propulsive vein graft to reconstruct the tarsoconjunctival layer was a reliable, safe, and fast procedure that can be considered in complex palpebral reconstructions. PMID:26034651

  18. The Efficacy of Simultaneous Breast Reconstruction and Contralateral Balancing Procedures in Reducing the Need for Second Stage Operations

    PubMed Central

    Clarke-Pearson, Emily M; Vornovitsky, Michael; Dayan, Joseph H; Samson, William; Sultan, Mark R

    2014-01-01

    Background Patients having unilateral breast reconstruction often require a second stage procedure on the contralateral breast to improve symmetry. In order to provide immediate symmetry and minimize the frequency and extent of secondary procedures, we began performing simultaneous contralateral balancing operations at the time of initial reconstruction. This study examines the indications, safety, and efficacy of this approach. Methods One-hundred and two consecutive breast reconstructions with simultaneous contralateral balancing procedures were identified. Data included patient age, body mass index (BMI), type of reconstruction and balancing procedure, specimen weight, transfusion requirement, complications and additional surgery under anesthesia. Unpaired t-tests were used to compare BMI, specimen weight and need for non-autologous transfusion. Results Average patient age was 48 years. The majority had autologous tissue-only reconstructions (94%) and the rest prosthesis-based reconstructions (6%). Balancing procedures included reduction mammoplasty (50%), mastopexy (49%), and augmentation mammoplasty (1%). Average BMI was 27 and average reduction specimen was 340 grams. Non-autologous blood transfusion rate was 9%. There was no relationship between BMI or reduction specimen weight and need for transfusion. We performed secondary surgery in 24% of the autologous group and 100% of the prosthesis group. Revision rate for symmetry was 13% in the autologous group and 17% in the prosthesis group. Conclusions Performing balancing at the time of breast reconstruction is safe and most effective in autologous reconstructions, where 87% did not require a second operation for symmetry. PMID:25276646

  19. CONNJUR Workflow Builder: A software integration environment for spectral reconstruction

    PubMed Central

    Fenwick, Matthew; Weatherby, Gerard; Vyas, Jay; Sesanker, Colbert; Martyn, Timothy O.; Ellis, Heidi J.C.; Gryk, Michael R.

    2015-01-01

    CONNJUR Workflow Builder (WB) is an open-source software integration environment that leverages existing spectral reconstruction tools to create a synergistic, coherent platform for converting biomolecular NMR data from the time domain to the frequency domain. WB provides data integration of primary data and metadata using a relational database, and includes a library of pre-built workflows for processing time domain data. WB simplifies maximum entropy reconstruction, facilitating the processing of non-uniformly sampled time domain data. As will be shown in the paper, the unique features of WB give it novel abilities to enhance the quality, accuracy, and fidelity of the spectral reconstruction process. WB also provides features that promote collaboration, education, and parameterization, supports non-uniform data sets, and integrates processing with the Rowland NMR Toolkit (RNMRTK) and NMRPipe software packages. WB is available free of charge in perpetuity, dual-licensed under the MIT and GPL open source licenses. PMID:26066803

  20. Bioengineering a vaginal replacement using a small biopsy of autologous tissue.

    PubMed

    Dorin, Ryan P; Atala, Anthony; Defilippo, Roger E

    2011-01-01

    Many congenital and acquired diseases result in the absence of a normal vagina. Patients with these conditions often require reconstructive surgery to achieve satisfactory cosmesis and physiological function, and a variety of materials have been used as tissue sources. Currently employed graft materials such as collagen scaffolds and small intestine are not ideal in that they fail to mimic the physiology of normal vaginal tissue. Engineering of true vaginal tissue from a small biopsy of autologous vagina should produce a superior graft material for vaginal reconstruction. This review describes our current experience with the engineering of such tissue and its use for vaginal reconstruction in animal models. Our successful construction and implantation of neovaginas through tissue engineering techniques demonstrates the feasibility of similar endeavors in human patients. Additionally, the use of pluripotent stem cells instead of autologous tissue could provide an "off-the-shelf" tissue source for vaginal reconstruction.
