Sample records for minimum reconstruction error

  1. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
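
    A minimal numerical sketch of the bias-variance behaviour described above, using a toy linear forward operator with Tikhonov regularization in place of the authors' diffusion-model reconstruction (all sizes, operators and noise levels here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 60                        # image size, number of measurements
A = rng.normal(size=(m, n))          # toy forward operator (stand-in for the diffusion model)
x_true = np.zeros(n)
x_true[15:25] = 1.0                  # simple test image
sigma = 0.5                          # measurement noise level

def reconstruct(y, lam):
    """Tikhonov-regularized least squares: argmin ||Ax - y||^2 + lam * ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# 100 repeated reconstructions with fresh noise, as in the paper's protocol
for lam in (1e-2, 1e0, 1e2):
    xs = np.array([reconstruct(A @ x_true + sigma * rng.normal(size=m), lam)
                   for _ in range(100)])
    bias2 = np.mean((xs.mean(axis=0) - x_true) ** 2)  # squared image bias
    var = np.mean(xs.var(axis=0))                     # image variance
    print(f"lam={lam:g}  bias^2={bias2:.4f}  var={var:.4f}  MSE={bias2 + var:.4f}")
```

    Variance dominates at small regularization and bias at large regularization, which is the trade-off the abstract describes.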

  2. The Use of Compressive Sensing to Reconstruct Radiation Characteristics of Wide-Band Antennas from Sparse Measurements

    DTIC Science & Technology

    2015-06-01

    of uniform- versus nonuniform-pattern reconstruction, of the transform function used, and of the minimum randomly distributed measurements needed to ... the radiation-frequency pattern's reconstruction using uniform and nonuniform randomly distributed samples even though the pattern error manifests ... [Fig. 3: The nonuniform compressive-sensing reconstruction of the radiation ...]

  3. Data Assimilation in a Solar Dynamo Model Using Ensemble Kalman Filters: Sensitivity and Robustness in Reconstruction of Meridional Flow Speed

    NASA Astrophysics Data System (ADS)

    Dikpati, Mausumi; Anderson, Jeffrey L.; Mitra, Dhrubaditya

    2016-09-01

    We implement an Ensemble Kalman Filter procedure using the Data Assimilation Research Testbed for assimilating “synthetic” meridional flow-speed data in a Babcock-Leighton-type flux-transport solar dynamo model. By performing several “observing system simulation experiments,” we reconstruct the time variation in meridional flow speed and analyze the sensitivity and robustness of the reconstruction. Using 192 ensemble members and 10 observations, each with 4% error, we find that flow speed is reconstructed best if observations of near-surface poloidal fields from low latitudes and tachocline toroidal fields from midlatitudes are assimilated. If observations include a mixture of poloidal and toroidal fields from different latitude locations, reconstruction is reasonably good for ≤ 40% error in low-latitude data, even if observational error in polar region data becomes 200%, but deteriorates when observational error increases in low- and midlatitude data. Solar polar region observations are known to contain larger errors than those in low latitudes; our forward operator (a flux-transport dynamo model here) can sustain larger errors in polar region data, but is more sensitive to errors in low-latitude data. An optimal reconstruction is obtained if an assimilation interval of 15 days is used; 10- and 20-day assimilation intervals also give reasonably good results. Assimilation intervals < 5 days do not produce faithful reconstructions of flow speed, because the system requires a minimum time to develop dynamics to respond to flow variations. Reconstruction also deteriorates if an assimilation interval > 45 days is used, because the system’s inherent memory interferes with its short-term dynamics during a substantially long run without updating.
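
    For background, a hedged sketch of a stochastic (perturbed-observation) ensemble Kalman filter analysis step of the kind such experiments rely on; this is generic textbook EnKF, not the DART implementation, and the state size and observation operator below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(X, y, H, obs_err_std):
    """Stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble, y: (n_obs,) observations,
    H: (n_obs, n_state) observation operator."""
    n_obs, n_ens = len(y), X.shape[1]
    Y = H @ X                                    # ensemble mapped to observation space
    Xp = X - X.mean(axis=1, keepdims=True)       # state anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)       # observation-space anomalies
    Pxy = Xp @ Yp.T / (n_ens - 1)
    Pyy = Yp @ Yp.T / (n_ens - 1) + obs_err_std ** 2 * np.eye(n_obs)
    K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain
    y_pert = y[:, None] + obs_err_std * rng.normal(size=(n_obs, n_ens))
    return X + K @ (y_pert - Y)                  # analysis ensemble

# 192 members and 10 observations, echoing the setup in the abstract
n_state, n_ens, n_obs = 30, 192, 10
X = rng.normal(size=(n_state, n_ens))
H = np.eye(n_obs, n_state)                       # observe the first 10 state elements
truth = rng.normal(size=n_state)
y = H @ truth + 0.04 * rng.normal(size=n_obs)    # ~4% error scale
Xa = enkf_update(X, y, H, obs_err_std=0.04)
print("analysis spread:", Xa.std(axis=1)[:3])
```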

  4. Multiphase computer-generated holograms for full-color image generation

    NASA Astrophysics Data System (ADS)

    Choi, Kyong S.; Choi, Byong S.; Choi, Yoon S.; Kim, Sun I.; Kim, Jong Man; Kim, Nam; Gil, Sang K.

    2002-06-01

    Multi-phase and binary-phase computer-generated holograms were designed and demonstrated for full-color image generation. To optimize a phase profile of the hologram that achieves each color image, we employed a simulated annealing method. The designed binary-phase hologram had a diffraction efficiency of 33.23% and a reconstruction error of 0.367 × 10⁻², and the eight-phase hologram had a diffraction efficiency of 67.92% and a reconstruction error of 0.273 × 10⁻². The designed BPH was fabricated by a micro-photolithographic technique with a minimum pixel width of 5 μm, and it was reconstructed using two Ar-ion lasers and a He-Ne laser. In addition, the color dispersion characteristic of the fabricated grating and the scaling problem of the reconstructed image are discussed.
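
    A sketch of the simulated-annealing design loop the abstract describes, for a binary-phase hologram scored by far-field reconstruction error; the target pattern, FFT propagation model, and cooling schedule are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
target = np.zeros((N, N))
target[12:20, 12:20] = 1.0          # desired far-field intensity pattern
target /= target.sum()

def recon_error(phase):
    """Mean-squared error between the normalized far field and the target."""
    inten = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
    return np.mean((inten / inten.sum() - target) ** 2)

phase = rng.choice([0.0, np.pi], size=(N, N))   # random binary-phase start
err, T = recon_error(phase), 1e-6               # initial "temperature"
for _ in range(20000):
    i, j = rng.integers(N, size=2)
    phase[i, j] = (phase[i, j] + np.pi) % (2 * np.pi)      # flip one pixel
    new_err = recon_error(phase)
    if new_err < err or rng.random() < np.exp((err - new_err) / T):
        err = new_err                                      # accept the move
    else:
        phase[i, j] = (phase[i, j] + np.pi) % (2 * np.pi)  # revert the flip
    T *= 0.9995                                            # cooling schedule
print("final reconstruction error:", err)
```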

  5. A new art code for tomographic interferometry

    NASA Technical Reports Server (NTRS)

    Tan, H.; Modarress, D.

    1987-01-01

    A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least squares solution for tomographic reconstruction is presented. Accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
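
    For orientation, a minimal sketch of a classic row-action ART (Kaczmarz) iteration; the paper's iterative-refinement least-squares variant differs in detail, so treat this as generic background rather than the authors' algorithm:

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Algebraic reconstruction technique: cyclic Kaczmarz projections."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norms[i] > 0:
                # project x onto the hyperplane defined by equation i
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# toy overdetermined system standing in for interferometric projection data
rng = np.random.default_rng(3)
A = rng.normal(size=(80, 30))
x_true = rng.normal(size=30)
b = A @ x_true + 0.01 * rng.normal(size=80)   # small input data error
x = art(A, b)
print("relative inversion error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```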

  6. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and the 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where they exceed 1.5 mm. However, the image fusion of the whole point-cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to serve as an accurate, realistic, and widespread tool, of great benefit to virtual face modeling.

  7. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made using real measurements from a continuous point release conducted in the Fusion Field Trials at Dugway Proving Ground, Utah.
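
    A small sketch of a weighted minimum-norm retrieval of the general form compared above; the sensitivity matrix and the choice of weights below are illustrative assumptions, not the renormalization weights of the paper:

```python
import numpy as np

def weighted_min_norm(G, mu, W):
    """Weighted minimum-norm emission estimate.
    G: (m, n) source-receptor sensitivity matrix, mu: (m,) measured concentrations,
    W: (n, n) weight matrix carrying a priori information."""
    gram = G @ W @ G.T                       # weighted Gram matrix (m x m)
    return W @ G.T @ np.linalg.solve(gram, mu)

# illustrative point-release retrieval on a 1-D source grid
rng = np.random.default_rng(4)
G = np.abs(rng.normal(size=(12, 100)))       # toy dispersion kernels, 12 monitors
s_true = np.zeros(100)
s_true[40] = 5.0                             # unknown point emission
mu = G @ s_true
W = np.diag((G ** 2).sum(axis=0))            # weights reflecting network sensitivity
s_hat = weighted_min_norm(G, mu, W)
print("retrieved peak location:", int(s_hat.argmax()))
```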

  8. Validation of a fibula graft cutting guide for mandibular reconstruction: experiment with rapid prototyping mandible model.

    PubMed

    Lim, Se-Ho; Kim, Yeon-Ho; Kim, Moon-Key; Nam, Woong; Kang, Sang-Hoon

    2016-12-01

    We examined whether cutting a fibula graft with a surgical guide template, prepared with computer-aided design/computer-aided manufacturing (CAD/CAM), would improve the precision and accuracy of mandibular reconstruction. Thirty mandibular rapid prototype (RP) models were allocated to experimental (N = 15) and control (N = 15) groups. Thirty identical fibular RP models were assigned randomly, 15 to each group. For reference, we prepared a reconstructed mandibular RP model with a three-dimensional printer, based on surgical simulation. In the experimental group, a stereolithography (STL) surgical guide template, based on the simulation, was used for cutting the fibula graft. In the control group, the fibula graft was cut manually, with reference to the reconstructed RP mandible model. The mandibular reconstructions were compared to the surgical simulation, and errors were calculated for both the STL surgical guide and the manual methods. The average three-dimensional minimum distances between reconstruction and simulation were 9.87 ± 6.32 mm (mean ± SD) for the STL surgical guide method and 14.76 ± 10.34 mm for the manual method. The STL surgical guide method incurred less error than the manual method in mandibular reconstruction. A fibula cutting guide improved the precision of reconstructing the mandible with a fibula graft.

  9. Regional application of multi-layer artificial neural networks in 3-D ionosphere tomography

    NASA Astrophysics Data System (ADS)

    Ghaffari Razin, Mir Reza; Voosoghi, Behzad

    2016-08-01

    Tomography is a very cost-effective method for studying the physical properties of the ionosphere. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based tomography to reconstruct the 3-D ionospheric electron density with high spatial resolution. For the numerical experiments, observations collected at 37 GPS stations of the Iranian permanent GPS network (IPGN) are used. A smoothed TEC approach was used for absolute STEC recovery. To improve the vertical resolution, empirical orthogonal functions (EOFs) obtained from the international reference ionosphere 2012 (IRI-2012) are used as the objective function in training the neural network. Ionosonde observations are used to validate the reliability of the proposed method. The minimum relative error for RMTNN is 1.64% and the maximum relative error is 15.61%. Also, a root mean square error (RMSE) of 0.17 × 10¹¹ electrons/m³ is computed for RMTNN, which is less than the RMSE of IRI-2012. The results show that RMTNN has higher accuracy and computational speed than other ionosphere reconstruction methods.

  10. Target 3-D reconstruction of streak tube imaging lidar based on Gaussian fitting

    NASA Astrophysics Data System (ADS)

    Yuan, Qingyu; Niu, Lihong; Hu, Cuichun; Wu, Lei; Yang, Hongru; Yu, Bing

    2018-02-01

    Streak images obtained by the streak tube imaging lidar (STIL) contain the distance-azimuth-intensity information of a scanned target, and a 3-D reconstruction of the target can be carried out by extracting the characteristic data of multiple streak images. Noise and other factors cause significant errors in the reconstruction result when simple peak detection is used. To obtain a more precise 3-D reconstruction, a peak detection method based on trust-region Gaussian fitting is proposed in this work. Gaussian modeling is performed on the returned wave of a single time channel of each frame; the modeling result, which effectively reduces noise interference and possesses a unique peak, is taken as the new returned waveform, and its feature data are then extracted through peak detection. Experimental data from an aerial target were used to verify this method. This work shows that the peak detection method based on Gaussian fitting reduces the extraction error of the feature data to less than 10%; using this method to extract the feature data and reconstruct the target makes it possible to achieve a spatial resolution of at least 30 cm in the depth direction and improves the 3-D imaging accuracy of the STIL.
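
    A minimal sketch of waveform peak extraction by Gaussian fitting, assuming a single-Gaussian pulse model plus synthetic noise; `method='trf'` selects scipy's trust-region reflective least-squares solver, loosely mirroring the trust-region fitting named above:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, t0, s, c):
    return a * np.exp(-(t - t0) ** 2 / (2 * s ** 2)) + c

def peak_by_gaussian_fit(t, w):
    """Fit one Gaussian to a returned waveform; its center is the peak estimate."""
    p0 = [w.max() - w.min(), t[np.argmax(w)], (t[-1] - t[0]) / 10, w.min()]
    popt, _ = curve_fit(gaussian, t, w, p0=p0, method='trf')
    return popt[1]

# noisy synthetic streak waveform for one time channel
rng = np.random.default_rng(5)
t = np.linspace(0, 100, 200)
w = gaussian(t, 1.0, 42.7, 3.0, 0.05) + 0.05 * rng.normal(size=t.size)
print("fitted peak:", peak_by_gaussian_fit(t, w))   # robust against noise
print("argmax peak:", t[np.argmax(w)])              # naive peak detection
```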

  11. Automated Track Recognition and Event Reconstruction in Nuclear Emulsion

    NASA Technical Reports Server (NTRS)

    Deines-Jones, P.; Cherry, M. L.; Dabrowska, A.; Holynski, R.; Jones, W. V.; Kolganova, E. D.; Kudzia, D.; Nilsen, B. S.; Olszewski, A.; Pozharova, E. A.

    1998-01-01

    The major advantages of nuclear emulsion for detecting charged particles are its submicron position resolution and sensitivity to minimum ionizing particles. These must be balanced, however, against the difficult manual microscope measurement by skilled observers required for the analysis. We have developed an automated system to acquire and analyze the microscope images from emulsion chambers. Each emulsion plate is analyzed independently, allowing coincidence techniques to be used in order to reject background and estimate error rates. The system has been used to analyze a sample of high-multiplicity Pb-Pb interactions (charged particle multiplicities approx. 1100) produced by the 158 GeV/c per nucleon Pb-208 beam at CERN. Automatically reconstructed track lists agree with our best manual measurements to 3%. We describe the image analysis and track reconstruction techniques, and discuss the measurement and reconstruction uncertainties.

  12. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee a minimum of the 2D projection errors on the image plane, not a minimum of the 3D reconstruction errors. In this paper, we propose a universal method for camera calibration that uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimate of the camera parameters is refined by a non-linear function minimization process. The proposed method obtains a more accurate calibration result, which is more physically meaningful. Simulated and practical data are given to demonstrate the accuracy of the proposed method.

  13. Estimating missing daily temperature extremes in Jaffna, Sri Lanka

    NASA Astrophysics Data System (ADS)

    Thevakaran, A.; Sonnadara, D. U. J.

    2018-04-01

    The accuracy of reconstructing missing daily temperature extremes at the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature than for daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values; for daily minimum temperature, the percentage is about 92. By calculating the standard deviation of the difference between estimated and observed values, we have shown that the errors in estimating the daily maximum and minimum temperatures are ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, which is the station nearest to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available owing to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
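
    A sketch of the standard-departure scheme described above: standardize each neighbour's value, average the departures, and rescale by the target station's own statistics (the station values below are invented for illustration):

```python
import numpy as np

def estimate_target(neighbor_values, neighbor_means, neighbor_stds,
                    target_mean, target_std):
    """Estimate the target-station temperature for one missing day."""
    z = np.mean((np.asarray(neighbor_values) - np.asarray(neighbor_means))
                / np.asarray(neighbor_stds))       # mean standard departure
    return target_mean + z * target_std

# four neighbours (e.g. Mannar, Anuradhapura, Puttalam, Trincomalee), one day
t_est = estimate_target(neighbor_values=[33.1, 34.0, 32.2, 33.5],
                        neighbor_means=[31.8, 32.5, 31.0, 32.0],
                        neighbor_stds=[1.2, 1.4, 1.1, 1.3],
                        target_mean=31.5, target_std=1.2)
print(f"estimated daily maximum at target: {t_est:.1f} °C")
```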

  14. Tomography of a displacement photon counter for discrimination of single-rail optical qubits

    NASA Astrophysics Data System (ADS)

    Izumi, Shuro; Neergaard-Nielsen, Jonas S.; Andersen, Ulrik L.

    2018-04-01

    We investigate the performance of a detection strategy composed of a displacement operation and a photon counter, which is known as a beneficial tool in optical coherent communications, for the quantum state discrimination of the two superpositions of vacuum and single-photon states corresponding to the σ̂_x eigenstates in the single-rail encoding of photonic qubits. We experimentally characterize the detection strategy in the vacuum-single-photon two-dimensional space using quantum detector tomography and evaluate the achievable discrimination error probability from the reconstructed measurement operators. We furthermore derive the minimum error rate obtainable with Gaussian transformations and homodyne detection. Our proof-of-principle experiment shows that the proposed scheme can achieve a discrimination error surpassing homodyne detection.
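
    For context, the minimum achievable (Helstrom) error probability for discriminating two pure states is a standard closed form; a small sketch follows. Note that ideal σ̂_x eigenstates are orthogonal, so the non-zero-error regime arises from non-orthogonality and experimental imperfections:

```python
import numpy as np

def helstrom_error(overlap, p0=0.5, p1=0.5):
    """Helstrom bound for two pure states with |<psi0|psi1>| = overlap
    and prior probabilities p0, p1."""
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p0 * p1 * overlap ** 2))

print(helstrom_error(0.0))   # orthogonal states: error probability 0
print(helstrom_error(0.6))   # non-orthogonal pair with overlap 0.6
```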

  15. Honey bee-inspired algorithms for SNP haplotype reconstruction problem

    NASA Astrophysics Data System (ADS)

    PourkamaliAnaraki, Maryam; Sadeghi, Mehdi

    2016-03-01

    Reconstructing haplotypes from SNP fragments is an important problem in computational biology. There has been a lot of interest in this field because haplotypes have been shown to contain promising data for disease association research. It has been proved that haplotype reconstruction in the Minimum Error Correction (MEC) model is an NP-hard problem. Therefore, several methods such as clustering techniques, evolutionary algorithms, neural networks and swarm intelligence approaches have been proposed to solve this problem in reasonable time. In this paper, we focus on various evolutionary clustering techniques and try to find an efficient technique for solving the haplotype reconstruction problem. Our experiments indicate that clustering methods relying on the behaviour of honey bee colonies in nature, specifically the bees algorithm and artificial bee colony methods, are expected to result in more efficient solutions. An application program implementing the methods is available at the following link: http://www.bioinf.cs.ipm.ir/software/haprs/
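
    A minimal sketch of the Minimum Error Correction (MEC) objective that such clustering methods optimize: the fewest allele flips needed to make every SNP fragment consistent with one of two haplotypes. The 0/1/-1 fragment encoding (-1 for uncovered sites) is an assumption for illustration:

```python
import numpy as np

def mec_score(fragments, h1, h2):
    """MEC score of a haplotype pair against a set of SNP fragments."""
    total = 0
    for f in fragments:
        covered = f >= 0                    # sites the fragment actually covers
        d1 = np.sum((f != h1) & covered)    # mismatches against haplotype 1
        d2 = np.sum((f != h2) & covered)    # mismatches against haplotype 2
        total += min(d1, d2)                # assign fragment to the closer haplotype
    return total

fragments = np.array([[0, 1, -1, 1],
                      [1, 0, 0, -1],
                      [0, 1, 1, 1],
                      [-1, 0, 0, 0]])
print(mec_score(fragments, h1=np.array([0, 1, 1, 1]), h2=np.array([1, 0, 0, 0])))
```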

  16. Optimization of Trade-offs in Error-free Image Transmission

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R.; Moore, Stephen M.; Blaine, G. James; Zimmerman, John B.; Wallace, Gregory K.

    1989-05-01

    The availability of ubiquitous wide-area channels of both modest cost and higher transmission rate than voice-grade lines promises to allow the expansion of electronic radiology services to a larger community. The bandwidths of the new services becoming available from the Integrated Services Digital Network (ISDN) are typically limited to 128 Kb/s, almost two orders of magnitude lower than popular LANs can support. Using Discrete Cosine Transform (DCT) techniques, a compressed approximation to an image may be rapidly transmitted. However, intensity or resampling transformations of the reconstructed image may reveal otherwise invisible artifacts of the approximate encoding. A progressive transmission scheme reported in ISO Working Paper N800 offers an attractive solution to this problem by rapidly reconstructing an apparently undistorted image from the DCT coefficients and then subsequently transmitting the error image corresponding to the difference between the original and the reconstructed images. This approach achieves an error-free transmission without sacrificing the perception of rapid image delivery. Furthermore, subsequent intensity and resampling manipulations can be carried out with confidence. DCT coefficient precision affects the amount of error information that must be transmitted and, hence, the delivery speed of error-free images. This study calculates the overall information coding rate for six radiographic images as a function of DCT coefficient precision. The results demonstrate that a minimum occurs for each of the six images at an average coefficient precision of between 0.5 and 1.0 bits per pixel (b/p). Apparently undistorted versions of these six images can be transmitted with a coding rate of between 0.25 and 0.75 b/p, while error-free versions can be transmitted with an overall coding rate between 4.5 and 6.5 b/p.
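
    A sketch of the two-stage idea described above: send a quantized-DCT approximation first for fast display, then an integer error image that makes the result bit-exact. The quantization step and random image are illustrative, and no entropy coder is included:

```python
import numpy as np
from scipy.fft import dctn, idctn

def progressive_encode(img, q):
    """Stage 1: coarsely quantized DCT coefficients. Stage 2: integer residual."""
    coeffs = dctn(img.astype(float), norm='ortho')
    approx_coeffs = np.round(coeffs / q) * q                 # coefficient quantization
    approx = np.round(idctn(approx_coeffs, norm='ortho')).astype(np.int64)
    residual = img.astype(np.int64) - approx                 # error image, sent second
    return approx_coeffs, residual

rng = np.random.default_rng(6)
img = rng.integers(0, 4096, size=(64, 64))                   # 12-bit radiograph stand-in
approx_coeffs, residual = progressive_encode(img, q=32)
approx = np.round(idctn(approx_coeffs, norm='ortho')).astype(np.int64)
assert np.array_equal(approx + residual, img)                # error-free after stage 2
print("residual std (proxy for stage-2 cost):", residual.std())
```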

  17. Analysis of Ablative Performance of C/C Composite Throat Containing Defects Based on X-ray 3D Reconstruction in a Solid Rocket Motor

    NASA Astrophysics Data System (ADS)

    Hui, Wei-Hua; Bao, Fu-Ting; Wei, Xiang-Geng; Liu, Yang

    2015-12-01

    In this paper, a new method of measuring ablation rate is proposed based on X-ray three-dimensional (3D) reconstruction. The ablation of 4-direction carbon/carbon composite nozzles was investigated in the combustion environment of a solid rocket motor, and the macroscopic ablation and linear recession rate were studied through the X-ray 3D reconstruction method. The results showed that the maximum relative error of the X-ray 3D reconstruction was 0.0576%, which met the minimum accuracy required for the ablation analysis; along the nozzle axial direction, from the convergence segment through the throat to the expansion segment, the ablation gradually weakened; in terms of defect ablation, ablation in the middle was weak, while ablation on both sides was more serious. In short, the proposed X-ray reconstruction method for C/C nozzle ablation can construct a clear model of the ablated nozzle that characterizes details of micro-cracks, deposition, pores and surfaces for ablation analysis, and it can produce the ablation curve of any surface clearly.

  18. Using traveling salesman problem algorithms for evolutionary tree construction.

    PubMed

    Korostensky, C; Gonnet, G H

    2000-07-01

    The construction of evolutionary trees is one of the major problems in computational biology, mainly due to its complexity. We present a new tree construction method that constructs a tree with minimum score for a given set of sequences, where the score is the amount of evolution measured in PAM distances. To do this, the problem of tree construction is reduced to the Traveling Salesman Problem (TSP). The inputs for the TSP algorithm are the pairwise distances of the sequences, and the output is a circular tour through the optimal, unknown tree plus the minimum score of the tree. The circular order and the score can be used to construct the topology of the optimal tree. Our method can be used for any scoring function that correlates with the amount of change along the branches of an evolutionary tree; for instance, it could also be used for parsimony scores, but it cannot be used for least-squares fit of distances. A TSP solution reduces the space of all possible trees to 2n. Using this order, we can guarantee that we reconstruct a correct evolutionary tree if the absolute value of the error for each distance measurement is smaller than a bound set by the length of the shortest edge in the tree. For data sets with large errors, a dynamic programming approach is used to reconstruct the tree. Finally, simulations and experiments with real data are shown.
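
    A sketch of the reduction in its simplest form: derive a circular order of the taxa from the pairwise distance matrix with a TSP heuristic. A greedy nearest-neighbour tour is used purely for illustration; the paper applies proper TSP algorithms:

```python
import numpy as np

def circular_order(D):
    """Greedy nearest-neighbour tour over a pairwise distance matrix."""
    n = len(D)
    tour, visited = [0], {0}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: D[last][j])     # closest unvisited taxon
        tour.append(nxt)
        visited.add(nxt)
    return tour

# toy additive distances from the tree ((A,B),(C,D))
D = np.array([[0, 2, 6, 6],
              [2, 0, 6, 6],
              [6, 6, 0, 2],
              [6, 6, 2, 0]], dtype=float)
print(circular_order(D))   # neighbouring leaves stay adjacent in the tour
```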

  19. An experimental clinical evaluation of EIT imaging with ℓ1 data and image norms.

    PubMed

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-09-01

    Electrical impedance tomography (EIT) produces an image of internal conductivity distributions in a body from current injection and electrical measurements at surface electrodes. Typically, image reconstruction is formulated using regularized schemes in which ℓ2-norms are used for both the data misfit and image prior terms. Such a formulation is computationally convenient, but favours smooth conductivity solutions and is sensitive to outliers. Recent studies highlighted the potential of the ℓ1-norm and provided the mathematical basis to improve image quality and the robustness of the images to data outliers. In this paper, we (i) extend a primal-dual interior point method (PDIPM) algorithm to 2.5D EIT image reconstruction to solve ℓ1 and mixed ℓ1/ℓ2 formulations efficiently, (ii) evaluate the formulation on clinical and experimental data, and (iii) develop a practical strategy to select hyperparameters using the L-curve, which requires minimal user dependence. The PDIPM algorithm was evaluated using clinical and experimental scenarios on human lung and dog breathing data with known electrode errors, which require rigorous regularization and cause reconstruction with an ℓ2-norm solution to fail. The results showed that an ℓ1 solution is not only more robust to the unavoidable measurement errors of a clinical setting, but also provides high contrast resolution on organ boundaries.

  20. Bracketing mid-pliocene sea surface temperature: maximum and minimum possible warming

    USGS Publications Warehouse

    Dowsett, Harry

    2004-01-01

    Estimates of sea surface temperature (SST) from ocean cores reveal a warm phase of the Pliocene between about 3.3 and 3.0 mega-annums (Ma). Pollen records from land-based cores and sections, although not as well dated, also show evidence for a warmer climate at about the same time. Increased greenhouse forcing and altered ocean heat transport are the leading candidates for the underlying cause of Pliocene global warmth. However, despite being a period of global warmth, this interval contains considerable variability. Two new SST reconstructions have been created to provide a climatological error bar for the warm peak phases of the Pliocene. These data represent the maximum and minimum possible warming recorded within the 3.3 to 3.0 Ma interval.

  1. Eye micromotions influence on an error of Zernike coefficients reconstruction in the one-ray refractometry of an eye

    NASA Astrophysics Data System (ADS)

    Osipova, Irina Y.; Chyzh, Igor H.

    2001-06-01

    The influence of eye jumps on the accuracy of estimating Zernike coefficients from eye transverse aberration measurements was investigated. Ametropia and astigmatism were examined by computer modeling. The standard deviation of the wave aberration function was calculated. It was determined that the standard deviation of the wave aberration function reaches its minimum value when the number of scanning points equals the number of eye jumps in the scanning period. Recommendations for the duration of measurement were worked out.

  2. Online C-arm calibration using a marked guide wire for 3D reconstruction of pulmonary arteries

    NASA Astrophysics Data System (ADS)

    Vachon, Étienne; Miró, Joaquim; Duong, Luc

    2017-03-01

    3D reconstruction of vessels from 2D X-ray angiography is highly relevant to improving the visualization and assessment of vascular structures such as pulmonary arteries by interventional cardiologists. However, to ensure a robust and accurate reconstruction, the C-arm gantry parameters must be properly calibrated to provide clinically acceptable results. Calibration procedures often rely on calibration objects and complex protocols, which are not adapted to an interventional context. In this study, a novel calibration algorithm for C-arm gantries is presented that uses instrumentation such as catheters and guide wires. This ensures the availability of a minimum set of correspondences and implies minimal changes to the clinical workflow. The method was evaluated on simulated data and on retrospective patient datasets. Experimental results on simulated datasets demonstrate a calibration that allows a 3D reconstruction of the guide wire up to a geometric transformation. Experiments with patient datasets show a significant decrease of the reprojection error, to 0.17 mm 2D RMS. Consequently, such a procedure might help identify calibration drift during the intervention.

  3. Wearable sensors for patient-specific boundary shape estimation to improve the forward model for electrical impedance tomography (EIT) of neonatal lung function.

    PubMed

    Khor, Joo Moy; Tizzard, Andrew; Demosthenous, Andreas; Bayford, Richard

    2014-06-01

    Electrical impedance tomography (EIT) could be significantly advantageous for continuous monitoring of lung development in newborn and, in particular, preterm infants, as it is non-invasive and safe to use within the intensive care unit. It has been demonstrated that an accurate boundary form for the forward model is important to minimize artefacts in reconstructed electrical impedance images. This paper presents the outcomes of initial investigations into acquiring patient-specific thorax boundary information using a network of flexible sensors that imposes no restrictions on the patient's normal breathing and movements. The investigations include: (1) a description of the basis of the reconstruction algorithms, (2) tests to determine a minimum number of bend sensors, (3) validation of two approaches to reconstruction and (4) an example of a commercially available bend sensor and its performance. Simulation results using ideal sensors show that, in the worst case, a total shape error of less than 6% with respect to the total perimeter can be achieved.

  4. Link Performance Analysis and monitoring - A unified approach to divergent requirements

    NASA Astrophysics Data System (ADS)

    Thom, G. A.

    Link Performance Analysis and real-time monitoring are generally covered by a wide range of equipment. Bit Error Rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also available from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft-decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general-purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, real-time BER derived from frame sync errors, and Data Quality Analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with his needs.

  5. Decadal-scale autumn temperature reconstruction back to AD 1580 inferred from the varved sediments of Lake Silvaplana (southeastern Swiss Alps)

    NASA Astrophysics Data System (ADS)

    Blass, Alex; Bigler, Christian; Grosjean, Martin; Sturm, Michael

    2007-09-01

    A quantitative high-resolution autumn (September-November) temperature reconstruction for the southeastern Swiss Alps back to AD 1580 is presented here. We used the annually resolved biogenic silica (diatoms) flux derived from the accurately dated and annually sampled sediments of Lake Silvaplana (46°27'N, 9°48'E, 1800 m a.s.l.). The biogenic silica flux smoothed by means of a 9-yr running mean was calibrated (r = 0.70, p < 0.01) against local instrumental temperature data (AD 1864-1949). The resulting reconstruction (± 2 standard errors = ± 0.7 °C) indicates that autumns during the late Little Ice Age were generally cooler than they were during the 20th century. During the cold anomaly around AD 1600 and during the Maunder Minimum, however, the reconstructed autumn temperatures did not experience strong negative departures from the 20th-century mean. The warmest autumns prior to 1900 occurred around AD 1770 and 1820 (0.75 °C above the 20th-century mean). Our data agree closely with two other autumn temperature reconstructions for the Alps and for Europe that are based on documentary evidence and are completely unrelated to our data, revealing a very consistent picture over the centuries.
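
    A sketch of the calibration step described above, with synthetic stand-ins for the biogenic silica flux and the instrumental autumn temperatures: smooth both series with a 9-yr running mean, then fit a linear transfer function:

```python
import numpy as np

def running_mean(x, w=9):
    return np.convolve(x, np.ones(w) / w, mode='valid')

rng = np.random.default_rng(7)
temp = rng.normal(size=120)                           # instrumental anomalies (stand-in)
proxy = 2.0 * temp + rng.normal(scale=1.5, size=120)  # proxy tracking temperature

ps, ts = running_mean(proxy), running_mean(temp)      # 9-yr smoothing before calibration
a, b = np.polyfit(ps, ts, deg=1)                      # linear transfer function
r = np.corrcoef(ps, ts)[0, 1]
print(f"calibration r = {r:.2f}; reconstruction: T = {a:.2f} * proxy + {b:.2f}")
```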

  6. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach to error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error of each step of the stereo-vision based 3D reconstruction (e.g., calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling their effect on the final 3D reconstruction result, specifically on position uncertainties, to be evaluated. Lastly, analysis of these error estimates enables the quality of the 3D reconstruction to be evaluated, as illustrated by the experimental results shown.

  7. Tomographic capabilities of the new GEM based SXR diagnostic of WEST

    NASA Astrophysics Data System (ADS)

    Jardin, A.; Mazon, D.; O'Mullane, M.; Mlynar, J.; Loffelmann, V.; Imrisek, M.; Chernyshova, M.; Czarski, T.; Kasprowicz, G.; Wojenski, A.; Bourdelle, C.; Malard, P.

    2016-07-01

    The tokamak WEST (Tungsten Environment in Steady-State Tokamak) will start operating by the end of 2016 as a test bed for the ITER divertor components in long-pulse operation. In this context, radiative cooling by heavy impurities like tungsten (W) in the Soft X-ray (SXR) range [0.1 keV; 20 keV] is a critical issue for plasma core performance. Thus, reliable tools are required to monitor the local impurity density and avoid W accumulation. The WEST SXR diagnostic will be equipped with two new GEM (Gas Electron Multiplier) based poloidal cameras allowing 2D tomographic reconstructions to be performed in tunable energy bands. In this paper, the tomographic capabilities of the Minimum Fisher Information (MFI) algorithm developed for Tore Supra and upgraded for WEST are investigated, in particular through a set of emissivity phantoms and the standard WEST scenario, including reconstruction errors, the influence of noise, and computational time.

  8. Time-of-flight PET image reconstruction using origin ensembles.

    PubMed

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-07

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.

  10. Trajectory Reconstruction and Uncertainty Analysis Using Mars Science Laboratory Pre-Flight Scale Model Aeroballistic Testing

    NASA Technical Reports Server (NTRS)

    Lugo, Rafael A.; Tolson, Robert H.; Schoenenberger, Mark

    2013-01-01

    As part of the Mars Science Laboratory (MSL) trajectory reconstruction effort at NASA Langley Research Center, free-flight aeroballistic experiments with instrumented MSL scale models were conducted at Aberdeen Proving Ground in Maryland. The models carried an inertial measurement unit (IMU) and a flush air data system (FADS) similar to the MSL Entry Atmospheric Data System (MEADS) that provided data types similar to those from the MSL entry. Multiple sources of redundant data were available, including tracking radar and on-board magnetometers. These experimental data enabled the testing and validation of the various tools and methodologies that will be used for MSL trajectory reconstruction. The aerodynamic parameters Mach number, angle of attack, and sideslip angle were estimated using minimum variance with a priori information to combine the pressure data and pre-flight computational fluid dynamics (CFD) data. Both linear and non-linear pressure model terms were also estimated for each pressure transducer as a measure of the errors introduced by CFD and transducer calibration. Parameter uncertainties were estimated using a "consider parameters" approach.
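
    A sketch of minimum-variance estimation with a priori information of the general kind described above, on a made-up two-parameter problem (the actual reconstruction also estimates pressure-model terms and propagates "consider parameter" uncertainties, which are omitted here):

```python
import numpy as np

def min_variance_with_prior(H, R, y, x0, P0):
    """Combine measurements y = Hx + v (noise cov R) with prior x0 (cov P0)."""
    Ri, P0i = np.linalg.inv(R), np.linalg.inv(P0)
    P = np.linalg.inv(H.T @ Ri @ H + P0i)       # posterior covariance
    x_hat = P @ (H.T @ Ri @ y + P0i @ x0)       # minimum-variance estimate
    return x_hat, P

# toy: estimate [Mach, angle of attack] from three pressure-like measurements
H = np.array([[1.0, 0.2], [0.8, -0.5], [0.3, 1.0]])
R = np.diag([0.05, 0.05, 0.08]) ** 2
x_true = np.array([2.0, 0.1])
y = H @ x_true + np.array([0.01, -0.02, 0.03])
x_hat, P = min_variance_with_prior(H, R, y,
                                   x0=np.array([1.9, 0.0]),
                                   P0=np.diag([0.2, 0.2]) ** 2)
print("estimate:", x_hat, " 1-sigma:", np.sqrt(np.diag(P)))
```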

  11. Refraction corrected calibration for aquatic locomotion research: application of Snell's law improves spatial accuracy.

    PubMed

    Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L

    2015-07-07

    Images of underwater objects are distorted by refraction at the water-glass-air interfaces, and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transform (DLT) algorithm to reconstruct position information, which does not model refraction explicitly. Here we present a refraction-corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating the 3D reconstruction error, the difference between the actual and reconstructed position of a marker. We found that the reconstruction error is small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and errors are essentially insensitive to camera position and orientation and to the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras to reconstruct the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust, allows cameras to be oriented at large angles of incidence, and facilitates the development of accurate tracking algorithms to quantify aquatic manoeuvres.
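
    A sketch of the geometric core of such a method: vector-form Snell refraction of a camera ray through flat air-glass-water interfaces. The indices and geometry are illustrative; the published algorithm additionally handles calibration and multi-camera triangulation:

```python
import numpy as np

def refract(d, n_hat, n1, n2):
    """Refract unit direction d at an interface with unit normal n_hat (Snell's law)."""
    cos_i = -np.dot(n_hat, d)
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0:
        raise ValueError("total internal reflection")
    return r * d + (r * cos_i - np.sqrt(k)) * n_hat

# camera above a tank: air -> glass -> water, horizontal interfaces
d = np.array([0.3, 0.0, -1.0])
d /= np.linalg.norm(d)                     # incoming ray direction
n_hat = np.array([0.0, 0.0, 1.0])          # interface normal pointing toward the camera
d_glass = refract(d, n_hat, 1.000, 1.517)
d_water = refract(d_glass, n_hat, 1.517, 1.333)
print("ray direction in water:", d_water)  # intersect such rays from >=2 cameras
```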

  12. Non-overlap subaperture interferometric testing for large optics

    NASA Astrophysics Data System (ADS)

    Wu, Xin; Yu, Yingjie; Zeng, Wenhan; Qi, Te; Chen, Mingyi; Jiang, Xiangqian

    2017-08-01

    It has been shown that the number of subapertures and the amount of overlap have a significant influence on stitching accuracy. In this paper, a non-overlap subaperture interferometric testing method (NOSAI) is proposed to inspect large optical components. This method greatly reduces the number of subapertures and the influence of environmental interference while maintaining reconstruction accuracy. A general subaperture distribution pattern of NOSAI is also proposed for large rectangular surfaces. Square Zernike polynomials are employed to fit such a wavefront. The effect of the minimum number of fitting terms on the accuracy of NOSAI and the sensitivities of NOSAI to subaperture alignment error, systematic power error, and random noise are discussed. Experimental results validate the feasibility and accuracy of the proposed NOSAI in comparison with the wavefront obtained by a large-aperture interferometer and the surface stitched by the multi-aperture overlap-scanning technique (MAOST).

  13. Efficient workflows for 3D building full-color model reconstruction using LIDAR long-range laser and image-based modeling techniques

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong

    2005-01-01

    Two efficient workflows are developed for the reconstruction of 3D full-color building models. One uses a point-wise sensing device to sample an unknown object densely and attaches color textures from a digital camera separately. The other uses an image-based approach to reconstruct the model with color texture attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point-cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are applied to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is glued back onto the 3D mesh to present a full-color building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view-planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles such that each model face shows up in at least two pictures of the set and no more than three. The 3D model can then be reconstructed with a minimum amount of labor spent correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. All test cases show exactly the same topology and a reasonably low dimensional error ratio, again proving the applicability of the algorithm.

  14. Palinspastic reconstruction of structure maps: an automated finite element approach with heterogeneous strain

    NASA Astrophysics Data System (ADS)

    Dunbar, John A.; Cook, Richard W.

    2003-07-01

    Existing methods for the palinspastic reconstruction of structure maps do not adequately account for heterogeneous rock strain and hence cannot accurately treat features such as fault terminations and non-cylindrical folds. We propose a new finite element formulation of the map reconstruction problem that treats such features explicitly. In this approach, a model of the map surface, with internal openings that honor the topology of the fault-gap network, is constructed of triangular finite elements. Both model building and reconstruction algorithms are guided by rules relating fault-gap topology to the kinematics of fault motion and are fully automated. We represent the total strain as the sum of a prescribed component of locally homogeneous simple shear and a minimum amount of heterogeneous residual strain. The region within which a particular orientation of simple shear is treated as homogenous can be as small as an individual element or as large as the entire map. For residual strain calculations, we treat the map surface as a hyperelastic membrane. A globally optimum reconstruction is found that unfolds the map while faithfully honoring assigned strain mechanisms, closes fault gaps without overlap or gap and imparts the least possible residual strain in the restored surface. The amount and distribution of the residual strain serves as a diagnostic tool for identifying mapping errors. The method can be used to reconstruct maps offset by any number of faults that terminate, branch and offset each other in arbitrarily complex ways.

  15. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    NASA Astrophysics Data System (ADS)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.

  16. Reconstruction of regional mean temperature for East Asia since 1900s and its uncertainties

    NASA Astrophysics Data System (ADS)

    Hua, W.

    2017-12-01

    Regional average surface air temperature (SAT) is one of the key variables used to investigate climate change. Unfortunately, because of the limited observations over East Asia, there are gaps in the observational data sampling for regional mean SAT analysis, which is important for estimating past climate change. In this study, the regional average temperature of East Asia since the 1900s is calculated by an Empirical Orthogonal Function (EOF)-based optimal interpolation (OI) method that takes data errors into account. The results show that this estimate is more precise and robust than the result from a simple average, which provides a better way to reconstruct past climate. In addition to the reconstructed regional average SAT anomaly time series, we also estimated the uncertainties of the reconstruction. The root mean square error (RMSE) results show that the error decreases with time and is not sufficiently large to alter the conclusions on the persistent warming in East Asia during the twenty-first century. Moreover, a test of the influence of data error on the reconstruction clearly shows the sensitivity of the reconstruction to the size of the data error.

  17. Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation

    PubMed Central

    Barbero, Sergio; Thibos, Larry N.

    2007-01-01

    Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice, experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is proved that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302

  18. Accounting for hardware imperfections in EIT image reconstruction algorithms.

    PubMed

    Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert

    2007-07-01

    Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.

  19. Cellular traction force recovery: An optimal filtering approach in two-dimensional Fourier space.

    PubMed

    Huang, Jianyong; Qin, Lei; Peng, Xiaoling; Zhu, Tao; Xiong, Chunyang; Zhang, Youyi; Fang, Jing

    2009-08-21

    Quantitative estimation of cellular traction has significant physiological and clinical implications. As an inverse problem, traction force recovery is essentially susceptible to noise in the measured displacement data. In the traditional procedure of Fourier transform traction cytometry (FTTC), noise amplification accompanies the force reconstruction, and small tractions cannot be recovered from displacement fields with a low signal-to-noise ratio (SNR). To improve the FTTC process, we develop an optimal filtering scheme to suppress the noise in the force reconstruction procedure. In the framework of Wiener filtering theory, four filtering parameters are introduced in two-dimensional Fourier space and their analytical expressions are derived in terms of the minimum-mean-squared-error (MMSE) optimization criterion. The optimal filtering approach is validated with simulations and with experimental data associated with the adhesion of single cardiac myocytes to an elastic substrate. The results indicate that the proposed method can greatly enhance the SNR of the recovered forces and reveal tiny tractions in cell-substrate interaction.
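
    A sketch of MMSE (Wiener) filtering in two-dimensional Fourier space as described above, with a generic Gaussian blur standing in for the elastic-substrate Green's function of real Fourier-transform traction cytometry:

```python
import numpy as np

def wiener_recover(y, h, noise_power, signal_power):
    """MMSE inversion of Y = H*F + N, applied per 2-D Fourier component."""
    Y, H = np.fft.fft2(y), np.fft.fft2(h)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power / signal_power)  # Wiener filter
    return np.real(np.fft.ifft2(W * Y))

rng = np.random.default_rng(8)
n = 64
f = np.zeros((n, n)); f[20, 20] = 1.0; f[40, 45] = -0.7   # sparse "traction" map
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
h = np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / 20.0)
h = np.roll(h, (-n // 2, -n // 2), axis=(0, 1))            # center kernel at the origin
y = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))
y += 0.01 * rng.normal(size=(n, n))                        # noisy measured displacements
f_hat = wiener_recover(y, h, noise_power=1e-4, signal_power=1.0)
print("largest recovered source at:",
      np.unravel_index(np.abs(f_hat).argmax(), f_hat.shape))
```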

  20. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aiming at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
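
    A linear decoder of the kind described (reconstruction through an MMSE-learned projection matrix, applied as a single matrix product) can be sketched in a few lines. Everything below, including the assumed block covariance, matrix sizes and noise level, is illustrative rather than the authors' trained model.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 64, 16                                  # block length, CS measurements
        Sigma_x = np.diag(1.0 / (1.0 + np.arange(n)))  # assumed signal covariance prior
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
        sigma2 = 1e-3                                  # measurement-noise variance

        # MMSE linear decoder: W = Sigma_x A^T (A Sigma_x A^T + sigma2 I)^(-1).
        W = Sigma_x @ A.T @ np.linalg.inv(A @ Sigma_x @ A.T + sigma2 * np.eye(m))

        # Decoding is one matrix-vector product per block, hence "real-time".
        x = rng.multivariate_normal(np.zeros(n), Sigma_x)
        y = A @ x + np.sqrt(sigma2) * rng.standard_normal(m)
        x_hat = W @ y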

  1. Computational microscopy: illumination coding and nonlinear optimization enables gigapixel 3D phase imaging

    NASA Astrophysics Data System (ADS)

    Tian, Lei; Waller, Laura

    2017-05-01

    Microscope lenses can have either a large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a second-order quasi-Newton method, combined with a novel phase initialization scheme. To further extend the gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scattering. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local-minimum solutions, we further develop a novel physical model-based initialization technique that accounts for both geometric-optic and first-order phase effects. The result is robust reconstruction of gigapixel 3D phase images having both wide FOV and super resolution in all three dimensions. Experimental results from an LED array microscope are demonstrated.

  2. Stochastic Surface Mesh Reconstruction

    NASA Astrophysics Data System (ADS)

    Ozendi, M.; Akca, D.; Topan, H.

    2018-05-01

    A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step develops an anisotropic point error model, which is capable of computing the theoretical precision of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure, which takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the least errors are used in the surface triangulation; the remaining ones are automatically discarded.
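
    A minimal sketch of the second step might look as follows. The exact definition of the paper's semi-diagonal quality measure, the quantile cut-off and the random covariances are assumptions made for illustration only.

        import numpy as np
        from scipy.spatial import Delaunay

        def filter_by_error_ellipsoid(points, covariances, quantile=0.5):
            """Keep only the points whose error ellipsoid is smallest.

            points      : (N, 3) TLS point coordinates
            covariances : (N, 3, 3) per-point covariance matrices
            The quality measure here is the norm of the three ellipsoid
            semi-axes (square roots of the covariance eigenvalues), one
            plausible reading of the "semi-diagonal axis length".
            """
            eigvals = np.linalg.eigvalsh(covariances)         # (N, 3), ascending
            quality = np.sqrt(eigvals.sum(axis=1))            # semi-diagonal length
            keep = quality <= np.quantile(quality, quantile)  # least-error points
            return points[keep]

        # Toy usage: 2.5D triangulation of the retained points.
        rng = np.random.default_rng(0)
        pts = rng.random((500, 3))
        A = rng.random((500, 3, 3)) * 0.01
        covs = np.einsum('nij,nkj->nik', A, A)                # SPD covariances A @ A.T
        good = filter_by_error_ellipsoid(pts, covs, quantile=0.5)
        mesh = Delaunay(good[:, :2])                          # triangles in mesh.simplices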

  3. The one hundredth year of Rudolf Wolf's death: Do we have the correct reconstruction of solar activity?

    NASA Technical Reports Server (NTRS)

    Hoyt, Douglas V.; Schatten, Kenneth H.; Nesmes-Ribes, Elizabeth

    1994-01-01

    In the one hundred years since Wolf died, little effort has gone into research on whether improved reconstructions of sunspot numbers can be made. We have gathered more than 349,000 observations of daily sunspot group counts from more than 350 observers active from 1610 to 1993. Based upon group counts alone, it is possible to make an objective and homogeneous reconstruction of sunspot numbers. From our study, it appears that the Sun has steadily increased in activity since 1700, with the exception of a brief decrease during the Dalton Minimum (1795-1823). The significant results here are the greater depth of the Dalton Minimum, the generally lower activity throughout the 1700s, and the gradual rise in activity from the Maunder Minimum to the present day. This solar activity reconstruction is quite similar to those Wolf published before 1868, rather than to the revised reconstructions he published after 1873, which used geomagnetic fluctuations.

  4. A recursive algorithm for the three-dimensional imaging of brain electric activity: Shrinking LORETA-FOCUSS.

    PubMed

    Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai

    2004-10-01

    Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known to be ill-conditioned. In order to yield a unique solution, weighted minimum-norm least-squares (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computational load and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four inverse methods (standard weighted minimum norm, L1-norm, LORETA-FOCUSS, and Shrinking LORETA-FOCUSS) is presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors than the other methods.
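
    The weighted minimum-norm building block shared by LORETA, FOCUSS and the proposed recursion follows directly from its closed form. The lead field, weights and regularization below are illustrative stand-ins, not the paper's head model.

        import numpy as np

        def weighted_mnls(G, m, W, lam=1e-2):
            """Weighted minimum-norm least-squares inverse (sketch).

            G   : (n_sensors, n_sources) lead-field (gain) matrix
            m   : (n_sensors,) scalp measurements
            W   : (n_sources, n_sources) source weighting matrix
            lam : Tikhonov regularization parameter
            Returns J = W G^T (G W G^T + lam I)^(-1) m.
            """
            K = G @ W @ G.T + lam * np.eye(G.shape[0])
            return W @ G.T @ np.linalg.solve(K, m)

        # In a FOCUSS-like recursion, W would be rebuilt from the previous
        # estimate, e.g. W = np.diag(J_prev**2), shrinking the solution space.
        rng = np.random.default_rng(1)
        G = rng.standard_normal((32, 500))
        J_true = np.zeros(500); J_true[[50, 200]] = [1.0, -0.5]
        m = G @ J_true + 0.01 * rng.standard_normal(32)
        J = weighted_mnls(G, m, np.eye(500))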

  5. Robustness of speckle imaging techniques applied to horizontal imaging scenarios

    NASA Astrophysics Data System (ADS)

    Bos, Jeremy P.

    Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction to improve the quality of imagery available to operators. To be effective, these systems must operate over significant variations in turbulence conditions while also being subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with a minimum level of computational complexity. Speckle imaging methods are one of a variety of methods recently proposed for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. This performance evaluation is made possible using a novel technique for simulating anisoplanatic image formation. I find that incorporating as few as 15 image frames and 4 estimates of the object phase per reconstructed frame provides an average 45% reduction in Mean Squared Error (MSE) and a 68% reduction in the deviation of MSE. In addition, the Knox-Thompson phase recovery method is demonstrated to produce images in half the time required by the bispectrum. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate reconstruction quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.

  6. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters, contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and demonstrates the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.

  7. De-noising of 3D multiple-coil MR images using modified LMMSE estimator.

    PubMed

    Yaghoobi, Nima; Hasanzadeh, Reza P R

    2018-06-20

    De-noising is a crucial topic in Magnetic Resonance Imaging (MRI), which focuses on suppressing noise while losing as little Magnetic Resonance (MR) image information as possible and preserving details. Nowadays multiple-coil MRI systems are preferred to single-coil ones due to their acceleration of the imaging process. Because the noise models of single-coil and multiple-coil MRI systems are different, de-noising methods that are mostly adapted to single-coil MRI systems do not work appropriately with multiple-coil ones. The noise in single-coil MRI systems is Rician, while in multiple-coil systems (if no subsampling occurs in k-space or GRAPPA reconstruction is performed in the coils) it obeys a noncentral chi (nc-χ) distribution. In this paper, a new filtering method based on the Linear Minimum Mean Square Error (LMMSE) estimator is proposed for multiple-coil MR images corrupted by nc-χ noise. In the presented method, to achieve an optimal similarity selection of voxels, the Bayesian Mean Square Error (BMSE) criterion is used and proved for the nc-χ noise model, and a nonlocal voxel selection methodology is proposed for the nc-χ distribution. The results illustrate robust and accurate performance compared to the related state-of-the-art methods, on both ideal nc-χ images and GRAPPA-reconstructed ones. Copyright © 2018. Published by Elsevier Inc.
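
    For orientation, a classical closed-form LMMSE magnitude filter for nc-χ data, in the style of earlier literature rather than the authors' modified BMSE-based voxel selection, can be written from local sample moments. The window size and the synthetic example are assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lmmse_ncchi(M, sigma, L=4, win=7):
            """Closed-form LMMSE filter for magnitude MRI with nc-chi noise
            (sketch; not the paper's modified estimator). M: magnitude image,
            sigma: per-coil noise std, L: number of coils, win: moment window."""
            M2, M4 = M ** 2, M ** 4
            m2 = uniform_filter(M2, win)             # local E[M^2]
            m4 = uniform_filter(M4, win)             # local E[M^4]
            var_M2 = np.maximum(m4 - m2 ** 2, 1e-12)
            # Gain K = 1 - Var(noise part of M^2) / Var(M^2); for nc-chi,
            # Var(M^2 | signal) = 4 sigma^2 (E[M^2] - L sigma^2).
            K = np.clip(1.0 - 4.0 * sigma ** 2 * (m2 - L * sigma ** 2) / var_M2, 0.0, 1.0)
            A2 = np.maximum(m2 - 2.0 * L * sigma ** 2 + K * (M2 - m2), 0.0)
            return np.sqrt(A2)

        # Synthetic nc-chi data: 2L Gaussian channels carrying signal A.
        A = np.zeros((64, 64)); A[16:48, 16:48] = 100.0
        L, sigma = 4, 5.0
        noisy = np.sqrt(sum((A / np.sqrt(2 * L) + sigma * np.random.randn(64, 64)) ** 2
                            for _ in range(2 * L)))
        den = lmmse_ncchi(noisy, sigma, L=L)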

  8. Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.

    PubMed

    Li, Jielin; Hassebrook, Laurence G; Guan, Chun

    2003-01-01

    Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency at which the depth-error variance due to intensity noise is at its minimum.
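
    The unwrapping step whose failure modes are analyzed above is itself simple. A sketch under generic assumptions (wrapped phases in [-pi, pi), known fringe frequencies):

        import numpy as np

        def two_frequency_unwrap(phi_low, phi_high, f_low, f_high):
            """Unwrap the accurate high-frequency phase using the nonambiguous
            low-frequency estimate (standard two-frequency PMP step)."""
            # Predict the unwrapped high-frequency phase from the low-frequency
            # one, then round to the nearest 2*pi fringe order.
            phi_pred = phi_low * (f_high / f_low)
            k = np.round((phi_pred - phi_high) / (2 * np.pi))
            return phi_high + 2 * np.pi * k

        # Trade-off captured by the paper: if f_high/f_low is too large, noise
        # in phi_low shifts phi_pred by more than pi and k lands on the wrong
        # fringe order (ambiguity error); if too small, phase noise maps
        # directly into large depth noise. The analysis finds the optimum.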

  9. Online geometric calibration of cone-beam computed tomography for arbitrary imaging objects.

    PubMed

    Meng, Yuanzheng; Gong, Hui; Yang, Xiaoquan

    2013-02-01

    A novel online method based on the symmetry property of the sum of projections (SOP) is proposed to obtain the geometric parameters in cone-beam computed tomography (CBCT). This method requires no calibration phantom and can be used in circular trajectory CBCT with arbitrary cone angles. An objective function is deduced to illustrate the dependence of the symmetry of SOP on geometric parameters, which will converge to its minimum when the geometric parameters achieve their true values. Thus, by minimizing the objective function, we can obtain the geometric parameters for image reconstruction. To validate this method, numerical phantom studies with different noise levels are simulated. The results show that our method is insensitive to the noise and can determine the skew (in-plane rotation angle of the detector), the roll (rotation angle around the projection of the rotation axis on the detector), and the rotation axis with high accuracy, while the mid-plane and source-to-detector distance will be obtained with slightly lower accuracy. However, our simulation studies validate that the errors of the latter two parameters brought by our method will hardly degrade the quality of reconstructed images. The small animal studies show that our method is able to deal with arbitrary imaging objects. In addition, the results of the reconstructed images in different slices demonstrate that we have achieved comparable image quality in the reconstructions as some offline methods.

  10. Accurate reconstruction of the optical parameter distribution in participating medium based on the frequency-domain radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming

    2016-12-01

    Reconstructing the distribution of optical parameters in a participating medium based on the frequency-domain radiative transfer equation (FD-RTE), so as to probe the internal structure of the medium, is investigated in the present work. The forward model of the FD-RTE is solved via the finite volume method (FVM). A regularization term formulated with the generalized Gaussian Markov random field model is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and to improve convergence efficiency. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on the FD-RTE can obtain accurate distributions of the absorption and scattering coefficients. The reconstructed images of the scattering coefficient have smaller errors than those of the absorption coefficient, which indicates that the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).

  11. New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.

    PubMed

    Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María

    2017-08-01

    In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. This methodology is based on polynomial regression equations and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. In order to perform the validation process, simulated worn modern human molars were employed. The errors associated with the measurements were also estimated by applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables, in comparison with their real values, is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. This error significantly improves on the results of other methodologies, both in interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. This new methodology can be easily exported to other modern human populations, the human fossil record and forensic sciences. © 2017 Wiley Periodicals, Inc.
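
    The reconstruction principle (fit a regression to the preserved profile and extrapolate across the worn region) can be sketched as below. The cubic degree and the variable names are illustrative assumptions, not the published protoconid equations.

        import numpy as np

        def reconstruct_worn_profile(s, h, worn_mask, degree=3):
            """Reconstruct a worn 2-D enamel profile by polynomial regression.

            s         : arc-length (or x) coordinate along the enamel profile
            h         : measured profile height, unreliable where worn
            worn_mask : boolean array, True where enamel is worn away
            """
            coeffs = np.polyfit(s[~worn_mask], h[~worn_mask], degree)  # unworn fit
            h_rec = h.copy()
            h_rec[worn_mask] = np.polyval(coeffs, s[worn_mask])        # extrapolate
            return h_rec

        # Validation as in the paper: simulate wear on a complete molar profile,
        # reconstruct, and report the percentage error of a variable such as
        # cuspal enamel thickness against its known true value.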

  12. Minimum-error quantum distinguishability bounds from matrix monotone functions: A comment on 'Two-sided estimates of minimum-error distinguishability of mixed quantum states via generalized Holevo-Curlander bounds' [J. Math. Phys. 50, 032106 (2009)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, Jon

    2009-06-15

    Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.

  13. Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.

    PubMed

    Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth

    2016-06-01

    Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.

  14. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data.

    PubMed

    Kotasidis, F A; Mehranian, A; Zaidi, H

    2016-05-07

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames tend to propagate spatially or temporally. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions can propagate spatially. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging, on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used; however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames can propagate either temporally, to other frames during the kinetic modelling step, or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever-improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.

  15. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data

    NASA Astrophysics Data System (ADS)

    Kotasidis, F. A.; Mehranian, A.; Zaidi, H.

    2016-05-01

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames tend to propagate spatially or temporally. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions can propagate spatially. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging, on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used; however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames can propagate either temporally, to other frames during the kinetic modelling step, or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever-improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.

  16. Data consistency criterion for selecting parameters for k-space-based reconstruction in parallel imaging.

    PubMed

    Nana, Roger; Hu, Xiaoping

    2010-01-01

    k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.

  17. Implications of Liebig’s law of the minimum for tree-ring reconstructions of climate

    NASA Astrophysics Data System (ADS)

    Stine, A. R.; Huybers, P.

    2017-11-01

    A basic principle of ecology, known as Liebig's Law of the Minimum, is that plant growth reflects the strongest limiting environmental factor. This principle implies that a limiting environmental factor can be inferred from historical growth and, in dendrochronology, such reconstruction is generally achieved by averaging collections of standardized tree-ring records. Averaging is optimal if growth reflects a single limiting factor plus noise, but not if growth also reflects locally variable stresses that intermittently limit growth. In this study a collection of Arctic tree-ring records is shown to follow scaling relationships that are inconsistent with the signal-plus-noise model of tree growth but consistent with Liebig's Law acting at the local level. Also consistent with law-of-the-minimum behavior is that reconstructions based on the least-stressed trees in a given year follow variations in temperature better than typical approaches in which all tree-ring records are averaged. Improvements in reconstruction skill occur across all frequencies, with the greatest increase at the lowest frequencies. More comprehensive statistical-ecological models of tree growth may offer further improvement in reconstruction skill.
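
    A toy simulation makes the argument concrete: under min-limited growth, the per-year maximum over trees (the least-stressed trees) tracks the common factor better than the conventional mean chronology. All numbers below are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        years, trees = 500, 60
        T = rng.standard_normal(years)                 # common limiting factor
        S = rng.standard_normal((trees, years)) + 0.5  # locally variable stresses

        # Law-of-the-minimum growth: each ring records the *strongest* limit.
        growth = np.minimum(T[None, :], S)

        chron_mean = growth.mean(axis=0)               # conventional averaging
        chron_max = growth.max(axis=0)                 # least-stressed tree per year

        print("corr(mean chronology, T) =", np.corrcoef(chron_mean, T)[0, 1])
        print("corr(max  chronology, T) =", np.corrcoef(chron_max, T)[0, 1])
        # The per-year maximum tracks T more faithfully, because trees whose
        # local stress binds in a given year carry no T signal that year.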

  18. Spectral CT of the extremities with a silicon strip photon counting detector

    NASA Astrophysics Data System (ADS)

    Sisniega, A.; Zbijewski, W.; Stayman, J. W.; Xu, J.; Taguchi, K.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Photon counting x-ray detectors (PCXDs) are an important emerging technology for spectral imaging and material differentiation with numerous potential applications in diagnostic imaging. We report the development of a Si-strip PCXD system originally designed for mammography, with potential application to spectral CT of musculoskeletal extremities, including the challenges associated with sparse sampling, spectral calibration, and optimization for higher-energy x-ray beams. Methods: A bench-top CT system was developed incorporating a Si-strip PCXD, a fixed-anode x-ray source, and rotational and translational motions to execute complex acquisition trajectories. Trajectories involving rotation and translation combined with iterative reconstruction were investigated, including single and multiple axial scans and longitudinal helical scans. The system was calibrated to provide accurate spectral separation in dual-energy three-material decomposition of soft tissue, bone, and iodine. Image quality and decomposition accuracy were assessed in experiments using a phantom with pairs of bone and iodine inserts (3, 5, 15 and 20 mm) and an anthropomorphic wrist. Results: The designed trajectories improved the minimum sampling of voxels from 56% to 75%. Use of iterative reconstruction (viz., penalized likelihood with edge-preserving regularization) in combination with such trajectories resulted in a very low level of artifacts in images of the wrist. For large bone or iodine inserts (>5 mm diameter), the error in the estimated material concentration was <16% for bone (50 mg/mL) and <8% for iodine (5 mg/mL) with strong regularization. For smaller inserts, errors of 20-40% were observed, motivating improved methods for spectral calibration and optimization of the edge-preserving regularizer. Conclusion: Use of PCXDs for three-material decomposition in joint imaging proved feasible through a combination of rotation-translation acquisition trajectories and iterative reconstruction with optimized regularization.

  19. Accelerated Compressed Sensing Based CT Image Reconstruction.

    PubMed

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier-based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  20. Accelerated Compressed Sensing Based CT Image Reconstruction

    PubMed Central

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier-based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200

  1. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix is computed using visual masking by luminance and contrast techniques and an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
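
    The quantization step the patent builds on is standard transform coding. A minimal sketch with an assumed fixed quantization matrix follows; the patented part, adapting Q via luminance/contrast masking and error pooling, is not reproduced here.

        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_block(block, Q):
            """Quantize one 8x8 image block in the DCT domain (sketch).
            Q is the quantization matrix; here it is just an assumed input."""
            c = dctn(block, norm='ortho')        # 2-D DCT of the block
            return np.round(c / Q)               # quantization (the lossy step)

        def decompress_block(q, Q):
            return idctn(q * Q, norm='ortho')    # dequantize and invert the DCT

        # Usage on a random block with a flat quantization matrix.
        block = np.random.rand(8, 8) * 255
        Q = np.full((8, 8), 16.0)
        rec = decompress_block(compress_block(block, Q), Q)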

  2. Improved electromagnetic tracking for catheter path reconstruction with application in high-dose-rate brachytherapy.

    PubMed

    Lugez, Elodie; Sadjadi, Hossein; Joshi, Chandra P; Akl, Selim G; Fichtinger, Gabor

    2017-04-01

    Electromagnetic (EM) catheter tracking has recently been introduced in order to enable prompt and uncomplicated reconstruction of catheter paths in various clinical interventions. However, EM tracking is prone to measurement errors which can compromise the outcome of the procedure. Minimizing catheter tracking errors is therefore paramount to improve the path reconstruction accuracy. An extended Kalman filter (EKF) was employed to combine the nonlinear kinematic model of an EM sensor inside the catheter, with both its position and orientation measurements. The formulation of the kinematic model was based on the nonholonomic motion constraints of the EM sensor inside the catheter. Experimental verification was carried out in a clinical HDR suite. Ten catheters were inserted with mean curvatures varying from 0 to [Formula: see text] in a phantom. A miniaturized Ascension (Burlington, Vermont, USA) trakSTAR EM sensor (model 55) was threaded within each catheter at various speeds ranging from 7.4 to [Formula: see text]. The nonholonomic EKF was applied on the tracking data in order to statistically improve the EM tracking accuracy. A sample reconstruction error was defined at each point as the Euclidean distance between the estimated EM measurement and its corresponding ground truth. A path reconstruction accuracy was defined as the root mean square of the sample reconstruction errors, while the path reconstruction precision was defined as the standard deviation of these sample reconstruction errors. The impacts of sensor velocity and path curvature on the nonholonomic EKF method were determined. Finally, the nonholonomic EKF catheter path reconstructions were compared with the reconstructions provided by the manufacturer's filters under default settings, namely the AC wide notch and the DC adaptive filter. With a path reconstruction accuracy of 1.9 mm, the nonholonomic EKF surpassed the performance of the manufacturer's filters (2.4 mm) by 21% and the raw EM measurements (3.5 mm) by 46%. Similarly, with a path reconstruction precision of 0.8 mm, the nonholonomic EKF surpassed the performance of the manufacturer's filters (1.0 mm) by 20% and the raw EM measurements (1.7 mm) by 53%. Path reconstruction accuracies did not follow an apparent trend when varying the path curvature and sensor velocity; instead, reconstruction accuracies were predominantly impacted by the position of the EM field transmitter ([Formula: see text]). The advanced nonholonomic EKF is effective in reducing EM measurement errors when reconstructing catheter paths, is robust to path curvature and sensor speed, and runs in real time. Our approach is promising for a plurality of clinical procedures requiring catheter reconstructions, such as cardiovascular interventions, pulmonary applications (Bender et al. in medical image computing and computer-assisted intervention-MICCAI 99. Springer, Berlin, pp 981-989, 1999), and brachytherapy.

  3. Astigmatism error modification for absolute shape reconstruction using Fourier transform method

    NASA Astrophysics Data System (ADS)

    He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun

    2014-12-01

    A method is proposed to modify astigmatism errors in the absolute shape reconstruction of an optical flat using the Fourier transform method. If a transmission flat and a reflection flat are used in an absolute test, two translation measurements allow the absolute shapes to be obtained by making use of the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot guarantee that the test and reference flats remain rigidly parallel after the translations, a tilt error exists in the obtained differential data, which causes power and astigmatism errors in the reconstructed shapes. In order to modify the astigmatism errors, a rotation measurement is added. Based on the rotation invariance of the form of the Zernike polynomial in a circular domain, the astigmatism terms are calculated by solving polynomial-coefficient equations related to the rotation differential data, and subsequently the astigmatism terms, including the error, are modified. Computer simulation proves the validity of the proposed method.
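
    The modification step amounts to a least-squares fit of the two Zernike astigmatism terms over the circular aperture, followed by subtraction. A sketch under assumed conventions (unit-disk domain, unnormalized Zernike basis), not the paper's rotation-difference equations:

        import numpy as np

        def remove_astigmatism(phase, mask):
            """Fit and subtract the Zernike astigmatism terms on a disk.

            phase : 2-D surface/wavefront map on a [-1, 1]^2 grid
            mask  : boolean array marking the unit-disk domain
            """
            n = phase.shape[0]
            yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
            r2, theta = xx ** 2 + yy ** 2, np.arctan2(yy, xx)
            # Astigmatism terms: r^2 cos(2*theta) and r^2 sin(2*theta).
            basis = np.stack([(r2 * np.cos(2 * theta))[mask],
                              (r2 * np.sin(2 * theta))[mask]], axis=1)
            coef, *_ = np.linalg.lstsq(basis, phase[mask], rcond=None)
            corrected = phase.copy()
            corrected[mask] -= basis @ coef
            return corrected, coef

        # Usage on a synthetic map with injected astigmatism (x^2 - y^2).
        n = 64
        yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        mask = xx ** 2 + yy ** 2 <= 1.0
        phase = 0.3 * (xx ** 2 - yy ** 2) + 0.01 * np.random.randn(n, n)
        corrected, coef = remove_astigmatism(phase, mask)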

  4. Accelerating Advanced MRI Reconstructions on GPUs

    PubMed Central

    Stone, S.S.; Haldar, J.P.; Tsao, S.C.; Hwu, W.-m.W.; Sutton, B.P.; Liang, Z.-P.

    2008-01-01

    Computational acceleration on graphics processing units (GPUs) can make advanced magnetic resonance imaging (MRI) reconstruction algorithms attractive in clinical settings, thereby improving the quality of MR images across a broad spectrum of applications. This paper describes the acceleration of such an algorithm on NVIDIA’s Quadro FX 5600. The reconstruction of a 3D image with 128³ voxels achieves up to 180 GFLOPS and requires just over one minute on the Quadro, while reconstruction on a quad-core CPU is twenty-one times slower. Furthermore, relative to the true image, the error exhibited by the advanced reconstruction is only 12%, while conventional reconstruction techniques incur error of 42%. PMID:21796230

  5. Accelerating Advanced MRI Reconstructions on GPUs.

    PubMed

    Stone, S S; Haldar, J P; Tsao, S C; Hwu, W-M W; Sutton, B P; Liang, Z-P

    2008-10-01

    Computational acceleration on graphics processing units (GPUs) can make advanced magnetic resonance imaging (MRI) reconstruction algorithms attractive in clinical settings, thereby improving the quality of MR images across a broad spectrum of applications. This paper describes the acceleration of such an algorithm on NVIDIA's Quadro FX 5600. The reconstruction of a 3D image with 128³ voxels achieves up to 180 GFLOPS and requires just over one minute on the Quadro, while reconstruction on a quad-core CPU is twenty-one times slower. Furthermore, relative to the true image, the error exhibited by the advanced reconstruction is only 12%, while conventional reconstruction techniques incur error of 42%.

  6. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.

  7. A novel multi-planar radiography method for three dimensional pose reconstruction of the patellofemoral and tibiofemoral joints after arthroplasty.

    PubMed

    Amiri, Shahram; Wilson, David R; Masri, Bassam A; Sharma, Gulshan; Anglin, Carolyn

    2011-06-03

    Determining the 3D pose of the patella after total knee arthroplasty is challenging. The commonly used single-plane fluoroscopy is prone to large errors in the clinically relevant mediolateral direction. A conventional fixed bi-planar setup is limited in the minimum angular distance between the imaging planes necessary for visualizing the patellar component, and requires a highly flexible setup to adjust for subject-specific geometries. As an alternative solution, this study investigated the use of a novel multi-planar imaging setup that consists of a C-arm tracked by an external optoelectronic tracking system, to acquire calibrated radiographs from multiple orientations. To determine the accuracies, a knee prosthesis was implanted on artificial bones and imaged in simulated 'Supine' and 'Weightbearing' configurations. The results were compared with measures from a coordinate measuring machine as the ground-truth reference. The weightbearing configuration was the preferred imaging direction, with RMS errors of 0.48 mm and 1.32° for mediolateral shift and tilt of the patella, respectively, the two most clinically relevant measures. The 'imaging accuracies' of the system, defined as the accuracies in 3D reconstruction of a cylindrical ball-bearing phantom (so as to avoid the influence of the shape and orientation of the imaged object), showed an order-of-magnitude (11.5 times) reduction in the out-of-plane RMS errors in comparison to single-plane fluoroscopy. With this new method, the complete 3D pose of the patellofemoral and tibiofemoral joints during quasi-static activities can be determined with up to an 8-fold (3.4 mm) improvement in out-of-plane accuracy compared to a conventional single-plane fluoroscopy setup. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Method and Apparatus for Powered Descent Guidance

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet (Inventor); Blackmore, James C. L. (Inventor); Scharf, Daniel P. (Inventor)

    2013-01-01

    A method and apparatus for landing a spacecraft whose thrusters have non-convex constraints is described. The method first computes a solution to a minimum-error landing problem with convexified constraints, then applies that solution to a minimum-fuel landing problem with convexified constraints. The result is a minimum-error, minimum-fuel solution that is also a feasible solution to the analogous system with non-convex thruster constraints.
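
    The two-stage prioritization can be sketched as a pair of convex programs on a discretized double integrator. This is an illustrative setup (2-D dynamics, Euler discretization, arbitrary numbers) written with the cvxpy modeling library, not the patented guidance law.

        import numpy as np
        import cvxpy as cp

        # Illustrative 2-D powered-descent setup (all numbers assumed).
        N, dt = 40, 1.0
        g = np.array([0.0, -3.71])                     # gravity
        r0, v0 = np.array([2000.0, 1500.0]), np.array([-40.0, -60.0])
        target = np.array([0.0, 0.0])
        umax = 10.0                                    # convex upper thrust bound

        r = cp.Variable((2, N + 1)); v = cp.Variable((2, N + 1)); u = cp.Variable((2, N))
        cons = [r[:, 0] == r0, v[:, 0] == v0, v[:, N] == 0, r[1, N] == 0]
        for k in range(N):                             # Euler-discretized dynamics
            cons += [v[:, k + 1] == v[:, k] + dt * (u[:, k] + g),
                     r[:, k + 1] == r[:, k] + dt * v[:, k]]
        # Convexified thrust set: only ||u|| <= umax is kept; the non-convex
        # lower thrust bound of the real engine is what convexification removes.
        cons += [cp.norm(u[:, k]) <= umax for k in range(N)]

        # Stage 1: minimum landing error.
        err = cp.norm(r[:, N] - target)
        cp.Problem(cp.Minimize(err), cons).solve()
        d_star = err.value

        # Stage 2: minimum fuel among trajectories achieving that error.
        fuel = dt * cp.sum(cp.norm(u, 2, axis=0))
        cp.Problem(cp.Minimize(fuel), cons + [err <= d_star + 1e-6]).solve()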

  9. Wind adaptive modeling of transmission lines using minimum description length

    NASA Astrophysics Data System (ADS)

    Jaw, Yoonseok; Sohn, Gunho

    2017-03-01

    Transmission lines are moving objects whose positions are dynamically affected by wind-induced conductor motion while they are acquired by airborne laser scanners. This wind effect results in a noisy distribution of laser points, which often hinders accurate representation of transmission lines and thus leads to various types of modeling errors. This paper presents a new method for complete 3D transmission line model reconstruction in the framework of inner-span and across-span analysis. A highlight of the proposed method is that it is capable of indirectly estimating the noise scale that corrupts the quality of laser observations under different wind speeds, through a linear regression analysis. In the inner-span analysis, individual transmission line models of each span are evaluated based on Minimum Description Length theory, and erroneous transmission line segments are subsequently replaced by precise models with the wind-adaptive noise scale estimated. In the subsequent across-span analysis, detecting the precise start and end positions of the transmission line models, known as the Points of Attachment, is the key issue for correcting partial modeling errors as well as refining the models. Finally, geometric and topological completion of the transmission line models is achieved over the entire network. A performance evaluation was conducted over 138.5 km of corridor data. In a modest wind condition, the results demonstrate that the proposed method improves non-wind-adaptive initial models, which have an average success rate of 48%, to complete transmission line models with success rates between 85% and 99.5%, with root-mean-square positional accuracies of 9.55 cm for the transmission line models and 28 cm for the Points of Attachment.
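
    The MDL selection idea, choosing the model complexity that the (wind-noisy) data can support, can be sketched with a generic two-part code length. The catenary-like polynomial family and all constants below are assumptions, not the paper's formulation.

        import numpy as np

        def mdl_score(y, y_fit, k):
            """Two-part MDL score: Gaussian data-coding cost plus a
            (k/2) log n parameter-coding cost."""
            n = len(y)
            rss = np.sum((y - y_fit) ** 2)
            return 0.5 * n * np.log(rss / n + 1e-300) + 0.5 * k * np.log(n)

        def select_span_model(x, y, max_degree=5):
            """Pick the polynomial degree (a catenary is near-quadratic) that
            minimizes MDL; noisier (windier) data supports fewer parameters."""
            best = None
            for d in range(1, max_degree + 1):
                y_fit = np.polyval(np.polyfit(x, y, d), x)
                score = mdl_score(y, y_fit, d + 1)
                if best is None or score < best[0]:
                    best = (score, d)
            return best[1]

        # Toy usage: a sagging span sampled with wind-induced noise.
        x = np.linspace(0, 100, 200)
        y = 0.002 * (x - 50) ** 2 + 0.5 * np.random.randn(200)
        print("selected degree:", select_span_model(x, y))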

  10. Integrating prior information into microwave tomography part 2: Impact of errors in prior information on microwave tomography image quality.

    PubMed

    Kurrant, Douglas; Fear, Elise; Baran, Anastasia; LoVetri, Joe

    2017-12-01

    The authors have developed a method to combine a patient-specific map of tissue structure and average dielectric properties with microwave tomography. The patient-specific map is acquired with radar-based techniques and serves as prior information for microwave tomography. The impact that the degree of structural detail included in this prior information has on image quality was reported in a previous investigation. The aim of the present study is to extend this previous work by identifying and quantifying the impact that errors in the prior information have on image quality, including the reconstruction of internal structures and lesions embedded in fibroglandular tissue. This study also extends the work of others reported in literature by emulating a clinical setting with a set of experiments that incorporate heterogeneity into both the breast interior and glandular region, as well as prior information related to both fat and glandular structures. Patient-specific structural information is acquired using radar-based methods that form a regional map of the breast. Errors are introduced to create a discrepancy in the geometry and electrical properties between the regional map and the model used to generate the data. This permits the impact that errors in the prior information have on image quality to be evaluated. Image quality is quantitatively assessed by measuring the ability of the algorithm to reconstruct both internal structures and lesions embedded in fibroglandular tissue. The study is conducted using both 2D and 3D numerical breast models constructed from MRI scans. The reconstruction results demonstrate robustness of the method relative to errors in the dielectric properties of the background regional map, and to misalignment errors. These errors do not significantly influence the reconstruction accuracy of the underlying structures, or the ability of the algorithm to reconstruct malignant tissue. Although misalignment errors do not significantly impact the quality of the reconstructed fat and glandular structures for the 3D scenarios, the dielectric properties are reconstructed less accurately within the glandular structure for these cases relative to the 2D cases. However, general agreement between the 2D and 3D results was found. A key contribution of this paper is the detailed analysis of the impact of prior information errors on the reconstruction accuracy and ability to detect tumors. The results support the utility of acquiring patient-specific information with radar-based techniques and incorporating this information into MWT. The method is robust to errors in the dielectric properties of the background regional map, and to misalignment errors. Completion of this analysis is an important step toward developing the method into a practical diagnostic tool. © 2017 American Association of Physicists in Medicine.

  11. Optimal multi-type sensor placement for response and excitation reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, C. D.; Xu, Y. L.

    2016-01-01

    The need to perform dynamic response reconstruction always arises, as the measurement of structural response is often limited to a few locations, especially for a large civil structure. Besides, it is usually very difficult, if not impossible, to measure external excitations under the operating condition of a structure. This study presents an algorithm for the optimal placement of multi-type sensors, including strain gauges, displacement transducers and accelerometers, for the best reconstruction of responses of key structural components where no sensors are installed and, at the same time, the best estimation of the external excitations acting on the structure. The algorithm is developed in the framework of the Kalman filter with unknown excitation, in which minimum-variance unbiased estimates of the generalized state of the structure and of the external excitations are obtained from limited sensor measurements. The structural responses at key locations without sensors can then be reconstructed from the estimated generalized state and excitation. The asymptotic stability feature of the filter is utilized for optimal sensor placement. The number and spatial locations of the multi-type sensors are determined by sequentially adding the sensor that yields the maximal reduction in the estimation error of the reconstructed responses. For a given number of modes used in response reconstruction and given locations of the external excitations, the optimal multi-sensor placement achieved by the proposed method is independent of the type and time evolution of the external excitation. A simply supported overhanging steel beam under multiple types of excitation is numerically studied to demonstrate the feasibility and superiority of the proposed method, and experimental work is then carried out to verify its effectiveness.
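
    A greedy variant of such placement, adding whichever candidate sensor most shrinks the estimation-error covariance, can be sketched for a single linear-Gaussian snapshot. This is a stand-in for the paper's Kalman-filter-with-unknown-input analysis; all matrices are assumed inputs.

        import numpy as np

        def greedy_placement(H_candidates, Q, R_diag, n_pick):
            """Greedily add the sensor that most reduces the trace of the
            posterior covariance of the (generalized) state.

            H_candidates : (n_cand, n_state) one measurement row per sensor
            Q            : (n_state, n_state) prior state covariance
            R_diag       : (n_cand,) noise variance of each candidate sensor
            """
            chosen, P = [], Q.copy()
            for _ in range(n_pick):
                best, best_P = None, None
                for i in range(len(H_candidates)):
                    if i in chosen:
                        continue
                    h = H_candidates[i:i + 1]             # (1, n_state)
                    S = h @ P @ h.T + R_diag[i]           # innovation variance
                    K = P @ h.T / S                       # Kalman gain
                    P_new = P - K @ h @ P                 # posterior covariance
                    if best is None or np.trace(P_new) < np.trace(best_P):
                        best, best_P = i, P_new
                chosen.append(best)
                P = best_P
            return chosen, P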

  12. Alignment Solution for CT Image Reconstruction using Fixed Point and Virtual Rotation Axis.

    PubMed

    Jun, Kyungtaek; Yoon, Seokhwan

    2017-01-25

    Since X-ray tomography is now widely adopted in many different areas, it becomes more crucial to find a robust routine for handling tomographic data to obtain better-quality reconstructions. Though several techniques exist, a more automated method for removing the errors that hinder clearer image reconstruction would be helpful. Here, we propose an alternative method and a new algorithm using the sinogram and the fixed point. An advanced physical concept, the Center of Attenuation (CA), is also introduced to show how this fixed point applies to the reconstruction of images having the errors we categorize in this article. Our technique showed promising performance in restoring images having translation and vertical-tilt errors.
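
    A much cruder relative of the sinogram-based idea, estimating a center-of-rotation offset from the mirror symmetry of opposed parallel-beam projections, illustrates the translation error class being corrected. This is an assumption-laden sketch, not the authors' fixed-point/Center-of-Attenuation algorithm.

        import numpy as np

        def cor_offset_from_symmetry(p0, p180):
            """Estimate a center-of-rotation offset (in pixels) from a pair of
            opposed parallel-beam projections, using p180(s) = p0(-s)."""
            a = p0 - p0.mean()
            b = p180[::-1] - p180.mean()
            xcorr = np.correlate(a, b, mode='full')   # dense cross-correlation
            shift = np.argmax(xcorr) - (len(p0) - 1)  # lag of best alignment
            return shift / 2.0                        # axis moved half the lag

        # The estimated offset can then be used to translate every sinogram row
        # before filtered back-projection, removing the translation error.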

  13. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix is computed using visual masking by luminance and contrast techniques, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  14. M-AMST: an automatic 3D neuron tracing method based on mean shift and adapted minimum spanning tree.

    PubMed

    Wan, Zhijiang; He, Yishan; Hao, Ming; Yang, Jian; Zhong, Ning

    2017-03-29

    Understanding the working mechanism of the brain is one of the grandest challenges for modern science. Toward this end, the BigNeuron project was launched to gather a worldwide community to establish a big data resource and a set of state-of-the-art single-neuron reconstruction algorithms. Many groups contributed their own algorithms to the project, including our mean shift and minimum spanning tree (M-MST) method. Although M-MST is intuitive and easy to implement, the MST considers only the spatial information of a single neuron and ignores its shape information, which might lead to less precise connections between some neuron segments. In this paper, we propose an improved algorithm, namely M-AMST, in which a rotating-sphere model based on coordinate transformation is used to improve the weight-calculation method in M-MST. Two experiments are designed to illustrate the effect of the adapted minimum spanning tree algorithm and the adaptability of M-AMST to reconstructing a variety of neuron image datasets, respectively. In experiment 1, taking the reconstruction of APP2 as reference, we produce four difference scores (entire structure average (ESA), different structure average (DSA), percentage of different structure (PDS) and max distance of neurons' nodes (MDNN)) by comparing the neuron reconstructions of APP2 and the other five competing algorithms. The result shows that M-AMST gets lower difference scores than M-MST in ESA, PDS and MDNN. Meanwhile, M-AMST is better than N-MST in ESA and MDNN. This indicates that utilizing the adapted minimum spanning tree algorithm, which takes the shape information of the neuron into account, can achieve better neuron reconstructions. In experiment 2, 7 neuron image datasets are reconstructed and the four difference scores are calculated by comparing the gold-standard reconstruction with the reconstructions produced by 6 competing algorithms. Comparing the four difference scores of M-AMST and the other 5 algorithms, we conclude that M-AMST achieves the best difference score in 3 datasets and the second-best in the other 2. We develop a pathway extraction method using a rotating-sphere model based on coordinate transformation to improve the weight-calculation approach in MST. The experimental results show that M-AMST, utilizing the adapted minimum spanning tree algorithm that takes the shape information of the neuron into account, achieves better neuron reconstructions. Moreover, M-AMST is able to produce good neuron reconstructions across a variety of image datasets.
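
    The baseline MST linkage that M-AMST re-weights can be sketched with standard library calls. The plain Euclidean edge weights below are the M-MST-style choice, without the rotating-sphere shape term.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree
        from scipy.spatial.distance import pdist, squareform

        def mst_trace(nodes):
            """Link candidate neuron nodes with a Euclidean-distance MST.

            nodes : (N, 3) candidate node coordinates (e.g., mean-shift centers)
            Returns the (parent, child) index pairs of the reconstruction tree.
            """
            D = squareform(pdist(nodes))          # dense pairwise distances
            T = minimum_spanning_tree(D).tocoo()  # sparse MST of the graph
            return list(zip(T.row, T.col))

        edges = mst_trace(np.random.rand(50, 3))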

  15. Automatic Recognition of Phonemes Using a Syntactic Processor for Error Correction.

    DTIC Science & Technology

    1980-12-01

    Thesis AFIT/GE/EE/80D-45, by Robert B. Taylor, 2Lt USAF. Approved for public release; distribution unlimited. The recoverable front matter lists contents on hypothesis testing, including the Bayes decision rule for minimum error, the Bayes decision rule for minimum risk, and the minimax test.

  16. Flip-avoiding interpolating surface registration for skull reconstruction.

    PubMed

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

  17. The effects of center of rotation errors on cardiac SPECT imaging

    NASA Astrophysics Data System (ADS)

    Bai, Chuanyong; Shao, Ling; Ye, Jinghan; Durbin, M.

    2003-10-01

    In SPECT imaging, center of rotation (COR) errors lead to the misalignment of projection data and can potentially degrade the quality of the reconstructed images. In this work, we study the effects of COR errors on cardiac SPECT imaging using simulation, point source, cardiac phantom, and patient studies. For the simulation studies, we generate projection data from a uniform MCAT phantom, first without modeling any physical effects (NPH) and then with modeling of the detector response effect (DR) alone. We then corrupt the projection data with simulated sinusoidal and step COR errors. For the other studies, we introduce sinusoidal COR errors into projection data acquired on SPECT systems. An OSEM algorithm is used for image reconstruction without detector response correction, but with nonuniform attenuation correction when needed. The simulation studies show that, when COR errors increase from 0 to 0.96 cm: 1) sinusoidal COR errors in the axial direction lead to intensity decreases in the inferoapical region; 2) step COR errors in the axial direction lead to intensity decreases in the distal anterior region, with the intensity decrease more severe in images reconstructed from projection data with NPH than with DR; and 3) the effects of COR errors in the transaxial direction appear insignificant. In the other studies, COR errors slightly degrade point-source resolution; COR errors of 0.64 cm or above introduce visible but insignificant nonuniformity in images of the uniform cardiac phantom; COR errors up to 0.96 cm in the transaxial direction affect the lesion-to-background contrast (LBC) insignificantly in images of the cardiac phantom with defects; and COR errors up to 0.64 cm in the axial direction only slightly decrease the LBC. For the patient studies with COR errors up to 0.96 cm, images have the same diagnostic/prognostic values as those without COR errors. This work suggests that COR errors of up to 0.64 cm are not likely to change the clinical applications of cardiac SPECT imaging when an iterative reconstruction algorithm without detector response correction is used.
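
    A rough sketch of how a sinusoidal COR error can be imposed on projection data for such simulation studies (parameter names and geometry are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import shift

    def apply_cor_error(projections, amplitude_cm, pixel_cm, axis=1):
        """Corrupt SPECT projections with a sinusoidal center-of-rotation error.
        projections: (n_views, n_axial, n_transaxial) array; each view is shifted
        along the chosen axis by amplitude*sin(view_angle), converted to pixels."""
        n_views = projections.shape[0]
        angles = np.linspace(0, 2 * np.pi, n_views, endpoint=False)
        out = np.empty_like(projections)
        for i, ang in enumerate(angles):
            vec = [0.0, 0.0]
            vec[axis - 1] = amplitude_cm * np.sin(ang) / pixel_cm  # axial (1) or transaxial (2)
            out[i] = shift(projections[i], vec, order=1, mode='nearest')
        return out
    ```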

  18. Integrated Sachs-Wolfe map reconstruction in the presence of systematic errors

    NASA Astrophysics Data System (ADS)

    Weaverdyck, Noah; Muir, Jessica; Huterer, Dragan

    2018-02-01

    The decay of gravitational potentials in the presence of dark energy leads to an additional, late-time contribution to anisotropies in the cosmic microwave background (CMB) at large angular scales. The imprint of this so-called integrated Sachs-Wolfe (ISW) effect to the CMB angular power spectrum has been detected and studied in detail, but reconstructing its spatial contributions to the CMB map, which would offer the tantalizing possibility of separating the early- from the late-time contributions to CMB temperature fluctuations, is more challenging. Here, we study the technique for reconstructing the ISW map based on information from galaxy surveys and focus in particular on how its accuracy is impacted by the presence of photometric calibration errors in input galaxy maps, which were previously found to be a dominant contaminant for ISW signal estimation. We find that both including tomographic information from a single survey and using data from multiple, complementary galaxy surveys improve the reconstruction by mitigating the impact of spurious power contributions from calibration errors. A high-fidelity reconstruction further requires one to account for the contribution of calibration errors to the observed galaxy power spectrum in the model used to construct the ISW estimator. We find that if the photometric calibration errors in galaxy surveys can be independently controlled at the level required to obtain unbiased dark energy constraints, then it is possible to reconstruct ISW maps with excellent accuracy using a combination of maps from two galaxy surveys with properties similar to Euclid and SPHEREx.

  19. Towards a novel look on low-frequency climate reconstructions

    NASA Astrophysics Data System (ADS)

    Kamenik, Christian; Goslar, Tomasz; Hicks, Sheila; Barnekow, Lena; Huusko, Antti

    2010-05-01

    Information on low-frequency (millennial to sub-centennial) climate change is often derived from sedimentary archives, such as peat profiles or lake sediments. Usually, these archives have non-annual and varying time resolution. Their dating is mainly based on radionuclides, which provide probabilistic age-depth relationships with complex error structures. Dating uncertainties impede the interpretation of sediment-based climate reconstructions. They complicate the calculation of time-dependent rates. In most cases, they make any calibration in time impossible. Sediment-based climate proxies are therefore often presented as a single, best-guess time series without proper calibration and error estimation. Errors along time and dating errors that propagate into the calculation of time-dependent rates are neglected. Our objective is to overcome the aforementioned limitations by using a 'swarm' or 'ensemble' of reconstructions instead of a single best-guess. The novelty of our approach is to take into account age-depth uncertainties by permuting through a large number of potential age-depth relationships of the archive of interest. For each individual permutation we can then calculate rates, calibrate proxies in time, and reconstruct the climate-state variable of interest. From the resulting swarm of reconstructions, we can derive realistic estimates of even complex error structures. The likelihood of reconstructions is visualized by a grid of two-dimensional kernels that take into account probabilities along time and the climate-state variable of interest simultaneously. For comparison and regional synthesis, likelihoods can be scored against other independent climate time series.
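
    A minimal sketch of the ensemble idea, assuming Monte Carlo age-depth realizations (e.g., from radiocarbon calibration) are already available:

    ```python
    import numpy as np

    def ensemble_rates(age_draws, proxy):
        """Propagate age-depth uncertainty into time-dependent rates.
        age_draws: (n_realizations, n_depths) Monte Carlo age models;
        proxy: proxy value at each depth. Returns one rate series per
        realization plus 5/50/95 percentile envelopes across the swarm."""
        rates = np.diff(proxy) / np.diff(age_draws, axis=1)  # d(proxy)/d(age)
        lo, med, hi = np.percentile(rates, [5, 50, 95], axis=0)
        return rates, (lo, med, hi)
    ```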

  20. Quantifying the tibiofemoral joint space using x-ray tomosynthesis.

    PubMed

    Kalinosky, Benjamin; Sabol, John M; Piacsek, Kelly; Heckel, Beth; Gilat Schmidt, Taly

    2011-12-01

    Digital x-ray tomosynthesis (DTS) has the potential to provide 3D information about the knee joint in a load-bearing posture, which may improve diagnosis and monitoring of knee osteoarthritis compared with projection radiography, the current standard of care. Manually quantifying and visualizing the joint space width (JSW) from 3D tomosynthesis datasets may be challenging. This work developed a semiautomated algorithm for quantifying the 3D tibiofemoral JSW from reconstructed DTS images. The algorithm was validated through anthropomorphic phantom experiments and applied to three clinical datasets. A user-selected volume of interest within the reconstructed DTS volume was enhanced with 1D multiscale gradient kernels. The edge-enhanced volumes were divided by polarity into tibial and femoral edge maps and combined across kernel scales. A 2D connected-components algorithm was performed to determine candidate tibial and femoral edges. A 2D JSW map was then constructed to represent the 3D tibiofemoral joint space. To quantify the algorithm's accuracy, an adjustable knee phantom was constructed, and eleven posterior-anterior (PA) and lateral DTS scans were acquired with the medial minimum JSW of the phantom set to 0-5 mm in 0.5 mm increments (VolumeRad™, GE Healthcare, Chalfont St. Giles, United Kingdom). The accuracy of the algorithm was quantified by comparing the minimum JSW in a region of interest in the medial compartment of the JSW map to the measured phantom setting for each trial. In addition, the algorithm was applied to DTS scans of a static knee phantom and the JSW map compared to values estimated from a manually segmented computed tomography (CT) dataset. The algorithm was also applied to three clinical DTS datasets of osteoarthritic patients. The algorithm segmented the JSW and generated a JSW map for all phantom and clinical datasets. For the adjustable phantom, the estimated minimum JSW values were plotted against the measured values for all trials. A linear fit estimated a slope of 0.887 (R² = 0.962) and a mean error across all trials of 0.34 mm for the PA phantom data. The estimated minimum JSW values for the lateral adjustable-phantom acquisitions were found to have low correlation with the measured values (R² = 0.377), with a mean error of 2.13 mm. The error in the lateral adjustable-phantom datasets appeared to be caused by artifacts due to unrealistic features in the phantom bones. JSW maps generated by DTS and CT varied by a mean of 0.6 mm and 0.8 mm across the knee joint for PA and lateral scans, respectively. The tibial and femoral edges were successfully segmented and JSW maps determined for PA and lateral clinical DTS datasets. In summary, a semiautomated method is presented for quantifying the 3D joint space in a 2D JSW map using tomosynthesis images. The proposed algorithm quantified the JSW across the knee joint to sub-millimeter accuracy for PA tomosynthesis acquisitions. Overall, the results suggest that x-ray tomosynthesis may be beneficial for diagnosing and monitoring disease progression or treatment of osteoarthritis by providing quantitative images of JSW in the load-bearing knee.

  1. Effects of illumination differences on photometric stereo shape-and-albedo-from-shading for precision lunar surface reconstruction

    NASA Astrophysics Data System (ADS)

    Chung Liu, Wai; Wu, Bo; Wöhler, Christian

    2018-02-01

    Photoclinometric surface reconstruction techniques such as Shape-from-Shading (SfS) and Shape-and-Albedo-from-Shading (SAfS) retrieve topographic information of a surface on the basis of the reflectance information embedded in the image intensity of each pixel. SfS or SAfS techniques have been utilized to generate pixel-resolution digital elevation models (DEMs) of the Moon and other planetary bodies. Photometric stereo SAfS analyzes images under multiple illumination conditions to improve the robustness of reconstruction. In this case, the directional difference in illumination between the images is likely to affect the quality of the reconstruction result. In this study, we quantitatively investigate the effects of illumination differences on photometric stereo SAfS. Firstly, an algorithm for photometric stereo SAfS is developed, and then, an error model is derived to analyze the relationships between the azimuthal and zenith angles of illumination of the images and the reconstruction qualities. The developed algorithm and error model were verified with high-resolution images collected by the Narrow Angle Camera (NAC) of the Lunar Reconnaissance Orbiter Camera (LROC). Experimental analyses reveal that (1) the resulting error in photometric stereo SAfS depends on both the azimuthal and the zenith angles of illumination as well as the general intensity of the images and (2) the predictions from the proposed error model are consistent with the actual slope errors obtained by photometric stereo SAfS using the LROC NAC images. The proposed error model enriches the theory of photometric stereo SAfS and is of significance for optimized lunar surface reconstruction based on SAfS techniques.

  2. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system.

    PubMed

    Poulin, Eric; Racine, Emmanuel; Binnekamp, Dirk; Beaulieu, Luc

    2015-03-01

    In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of less than 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators.

  3. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc, E-mail: Luc.Beaulieu@phy.ulaval.ca

    2015-03-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of less than 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators.

  4. Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.

    PubMed

    Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D

    2018-04-07

    We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. This requires both accurate delineation of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of the relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in a homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation of object size and lateral location produced maximum SOS errors of 6.3% within a 10 mm diameter change and 9.1% within a 5 mm shift, respectively. Modest errors in the assumed transducer separation produced the largest SOS error among the miscalibrations (57.3% within a 5 mm shift); still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in the specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Winter climatic conditions in Andalusia (southern Spain) during the Dalton Minimum from documentary sources.

    NASA Astrophysics Data System (ADS)

    Rodrigo, Fernando S.

    2010-05-01

    In this work, a reconstruction of winter rainfall and temperature in Andalusia (southern Iberian Peninsula) during the period 1750-1850 is presented. The reconstruction is based on the analysis of a wide variety of documentary data. This period is interesting because it is characterized by a minimum in solar irradiance (the Dalton Minimum, around 1800) as well as intense volcanic activity (for instance, the eruption of Tambora in 1815), at a time when increasing atmospheric CO2 concentrations were of minor importance. The reconstruction methodology is based on counting the number of extreme events in the past and inferring the mean value and standard deviation under the assumption of a normal distribution for the climate variables. Results are compared with the behaviour of regional series for the reference period 1960-1990. The comparison of the distribution functions for the 1790-1820 and 1960-1990 periods indicates that during the Dalton Minimum the frequencies of droughts and warm winters were lower than during the reference period, while the frequencies of wet and cold winters were similar. Future research work is outlined.

  6. Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Arnold, William; Ramsey, Brian D.

    2009-01-01

    This paper establishes a relationship between the polishing process parameters and the generation of mid-spatial-frequency error. Design considerations for the polishing lap, together with optimization of the process parameters (speeds, stroke, etc.), are also presented, with the aim of keeping the residual mid-spatial-frequency error to a minimum.

  7. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    PubMed

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted l1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity than optimal l1 minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which select either too many or too few correlation values per iteration, RMP aims to select the most sufficient number of correlation values per iteration, which improves both the reconstruction time and the error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves higher reconstruction accuracy at a significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to l1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
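
    A simplified greedy recovery loop in the spirit of RMP (not the authors' exact selection rule): pick a reduced set of best-correlated atoms per iteration, solve least squares on the support, and prune back to the target sparsity.

    ```python
    import numpy as np

    def greedy_recover(Phi, y, sparsity, per_iter=3, n_iters=20):
        """Greedy sparse recovery: Phi is the (m x n) sensing matrix, y the
        measurements. per_iter controls the size of the reduced selection set."""
        support = np.array([], dtype=int)
        residual = y.copy()
        for _ in range(n_iters):
            corr = np.abs(Phi.T @ residual)
            picks = np.argsort(corr)[-per_iter:]                 # reduced selection set
            support = np.union1d(support, picks)
            x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            support = support[np.argsort(np.abs(x_s))[-sparsity:]]  # prune wrong picks
            x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ x_s
            if np.linalg.norm(residual) < 1e-9:
                break
        x = np.zeros(Phi.shape[1])
        x[support] = x_s
        return x
    ```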

  8. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated as a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms due to computational convenience, resulting in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. Using l1 norms on the data and regularization terms in EIT image reconstruction addresses both the problem of reconstructing sharp edges and that of dealing with measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause reconstructions with the l2 norm to fail. The results demonstrate the applicability of PDIPM algorithms, especially with l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting but also provides high contrast resolution on organ boundaries.

  9. Robust Adaptive Beamforming with Sensor Position Errors Using Weighted Subspace Fitting-Based Covariance Matrix Reconstruction.

    PubMed

    Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang

    2018-05-08

    When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experiment data show that the interference suppression capability of the proposed beamformer outperforms other tested beamformers.

  10. A path reconstruction method integrating dead-reckoning and position fixes applied to humpback whales.

    PubMed

    Wensveen, Paul J; Thomas, Len; Miller, Patrick J O

    2015-01-01

    Detailed information about animal location and movement is often crucial in studies of natural behaviour and how animals respond to anthropogenic activities. Dead-reckoning can be used to infer such detailed information, but without additional positional data this method results in uncertainty that grows with time. Combining dead-reckoning with new Fastloc-GPS technology should provide good opportunities for reconstructing georeferenced fine-scale tracks, and should be particularly useful for marine animals that spend most of their time under water. We developed a computationally efficient, Bayesian state-space modelling technique to estimate humpback whale locations through time, integrating dead-reckoning using on-animal sensors with measurements of whale locations using on-animal Fastloc-GPS and visual observations. Positional observation models were based upon error measurements made during calibrations. High-resolution 3-dimensional movement tracks were produced for 13 whales using a simple process model in which errors caused by water current movements, non-location sensor errors, and other dead-reckoning errors were accumulated into a combined error term. Positional uncertainty quantified by the track reconstruction model was much greater for tracks with visual positions and few or no GPS positions, indicating a strong benefit to using Fastloc-GPS for track reconstruction. Compared to tracks derived only from position fixes, the inclusion of dead-reckoning data greatly improved the level of detail in the reconstructed tracks of humpback whales. Using cross-validation, a clear improvement in the predictability of out-of-set Fastloc-GPS data was observed compared to more conventional track reconstruction methods. Fastloc-GPS observation errors during calibrations were found to vary by number of GPS satellites received and by orthogonal dimension analysed; visual observation errors varied most by distance to the whale. By systematically accounting for the observation errors in the position fixes, our model provides a quantitative estimate of location uncertainty that can be appropriately incorporated into analyses of animal movement. This generic method has potential application for a wide range of marine animal species and data recording systems.
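
    A deterministic stand-in for the core idea (the paper's actual model is a Bayesian state-space filter): integrate velocity, then spread the drift revealed at each position fix back over the preceding segment.

    ```python
    import numpy as np

    def dead_reckon_with_fixes(velocities, dt, fix_times, fix_positions, x0):
        """Integrate velocity (dead reckoning), then linearly redistribute the
        drift revealed at each position fix over the preceding segment.
        velocities: (T, dims) array; fixes are assumed to coincide with samples."""
        track = x0 + np.cumsum(velocities * dt, axis=0)
        times = np.arange(1, len(track) + 1) * dt
        seg_start = 0.0
        for t_fix, p_fix in zip(fix_times, fix_positions):
            seg = (times > seg_start) & (times <= t_fix)
            if not seg.any():
                continue
            drift = p_fix - track[seg][-1]             # error exposed at the fix
            frac = (times[seg] - seg_start) / (t_fix - seg_start)
            track[seg] += frac[:, None] * drift        # spread drift over the segment
            track[times > t_fix] += drift              # keep later samples continuous
            seg_start = t_fix
        return track
    ```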

  11. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268

  12. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  13. Correcting electrode modelling errors in EIT on realistic 3D head models.

    PubMed

    Jehl, Markus; Avery, James; Malone, Emma; Holder, David; Betcke, Timo

    2015-12-01

    Electrical impedance tomography (EIT) is a promising medical imaging technique which could aid differentiation of haemorrhagic from ischaemic stroke in an ambulance. One challenge in EIT is the ill-posed nature of the image reconstruction, i.e., that small measurement or modelling errors can result in large image artefacts. It is therefore important that reconstruction algorithms are made more stable with respect to modelling errors. We identify wrongly modelled electrode positions as one of the biggest sources of image artefacts in head EIT. Therefore, the use of the Fréchet derivative on the electrode boundaries of a realistic three-dimensional head model is investigated in order to reconstruct electrode movements simultaneously with conductivity changes. We show a fast implementation and analyse the performance of electrode position reconstruction in time-difference and absolute imaging for simulated and experimental voltages. Reconstructing the electrode positions and conductivities simultaneously increased the image quality significantly in the presence of electrode movement.

  14. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  15. Compressed sensing and the reconstruction of ultrafast 2D NMR data: Principles and biomolecular applications.

    PubMed

    Shrot, Yoav; Frydman, Lucio

    2011-04-01

    A topic of active investigation in 2D NMR relates to the minimum number of scans required for acquiring this kind of spectra, particularly when these are dictated by sampling rather than by sensitivity considerations. Reductions in this minimum number of scans have been achieved by departing from the regular sampling used to monitor the indirect domain, and relying instead on non-uniform sampling and iterative reconstruction algorithms. Alternatively, so-called "ultrafast" methods can compress the minimum number of scans involved in 2D NMR all the way to a minimum number of one, by spatially encoding the indirect domain information and subsequently recovering it via oscillating field gradients. Given ultrafast NMR's simultaneous recording of the indirect- and direct-domain data, this experiment couples the spectral constraints of these orthogonal domains - often calling for the use of strong acquisition gradients and large filter widths to fulfill the desired bandwidth and resolution demands along all spectral dimensions. This study discusses a way to alleviate these demands, and thereby enhance the method's performance and applicability, by combining spatial encoding with iterative reconstruction approaches. Examples of these new principles are given based on the compressed-sensed reconstruction of biomolecular 2D HSQC ultrafast NMR data, an approach that we show enables a decrease of the gradient strengths demanded in this type of experiments by up to 80%. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Remote Sensing of Clouds for Solar Forecasting Applications

    NASA Astrophysics Data System (ADS)

    Mejia, Felipe

    A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based Sky Imager (USI) is presented. The Radiance Red-Blue Ratio (RRBR) method is motivated from the analysis of simulated images of various τc produced by a Radiative Transfer Model (RTM). From these images, the basic parameters affecting the radiance and RBR of a pixel are identified as the solar zenith angle (SZA), τc, solar pixel angle/scattering angle (SPA), and pixel zenith angle/view angle (PZA). The effects of these parameters are described, and the functions for radiance, Iλ(τc, SZA, SPA, PZA), and red-blue ratio, RBR(τc, SZA, SPA, PZA), are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc: RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured radiance Iλ,meas(SPA, PZA), in addition to RBRmeas(SPA, PZA), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement program (ARM) site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min method [MH96a] for overcast skies. τc values ranged from 0 to 80, with values over 80 being capped and registered as 80. A τc RMSE of 2.5 between the Min method [MH96b] and the USI is observed. The MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. The procedure developed here provides a foundation to test and develop other cloud-detection algorithms. Using the RRBR τc estimate as an input, we then explore the potential of tomographic techniques for 3-D cloud reconstruction. The Algebraic Reconstruction Technique (ART) is applied to optical depth maps from sky images to reconstruct 3-D cloud extinction coefficients (k). Reconstruction accuracy is explored for different products, including surface irradiance, extinction coefficients, and liquid water path, as a function of the number of available sky imagers (SIs) and setup distance. Increasing the number of cameras improves the accuracy of the 3-D reconstruction: for surface irradiance, the error decreases significantly up to four imagers, at which point the improvements become marginal, while the k error continues to decrease with more cameras. The ideal distance between imagers was also explored: for a cloud height of 1 km, increasing the distance up to 3 km (the domain length) improved the 3-D reconstruction of surface irradiance, while the k error continued to decrease with increasing distance. An iterative reconstruction technique was also used to improve the results of the ART by minimizing the error between input images and reconstructed simulations. For the best case of a nine-imager deployment, the ART and iterative method resulted in 53.4% and 33.6% mean average error (MAE) for the extinction coefficients, respectively. The tomographic methods were then tested on real-world cases in the University of California San Diego (UCSD) solar testbed. Five UCSD sky imagers (USIs) were installed across the testbed based on the best-performing distances in simulations. Topographic obstruction is explored as a source of error by analyzing the increase in error as more of the horizon is obstructed in the camera's field of view: if a field of view of at least 70° is available to the camera, the accuracy is within 2% of the full field of view. Errors caused by stray light are also explored by removing the circumsolar region from images and comparing the cloud reconstruction to that from a full image. When less than 30% of the circumsolar region was removed, image and GHI errors were within 0.2% of the full image, while errors in k increased by 1%. Removing more than 30° around the sun resulted in inaccurate cloud reconstruction. Using four of the five USIs, a 3-D cloud was reconstructed and compared to the fifth camera; the image of the fifth camera (excluded from the reconstruction) was then simulated and found to have a 22.9% error compared to the ground truth.

  17. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    PubMed

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase with the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, using the ER algorithm, both the Fourier transform magnitudes and phases can be estimated to reconstruct the missing areas.
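
    The core ER iteration is compact; a sketch assuming the target Fourier magnitude has already been estimated from similar known patches:

    ```python
    import numpy as np

    def error_reduction(patch, known_mask, target_magnitude, n_iters=200):
        """Basic error-reduction loop for inpainting a texture patch: alternately
        enforce the estimated Fourier magnitude and the known pixels. Missing
        pixels in `patch` may hold any initial values (e.g., zeros)."""
        x = patch.copy()
        for _ in range(n_iters):
            X = np.fft.fft2(x)
            X = target_magnitude * np.exp(1j * np.angle(X))  # magnitude constraint
            x = np.real(np.fft.ifft2(X))
            x[known_mask] = patch[known_mask]                # data constraint
        return x
    ```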

  18. Effective Algorithm for Detection and Correction of the Wave Reconstruction Errors Caused by the Tilt of Reference Wave in Phase-shifting Interferometry

    NASA Astrophysics Data System (ADS)

    Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying

    2010-04-01

    In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the least-squares method with iterations, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. The method involves only simple mathematical operations, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wave-reconstruction errors can be reduced by two orders of magnitude.
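
    For contrast, the conventional correction that the proposed algorithm avoids can be sketched as a least-squares plane fit to the (unwrapped) reconstructed phase, whose linear ramp is then subtracted:

    ```python
    import numpy as np

    def remove_phase_tilt(phase):
        """Estimate and remove a linear phase ramp (reference-wave tilt) by a
        least-squares plane fit; assumes the 2D phase map is already unwrapped."""
        ny, nx = phase.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(phase.size)])
        coef, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
        ramp = (A @ coef).reshape(ny, nx)
        return phase - ramp, coef   # coef[:2] are the tilt components
    ```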

  19. WE-A-17A-10: Fast, Automatic and Accurate Catheter Reconstruction in HDR Brachytherapy Using An Electromagnetic 3D Tracking System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poulin, E; Racine, E; Beaulieu, L

    2014-06-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this study was to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for improved catheter reconstruction in HDR-B protocols. Methods: For this proof-of-principle, a total of 10 catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using a Philips-design 18G biopsy needle (used as an EM stylet) and the second generation Aurora Planar Field Generator from Northern Digital Inc. The Aurora EM system exploits alternating-current technology and generates 3D points at 40 Hz. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical CT system with a resolution of 0.089 mm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, 5 catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 seconds or less. This would imply that for a typical clinical implant of 17 catheters, the total reconstruction time would be less than 3 minutes. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.92 ± 0.37 mm and 1.74 ± 1.39 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be significantly more accurate (unpaired t-test, p < 0.05). A mean difference of less than 0.5 mm was found between successive EM reconstructions. Conclusion: The EM reconstruction was found to be faster, more accurate and more robust than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators. We would like to disclose that the equipment used in this study comes from a collaboration with Philips Medical.

  20. Assessment of Minimum 124I Activity Required in Uptake Measurements Before Radioiodine Therapy for Benign Thyroid Diseases.

    PubMed

    Gabler, Anja S; Kühnel, Christian; Winkens, Thomas; Freesmeyer, Martin

    2016-08-01

    This study aimed to assess a hypothetical minimum administered activity of (124)I required to achieve comparability between pretherapeutic radioiodine uptake (RAIU) measurements by (124)I PET/CT and by (131)I RAIU probe, the clinical standard. In addition, the impact of different reconstruction algorithms on (124)I RAIU and the evaluation of pixel noise as a parameter for image quality were investigated. Different scan durations were simulated by different reconstruction intervals of 600-s list-mode PET datasets (including 15 intervals up to 600 s and 5 different reconstruction algorithms: filtered-backprojection and 4 iterative techniques) acquired 30 h after administration of 1 MBq of (124)I. The Bland-Altman method was used to compare mean (124)I RAIU levels versus mean 3-MBq (131)I RAIU levels (clinical standard). The data of 37 patients with benign thyroid diseases were assessed. The impact of different reconstruction lengths on pixel noise was investigated for all 5 of the (124)I PET reconstruction algorithms. A hypothetical minimum activity was sought by means of a proportion equation, considering that the length of a reconstruction interval equates to a hypothetical activity. Mean (124)I RAIU and (131)I RAIU already showed high levels of agreement for reconstruction intervals of as short as 10 s, corresponding to a hypothetical minimum activity of 0.017 MBq of (124)I. The iterative algorithms proved generally superior to the filtered-backprojection algorithm. (124)I RAIU showed a trend toward higher levels than (131)I RAIU if the influence of retrosternal tissue was not considered, which was proven to be the cause of a slight overestimation by (124)I RAIU measurement. A hypothetical minimum activity of 0.5 MBq of (124)I obtained with iterative reconstruction appeared sufficient both visually and with regard to pixel noise. This study confirms the potential of (124)I RAIU measurement as an alternative method for (131)I RAIU measurement in benign thyroid disease and suggests that reducing the administered activity is an option. CT information is particularly important in cases of retrosternal expansion. The results are relevant because (124)I PET/CT allows additional diagnostic means, that is, the possibility of performing fusion imaging with ultrasound. (124)I PET/CT might be an alternative, especially when hybrid (123)I SPECT/CT is not available. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
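
    The Bland-Altman comparison used here reduces to a few lines; a generic sketch with hypothetical paired uptake measurements:

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Bland-Altman agreement statistics between two paired measurements
        (e.g., 124I PET-based RAIU vs. 131I probe-based RAIU, in percent):
        returns the mean difference (bias) and 95% limits of agreement."""
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
    ```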

  1. Quantitative Tomography for Continuous Variable Quantum Systems

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.

  2. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-01-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  3. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-07-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  4. Quantum-state comparison and discrimination

    NASA Astrophysics Data System (ADS)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2018-05-01

    We investigate the performance of discrimination strategy in the comparison task of known quantum states. In the discrimination strategy, one infers whether or not two quantum systems are in the same state on the basis of the outcomes of separate discrimination measurements on each system. In some cases with more than two possible states, the optimal strategy in minimum-error comparison is that one should infer the two systems are in different states without any measurement, implying that the discrimination strategy performs worse than the trivial "no-measurement" strategy. We present a sufficient condition for this phenomenon to happen. For two pure states with equal prior probabilities, we determine the optimal comparison success probability with an error margin, which interpolates the minimum-error and unambiguous comparison. We find that the discrimination strategy is not optimal except for the minimum-error case.

  5. Cone beam CT imaging with limited angle of projections and prior knowledge for volumetric verification of non-coplanar beam radiation therapy: a proof of concept study

    NASA Astrophysics Data System (ADS)

    Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang

    2013-11-01

    Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotation/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. A planning CT or CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and a translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and a physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. Compared with the PICCS algorithm alone, the integration of rigid registration significantly improved the reconstructed image quality, with a reduction of typically 2- to 3-fold (up to 100-fold) in root-mean-square image error. The proposed algorithm provides a remedy for the problem of non-coplanar CBCT reconstruction from a limited angle of projections by combining the PICCS technique and rigid image registration in an iterative framework. In this proof-of-concept study, non-coplanar beams with couch rotations of 45° can be effectively verified with the CBCT technique.
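
    A heavily simplified sketch of the alternating scheme (sart_update and rigid_register are assumed user-supplied helpers, and an off-the-shelf TV denoiser stands in for the published total-variation minimization):

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def piccs_with_registration(projections, prior, sart_update, rigid_register,
                                alpha=0.5, n_outer=10, tol=1e-3):
        """Alternating sketch. sart_update(image, projections) enforces data
        fidelity (one SART pass); rigid_register(moving, fixed) returns the
        registered prior plus the estimated rigid correction parameters."""
        image = prior.copy()
        for _ in range(n_outer):
            image = sart_update(image, projections)        # data-fidelity step
            # PICCS-flavoured regularization: weight TV of the image against
            # TV of its difference from the prior (heuristic proximal stand-in)
            image = (1 - alpha) * denoise_tv_chambolle(image) + \
                    alpha * (prior + denoise_tv_chambolle(image - prior))
            prior, correction = rigid_register(prior, image)  # update setup estimate
            if np.max(np.abs(correction)) < tol:              # corrections negligible
                break
        return image, prior
    ```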

  6. Comparison of Low Cost Photogrammetric Survey with TLS and Leica Pegasus Backpack 3D Models

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Fissore, F.; Guarnieri, A.; Piragnolo, M.; Vettore, A.

    2017-11-01

    This paper considers Leica backpack and photogrammetric surveys of a mediaeval bastion in Padua, Italy. Furthermore, a terrestrial laser scanning (TLS) survey is considered in order to provide a state-of-the-art reconstruction of the bastion. Although control points are typically used to avoid deformations in photogrammetric surveys and to ensure correct scaling of the reconstruction, a different approach is considered here: this work is part of a project aiming at the development of a system exploiting ultra-wide band (UWB) devices to provide correct scaling of the reconstruction. In particular, low cost Pozyx UWB devices are used to estimate camera positions during image acquisitions. Then, in order to obtain a metric reconstruction, the scale factor of the photogrammetric survey is estimated by comparing camera positions obtained from UWB measurements with those obtained from the photogrammetric reconstruction. Compared with the TLS survey, the considered photogrammetric model of the bastion results in an RMSE of 21.9 cm, an average error of 13.4 cm, and a standard deviation of 13.5 cm. Excluding the final part of the bastion's left wing, where the presence of several poles makes reconstruction more difficult, the RMSE fitting error is 17.3 cm, the average error 11.5 cm, and the standard deviation 9.5 cm. Comparison of the Leica backpack and TLS surveys instead leads to an average error of 4.7 cm and a standard deviation of 0.6 cm (4.2 cm and 0.3 cm, respectively, when excluding the final part of the left wing).
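
    The scale-recovery step can be illustrated with a simple estimate, assuming camera centers from the photogrammetric reconstruction have already been paired with UWB positions:

    ```python
    import numpy as np

    def scale_from_uwb(cam_sfm, cam_uwb):
        """Estimate the metric scale of a photogrammetric reconstruction by
        comparing camera centers from SfM (arbitrary scale) with the same
        centers measured by UWB (metres); uses the ratio of RMS spreads about
        the centroids, a simple similarity-scale estimate."""
        p = cam_sfm - cam_sfm.mean(axis=0)
        u = cam_uwb - cam_uwb.mean(axis=0)
        return np.sqrt((u ** 2).sum() / (p ** 2).sum())
    ```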

  7. Approximate error conjugate gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
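
    A toy dense-matrix illustration of the claimed idea follows: in a constrained conjugate-gradient loop for tomographic least squares, both the gradient and the 1-D line-search minimum are evaluated on a random subset of rays instead of the full set. This is a sketch of the concept as we read the claims, not the patented implementation; the clipping step is a generic stand-in for the problem constraints.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(400, 50))   # 400 "rays", 50 unknowns
x_true = rng.normal(size=50)
b = A @ x_true

x = np.zeros(50)
d, g_prev = None, None
for it in range(50):
    rows = rng.choice(A.shape[0], size=80, replace=False)  # ray subset
    As, bs = A[rows], b[rows]
    g = As.T @ (As @ x - bs)             # approximate gradient from subset
    if d is None:
        d = -g
    else:
        beta = (g @ g) / (g_prev @ g_prev)   # Fletcher-Reeves update
        d = -g + beta * d
    Ad = As @ d                          # exact 1-D minimum along d for
    alpha = -(Ad @ (As @ x - bs)) / (Ad @ Ad + 1e-12)  # the subset objective
    x = np.clip(x + alpha * d, -10, 10)  # stand-in constraint projection
    g_prev = g
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```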

  8. Ultrasonic tracking of shear waves using a particle filter.

    PubMed

    Ingle, Atul N; Ma, Chi; Varghese, Tomy

    2015-11-01

    This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value because they help differentiate stiffer areas from softer ones, which can indicate potential pathology. Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model whose hidden states are the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance than standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard least-squares fit. Estimates of area obtained from the particle-filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. The particle filtering approach can be used to produce visually appealing shear wave velocity (SWV) reconstructions by effectively delineating various areas of the phantom, with image quality comparable to existing techniques.
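
    A minimal bootstrap particle filter capturing the paper's setup might look as follows: the hidden state is the noiseless time-to-peak value with random-walk dynamics, measurements carry Gaussian noise, and the MMSE estimate at each step is the weighted posterior mean. Noise levels and particle counts here are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def particle_filter_ttp(y, n_particles=500, q=0.05, r=0.5, seed=0):
    """Smooth a noisy time-to-peak curve y; q = process std, r = noise std."""
    rng = np.random.default_rng(seed)
    particles = y[0] + rng.normal(0, r, n_particles)
    estimates = []
    for obs in y:
        particles = particles + rng.normal(0, q, n_particles)  # propagate
        w = np.exp(-0.5 * ((obs - particles) / r) ** 2)        # likelihood
        w /= w.sum()
        estimates.append(np.dot(w, particles))       # MMSE = posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)        # resample
        particles = particles[idx]
    return np.array(estimates)

# toy data: linearly increasing TTP (constant shear wave speed) plus noise
t_true = np.linspace(0.0, 5.0, 100)
noisy = t_true + np.random.default_rng(1).normal(0, 0.5, 100)
smooth = particle_filter_ttp(noisy)
```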

  9. Minimum training criteria for police traffic accident reconstruction.

    DOT National Transportation Integrated Search

    1987-03-01

    This report summarizes the development of a set of minimum training criteria for police accident reconstructionists. Prior to this effort, no nationally accepted standards or criteria were recognized for evaluating the qualifications of police engage...

  10. Autumn-winter minimum temperature changes in the southern Sikhote-Alin mountain range of northeastern Asia since 1529 AD

    NASA Astrophysics Data System (ADS)

    Ukhvatkina, Olga N.; Omelko, Alexander M.; Zhmerenetsky, Alexander A.; Petrenko, Tatyana Y.

    2018-01-01

    The aim of our research was to reconstruct climatic parameters (for the first time for the Sikhote-Alin mountain range) and to compare them with global climate fluctuations. As a result, we have found that one of the most important limiting factors for the study area is the minimum temperature of the previous autumn-winter season (August-December), a finding that conforms well to those for other territories. We reconstructed the previous August-December minimum temperature for 485 years, from 1529 to 2014. We found 12 cold periods (1535-1540, 1550-1555, 1643-1649, 1659-1667, 1675-1689, 1722-1735, 1791-1803, 1807-1818, 1822-1827, 1836-1852, 1868-1887, 1911-1925) and seven warm periods (1560-1585, 1600-1610, 1614-1618, 1738-1743, 1756-1759, 1776-1781, 1944-2014). These periods correlate well with reconstructed data for the Northern Hemisphere and the neighboring territories of China and Japan. Our reconstruction has 3-, 9-, 20-, and 200-year periods, which may be in line with high-frequency fluctuations in the El Niño-Southern Oscillation (ENSO), the short-term solar cycle, Pacific Decadal Oscillation (PDO) fluctuations, and the 200-year solar activity cycle, respectively. We suppose that the temperature of the North Pacific, expressed by the PDO, may make a major contribution to regional climate variations. We also assume that the regional climatic response to solar activity becomes apparent in the temperature changes in the northern part of the Pacific Ocean and corresponds to cold periods during solar minima. These comparisons show that our climatic reconstruction based on tree-ring chronology for this area may potentially provide a proxy record for long-term, large-scale past temperature patterns for northeastern Asia. The reconstruction reflects the global traits and local variations of the climatic processes in the southern territory of the Russian Far East over more than the past 450 years.

  11. Tchebichef moment transform on image dithering for mobile applications

    NASA Astrophysics Data System (ADS)

    Ernawan, Ferda; Abu, Nur Azman; Rahmalan, Hidayah

    2012-04-01

    Currently, mobile image applications spend a great deal of computation to display images. A true color raw image contains millions of colors and consumes high computational power in most mobile image applications. At the same time, mobile devices are expected to have only limited processing power and minimal storage space. Image dithering is a popular technique to reduce the number of bits per pixel at the expense of lower quality image displays. This paper proposes a novel approach to image dithering using the 2x2 Tchebichef moment transform (TMT). TMT rests on a simple mathematical framework based on matrices, and its coefficients are real rational numbers. Image dithering based on TMT has the potential to provide better efficiency and simplicity. The preliminary experiment shows a promising result in terms of reconstruction error and image visual texture.
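
    For N = 2 the orthonormal discrete Tchebichef polynomials reduce to a simple Haar/DCT-like 2x2 kernel. The sketch below shows the forward and inverse block transform; the coarse coefficient quantisation used to mimic bit reduction is our illustrative guess, not the authors' exact dithering rule.

```python
import numpy as np

# rows are the order-0 and order-1 discrete Tchebichef polynomials for N = 2
T = np.array([[1.0, 1.0],
              [-1.0, 1.0]]) / np.sqrt(2.0)

def tmt_blocks(img, q=32.0):
    """Forward 2x2 TMT per tile, coarse quantisation, inverse transform."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            coeff = T @ img[i:i+2, j:j+2] @ T.T     # forward TMT
            coeff = np.round(coeff / q) * q         # crude bit reduction
            out[i:i+2, j:j+2] = T.T @ coeff @ T     # inverse TMT
    return out

img = np.random.default_rng(0).uniform(0, 255, (8, 8))
print(np.abs(img - tmt_blocks(img, q=1e-9)).max())  # ~0: orthonormal kernel
```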

  12. Configuration optimization of laser guide stars and wavefront correctors for multi-conjugation adaptive optics

    NASA Astrophysics Data System (ADS)

    Xuan, Li; He, Bin; Hu, Li-Fa; Li, Da-Yu; Xu, Huan-Yu; Zhang, Xing-Yun; Wang, Shao-Xin; Wang, Yu-Kun; Yang, Cheng-Liang; Cao, Zhao-Liang; Mu, Quan-Quan; Lu, Xing-Hai

    2016-09-01

    Multi-conjugation adaptive optics (MCAO) has been investigated and used in large-aperture optical telescopes for high-resolution imaging over a large field of view (FOV). Atmospheric tomographic phase reconstruction and the projection of the three-dimensional turbulence volume onto wavefront correctors, such as deformable mirrors (DMs) or liquid crystal wavefront correctors (LCWCs), is a very important step in the data processing of an MCAO controller. In this paper, a method based on the wavefront reconstruction performance of MCAO is presented to evaluate the optimized configuration of multiple laser guide stars (LGSs) and the reasonable conjugation heights of LCWCs. Analytical formulations are derived for the different configurations and are used to generate optimized parameters for MCAO. Several examples are given to demonstrate our LGS configuration optimization method. Compared with traditional methods, our method has minimum wavefront tomographic error, which will help achieve higher imaging resolution over a large FOV in MCAO.

  13. A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes

    PubMed Central

    2011-01-01

    Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are in better agreement with clinical measures than those from single view reconstructions. Conclusions Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views. PMID:21251284

  14. A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.

    PubMed

    Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M

    2011-01-20

    A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, users have very limited control over the geometry of the 2D scanning planes. We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are in better agreement with clinical measures than those from single view reconstructions. Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views.

  15. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  16. 76 FR 62835 - Notice of Inventory Completion: Fort Lewis College, Durango, CO

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-11

    ... objects are 1 complete bowl, 1 reconstructed piece of pottery, 2 smaller reconstructed pieces of pottery..., Colorado''). In 1967-1968, human remains representing a minimum of six individuals were removed from a site...

  17. Motion and positional error correction for cone beam 3D-reconstruction with mobile C-arms.

    PubMed

    Bodensteiner, C; Darolti, C; Schumacher, H; Matthäus, L; Schweikard, A

    2007-01-01

    CT-images acquired by mobile C-arm devices can contain artefacts caused by positioning errors. We propose a data driven method based on iterative 3D-reconstruction and 2D/3D-registration to correct projection data inconsistencies. With a 2D/3D-registration algorithm, transformations are computed to align the acquired projection images to a previously reconstructed volume. In an iterative procedure, the reconstruction algorithm uses the results of the registration step. This algorithm also reduces small motion artefacts within 3D-reconstructions. Experiments with simulated projections from real patient data show the feasibility of the proposed method. In addition, experiments with real projection data acquired with an experimental robotised C-arm device have been performed with promising results.

  18. Constrained signal reconstruction from wavelet transform coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1991-12-31

    A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT. 20 refs.
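
    As a rough illustration of the constraint set involved (not Cole's representation-theoretic construction), one can alternate projections between consistency with the known DWT coefficients and the a priori amplitude bounds. The sketch below uses the PyWavelets package; the signal length, wavelet, and sampling pattern are arbitrary choices.

```python
import numpy as np
import pywt

def reconstruct(known, mask, lo, hi, n=64, iters=200, wavelet="db2"):
    """known/mask: per-coefficient values and availability flags (lists of
    arrays shaped like pywt.wavedec output); lo/hi: signal bounds."""
    x = np.zeros(n)
    for _ in range(iters):
        coeffs = pywt.wavedec(x, wavelet)
        # enforce the observed DWT coefficients where they are available
        coeffs = [np.where(m, k, c) for c, k, m in zip(coeffs, known, mask)]
        x = pywt.waverec(coeffs, wavelet)[:n]
        x = np.clip(x, lo, hi)          # a priori upper/lower bounds
    return x

# toy run: keep a random 50% of the DWT of a bounded ramp signal
rng = np.random.default_rng(0)
sig = np.clip(np.linspace(-1, 1, 64), -0.8, 0.8)
full = pywt.wavedec(sig, "db2")
mask = [rng.random(c.shape) < 0.5 for c in full]
x_hat = reconstruct(full, mask, -0.8, 0.8)
print(np.linalg.norm(x_hat - sig) / np.linalg.norm(sig))
```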

  19. P-T Path and Nd-isotopes of Garnet Pyroxenite Xenoliths From Salt Lake Crater, Oahu

    NASA Astrophysics Data System (ADS)

    Ichitsubo, N.; Takahashi, E.; Clague, D. A.

    2001-12-01

    Abundant garnet pyroxenite and spinel lherzolite xenoliths are found in Salt Lake Crater (SLC) in Oahu, Hawaii [Jackson and Wright, 1970]. The SLC pyroxenite suite xenoliths (olivine-poor type) have complex exsolution textures that were probably formed during slow cooling. In this study, we used digital image software to obtain modal data of exsolved phases in the host pyroxene using backscattered electron images (BEIs). The abundances of the exsolved phases were multiplied by the phase compositions determined by electron probe micro-analyzer (EPMA) to reconstruct pyroxene compositions prior to exsolution. In order to evaluate the error in this calculation, we recalculated the reconstructed pyroxene compositions using different pyroxene pairs. Reconstructed clinopyroxenes in each sample show almost no variation (MgO, CaO +/-1 wt%, FeO +/-0.5 wt%, and the other oxides ~+/-0.1 wt%). Reconstructed orthopyroxenes are more variable in MgO, CaO (+/-2 wt%) and FeO (+/-1 wt%) than reconstructed clinopyroxenes, but the other oxides show only limited variation (~+/-0.5 wt%). These compositions were used to calculate igneous stage (magmatic) P-T conditions based on the geothermometers and geobarometers of Wells [1977] and Brey and Kohler [1990]. The following assumptions are made: (1) the reconstructed pyroxene compositions are the final record of the primary igneous stage, and (2) cores of the largest garnet grains in each sample record the primary igneous stage composition. The recalculation using the different pairs of reconstructed pyroxenes shows the uncertainty to be +/-30 °C and 0.1 GPa. These are small compared to the large intrinsic errors of the geothermometers and geobarometers (+/-20-35 °C and +/-0.3-0.5 GPa). Estimated P-T conditions for the garnet pyroxenites are 1.5-2.2 GPa and 1000-1100 °C in the final reequilibration stage and 2.2-2.6 GPa (at maximum) and 1150-1300 °C (at minimum) in the igneous stage. All samples show ca. 200 °C of cooling and 0.5 GPa of decompression. This implies that the garnet pyroxenites cooled ca. 200 °C to develop the observed complex exsolution and may have risen from about 70-80 km to 50-65 km depth. Glass pockets and fine minerals (olivine, pyroxene, spinel) occur in the SLC garnet pyroxenite xenoliths. Amphibole and phlogopite, which may have crystallized by metasomatism, are common accessory minerals in them. In order to study the nature of the metasomatism revealed by the glass pockets and fine aggregates of spinel and pyroxene, a Nd-isotope study on the SLC xenoliths is under way.

  20. Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error

    PubMed Central

    Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong

    2013-01-01

    A novel wildfire segmentation algorithm is proposed with the help of sample-training-based 2D histogram θ-division and minimum error. The θ-division methods based on the minimum error principle and a 2D color histogram were presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. We then define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the most suitable channel is selected. To further improve the accuracy, a combination approach is presented that uses both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments demonstrate its effectiveness for wildfire segmentation. PMID:23878526
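
    For background, the classical 1D minimum-error threshold of Kittler and Illingworth, on which 2D θ-division builds, fits a two-Gaussian model on either side of a candidate threshold and minimizes a classification-error criterion J(t); a compact version is sketched below (the 2D, trained-θ machinery of the paper is not reproduced).

```python
import numpy as np

def min_error_threshold(hist):
    """Kittler-Illingworth minimum-error threshold on a 1D histogram."""
    hist = hist.astype(float) / hist.sum()
    levels = np.arange(hist.size)
    best_t, best_j = 0, np.inf
    for t in range(1, hist.size - 1):
        p1, p2 = hist[:t].sum(), hist[t:].sum()
        if p1 < 1e-6 or p2 < 1e-6:
            continue
        m1 = (levels[:t] * hist[:t]).sum() / p1
        m2 = (levels[t:] * hist[t:]).sum() / p2
        v1 = ((levels[:t] - m1) ** 2 * hist[:t]).sum() / p1 + 1e-6
        v2 = ((levels[t:] - m2) ** 2 * hist[t:]).sum() / p2 + 1e-6
        j = 1 + p1 * np.log(v1) + p2 * np.log(v2) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_t, best_j = t, j
    return best_t

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 20, 2000)])
hist, _ = np.histogram(data, bins=256, range=(0, 256))
print(min_error_threshold(hist))   # lands between the two modes
```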

  1. ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.

    PubMed

    Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L

    2011-08-01

    In this paper we suggest an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) which is capable of computing high quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART in reducing the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, resulting in a reduction of the number of pixels belonging to the border, and consequently of the number of unknowns in the general algebraic reconstruction linear system to be solved, with this reduction being especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to the original DART, in both clean and noisy environments.

  2. SU-C-206-03: Metal Artifact Reduction in X-Ray Computed Tomography Based On Local Anatomical Similarity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Yang, X; Rosenfield, J

    Purpose: Metal implants such as orthopedic hardware and dental fillings cause severe bright and dark streaking in reconstructed CT images. These artifacts decrease image contrast and degrade HU accuracy, leading to inaccuracies in target delineation and dose calculation. Additionally, such artifacts negatively impact patient set-up in image guided radiation therapy (IGRT). In this work, we propose a novel method for metal artifact reduction which utilizes the anatomical similarity between neighboring CT slices. Methods: Neighboring CT slices show similar anatomy. Based on this anatomical similarity, the proposed method replaces corrupted CT pixels with pixels from adjacent, artifact-free slices. A gamma map, which is the weighted summation of relative HU error and distance error, is calculated for each pixel in the artifact-corrupted CT image. The minimum value in each pixel's gamma map is used to identify a pixel from the adjacent CT slice to replace the corresponding artifact-corrupted pixel. This replacement only occurs if the minimum value in a particular pixel's gamma map is larger than a threshold. The proposed method was evaluated with clinical images. Results: Highly attenuating dental fillings and hip implants cause severe streaking artifacts on CT images. The proposed method eliminates the dark and bright streaking and improves the implant delineation and visibility. In particular, the image non-uniformity in the central region of interest was reduced from 1.88 and 1.01 to 0.28 and 0.35, respectively. Further, the mean CT HU error was reduced from 328 HU and 460 HU to 60 HU and 36 HU, respectively. Conclusions: The proposed metal artifact reduction method replaces corrupted image pixels with pixels from neighboring slices that are free of metal artifacts. This method proved capable of suppressing streaking artifacts, improving HU accuracy and image detectability.
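
    Our reading of the gamma-map rule, in sketch form: for each pixel of the corrupted slice, search a small window of the adjacent artifact-free slice, score candidates by a weighted combination of relative HU disagreement and spatial distance, and replace the pixel with the best candidate whenever even that best score exceeds a tolerance (i.e., the pixel is inconsistent with clean anatomy). Window size, tolerances, and threshold below are illustrative assumptions.

```python
import numpy as np

def gamma_replace(corrupt, clean, hu_tol=100.0, d_tol=3.0, thresh=1.0, win=3):
    """Replace artifact pixels in `corrupt` using the adjacent `clean` slice."""
    h, w = corrupt.shape
    out = corrupt.astype(float).copy()
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - win), min(h, i + win + 1)
            j0, j1 = max(0, j - win), min(w, j + win + 1)
            patch = clean[i0:i1, j0:j1].astype(float)
            ii, jj = np.mgrid[i0:i1, j0:j1]
            dist2 = ((ii - i) ** 2 + (jj - j) ** 2) / d_tol ** 2
            hu2 = ((patch - corrupt[i, j]) / hu_tol) ** 2
            g = np.sqrt(dist2 + hu2)          # gamma over the window
            k = np.unravel_index(np.argmin(g), g.shape)
            if g[k] > thresh:                 # no consistent match nearby
                out[i, j] = patch[k]          # -> replace with best candidate
    return out

rng = np.random.default_rng(0)
clean = rng.normal(40.0, 5.0, (32, 32))       # adjacent artifact-free slice
corrupt = clean.copy()
corrupt[10:12, :] += 800.0                    # synthetic bright streak
fixed = gamma_replace(corrupt, clean)
```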

  3. Orbit determination of highly elliptical Earth orbiters using improved Doppler data-processing modes

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.

    1995-01-01

    A navigation error covariance analysis of four highly elliptical Earth orbits is described, with apogee heights ranging from 20,000 to 76,800 km and perigee heights ranging from 1,000 to 5,000 km. This analysis differs from earlier studies in that improved navigation data-processing modes were used to reduce the radio metric data. For this study, X-band (8.4-GHz) Doppler data were assumed to be acquired from two Deep Space Network radio antennas and reconstructed orbit errors propagated over a single day. Doppler measurements were formulated as total-count phase measurements and compared to the traditional formulation of differenced-count frequency measurements. In addition, an enhanced data-filtering strategy was used, which treated the principal ground system calibration errors affecting the data as filter parameters. Results suggest that a 40- to 60-percent accuracy improvement may be achievable over traditional data-processing modes in reconstructed orbit errors, with a substantial reduction in reconstructed velocity errors at perigee. Historically, this has been a regime in which stringent navigation requirements have been difficult to meet by conventional methods.

  4. Navigator alignment using radar scan

    DOEpatents

    Doerry, Armin W.; Marquette, Brandeis

    2016-04-05

    The various technologies presented herein relate to the determination and correction of the heading error of a platform. Knowledge of at least one of a maximum Doppler frequency or a minimum Doppler bandwidth pertaining to a plurality of radar echoes can be utilized to facilitate correction of the heading error. Heading error can occur as a result of component drift. In an ideal situation, the boresight direction of an antenna or the front of an aircraft will have associated therewith at least one of a maximum Doppler frequency or a minimum Doppler bandwidth. As the boresight direction of the antenna strays from the direction of travel, at least one of the maximum Doppler frequency or the minimum Doppler bandwidth will shift away, either left or right, from the ideal situation.

  5. Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values

    DTIC Science & Technology

    2016-12-01

    MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis functions: although the multi-aspect basis functions are not orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of those basis functions. MMSE estimation is applied to target imaging with synthetic aperture sonar.

  6. Optimization of the Hartmann-Shack microlens array

    NASA Astrophysics Data System (ADS)

    de Oliveira, Otávio Gomes; de Lima Monteiro, Davies William

    2011-04-01

    In this work we propose to optimize the microlens-array geometry of a Hartmann-Shack wavefront sensor. The optimization makes it possible to replace regular microlens arrays containing many microlenses with arrays of fewer microlenses located at optimal sampling positions, with no increase in the reconstruction error. The goal is to propose a straightforward and widely accessible numerical method to calculate an optimized microlens array for known aberration statistics. The optimization comprises the minimization of the wavefront reconstruction error and/or of the number of microlenses needed in the array. We numerically generate, sample, and reconstruct the wavefront, and use a genetic algorithm to discover the optimal array geometry. Within an ophthalmological context, as a case study, we demonstrate that an array with only 10 suitably located microlenses can produce reconstruction errors as small as those of a 36-microlens regular array. The same optimization procedure can be employed for any application where the wavefront statistics are known.

  7. Error analysis and system optimization of non-null aspheric testing system

    NASA Astrophysics Data System (ADS)

    Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo

    2010-10-01

    A non-null aspheric testing system, which employs a partial null lens (PNL) and the reverse iterative optimization reconstruction (ROR) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: error due to the surface parameters of the PNL in the system model, and the remainder, attributed to the non-null interferometer by the approach of error-storage subtraction. Experimental results show that, after the systematic error is removed from the test result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the non-null aspheric testing system.

  8. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Guoyan

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching techniques, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.

  9. Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel.

    PubMed

    Selvaprabhu, Poongundran; Chinnadurai, Sunil; Li, Jun; Lee, Moon Ho

    2017-08-17

    In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K -user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves a better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes.

  10. Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel

    PubMed Central

    Li, Jun; Lee, Moon Ho

    2017-01-01

    In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K-user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves a better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes. PMID:28817071

  11. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Gu, Xuejun

    2013-10-15

    Purpose: Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). Methods: The proposed SMEIR algorithm consists of two alternating steps: (1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and (2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and the other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and backprojection of SART, measured projections from the entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of the other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The quality of reconstructed 4D images and the accuracy of tumor motion trajectory are assessed by comparison with conventional sequential 4D-CBCT reconstructions (FDK and total variation minimization) and motion estimation (demons algorithm). The performance of the SMEIR algorithm is further evaluated by reconstructing a lung cancer patient 4D-CBCT. Results: Image quality of 4D-CBCT is greatly improved by the SMEIR algorithm in both phantom and patient studies. When all projections are used to reconstruct a 3D-CBCT by FDK, motion-blurring artifacts are present, leading to a 24.4% relative reconstruction error in the NCAT phantom. View aliasing artifacts are present in 4D-CBCT reconstructed by FDK from 20 projections, with a relative error of 32.1%. When total variation minimization is used to reconstruct 4D-CBCT, the relative error is 18.9%. Image quality of 4D-CBCT is substantially improved by using the SMEIR algorithm, and the relative error is reduced to 7.6%. The maximum error (MaxE) of tumor motion determined from the DVF obtained by demons registration on a FDK-reconstructed 4D-CBCT is 3.0, 2.3, and 7.1 mm along the left–right (L-R), anterior–posterior (A-P), and superior–inferior (S-I) directions, respectively. From the DVF obtained by demons registration on 4D-CBCT reconstructed by total variation minimization, the MaxE of tumor motion is reduced to 1.5, 0.5, and 5.5 mm along the L-R, A-P, and S-I directions. From the DVF estimated by the SMEIR algorithm, the MaxE of tumor motion is further reduced to 0.8, 0.4, and 1.5 mm along the L-R, A-P, and S-I directions, respectively. Conclusions: The proposed SMEIR algorithm is able to estimate a motion model and reconstruct motion-compensated 4D-CBCT. The SMEIR algorithm improves the image reconstruction accuracy of 4D-CBCT and the tumor motion trajectory estimation accuracy as compared to conventional sequential 4D-CBCT reconstruction and motion estimation.

  12. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT.

    PubMed

    Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W

    2016-05-21

    The objective of this study was to introduce a new iterative method to reconstruct multileaf collimator (MLC) leaf positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. The expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing the differences between the expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, the influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of reconstructing individual MLC leaf positions with millimeter accuracy, independent of the relative detector position, within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced compared to the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.
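
    The expected-vs-measured optimization can be illustrated with a 1-D toy: a leaf pair defines an open interval of fluence, a coarse detector model blurs and samples it at 7.62 mm spacing, and a derivative-free optimizer recovers the two leaf edges from the low-resolution response. The fluence grid, blur width, and optimizer choice are simplifying assumptions, not the paper's models.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize

grid = np.linspace(-100, 100, 2001)          # 0.1 mm fluence grid
det_pos = np.arange(-95, 96, 7.62)           # ionisation-chamber centres (mm)

def detector_response(left, right, sigma_mm=4.0):
    """Open leaf-pair fluence, blurred and sampled at detector positions."""
    fluence = ((grid > left) & (grid < right)).astype(float)
    blurred = gaussian_filter1d(fluence, sigma_mm / 0.1)
    return np.interp(det_pos, grid, blurred)

measured = detector_response(-18.0, 27.0)    # "truth" with displaced leaves
res = minimize(lambda p: np.sum((detector_response(*p) - measured) ** 2),
               x0=np.array([-10.0, 20.0]), method="Nelder-Mead")
print(res.x)   # recovered leaf positions ~(-18, 27) despite coarse sampling
```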

  13. High-resolution three-dimensional imaging with compressive sensing

    NASA Astrophysics Data System (ADS)

    Wang, Jingyi; Ke, Jun

    2016-10-01

    LIDAR three-dimensional imaging technology has been used in many fields, such as military detection. However, LIDAR requires extremely fast data acquisition, which makes detector arrays for LIDAR systems very difficult to manufacture. To solve this problem, we consider using compressive sensing, which can greatly decrease the amount of data acquired and relax the requirements on the detection device. To use the compressive sensing idea, a spatial light modulator (SLM) is used to modulate the pulsed light source, and a photodetector receives the reflected light. A convex optimization problem is solved to reconstruct the 2D depth map of the object. To improve the resolution in the transversal direction, we use multiframe image restoration technology. For each 2D piecewise-planar scene, we move the SLM by half a pixel each time, so that the position illuminated by the modulated light changes accordingly. We repeat this, moving the SLM in four different directions, and obtain four low-resolution depth maps with different details of the same planar scene. Using all of the measurements obtained by the subpixel movements, we can reconstruct a high-resolution depth map of the scene. A linear minimum-mean-square error algorithm is used for the reconstruction. By combining compressive sensing and multiframe image restoration technology, we reduce the data-analysis burden and improve the efficiency of detection. More importantly, we obtain high-resolution depth maps of a 3D scene.
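
    The linear MMSE step has a closed form worth writing out: with measurement model y = Ax + n, prior covariance Cx, and noise covariance Cn, the estimate is x_hat = Cx A^T (A Cx A^T + Cn)^(-1) y. The sketch below exercises it with binary SLM-style patterns; the sizes and covariances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas = 256, 64                    # depth-map pixels vs. CS samples
A = rng.choice([0.0, 1.0], size=(n_meas, n_pix))   # binary SLM patterns
Cx = np.eye(n_pix)                         # assumed prior covariance
Cn = 0.01 * np.eye(n_meas)                 # assumed noise covariance

x = rng.normal(size=n_pix)                 # true (toy) depth values
y = A @ x + rng.multivariate_normal(np.zeros(n_meas), Cn)

# LMMSE estimate: Cx A^T (A Cx A^T + Cn)^(-1) y
x_hat = Cx @ A.T @ np.linalg.solve(A @ Cx @ A.T + Cn, y)
```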

  14. Adaptive feedforward control of non-minimum phase structural systems

    NASA Astrophysics Data System (ADS)

    Vipperman, J. S.; Burdisso, R. A.

    1995-06-01

    Adaptive feedforward control algorithms have been effectively applied to stationary disturbance rejection. For structural systems, the ideal feedforward compensator is a recursive filter which is a function of the transfer functions between the disturbance and control inputs and the error sensor output. Unfortunately, most control configurations result in a non-minimum phase control path; even a collocated control actuator and error sensor will not necessarily produce a minimum phase control path in the discrete domain. Therefore, the common practice is to choose a suitable approximation of the ideal compensator. In particular, all-zero finite impulse response (FIR) filters are desirable because of their inherent stability for adaptive control approaches. However, for highly resonant systems, large order filters are required for broadband applications. In this work, a control configuration is investigated for controlling non-minimum phase lightly damped structural systems. The control approach uses low order FIR filters as feedforward compensators in a configuration that has one more control actuator than error sensors. The performance of the controller was experimentally evaluated on a simply supported plate under white noise excitation for a two-input, one-output (2I1O) system. The results show excellent error signal reduction, attesting to the effectiveness of the method.
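
    As context for the compensator structure, a plain LMS update for an adaptive FIR feedforward filter is sketched below: the weights are adapted from the correlation between the reference signal and the residual at the error sensor. The secondary-path (filtered-x) modeling needed for a real structural controller is deliberately omitted, so this is the bare skeleton rather than the paper's 2I1O controller.

```python
import numpy as np

def lms_feedforward(ref, disturbance, n_taps=32, mu=0.01):
    """Adapt an FIR compensator so its output cancels the disturbance."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    errors = np.empty(ref.size)
    for k in range(ref.size):
        buf = np.roll(buf, 1); buf[0] = ref[k]
        control = w @ buf                     # FIR compensator output
        errors[k] = disturbance[k] - control  # residual at the error sensor
        w += mu * errors[k] * buf             # LMS weight update
    return errors

rng = np.random.default_rng(0)
x = rng.normal(size=20000)                          # white-noise reference
d = np.convolve(x, [0.5, -0.3, 0.2])[:x.size]       # disturbance path (causal)
e = lms_feedforward(x, d)
print(np.var(e[:2000]), np.var(e[-2000:]))          # error power drops
```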

  15. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measure of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
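
    For concreteness, the NMSE figure of merit as we read it; normalizing by the original image's energy is an assumption (some authors normalize by variance instead):

```python
import numpy as np

def nmse(f, g):
    """NMSE = sum((f - g)^2) / sum(f^2), computed over the whole frame."""
    f = f.astype(float)
    g = g.astype(float)
    return np.sum((f - g) ** 2) / np.sum(f ** 2)
```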

  16. Adaptive color halftoning for minimum perceived error using the blue noise mask

    NASA Astrophysics Data System (ADS)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

    Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moire patterns are produced. However, the simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as for using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error; the 4-mask scheme results in minimum luminance error but maximum chrominance error; and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.

  17. Iterative simulated quenching for designing irregular-spot-array generators.

    PubMed

    Gillet, J N; Sheng, Y

    2000-07-10

    We propose a novel, to our knowledge, algorithm of iterative simulated quenching with temperature rescaling for designing diffractive optical elements, based on an analogy between simulated annealing and statistical thermodynamics. The temperature is iteratively rescaled at the end of each quenching process according to ensemble statistics, bringing the system back from a frozen imperfect state at a local minimum of energy to a dynamic state in a Boltzmann heat bath in thermal equilibrium at the rescaled temperature. The new algorithm achieves a much lower cost-function value and reconstruction error and higher diffraction efficiency than conventional simulated annealing with a fast exponential cooling schedule, and it is easy to program. The algorithm is used to design binary-phase generators of large irregular spot arrays. The diffractive phase elements have trapezoidal apertures of varying heights, which fit ideal arbitrarily shaped apertures better than trapezoidal apertures of fixed heights do.
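
    The outer rescaling loop can be sketched generically: each fast quench freezes the system, the temperature is then reset from the statistics of recently observed cost moves (our stand-in for the paper's ensemble-statistics rule), and a new quench restarts from the frozen state. The binary-phase cost function below is a toy, not a diffractive-element merit function.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.choice([0.0, 1.0], size=64)          # desired |spectrum| pattern
cost = lambda s: np.sum((np.abs(np.fft.fft(np.exp(1j * np.pi * s))) / 8.0
                         - target) ** 2)

def quench(state, t0, steps=3000, alpha=0.999):
    """One fast exponential-cooling quench; returns state, cost, mean |dC|."""
    t, c, deltas = t0, cost(state), []
    for _ in range(steps):
        cand = state.copy()
        cand[rng.integers(state.size)] ^= 1        # flip one binary phase
        dc = cost(cand) - c
        deltas.append(abs(dc))
        if dc < 0 or rng.random() < np.exp(-dc / max(t, 1e-12)):
            state, c = cand, c + dc                # Metropolis acceptance
        t *= alpha                                 # fast exponential cooling
    return state, c, np.mean(deltas)

state = rng.choice([0, 1], size=64).astype(np.uint8)
t = 1.0
for outer in range(8):
    state, c, mean_dc = quench(state, t)
    t = mean_dc        # rescale: reheat to the scale of typical cost moves
    print(outer, c)
```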

  18. Climate variability in Andalusia (southern Spain) during the period 1701-1850 AD from documentary sources: evaluation and comparison with climate model simulations

    NASA Astrophysics Data System (ADS)

    Rodrigo, F. S.; Gómez-Navarro, J. J.; Montávez Gómez, J. P.

    2011-07-01

    In this work, a reconstruction of climatic conditions in Andalusia (southern Iberian Peninsula) during the period 1701-1850, as well as an evaluation of its associated uncertainties, is presented. This period is interesting because it is characterized by a minimum in solar irradiance (the Dalton Minimum, around 1800) as well as intense volcanic activity (for instance, the eruption of Tambora in 1815), at a time when increasing atmospheric CO2 concentrations were of minor importance. The reconstruction is based on the analysis of a wide variety of documentary data. The reconstruction methodology rests on counting the number of extreme events in the past and inferring the mean value and standard deviation under the assumption that the seasonal means of climate variables are normally distributed. This methodology is tested within the pseudoreality of a high-resolution paleoclimate simulation performed with the regional climate model MM5 coupled to the global model ECHO-G. Results show that the reconstructions are influenced by the reference period chosen and by the threshold values used to define extreme values. This creates uncertainties, which are assessed within the context of the climate simulation. An ensemble of reconstructions was obtained using two different reference periods and two pairs of percentiles as threshold values. Results correspond to winter temperature and to winter, spring, and autumn rainfall, and they are compared with simulations of the climate model for the period considered. The comparison of the distribution functions corresponding to the 1790-1820 and 1960-1990 periods indicates that during the Dalton Minimum the frequency of dry and warm (wet and cold) winters was lower (higher) than during the reference period. In spring and autumn, an increase (decrease) in the frequency of wet (dry) seasons was detected. Future research challenges are outlined.
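
    The inference step is simple enough to state in code: within a window of N seasons, only the counts of seasons above an "extreme warm" threshold and below an "extreme cold" threshold are known; under the normality assumption, these two exceedance frequencies determine the mean and standard deviation. The threshold values and counts below are invented for illustration.

```python
from scipy.stats import norm

def mean_std_from_extremes(n_hi, n_lo, N, x_hi, x_lo):
    """Recover (mu, sigma) of a normal seasonal distribution from the
    observed exceedance counts of two fixed thresholds."""
    z_hi = norm.ppf(1.0 - n_hi / N)     # standardised upper threshold
    z_lo = norm.ppf(n_lo / N)           # standardised lower threshold
    sigma = (x_hi - x_lo) / (z_hi - z_lo)
    mu = x_hi - sigma * z_hi
    return mu, sigma

# e.g. 4 of 30 winters above 8 degC and 6 of 30 below 2 degC
print(mean_std_from_extremes(4, 6, 30, x_hi=8.0, x_lo=2.0))
```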

  19. Air data position-error calibration using state reconstruction techniques

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.; Larson, T. J.; Ehernberger, L. J.

    1984-01-01

    During the highly maneuverable aircraft technology (HiMAT) flight test program recently completed at NASA Ames Research Center's Dryden Flight Research Facility, numerous problems were experienced in airspeed calibration. This necessitated the use of state reconstruction techniques to arrive at a position-error calibration. For the HiMAT aircraft, most of the calibration effort was expended on flights in which the air data pressure transducers were not performing accurately. Following discovery of this problem, the air data transducers of both aircraft were wrapped in heater blankets to correct it. Additional calibration flights were performed, and from the resulting data a satisfactory position-error calibration was obtained. This calibration, together with data obtained before installation of the heater blankets, was used to develop an alternate calibration method. The alternate approach took advantage of high-quality inertial data that was readily available. A linearized Kalman filter (LKF) was used to reconstruct the aircraft's wind-relative trajectory; the trajectory was then used to separate transducer measurement errors from the aircraft position error. This calibration method is accurate and inexpensive. The LKF technique has an inherent advantage of requiring that no flight maneuvers be specially designed for airspeed calibrations. It is of particular use when the measurements of the wind-relative quantities are suspected to have transducer-related errors.

  20. Accuracy assessment of 3D bone reconstructions using CT: an in vitro comparison.

    PubMed

    Lalone, Emily A; Willing, Ryan T; Shannon, Hannah L; King, Graham J W; Johnson, James A

    2015-08-01

    Computed tomography provides high contrast imaging of the joint anatomy and is used routinely to reconstruct 3D models of the osseous and cartilage geometry (CT arthrography) for use in the design of orthopedic implants, for computer assisted surgeries and computational dynamic and structural analysis. The objective of this study was to assess the accuracy of bone and cartilage surface model reconstructions by comparing reconstructed geometries with bone digitizations obtained using an optical tracking system. Bone surface digitizations obtained in this study determined the ground truth measure for the underlying geometry. We evaluated the use of a commercially available reconstruction technique using clinical CT scanning protocols using the elbow joint as an example of a surface with complex geometry. To assess the accuracies of the reconstructed models (8 fresh frozen cadaveric specimens) against the ground truth bony digitization-as defined by this study-proximity mapping was used to calculate residual error. The overall mean error was less than 0.4 mm in the cortical region and 0.3 mm in the subchondral region of the bone. Similarly creating 3D cartilage surface models from CT scans using air contrast had a mean error of less than 0.3 mm. Results from this study indicate that clinical CT scanning protocols and commonly used and commercially available reconstruction algorithms can create models which accurately represent the true geometry.

  1. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    NASA Astrophysics Data System (ADS)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). Graph correction is more complex than string correction, as graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that a dictionary with only fifteen entries already properly corrects one quarter of the erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving up to a 95% acceptance rate for the reconstructed models.

  2. From bed topography to ice thickness: GlaRe, a GIS tool to reconstruct the surface of palaeoglaciers

    NASA Astrophysics Data System (ADS)

    Pellitero, Ramon; Rea, Brice; Spagnolo, Matteo; Bakke, Jostein; Ivy-Ochs, Susan; Frew, Craig; Hughes, Philip; Ribolini, Adriano; Renssen, Hans; Lukas, Sven

    2016-04-01

    We present GlaRe, a GIS tool that automatically reconstructs the 3D geometry of palaeoglaciers given the bed topography. The tool utilises a numerical approach and can work with a minimum of morphological evidence, i.e. the position of the palaeoglacier front. The numerical approach is based on an iterative solution to the perfect-plasticity assumption for ice rheology, explained in Benn and Hulton (2010). The tool runs in ArcGIS 10.1 (ArcInfo license) and later updates, and the toolset is written in Python. The GlaRe toolbox implements a well-established approach for the determination of palaeoglacier equilibrium profiles. Significantly, it permits users to quickly run multiple glacier reconstructions, a task that was previously very laborious and time-consuming (typically days for a single valley glacier). The implementation of GlaRe will facilitate the reconstruction of large numbers of palaeoglaciers, providing opportunities to address at least two fundamental problems: 1. Investigation of the dynamics of palaeoglaciers. Glacier reconstructions are often based on a rigorous interpretation of glacial landforms, but sufficient attention and/or time has not always been given to the actual reconstruction of the glacier surface, which is crucial for the calculation of palaeoglacier ELAs and the subsequent derivation of quantitative palaeoclimatic data. 2. The ability to run large numbers of reconstructions over much larger spatial areas provides an opportunity to undertake palaeoglacier reconstructions across entire mountain ranges, regions or even continents, allowing climatic gradients and atmospheric circulation patterns to be elucidated. The tool performance has been evaluated by comparing two extant glaciers (an icefield and a cirque/valley glacier) whose subglacial topography is known with basic reconstructions using GlaRe. Comparisons between extant and modelled glacier surfaces show very similar ELA values, with errors on the order of 10-20 m (equivalent to a 0.065-0.13 K variation for a typical -6.5 K/km altitudinal gradient), and these can be improved further by increasing the number of flowlines and using F factors where needed. GlaRe is able to quickly generate robust palaeoglacier surfaces from the very limited inputs often available from the geomorphological record.
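
    The core numerical step is easy to sketch. The following is a minimal stepwise perfect-plasticity profile along a single flowline in the spirit of Benn and Hulton (2010); the explicit Euler update, parameter values and function name are illustrative assumptions, not GlaRe's actual code (which adds multiple flowlines, F factors and GIS integration).

```python
import numpy as np

RHO, G = 900.0, 9.81          # ice density (kg/m^3), gravity (m/s^2)

def plastic_profile(bed, dx, tau_y=1.0e5, seed=1.0):
    """Ice surface along a flowline, starting at the palaeoglacier front.

    bed   : bed elevations, front first (m)
    dx    : node spacing (m)
    tau_y : basal yield stress (Pa); ~100 kPa is a common default
    seed  : small starting thickness at the front (m)
    """
    surf = np.empty_like(bed, dtype=float)
    surf[0] = bed[0] + seed
    for i in range(len(bed) - 1):
        H = max(surf[i] - bed[i], seed)
        # perfect plasticity: ds/dx = tau_y / (rho * g * H)
        surf[i + 1] = surf[i] + dx * tau_y / (RHO * G * H)
        surf[i + 1] = max(surf[i + 1], bed[i + 1] + seed)  # keep ice above bed
    return surf
```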

  3. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In classic methods, the camera parameters are usually calculated and optimized by minimizing the reprojection error. However, for a system designed for 3D optical measurement, this error does not directly reflect the quality of the 3D reconstruction. The presented method uses a planar calibration plate. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method. Finally, the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction, so it is better suited for assessing dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the calibration plate position during the calibration process, and the accuracy is improved significantly.
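
    For illustration, the space-intersection and error steps can be written compactly. This is a minimal sketch assuming known 3x4 projection matrices P1 and P2 for the two calibrated cameras, with linear (DLT) triangulation standing in for whatever intersection variant the authors used.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one point seen at pixel x1 (camera 1)
    and x2 (camera 2); x1, x2 are length-2 arrays."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                      # inhomogeneous 3D point

def reconstruction_rmse(P1, P2, pts1, pts2, plate_xyz):
    """RMS distance between triangulated calibration points and their
    known plate coordinates: the quantity minimized in the final step."""
    est = np.array([triangulate(P1, P2, a, b) for a, b in zip(pts1, pts2)])
    return np.sqrt(np.mean(np.sum((est - plate_xyz) ** 2, axis=1)))
```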

  4. Automatic alignment for three-dimensional tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.

    2018-02-01

    In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately, due to, e.g., calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
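
    In generic terms, and as an assumption consistent with the abstract rather than the authors' exact objective, the joint problem can be written as

```latex
\min_{x \ge 0,\; \theta} \;\; \tfrac{1}{2} \left\| W(\theta)\, x - p \right\|_2^2 ,
```

    where p collects the measured line integrals, x is the discretized object, theta holds the unknown angles and shifts, and W(theta) is the projection matrix for geometry theta. Alternating an algebraic reconstruction update of x with a nonlinear least-squares update of theta gives the kind of iteration such a framework describes.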

  5. Quantitative neuroanatomy for connectomics in Drosophila

    PubMed Central

    Schneider-Mizell, Casey M; Gerhard, Stephan; Longair, Mark; Kazimiers, Tom; Li, Feng; Zwart, Maarten F; Champion, Andrew; Midgley, Frank M; Fetter, Richard D; Saalfeld, Stephan; Cardona, Albert

    2016-01-01

    Neuronal circuit mapping using electron microscopy demands laborious proofreading or reconciliation of multiple independent reconstructions. Here, we describe new methods to apply quantitative arbor and network context to iteratively proofread and reconstruct circuits and create anatomically enriched wiring diagrams. We measured the morphological underpinnings of connectivity in new and existing reconstructions of Drosophila sensorimotor (larva) and visual (adult) systems. Synaptic inputs were preferentially located on numerous small, microtubule-free 'twigs' which branch off a single microtubule-containing 'backbone'. Omission of individual twigs accounted for 96% of errors. However, the synapses of highly connected neurons were distributed across multiple twigs. Thus, the robustness of a strong connection to detailed twig anatomy was associated with robustness to reconstruction error. By comparing iterative reconstruction to the consensus of multiple reconstructions, we show that our method overcomes the need for redundant effort through the discovery and application of relationships between cellular neuroanatomy and synaptic connectivity. DOI: http://dx.doi.org/10.7554/eLife.12059.001 PMID:26990779

  6. Improving reflectance reconstruction from tristimulus values by adaptively combining colorimetric and reflectance similarities

    NASA Astrophysics Data System (ADS)

    Cao, Bin; Liao, Ningfang; Li, Yasheng; Cheng, Haobo

    2017-05-01

    The use of spectral reflectance as fundamental color information finds application in diverse fields related to imaging. Many reflectance reconstruction approaches rely on training sets, and modifying the training set directly impacts the accuracy of classical reflectance reconstruction methods. Different modification criteria are not always consistent with each other, since they have different emphases: spectral reflectance similarity focuses on the deviation of the reconstructed reflectance, whereas colorimetric similarity emphasizes human perception. We present a method to improve the accuracy of the reconstructed spectral reflectance by adaptively combining colorimetric and spectral reflectance similarities. Different exponential factors for the weighting coefficients were investigated. The spectral reflectance reconstructed by the proposed method exhibits considerable improvements in terms of the root-mean-square error and goodness-of-fit coefficient of the spectral reflectance errors, as well as color differences under different illuminants. Our method is applicable to diverse areas such as textiles, printing, art, and other industries.
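
    One simple way to realize such an adaptive combination, sketched here as an assumption based on the abstract rather than the authors' exact formulation (the weighting form and the exponents a and b are illustrative), is to reconstruct the reflectance as a weighted mixture of training samples:

```python
import numpy as np

def reconstruct_reflectance(t_query, T_train, R_train, a=1.0, b=1.0, k=20):
    """
    t_query : (3,) tristimulus values of the target
    T_train : (n, 3) tristimulus values of the training set
    R_train : (n, m) training reflectances sampled at m wavelengths
    """
    d_col = np.linalg.norm(T_train - t_query, axis=1)   # colorimetric distance
    idx = np.argsort(d_col)[:k]                         # k nearest in color
    r0 = R_train[idx].mean(axis=0)                      # provisional estimate
    d_ref = np.linalg.norm(R_train[idx] - r0, axis=1)   # reflectance distance
    w = 1.0 / ((d_col[idx] + 1e-9) ** a * (d_ref + 1e-9) ** b)
    return (w[:, None] * R_train[idx]).sum(axis=0) / w.sum()
```

    Raising a emphasizes colorimetric similarity, while raising b emphasizes reflectance similarity, which mirrors the trade-off the paper tunes.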

  7. Hyper-X Post-Flight Trajectory Reconstruction

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Tartabini, Paul V.; Blanchard, Robert C.; Kirsch, Michael; Toniolo, Matthew D.

    2004-01-01

    This paper discusses the formulation and development of a trajectory reconstruction tool for the NASA X-43A/Hyper-X high speed research vehicle, and its implementation for the reconstruction and analysis of flight test data. Extended Kalman filtering techniques are employed to reconstruct the trajectory of the vehicle, based upon numerical integration of inertial measurement data along with redundant measurements of the vehicle state. The equations of motion are formulated in order to include the effects of several systematic error sources, whose values may also be estimated by the filtering routines. Additionally, smoothing algorithms have been implemented in which the final value of the state (or an augmented state that includes other systematic error parameters to be estimated) and covariance are propagated back to the initial time to generate the best-estimated trajectory, based upon all available data. The methods are applied to the problem of reconstructing the trajectory of the Hyper-X vehicle from flight data.

  8. Planning of skull reconstruction based on a statistical shape model combined with geometric morphometrics.

    PubMed

    Fuessinger, Marc Anton; Schwarz, Steffen; Cornelius, Carl-Peter; Metzger, Marc Christian; Ellis, Edward; Probst, Florian; Semper-Hogg, Wiebke; Gass, Mathieu; Schlager, Stefan

    2018-04-01

    Virtual reconstruction of large cranial defects is still a challenging task. The current reconstruction procedures depend on the surgeon's experience and skills in planning the reconstruction based on mirroring and manual adaptation. The aim of this study is to propose and evaluate a computer-based approach employing a statistical shape model (SSM) of the cranial vault. An SSM was created based on 131 CT scans of pathologically unaffected adult crania. After segmentation, the resulting surface mesh of one patient was established as template and subsequently registered to the entire sample. Using the registered surface meshes, an SSM was generated capturing the shape variability of the cranial vault. The knowledge about this shape variation in healthy patients was used to estimate the missing parts. The accuracy of the reconstruction was evaluated by using 31 CT scans not included in the SSM. Both unilateral and bilateral bony defects were created on each skull. The reconstruction was performed using the current gold standard of mirroring the intact to the affected side, and the result was compared to the outcome of our proposed SSM-driven method. The accuracy of the reconstruction was determined by calculating the distances to the corresponding parts on the intact skull. While unilateral defects could be reconstructed with both methods, the reconstruction of bilateral defects was, for obvious reasons, only possible employing the SSM-based method. Comparing all groups, the analysis shows a significantly higher precision of the SSM group, with a mean error of 0.47 mm compared to the mirroring group which exhibited a mean error of 1.13 mm. Reconstructions of bilateral defects yielded only slightly higher estimation errors than those of unilateral defects. The presented computer-based approach using SSM is a precise and simple tool in the field of computer-assisted surgery. It helps to reconstruct large-size defects of the skull considering the natural asymmetry of the cranium and is not limited to unilateral defects.
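
    The estimation of the missing part from an SSM can be sketched in a few lines. This is a minimal illustration of the general PCA-shape-model idea, not the authors' pipeline; array shapes, the ridge term, and the function name are assumptions.

```python
import numpy as np

def estimate_missing(mean, modes, obs_idx, obs_xyz, ridge=1e-2):
    """
    mean    : (3n,) mean shape, xyz-flattened over n corresponded vertices
    modes   : (3n, k) principal modes (scaled by their standard deviations)
    obs_idx : indices of the coordinates observed on the intact skull
    obs_xyz : observed coordinate values (same length as obs_idx)
    """
    A = modes[obs_idx]                 # rows of the basis that are visible
    d = obs_xyz - mean[obs_idx]
    # ridge-regularized least squares keeps coefficients near the mean shape
    c = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ d)
    return mean + modes @ c            # full shape with the defect filled in
```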

  9. Ultrasonic tracking of shear waves using a particle filter

    PubMed Central

    Ingle, Atul N.; Ma, Chi; Varghese, Tomy

    2015-01-01

    Purpose: This paper discusses an application of particle filtering for estimating shear wave velocity (SWV) in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer areas, which is an indicator of potential pathology. Methods: Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Results: Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard method based on a least-squares fit. Estimates of area obtained from the particle-filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. Conclusions: The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom, with image quality comparable to existing techniques. PMID:26520761
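
    The smoothing idea can be illustrated with a bootstrap particle filter on a random-walk model of the noiseless time-to-peak sequence, taking the posterior mean as the minimum-MSE estimate. The model, noise levels and function name below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def particle_filter_ttp(ttp_noisy, n=2000, q=0.05, r=0.5, rng=None):
    """ttp_noisy: time-to-peak measurements at successive lateral positions.
    q, r: process (random-walk) and measurement noise standard deviations."""
    if rng is None:
        rng = np.random.default_rng(0)
    particles = ttp_noisy[0] + rng.normal(0.0, r, n)
    est = []
    for z in ttp_noisy:
        particles = particles + rng.normal(0.0, q, n)     # propagate
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)     # Gaussian likelihood
        w /= w.sum()
        est.append(np.dot(w, particles))                  # MMSE estimate
        # systematic resampling to avoid weight degeneracy
        pos = (rng.random() + np.arange(n)) / n
        idx = np.minimum(np.searchsorted(np.cumsum(w), pos), n - 1)
        particles = particles[idx]
    return np.array(est)
```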

  10. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray computed tomography (CT), ultrasonic, and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum squared L2 norm without any physical justification. When it is known a priori that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the squared L2 norm yields an image that is equally in agreement with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase the speed and efficiency of the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback to this type of inversion is its computational intensiveness. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of the theoretical formulation of the scattering process for better computational efficiency, and (3) development of better methods for guiding the nonlinear inversion. (Abstract shortened by UMI.)

  11. Camera calibration correction in shape from inconsistent silhouette

    USDA-ARS?s Scientific Manuscript database

    The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how sm...

  12. The Fate of Meniscus Tears Left in situ at the time of Anterior Cruciate Ligament Reconstruction: A 6-year Follow-up Study from the MOON Cohort

    PubMed Central

    Duchman, Kyle R.; Westermann, Robert W.; Spindler, Kurt P.; Reinke, Emily K.; Huston, Laura J.; Amendola, Annunziato; Wolf, Brian R.

    2016-01-01

    Background: The management of meniscus tears identified at the time of primary ACL reconstruction is highly variable and includes repair, meniscectomy, and non-treatment. Hypothesis/Purpose: The purpose of this study is to determine the reoperation rate for meniscus tears left untreated at the time of ACL reconstruction with minimum follow-up of 6 years. We hypothesize that small, peripheral tears identified at the time of ACL reconstruction managed with "no treatment" will have successful clinical outcomes. Study Design: Retrospective study of a prospective cohort; Level of Evidence, 3. Methods: Patients with meniscus tears left untreated at the time of primary ACL reconstruction were identified from a multicenter study group with minimum 6-year follow-up. Patient, tear, and reoperation data were obtained for analysis. Need for reoperation was used as the primary endpoint, with analysis performed to determine patient and tear characteristics associated with reoperation. Results: There were 194 patients with 208 meniscus tears (71 medial; 137 lateral) left in situ without treatment with complete follow-up for analysis. Of these, 97.8% of lateral and 94.4% of medial untreated tears required no reoperation. Sixteen tears (7.7%) left in situ without treatment underwent subsequent reoperation: 9 tears (4.3%) underwent reoperation in the setting of revision ACL reconstruction and 7 tears (3.4%) underwent reoperation for isolated meniscus pathology. Patient age was significantly lower in patients requiring reoperation, while tears measuring ≥ 10 mm more frequently required reoperation. Conclusions: Lateral and medial meniscus tears left in situ at the time of ACL reconstruction did not require reoperation at minimum 6-year follow-up for 97.8% and 94.4% of tears, respectively. These findings reemphasize the low reoperation rate following non-treatment of small, peripheral lateral meniscus tears while noting less predictable results for medial meniscus tears left without treatment. PMID:26430058

  13. The Fate of Meniscus Tears Left In Situ at the Time of Anterior Cruciate Ligament Reconstruction: A 6-Year Follow-up Study From the MOON Cohort.

    PubMed

    Duchman, Kyle R; Westermann, Robert W; Spindler, Kurt P; Reinke, Emily K; Huston, Laura J; Amendola, Annunziato; Wolf, Brian R

    2015-11-01

    The management of meniscus tears identified at the time of primary anterior cruciate ligament (ACL) reconstruction is highly variable and includes repair, meniscectomy, and nontreatment. The purpose of this study was to determine the reoperation rate for meniscus tears left untreated at the time of ACL reconstruction with a minimum follow-up of 6 years. The hypothesis was that small peripheral tears identified at the time of ACL reconstruction managed with "no treatment" would have successful clinical outcomes. Cohort study; Level of evidence, 3. Patients with meniscus tears left untreated at the time of primary ACL reconstruction were identified from a multicenter study group with a minimum 6-year follow-up. Patient, tear, and reoperation data were obtained for analysis. The need for reoperation was used as the primary endpoint, with analysis performed to determine patient and tear characteristics associated with reoperation. There were 194 patients with 208 meniscus tears (71 medial, 137 lateral) left in situ without treatment with a complete follow-up for analysis. Of these, 97.8% of lateral and 94.4% of medial untreated tears required no reoperation. Sixteen tears (7.7%) left in situ without treatment underwent subsequent reoperation: 9 tears (4.3%) underwent reoperation in the setting of revision ACL reconstruction, and 7 tears (3.4%) underwent reoperation for an isolated meniscus injury. The patient age was significantly lower in patients requiring reoperation, while tears measuring ≥10 mm more frequently required reoperation. Lateral and medial meniscus tears left in situ at the time of ACL reconstruction did not require reoperation at a minimum 6-year follow-up for 97.8% and 94.4% of tears, respectively. These findings re-emphasize the low reoperation rate after the nontreatment of small, peripheral lateral meniscus tears while noting less predictable results for medial meniscus tears left without treatment. © 2015 The Author(s).

  14. Fast GPU-based computation of spatial multigrid multiframe LMEM for PET.

    PubMed

    Nassiri, Moulay Ali; Carrier, Jean-François; Després, Philippe

    2015-09-01

    Significant efforts were invested during the last decade to accelerate PET list-mode reconstructions, notably with GPU devices. However, the computation time per event is still relatively long, and the list-mode efficiency on the GPU is well below the histogram-mode efficiency. Since list-mode data are not arranged in any regular pattern, costly accesses to the GPU global memory can hardly be optimized and geometrical symmetries cannot be used. To overcome obstacles that limit the acceleration of reconstruction from list-mode data on the GPU, a multigrid and multiframe approach of an expectation-maximization algorithm was developed. The reconstruction process is started during data acquisition, and calculations are executed concurrently on the GPU and the CPU, while the system matrix is computed on-the-fly. A new convergence criterion also was introduced, which is computationally more efficient on the GPU. The implementation was tested on a Tesla C2050 GPU device for a Gemini GXL PET system geometry. The results show that the proposed algorithm (multigrid and multiframe list-mode expectation-maximization, MGMF-LMEM) converges to the same solution as the LMEM algorithm more than three times faster. The execution time of the MGMF-LMEM algorithm was 1.1 s per million events on the Tesla C2050 hardware used, for a reconstructed space of 188 x 188 x 57 voxels of 2 x 2 x 3.15 mm³. For 17- and 22-mm simulated hot lesions, the MGMF-LMEM algorithm led on the first iteration to contrast recovery coefficients (CRC) of more than 75% of the maximum CRC while achieving a minimum in the relative mean squared error. Therefore, the MGMF-LMEM algorithm can be used as a one-pass method to perform real-time reconstructions for low-count acquisitions, as in list-mode gated studies. The computation time for one iteration and 60 million events was approximately 66 s.
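
    For reference, the list-mode EM update that MGMF-LMEM accelerates can be written in its standard form (the multigrid and multiframe modifications, which are the paper's contribution, are not shown):

```latex
\lambda_j^{(n+1)} \;=\; \frac{\lambda_j^{(n)}}{s_j} \sum_{i \in \mathrm{events}} \frac{a_{ij}}{\sum_k a_{ik}\, \lambda_k^{(n)}}, \qquad s_j = \sum_i a_{ij},
```

    where lambda_j is the activity in voxel j, a_ij is the system-matrix element linking voxel j to the line of response of event i, and s_j is the voxel sensitivity.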

  15. Segmented Separable Footprint Projector for Digital Breast Tomosynthesis and Its application for Subpixel Reconstruction

    PubMed Central

    Zheng, Jiabei; Fessler, Jeffrey A; Chan, Heang-Ping

    2017-01-01

    Purpose: Digital forward and back projectors play a significant role in iterative image reconstruction. The accuracy of the projector affects the quality of the reconstructed images. Digital breast tomosynthesis (DBT) often uses the ray-tracing (RT) projector, which ignores the finite detector element size. This paper proposes a modified version of the separable footprint (SF) projector, called the segmented separable footprint (SG) projector, that efficiently calculates the mean value of the Radon transform over each detector element. The SG projector is specifically designed for DBT reconstruction because of the large height-to-width ratio of the voxels generally used in DBT. This study evaluates the effectiveness of the SG projector in reducing projection error and improving DBT reconstruction quality. Methods: We quantitatively compared the projection error of the RT and the SG projector at different locations and their performance in regular and subpixel DBT reconstruction. Subpixel reconstructions used finer voxels in the imaged volume than the detector pixel size. Subpixel reconstruction with the RT projector uses interpolated projection views as input to provide adequate coverage of the finer voxel grid with the traced rays. Subpixel reconstruction with the SG projector, however, uses the measured projection views without interpolation. We simulated DBT projections of a test phantom using CatSim (GE Global Research, Niskayuna, NY) under idealized imaging conditions without noise and blur, to analyze the effects of the projectors and subpixel reconstruction without other image degrading factors. The phantom contained an array of horizontal and vertical line pair patterns (1 to 9.5 line pairs/mm) and pairs of closely spaced spheres (diameters 0.053 to 0.5 mm) embedded at the mid-plane of a 5-cm-thick breast-tissue-equivalent uniform volume. The images were reconstructed with regular simultaneous algebraic reconstruction technique (SART) and subpixel SART using different projectors. The resolution and contrast of the test objects in the reconstructed images and the computation times were compared under different reconstruction conditions. Results: The SG projector reduced the projection error by 1 to 2 orders of magnitude at most locations. In the worst case, the SG projector still reduced the projection error by about 50%. In the DBT reconstructed slices parallel to the detector plane, the SG projector not only increased the contrast of the line pairs and spheres, but also produced smoother and more continuous reconstructed images, whereas the discrete and sparse nature of the RT projector caused artifacts appearing as patterned noise. For subpixel reconstruction, the SG projector significantly increased object contrast and computation speed, especially for high subpixel ratios, compared with the RT projector implemented with an accelerated Siddon's algorithm. The difference in depth resolution among the projectors is negligible under the conditions studied. Our results also demonstrated that subpixel reconstruction can improve the spatial resolution of the reconstructed images, and can exceed the Nyquist limit of the detector under some conditions. Conclusions: The SG projector was more accurate and faster than the RT projector. The SG projector also substantially reduced computation time and improved the image quality of the tomosynthesized images with and without subpixel reconstruction. PMID:28058719

  16. SU-D-12A-07: Optimization of a Moving Blocker System for Cone-Beam Computed Tomography Scatter Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouyang, L; Yan, H; Jia, X

    2014-06-01

    Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaged object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition was quantified by the CT number error relative to a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy is achieved when the blocker lead strip width is 8 mm and the gap is 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
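
    The interpolation step at the heart of the method is straightforward to sketch. Below, scatter sampled under the lead strips is cubic-spline interpolated across the unblocked gaps and scored by relative RMSE against the reference; the geometry values and the smooth stand-in scatter field are illustrative, not the authors' Monte Carlo setup.

```python
import numpy as np
from scipy.interpolate import CubicSpline

det = np.arange(0.0, 400.0, 1.0)               # detector axis, mm
scatter_true = 100 + 30 * np.sin(det / 60.0)   # stand-in smooth scatter field

strip, gap = 8.0, 48.0                         # widths at the detector, mm
blocked = (det % (strip + gap)) < strip        # samples under the lead strips

spline = CubicSpline(det[blocked], scatter_true[blocked])
scatter_est = spline(det[~blocked])            # estimate in unblocked region

rel_rmse = (np.sqrt(np.mean((scatter_est - scatter_true[~blocked]) ** 2))
            / np.mean(scatter_true[~blocked]))
print(f"relative RMSE: {rel_rmse:.2%}")
```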

  17. Nuclear norm-based 2-DPCA for extracting features from images.

    PubMed

    Zhang, Fanlong; Yang, Jian; Qian, Jianjun; Xu, Yong

    2015-10-01

    The 2-D principal component analysis (2-DPCA) is a widely used method for image feature extraction. However, it can be equivalently implemented via image-row-based principal component analysis. This paper presents a structured 2-D method called nuclear norm-based 2-DPCA (N-2-DPCA), which uses a nuclear norm-based reconstruction error criterion. The nuclear norm is a matrix norm, which can provide a structured 2-D characterization for the reconstruction error image. The reconstruction error criterion is minimized by converting the nuclear norm-based optimization problem into a series of F-norm-based optimization problems. In addition, N-2-DPCA is extended to a bilateral projection-based N-2-DPCA (N-B2-DPCA). The virtue of N-B2-DPCA over N-2-DPCA is that an image can be represented with fewer coefficients. N-2-DPCA and N-B2-DPCA are applied to face recognition and reconstruction and evaluated using the Extended Yale B, CMU PIE, FRGC, and AR databases. Experimental results demonstrate the effectiveness of the proposed methods.

  18. An oscillation-free flow solver based on flux reconstruction

    NASA Astrophysics Data System (ADS)

    Aguerre, Horacio J.; Pairetti, Cesar I.; Venier, Cesar M.; Márquez Damián, Santiago; Nigro, Norberto M.

    2018-07-01

    In this paper, a segregated algorithm is proposed to suppress high-frequency oscillations in the velocity field for incompressible flows. In this context, a new velocity formula based on a reconstruction of face fluxes is defined eliminating high-frequency errors. In analogy to the Rhie-Chow interpolation, this approach is equivalent to including a flux-based pressure gradient with a velocity diffusion in the momentum equation. In order to guarantee second-order accuracy of the numerical solver, a set of conditions are defined for the reconstruction operator. To arrive at the final formulation, an outlook over the state of the art regarding velocity reconstruction procedures is presented comparing them through an error analysis. A new operator is then obtained by means of a flux difference minimization satisfying the required spatial accuracy. The accuracy of the new algorithm is analyzed by performing mesh convergence studies for unsteady Navier-Stokes problems with analytical solutions. The stabilization properties of the solver are then tested in a problem where spurious numerical oscillations arise for the velocity field. The results show a remarkable performance of the proposed technique eliminating high-frequency errors without losing accuracy.

  19. 360° Fourier transform profilometry in surface reconstruction for fluorescence molecular tomography.

    PubMed

    Shi, Bi'er; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-05-01

    Fluorescence molecular tomography (FMT) is an emerging tool in the observation of diseases. A fast and accurate surface reconstruction of the experimental object is needed as a boundary constraint for FMT reconstruction. In this paper, an automatic, noncontact, 3-D surface reconstruction method named 360° Fourier transform profilometry (FTP) is proposed to reconstruct 3-D surface profiles for an FMT system. This method reconstructs 360° integrated surface profiles utilizing single-frame FTP at different angles. Results show that the relative mean error of the surface reconstruction is less than 1.4% in phantom experiments, and no more than 2.9% in mouse experiments in vivo. Compared with the Radon transform method, the proposed method reduces the computation time by more than 90% with a minimal increase in error. Finally, a combined 360° FTP/FMT experiment is conducted on a nude mouse. Not only can the 360° FTP system operate simultaneously with the FMT system, but it can also help to monitor the status of animals. Moreover, the 360° FTP system is independent of the FMT system and can be used to reconstruct the surface by itself.
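
    The single-frame FTP step for one image row can be sketched as follows; the carrier frequency and filter half-width are illustrative assumptions, and the phase-to-height calibration that a real system applies afterwards is omitted.

```python
import numpy as np

def ftp_phase(row, carrier, halfwidth):
    """row: 1D fringe intensity profile; carrier: fringe frequency in
    cycles/sample; halfwidth: band-pass half-width in FFT bins (the carrier
    bin is assumed to sit further than `halfwidth` from DC)."""
    spec = np.fft.fft(row - row.mean())
    k0 = int(round(carrier * len(row)))        # bin of the fundamental lobe
    mask = np.zeros(len(row))
    mask[k0 - halfwidth:k0 + halfwidth + 1] = 1.0
    analytic = np.fft.ifft(spec * mask)        # keep only the +f0 lobe
    return np.unwrap(np.angle(analytic))       # phase, proportional to height

# Example: a 512-sample row with 32 fringes and a mild height modulation.
x = np.arange(512)
row = 128 + 100 * np.cos(2 * np.pi * 32 * x / 512 + 0.5 * np.sin(x / 80))
phase = ftp_phase(row, carrier=32 / 512, halfwidth=16)
```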

  20. Evaluation of reconstruction errors and identification of artefacts for JET gamma and neutron tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk; Tiseanu, Ion; Zoita, Vasile

    The Joint European Torus (JET) neutron profile monitor ensures 2D coverage of the gamma and neutron emissive region, enabling tomographic reconstruction. Due to the availability of only two projection angles and to the coarse sampling, tomographic inversion is a limited data set problem. Several techniques have been developed for tomographic reconstruction of the 2-D gamma and neutron emissivity on JET, but the problem of evaluating the errors associated with the reconstructed emissivity profile is still open. The reconstruction technique based on the maximum likelihood principle, which has already proved to be a powerful tool for JET tomography, has been used to develop a method for the numerical evaluation of the statistical properties of the uncertainties in gamma and neutron emissivity reconstructions. The image covariance calculation takes into account the additional techniques introduced in the reconstruction process for tackling the limited data set (projection resampling, smoothness regularization depending on magnetic field). The method has been validated by numerical simulations and applied to JET data. Different sources of artefacts that may significantly influence the quality of reconstructions and the accuracy of the variance calculation have been identified.

  1. Application of genetic algorithm in the evaluation of the profile error of archimedes helicoid surface

    NASA Astrophysics Data System (ADS)

    Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao

    2011-05-01

    According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a Genetic Algorithm (GA) is proposed. The mathematical model of the surface is provided, and the unknown parameters in the surface equation are obtained through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through the GA optimization method. To validate the proposed method, the profile error of an Archimedes cylindrical worm (ZA worm) surface, an instance of the Archimedes helicoid, is evaluated. The results show that the proposed method correctly evaluates the profile error of Archimedes helicoid surfaces and satisfies the evaluation standard of the Minimum Zone Method. It can be applied to the measured profile error data of complex surfaces obtained by coordinate measuring machines (CMMs).
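
    The minimum-zone objective is simple to express, and an evolutionary optimizer handles its non-smoothness well. The sketch below uses scipy's differential evolution as a stand-in for the paper's GA; the residual callback is a placeholder that should return the normal deviation of each measured point from an Archimedes helicoid with the candidate parameters.

```python
import numpy as np
from scipy.optimize import differential_evolution

def profile_error(points, residuals, bounds):
    """points: (n, 3) CMM data; residuals(params, points) -> (n,) deviations;
    bounds: search box for the surface parameters."""
    def zone_width(params):
        d = residuals(params, points)
        return d.max() - d.min()       # minimum-zone objective
    result = differential_evolution(zone_width, bounds, seed=1)
    return result.fun, result.x        # profile error and best parameters
```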

  2. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, from which the global minimum CV error of CS-SVM can be found. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that the proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also that CV-SES uses less running time.

  3. Enhanced nearfield acoustic holography for larger distances of reconstructions using fixed parameter Tikhonov regularization

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.

    2016-07-07

    This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
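
    The fixed-parameter Tikhonov step common to such formulations is, in matrix form, a damped inverse of the propagator. A minimal sketch, with H, p and the parameter value as illustrative assumptions rather than the authors' code:

```python
import numpy as np

def tikhonov_backpropagate(H, p, lam=1e-2):
    """H: (m, n) propagation matrix from source plane to hologram plane;
    p: (m,) measured hologram pressures; lam: fixed regularization parameter.
    Solves q = (H^H H + lam I)^{-1} H^H p, damping the evanescent components
    that otherwise amplify measurement noise at large hologram distances."""
    n = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + lam * np.eye(n), H.conj().T @ p)
```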

  4. Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun

    2018-07-01

    People are increasingly accustomed to taking photos of everyday life in modern cities and uploading them to major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. The geo-localization of crowd-sourced pictures enriches the information contained therein, and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces substantial technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy: coarse geo-localization by image retrieval, selection of reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In the study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographic information of the crowd-sourced pictures; the proposed method improves the median error from 256.7 m to 69.0 m and increases the percentage of query pictures geo-localized within a 50 m error from 17.2% to 43.2% compared with the previous method. Regarding the causes of reconstruction error, we also find that shorter distances from the cameras to the main objects in query pictures tend to produce lower errors, and that the error component parallel to the road contributes more to the total error. The proposed method is not limited to small areas, and could be extended to cities and larger areas owing to its flexible parameters.

  5. Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin

    USGS Publications Warehouse

    Walker, J.F.; Osen, L.L.; Hughes, P.E.

    1987-01-01

    A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%.

  6. Analysis of an Optimized MLOS Tomographic Reconstruction Algorithm and Comparison to the MART Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2011-11-01

    An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.

  7. Combination of Complex-Based and Magnitude-Based Multiecho Water-Fat Separation for Accurate Quantification of Fat-Fraction

    PubMed Central

    Yu, Huanzhou; Shimakawa, Ann; Hines, Catherine D. G.; McKenzie, Charles A.; Hamilton, Gavin; Sirlin, Claude B.; Brittain, Jean H.; Reeder, Scott B.

    2011-01-01

    Multipoint water–fat separation techniques rely on the different water–fat phase shifts generated at multiple echo times to decompose water and fat. These methods require complex source images and allow unambiguous separation of water and fat signals. However, complex-based water–fat separation methods are sensitive to phase errors in the source images, which may lead to clinically important errors. An alternative approach to quantify fat is through "magnitude-based" methods that acquire multiecho magnitude images. Magnitude-based methods are insensitive to phase errors, but cannot estimate fat-fractions greater than 50%. In this work, we introduce a water–fat separation approach that combines the strengths of both complex and magnitude reconstruction algorithms. A magnitude-based reconstruction is applied after complex-based water–fat separation to remove the effects of phase errors. The results from the two reconstructions are then combined. We demonstrate that using this hybrid method, 0–100% fat-fraction can be estimated with improved accuracy at low fat-fractions. PMID:21695724

  8. Global distortion of GPS networks associated with satellite antenna model errors

    NASA Astrophysics Data System (ADS)

    Cardellach, E.; Elósegui, P.; Davis, J. L.

    2007-07-01

    Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by ~1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing the satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position, that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned leads to effects of PCO errors that vary with time and introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm/yr level, which will impact high-precision crustal deformation studies.

  9. Global Distortion of GPS Networks Associated with Satellite Antenna Model Errors

    NASA Technical Reports Server (NTRS)

    Cardellach, E.; Elosequi, P.; Davis, J. L.

    2007-01-01

    Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by approximately 1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing the satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position, that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned leads to effects of PCO errors that vary with time and introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm/yr level, which will impact high-precision crustal deformation studies.

  10. Uncertainty of the peak flow reconstruction of the 1907 flood in the Ebro River in Xerta (NE Iberian Peninsula)

    NASA Astrophysics Data System (ADS)

    Ruiz-Bellet, Josep Lluís; Castelltort, Xavier; Balasch, J. Carles; Tuset, Jordi

    2017-02-01

    There is no clear, unified and accepted method to estimate the uncertainty of hydraulic modelling results. In historical flood reconstruction, due to the lower precision of the input data, the magnitude of this uncertainty can be high. With the objectives of estimating the peak flow error of a typical historical flood reconstruction with the model HEC-RAS and of providing a quick, simple uncertainty assessment that an end user can easily apply, the uncertainty of the reconstructed peak flow of a major flood in the Ebro River (NE Iberian Peninsula) was calculated with a set of local sensitivity analyses on six input variables. The peak flow total error was estimated at ±31%, and water height was found to be the most influential variable on peak flow, followed by Manning's n. However, the latter, due to its large uncertainty, was the greatest contributor to the peak flow total error. In addition, the HEC-RAS peak flow was compared to those obtained with the 2D model Iber and with Manning's equation; all three methods gave similar peak flows, with Manning's equation giving almost the same result as HEC-RAS. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed.
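
    The kind of quick, end-user uncertainty estimate the study advocates can be illustrated with Manning's equation and first-order error propagation; the numbers below are illustrative, not the 1907 reconstruction's values.

```python
import numpy as np

def manning_q(n, A, R, S):
    """Manning's equation: Q = A * R^(2/3) * S^(1/2) / n (SI units)."""
    return A * R ** (2.0 / 3.0) * np.sqrt(S) / n

def manning_rel_error(rel):
    """First-order combination of relative input errors; the coefficients
    are the exponents of each variable in Manning's equation."""
    return np.sqrt(rel["n"] ** 2 + rel["A"] ** 2 +
                   (2.0 / 3.0 * rel["R"]) ** 2 + (0.5 * rel["S"]) ** 2)

q = manning_q(n=0.035, A=3500.0, R=8.0, S=0.0004)
err = manning_rel_error({"n": 0.25, "A": 0.10, "R": 0.08, "S": 0.15})
print(f"Q = {q:.0f} m^3/s +/- {100 * err:.0f}%")
```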

  11. TH-AB-201-12: Using Machine Log-Files for Treatment Planning and Delivery QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanhope, C; Liang, J; Drake, D

    2016-06-15

    Purpose: To determine the segment reduction and dose resolution necessary for machine log-files to effectively replace current phantom-based patient-specific quality assurance, while minimizing computational cost. Methods: Elekta's Log File Convertor R3.2 records linac delivery parameters (dose rate, gantry angle, leaf position) every 40 ms. Five VMAT plans [4 H&N, 1 pulsed brain], comprising 2 arcs each, were delivered on the ArcCHECK phantom. Log-files were reconstructed in Pinnacle on the phantom geometry using 1/2/3/4° control point spacing and 2/3/4 mm dose grid resolution. Reconstruction effectiveness was quantified by comparing 2%/2mm gamma passing rates of the original and log-file plans. Modulation complexity scores (MCS) were calculated for each beam to correlate reconstruction accuracy and beam modulation. Percent error in absolute dose for each plan-pair combination (log-file vs. ArcCHECK, original vs. ArcCHECK, log-file vs. original) was calculated for each arc and every diode greater than 10% of the maximum measured dose (per beam). By comparing the standard deviations of the three plan-pair distributions, the relative noise of the ArcCHECK and log-file systems was elucidated. Results: The original plans exhibit a mean passing rate of 95.1±1.3%. The eight more modulated H&N arcs [MCS=0.088±0.014] and two less modulated brain arcs [MCS=0.291±0.004] yielded log-file pass rates most similar to the original plan when using 1°/2mm [0.05%±1.3% lower] and 2°/3mm [0.35±0.64% higher] log-file reconstructions, respectively. Log-file and original plans displayed percent diode dose errors 4.29±6.27% and 3.61±6.57% higher than measurement. Excluding the phantom eliminates diode miscalibration and setup errors; log-file dose errors were 0.72±3.06% higher than the original plans, i.e., significantly less noisy. Conclusion: For log-file reconstructed VMAT arcs, 1° control point spacing and 2 mm dose resolution is recommended; however, less modulated arcs may allow less stringent reconstructions. Following the aforementioned reconstruction recommendations, the log-file technique is capable of detecting delivery errors with accuracy equivalent to, and noise lower than, ArcCHECK QA. I am funded by an Elekta Research Grant.

  12. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    PubMed Central

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  13. Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection.

    PubMed

    Gürsoy, Doğa; Hong, Young P; He, Kuan; Hujsak, Karl; Yoo, Seunghwan; Chen, Si; Li, Yue; Ge, Mingyuan; Miller, Lisa M; Chu, Yong S; De Andrade, Vincent; He, Kai; Cossairt, Oliver; Katsaggelos, Aggelos K; Jacobsen, Chris

    2017-09-18

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
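
    For 2D parallel-beam data the joint idea can be sketched as an alternation of reconstruction, re-projection and per-projection re-registration. skimage's radon/iradon stand in for the tomography operators, and only integer horizontal detector shifts are corrected, so this is a toy version of the paper's scheme, not its implementation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.transform import radon, iradon

def align_joint(sino, angles, n_iter=10):
    """sino: (n_det, n_angles) measured sinogram; angles in degrees."""
    est = sino.copy()
    for _ in range(n_iter):
        recon = iradon(est, theta=angles)          # current reconstruction
        resim = radon(recon, theta=angles)         # re-projected sinogram
        for j in range(sino.shape[1]):
            # integer shift maximizing cross-correlation with re-projection
            xc = np.correlate(resim[:, j], sino[:, j], mode="full")
            s = xc.argmax() - (sino.shape[0] - 1)
            est[:, j] = nd_shift(sino[:, j], s, mode="nearest")
    return est, recon
```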

  14. A New CT Reconstruction Technique Using Adaptive Deformation Recovery and Intensity Correction (ADRIC)

    PubMed Central

    Zhang, You; Ma, Jianhua; Iyengar, Puneeth; Zhong, Yuncheng; Wang, Jing

    2017-01-01

    Purpose Sequential same-patient CT images may involve deformation-induced and non-deformation-induced voxel intensity changes. An adaptive deformation recovery and intensity correction (ADRIC) technique was developed to improve the CT reconstruction accuracy, and to separate deformation from non-deformation-induced voxel intensity changes between sequential CT images. Materials and Methods ADRIC views the new CT volume as a deformation of a prior high-quality CT volume, but with additional non-deformation-induced voxel intensity changes. ADRIC first applies the 2D-3D deformation technique to recover the deformation field between the prior CT volume and the new, to-be-reconstructed CT volume. Using the deformation-recovered new CT volume, ADRIC further corrects the non-deformation-induced voxel intensity changes with an updated algebraic reconstruction technique (‘ART-dTV’). The resulting intensity-corrected new CT volume is subsequently fed back into the 2D-3D deformation process to further correct the residual deformation errors, which forms an iterative loop. By ADRIC, the deformation field and the non-deformation voxel intensity corrections are optimized separately and alternately to reconstruct the final CT. CT myocardial perfusion imaging scenarios were employed to evaluate the efficacy of ADRIC, using both simulated data of the extended-cardiac-torso (XCAT) digital phantom and experimentally acquired porcine data. The reconstruction accuracy of the ADRIC technique was compared to the technique using ART-dTV alone, and to the technique using 2D-3D deformation alone. The relative error metric and the universal quality index metric are calculated between the images for quantitative analysis. The relative error is defined as the square root of the sum of squared voxel intensity differences between the reconstructed volume and the ‘ground-truth’ volume, normalized by the square root of the sum of squared ‘ground-truth’ voxel intensities. In addition to the XCAT and porcine studies, a physical lung phantom measurement study was also conducted. Water-filled balloons with various shapes/volumes and concentrations of iodinated contrasts were put inside the phantom to simulate both deformations and non-deformation-induced intensity changes for ADRIC reconstruction. The ADRIC-solved deformations and intensity changes from limited-view projections were compared to those of the ‘gold-standard’ volumes reconstructed from fully-sampled projections. Results For the XCAT simulation study, the relative errors of the reconstructed CT volume by the 2D-3D deformation technique, the ART-dTV technique and the ADRIC technique were 14.64%, 19.21% and 11.90% respectively, by using 20 projections for reconstruction. Using 60 projections for reconstruction reduced the relative errors to 12.33%, 11.04% and 7.92% for the three techniques, respectively. For the porcine study, the corresponding results were 13.61%, 8.78%, 6.80% by using 20 projections; and 12.14%, 6.91% and 5.29% by using 60 projections. The ADRIC technique also demonstrated robustness to varying projection exposure levels. For the physical phantom study, the average DICE coefficient between the initial prior balloon volume and the new ‘gold-standard’ balloon volumes was 0.460. ADRIC reconstruction by 21 projections increased the average DICE coefficient to 0.954. Conclusion The ADRIC technique outperformed both the 2D-3D deformation technique and the ART-dTV technique in reconstruction accuracy. 
The alternately solved deformation field and non-deformation voxel intensity corrections can benefit multiple clinical applications, including tumor tracking, radiotherapy dose accumulation and treatment outcome analysis. PMID:28380247
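The relative error metric quoted above is straightforward to compute from its definition in the abstract; a minimal sketch follows, with hypothetical array names and sizes used purely to exercise the metric:

```python
import numpy as np

def relative_error(recon, truth):
    # sqrt of the sum of squared intensity differences, normalized by the
    # sqrt of the sum of squared 'ground-truth' intensities
    return np.sqrt(np.sum((recon - truth) ** 2)) / np.sqrt(np.sum(truth ** 2))

# Hypothetical volumes, only to demonstrate the computation.
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64))
recon = truth + 0.05 * rng.standard_normal((64, 64, 64))
print(f"relative error: {100 * relative_error(recon, truth):.2f}%")
```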

  15. Empirical investigation into depth-resolution of Magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Piana Agostinetti, N.; Ogaya, X.

    2017-12-01

We investigate the depth resolution of magnetotelluric (MT) data by comparing reconstructed 1D resistivity profiles with measured resistivity and lithostratigraphy from borehole data. Inversion of MT data has been widely used to reconstruct the 1D fine-layered resistivity structure beneath an isolated MT station. Uncorrelated noise is generally assumed to be associated with MT data. However, wrong assumptions about error statistics have been shown to strongly bias the results of geophysical inversions; in particular, the number of resolved layers at depth depends strongly on the error statistics. In this study, we applied a trans-dimensional McMC algorithm to reconstruct the 1D resistivity profile near the location of a 1500 m-deep borehole, using MT data. We solve the MT inverse problem imposing different models for the error statistics associated with the MT data. Following a Hierarchical Bayes approach, we also invert for the hyper-parameters associated with each error-statistics model. Preliminary results indicate that assuming uncorrelated noise leads to a larger number of resolved layers than expected from the retrieved lithostratigraphy. Moreover, inversion of synthetic resistivity data computed from the "true" resistivity stratification measured along the borehole shows that a consistent number of resistivity layers can be obtained using a Gaussian model for the error statistics with substantial correlation length.
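To make the role of the error-statistics model concrete, here is a minimal sketch of a Gaussian log-likelihood with an exponential correlation structure; sigma and corr_len stand in for the hyper-parameters that a Hierarchical Bayes scheme would sample alongside the resistivity model (all names and values are illustrative, not the authors' code):

```python
import numpy as np

def log_likelihood(residuals, sigma, corr_len):
    """Gaussian log-likelihood with covariance C_ij = sigma^2 exp(-|i-j|/corr_len);
    corr_len -> 0 recovers the common uncorrelated-noise assumption."""
    n = residuals.size
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    C = sigma**2 * np.exp(-lags / corr_len)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (residuals @ np.linalg.solve(C, residuals)
                   + logdet + n * np.log(2 * np.pi))

r = np.random.default_rng(0).standard_normal(20)  # toy data-model residuals
print(log_likelihood(r, sigma=1.0, corr_len=3.0))
```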

  16. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    PubMed Central

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transmission path with minimum error in the hardware system was then proposed by analyzing the variations in system error caused by the significant error sources as the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was also below 0.1 K in experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188

  17. Proprioception in patients with posterior cruciate ligament tears: A meta-analysis comparison of reconstructed and contralateral normal knees

    PubMed Central

    Ko, Seung-Nam

    2017-01-01

Posterior cruciate ligament (PCL) reconstruction for patients with PCL insufficiency has been associated with postoperative improvements in proprioceptive function due to mechanoreceptor regeneration. However, it is unclear whether reconstructed PCL knees or contralateral normal knees have better proprioceptive function. This meta-analysis was designed to compare the proprioceptive function of reconstructed PCL and contralateral normal knees in patients with PCL insufficiency. All studies that compared proprioceptive function, as assessed by the threshold to detect passive movement (TTDPM) or joint position sense (JPS), in PCL-reconstructed and contralateral normal knees were included. JPS was assessed by reproduction of passive positioning (RPP). Five studies met the inclusion/exclusion criteria for the meta-analysis. Proprioceptive function, measured as TTDPM (95% CI: 0.25 to 0.51°; P<0.00001) and RPP (95% CI: 0.19 to 0.45°; P<0.00001), differed significantly between the reconstructed PCL and contralateral normal knees. The mean difference in angle of error between the reconstructed PCL and contralateral normal knees was 0.06° greater by TTDPM than by RPP. In addition, subgroup analyses based on the starting angles and moving directions of the knee, which evaluated TTDPM from 15° flexion to 45° extension, TTDPM from 45° flexion to 110° flexion, RPP in flexion, and RPP in extension, demonstrated that the mean angles of error were significantly greater, by 0.38° (P = 0.0001), 0.36° (P = 0.02), 0.36° (P<0.00001), and 0.23° (P = 0.04), respectively, in reconstructed PCL knees than in contralateral normal knees. The proprioceptive function of PCL-reconstructed knees was thus decreased compared with contralateral normal knees, as determined by both TTDPM and RPP, and the loss of proprioception was greater by TTDPM than by RPP, albeit with minute differences. The subgroup analysis that evaluated the mean angles of error by moving direction through RPP suggested that flexion has a significantly greater mean angle of error than extension. Although the differences between the various parameters were statistically significant, further studies are needed to determine whether these small differences (<1°) in the loss of proprioception are clinically relevant. PMID:28922423

  18. Reconstructive Recall of Linguistic Style. Technical Report No. 286.

    ERIC Educational Resources Information Center

    Brewer, William F.; Hay, Anne E.

    A study investigated reconstructive recall for linguistic style. It was hypothesized that (1) features of linguistic style would be more difficult to recall than underlying content, (2) reconstructive errors would include stylistic forms recalled as standard forms when subjects lacked productive control of a particular feature of a style, and (3)…

  19. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization, however, requires appropriate selection of the associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein's unbiased risk estimate, SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers, including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean-squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
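The Monte-Carlo step at the heart of such SURE schemes needs only black-box access to the reconstruction. Below is a minimal real-valued sketch (the paper extends this to complex-valued images; the toy soft-thresholding "reconstruction" is purely illustrative):

```python
import numpy as np

def mc_divergence(f, y, eps=1e-3, seed=0):
    """Monte-Carlo estimate of div f(y) via a random probe b:
    div f(y) ~ b . (f(y + eps*b) - f(y)) / eps."""
    b = np.random.default_rng(seed).standard_normal(y.shape)
    return float(np.dot(b, f(y + eps * b) - f(y)) / eps)

# Toy black-box estimator: soft thresholding (a simple l1-type operation).
soft = lambda y: np.sign(y) * np.maximum(np.abs(y) - 0.1, 0.0)
y = np.random.default_rng(1).standard_normal(1000)
# For soft thresholding the exact divergence equals the number of surviving
# entries, so the Monte-Carlo estimate can be checked directly:
print(mc_divergence(soft, y), int(np.sum(np.abs(y) > 0.1)))
```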

  20. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    PubMed

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed as a separate step. The purpose of this paper is to report that integrating half-Fourier reconstruction into the iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint shows superior performance both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables greater acceleration in 3D MR imaging.

  1. Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.

    PubMed

    de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph

    2008-01-01

The robust algorithm OPED for the reconstruction of images from Radon data has recently been developed. It reconstructs an image from parallel data within a special scanning geometry that requires no rebinning, only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, empty cells appear in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines and parametric (or "damped") splines for this interpolation task. The reconstruction accuracy of the resulting images was measured by the Normalized Mean Square Error (NMSE), the Hilbert Angle, and the Mean Relative Error; spatial resolution was measured by the Modulation Transfer Function (MTF). Cubic splines were confirmed to be the most suitable method: the reconstructed images resulting from cubic-spline interpolation show a significantly lower NMSE than those from linear interpolation and have the largest MTF at all frequencies. Parametric splines proved advantageous only for small sinograms (below 50 fan views).
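A small sketch of the interpolation task itself, filling "empty cells" in a sinogram row by linear versus cubic-spline interpolation (toy data, not the OPED geometry):

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

x = np.arange(32, dtype=float)
truth = np.sin(2 * np.pi * x / 32.0)      # toy sinogram row
row = truth.copy()
row[[5, 6, 17, 25]] = np.nan              # the "empty cells"

known = ~np.isnan(row)
filled_lin = interp1d(x[known], row[known])(x)       # linear interpolation
filled_cub = CubicSpline(x[known], row[known])(x)    # cubic-spline interpolation
print(np.abs(filled_lin - truth).max(), np.abs(filled_cub - truth).max())
```

On smooth data the cubic spline's maximum error is markedly lower, consistent with the NMSE ranking reported above.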

  2. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors; however, their minimum Hamming distances do not grow linearly with code-block size. Codes that have this minimum-distance property exhibit very low error floors; examples include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code-block sizes. As in the case of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes: a code with too few such nodes tends to have an iterative decoding threshold far from the capacity threshold, while a code with too many tends not to exhibit a minimum distance proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel-capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size of LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family, as well as the minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) needed to achieve zero error rate as the code-block size goes to infinity, for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
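The copy-and-permute construction mentioned above can be sketched in a few lines; the base matrix here is a hypothetical toy, much smaller than any practical protograph, chosen purely to show the expansion mechanics:

```python
import numpy as np

def lift_protograph(base, N, seed=0):
    """Copy-and-permute: every 1-entry (edge) of the protograph becomes an N x N
    permutation matrix; every 0-entry becomes an N x N zero block."""
    rng = np.random.default_rng(seed)
    rows, cols = base.shape
    H = np.zeros((rows * N, cols * N), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            if base[i, j]:
                H[i * N + np.arange(N), j * N + rng.permutation(N)] = 1
    return H

base = np.array([[1, 1, 1, 0],     # hypothetical 2x4 protograph, for illustration
                 [0, 1, 1, 1]])
print(lift_protograph(base, N=8).shape)  # (16, 32): N sets block size, rate is fixed
```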

  3. Contrast improvement of continuous wave diffuse optical tomography reconstruction by hybrid approach using least square and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Patra, Rusha; Dutta, Pranab K.

    2015-07-01

Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, mean square error of 0.638 × 10⁻³, and object centroid error of (0.001 to 0.22) mm. Experimental validation of the proposed method has also been provided with tissue-like phantoms, which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with a continuous wave setup. A case study of finger joint imaging is illustrated as well to show the prospect of the proposed method in clinical diagnosis. The method can also be applied to the concentration measurement of a region of interest in a turbid medium.

  4. Hyper-X Mach 10 Trajectory Reconstruction

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Martin, John G.; Tartabini, Paul V.; Thornblom, Mark N.

    2005-01-01

    This paper discusses the formulation and development of a trajectory reconstruction tool for the NASA X-43A/Hyper-X high speed research vehicle, and its implementation for the reconstruction and analysis of flight test data. Extended Kalman filtering techniques are employed to reconstruct the trajectory of the vehicle, based upon numerical integration of inertial measurement data along with redundant measurements of the vehicle state. The equations of motion are formulated in order to include the effects of several systematic error sources, whose values may also be estimated by the filtering routines. Additionally, smoothing algorithms have been implemented in which the final value of the state (or an augmented state that includes other systematic error parameters to be estimated) and covariance are propagated back to the initial time to generate the best-estimated trajectory, based upon all available data. The methods are applied to the problem of reconstructing the trajectory of the Hyper-X vehicle from data obtained during the Mach 10 test flight, which occurred on November 16th 2004.

  5. Reconstruction of lightning channel geometry by localizing thunder sources

    NASA Astrophysics Data System (ADS)

    Bodhika, J. A. P.; Dharmarathna, W. G. D.; Fernando, Mahendra; Cooray, Vernon

    2013-09-01

Thunder is generated by the shock wave created by the sudden expansion of air in the lightning channel due to high temperature variations. Even though the highest-amplitude thunder signatures are generated at the return stroke stage, thunder signals generated at other events, such as preliminary breakdown pulses, can also have amplitudes large enough to record with a sensitive system. In this study, we attempted to reconstruct the lightning channel geometry of cloud and ground flashes by locating the temporal and spatial variations of thunder sources. Six lightning flashes were reconstructed using the recorded thunder signatures. Possible effects due to atmospheric conditions were neglected. Numerical calculations suggest that the time resolution of the recorded signal and a 10 m s⁻¹ error in the speed of sound lead to 2% and 3% errors, respectively, in the calculated coordinates. Reconstructed channel geometries for cloud and ground flashes agreed with the visual observations. The results suggest that the lightning channel can be successfully reconstructed using this technique.

  6. A Regularized Volumetric Fusion Framework for Large-Scale 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Rajput, Asif; Funk, Eugen; Börner, Anko; Hellwich, Olaf

    2018-07-01

Modern computational resources combined with low-cost depth-sensing systems have enabled mobile robots to reconstruct 3D models of surrounding environments in real time. Unfortunately, low-cost depth sensors are prone to estimation noise in depth measurements, which either produces depth outliers or introduces surface deformations in the reconstructed model. Conventional 3D fusion frameworks integrate multiple error-prone depth measurements over time to reduce noise effects, so additional constraints such as steady sensor movement and high frame rates are required for high-quality 3D models. In this paper we propose a generic 3D fusion framework with a controlled regularization parameter which inherently reduces noise at the time of data fusion. This allows the proposed framework to generate high-quality 3D models without enforcing additional constraints. Evaluation of the reconstructed 3D models shows that the proposed framework outperforms state-of-the-art techniques in terms of both absolute reconstruction error and processing time.

  7. Unitary reconstruction of secret for stabilizer-based quantum secret sharing

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ryutaroh

    2017-08-01

We propose a unitary procedure to reconstruct the quantum secret for a quantum secret sharing scheme constructed from stabilizer quantum error-correcting codes. Erasure-correcting procedures for stabilizer codes need to add missing shares for reconstruction of the quantum secret, while unitary reconstruction procedures for a certain class of quantum secret sharing schemes are known to work without adding missing shares. The proposed procedure also works without adding missing shares.

  8. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix is constructed using visual masking by luminance and contrast techniques and by an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
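The quantization mechanics this builds on are the standard DCT-domain divide-and-round; in the sketch below the matrix Q is a flat placeholder, whereas the invention adapts each entry to the image via luminance/contrast masking and error pooling:

```python
import numpy as np
from scipy.fft import dctn, idctn

Q = np.full((8, 8), 16.0)  # placeholder; a perceptual design weights each frequency

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float) - 128.0
coeffs = dctn(block, norm='ortho')       # DCT of one 8x8 image block
quantized = np.round(coeffs / Q)         # coarser entries -> fewer bits, more error
restored = idctn(quantized * Q, norm='ortho')
print(np.abs(block - restored).max())    # per-block reconstruction error
```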

  9. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit

    2016-05-15

Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and that the residual errors follow a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  10. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and that the residual errors follow a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against those from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  11. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and that the residual errors follow a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347
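The core regression step of the SR model can be sketched with an off-the-shelf sparse solver; the training and target arrays below are synthetic stand-ins for registered point clouds (the actual method additionally relies on ICP correspondences and propagation to the surface manifold):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
K, P = 40, 3000                          # training clouds, coordinates per cloud
train = rng.standard_normal((P, K))      # columns: flattened training point clouds
target = train[:, [3, 17]] @ np.array([0.7, 0.3]) + 0.01 * rng.standard_normal(P)

lasso = Lasso(alpha=1e-3, fit_intercept=False)
lasso.fit(train, target)                 # sparse linear combination of the set
print(np.nonzero(np.abs(lasso.coef_) > 1e-2)[0])  # dominant contributors: 3 and 17
```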

  12. Least-squares deconvolution of evoked potentials and sequence optimization for multiple stimuli under low-jitter conditions.

    PubMed

    Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram

    2014-04-01

Rapid presentation of stimuli in an evoked-response paradigm can lead to overlap of multiple responses and consequently to difficulties in interpreting waveform morphology. This paper presents a deconvolution method that allows overlapping multiple responses to be disentangled. The deconvolution technique uses a least-squares error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions; it controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori through the condition number of the matrix associated with the stimulus sequence used. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimal results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique that allows multiple overlapping responses to be extracted, and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover overlapping evoked responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
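A toy version of the least-squares recovery, with the condition number of the overlap (design) matrix playing the role described above; the exponential "response", onset counts and noise level are illustrative assumptions:

```python
import numpy as np

def overlap_matrix(onsets, n_samples, resp_len):
    """Design matrix: each stimulus onset contributes a shifted copy of the response."""
    A = np.zeros((n_samples, resp_len))
    for t in onsets:
        k = np.arange(min(resp_len, n_samples - t))
        A[t + k, k] += 1.0
    return A

rng = np.random.default_rng(0)
resp = np.exp(-np.arange(50) / 10.0)                       # toy evoked response
onsets = np.sort(rng.choice(400, size=30, replace=False))  # jittered onsets
A = overlap_matrix(onsets, 450, 50)
y = A @ resp + 0.05 * rng.standard_normal(450)             # overlapped recording
est, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.linalg.cond(A), np.abs(est - resp).max())  # cond(A) governs error growth
```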

  13. Automated Reconstruction of Neural Trees Using Front Re-initialization

    PubMed Central

    Mukherjee, Amit; Stepanyants, Armen

    2013-01-01

This paper proposes a greedy algorithm for the automated reconstruction of neural arbors from light microscopy stacks of images. The algorithm is based on the minimum cost path method. While the minimum cost path, obtained using the Fast Marching Method, results in a trace with the least cumulative cost between the start and end points, it is not sufficient for the reconstruction of neural trees: sections of the minimum cost path can erroneously travel through the image background with undetectable detriment to the cumulative cost. To circumvent this problem we propose an algorithm that grows a neural tree from a specified root by iteratively re-initializing the Fast Marching fronts. The speed image used in the Fast Marching Method is generated by computing the average outward flux of the gradient vector flow field. Each iteration of the algorithm produces a candidate extension by allowing the front to travel a specified distance and then tracking from the farthest point of the front back to the tree. A robust likelihood ratio test is used to evaluate the quality of the candidate extension by comparing voxel intensities along the extension to those in the foreground and the background. Qualified extensions are appended to the current tree, the front is re-initialized, and Fast Marching continues until the stopping criterion is met. To evaluate the performance of the algorithm we reconstructed 6 stacks of two-photon microscopy images and compared the results to ground-truth reconstructions using the DIADEM metric. The average comparison score was 0.82 out of 1.0, which is on par with the performance achieved by expert manual tracers. PMID:24386539
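As a discrete stand-in for the Fast Marching step, the snippet below finds a minimum-cost path through a toy image with scikit-image's graph search; this also illustrates the failure mode noted above, since low-cost background shortcuts are exactly what the re-initialization scheme guards against. Everything here is a toy setup, not the paper's pipeline:

```python
import numpy as np
from skimage.graph import route_through_array

img = np.zeros((64, 64))
rows = np.arange(64)
cols = (20 + 10 * np.sin(rows / 8.0)).astype(int)
img[rows, cols] = 1.0                      # toy bright "neurite"

cost = 1.0 / (img + 0.05)                  # high intensity -> low traversal cost
path, total = route_through_array(cost, (0, cols[0]), (63, cols[63]))
print(len(path), total)                    # the cheapest path hugs the bright curve
```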

  14. Accuracy of using computer-aided rapid prototyping templates for mandible reconstruction with an iliac crest graft

    PubMed Central

    2014-01-01

Background This study aimed to evaluate the accuracy of surgical outcomes in free iliac crest mandibular reconstructions that were carried out with virtual surgical plans and rapid prototyping templates. Methods This study evaluated eight patients who underwent mandibular osteotomy and reconstruction with free iliac crest grafts using virtual surgical planning and purpose-designed guiding templates. Operations were performed using the prefabricated guiding templates. Postoperative three-dimensional computer models were overlaid and compared with the preoperatively designed models in the same coordinate system. Results Compared with the virtual osteotomy, the mean distance error of the actual mandibular osteotomy was 2.06 ± 0.86 mm. Compared with the virtual harvested grafts, the mean volume error of the actual harvested grafts was 1412.22 ± 439.24 mm3 (9.12% ± 2.84%). The mean volume error between the actual harvested grafts and the shaped grafts was 2094.35 ± 929.12 mm3 (12.40% ± 5.50%). Conclusions The use of computer-aided rapid prototyping templates for virtual surgical planning appears to positively influence the accuracy of mandibular reconstruction. PMID:24957053

  15. Novel edge treatment method for improving the transmission reconstruction quality in Tomographic Gamma Scanning.

    PubMed

    Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua

    2018-05-01

Tomographic Gamma Scanning (TGS) is a method for the nondestructive assay of radioactive wastes. In the traditional treatment method used in TGS, the actual irregular edge voxels are regarded as regular cubic voxels. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two edge-voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint, and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrate that the traditional edge-voxel treatment can introduce significant error, and that the real irregular edge-voxel treatment improves the performance of TGS by producing better transmission reconstruction images. With the real irregular edge-voxel treatment, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices. Copyright © 2018 Elsevier Ltd. All rights reserved.
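For reference, the generic MLEM update used in such comparisons has a compact multiplicative form; this is the textbook iteration on a toy system, not the paper's TGS-specific implementation:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """x <- x * A^T(y / Ax) / A^T 1, with guards against division by zero."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])              # sensitivity image A^T 1
    for _ in range(n_iter):
        x *= A.T @ (y / np.maximum(A @ x, 1e-12)) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.random((60, 16))        # toy system matrix: 60 rays, 16 voxels
x_true = rng.random(16)
print(np.abs(mlem(A, A @ x_true) - x_true).max())  # small on noiseless toy data
```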

  16. On NUFFT-based gridding for non-Cartesian MRI

    NASA Astrophysics Data System (ADS)

    Fessler, Jeffrey A.

    2007-10-01

    For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.

  17. Acoustic Inverse Scattering for Breast Cancer Microcalcification Detection. Addendum

    DTIC Science & Technology

    2011-12-01

the center. To conserve space, few are shown here. A graph comparing the spatial location and the error in reconstruction will follow... following graphs show the error in reconstruction as a function of position of the object along the x-axis, y-axis and the diagonal in the fourth quadrant of... the well-known Kirchhoff–Poisson formulas (see, e.g., Refs. [33,34]) allow one to represent the solution p(x,t) in terms of the spherical means

  18. RANDOMIZED PROSPECTIVE STUDY ON TRAUMATIC PATELLAR DISLOCATION: CONSERVATIVE TREATMENT VERSUS RECONSTRUCTION OF THE MEDIAL PATELLOFEMORAL LIGAMENT USING THE PATELLAR TENDON, WITH A MINIMUM OF TWO YEARS OF FOLLOW-UP.

    PubMed

    Bitar, Alexandre Carneiro; D'Elia, Caio Oliveira; Demange, Marco Kawamura; Viegas, Alexandre Christo; Camanho, Gilberto Luis

    2011-01-01

The aim of this study was to compare the surgical results of reconstruction of the medial patellofemoral ligament (MPFL) with non-operative treatment of primary patellar dislocation. Thirty-nine patients (41 knees) with patellar dislocation were randomized into two groups. One group was treated conservatively (immobilization and physiotherapy) and the other was treated surgically with reconstruction of the MPFL, and the results were evaluated with a minimum follow-up of two years. The Kujala questionnaire was applied to assess pain and quality of life, and recurrences were evaluated. Pearson's chi-square test and Fisher's exact test were used in the statistical evaluation. The statistical analysis showed that the mean Kujala score was significantly lower in the conservative group (70.8) than in the surgical group (88.9), with p = 0.001. The surgical group presented a higher percentage of "good/excellent" Kujala score results (71.43%) than the conservative group (25.0%), with p = 0.003. The conservative group presented a greater number of recurrences (35% of cases), while in the surgical group there were no reports of recurrence and/or subluxation. Treatment with reconstruction of the medial patellofemoral ligament using the patellar tendon produced better results, based on the analysis of post-treatment recurrences and the better final results on the Kujala questionnaire after a minimum follow-up period of two years.

  19. Double-bundle PCL reconstruction using autologous hamstring tendons: outcome with a minimum 2-year follow-up.

    PubMed

    Cury, Ricardo de Paula Leite; Castro Filho, Rômulo Neves; Sadatsune, Daniel Akira; do Prado, Davi Ribeiro; Gonçalves, Ricardo José Peruzzo; Mestriner, Marcos Barbieri

    2017-01-01

To present the outcomes of posterior cruciate ligament (PCL) double-bundle reconstruction using autologous hamstring tendons, with a minimum follow-up of two years. Sixteen cases of PCL injury that underwent double-bundle reconstruction with autologous hamstring tendons between 2011 and 2013 were evaluated. The final sample consisted of 16 patients, 15 men and one woman, with a mean age of 31 years (21-49). The predominant mechanism was motorcycle accident, in half of the cases. The mean interval between the time of injury and surgery was 15 months (3 to 52 months). Five lesions were isolated and 11 were associated with other injuries. Clinical evaluation, application of validated scores, and measurements with the KT-1000 arthrometer were performed. The analysis showed a mean preoperative Lysholm score of 50 points (28-87), improving to 94 points (85-100) postoperatively. The IKDC score also demonstrated improvement: preoperatively, four and 12 patients were classified as C (abnormal) and D (severely abnormal), respectively, while postoperatively six were classified as A (normal) and ten as B (nearly normal). In the postoperative KT-1000 arthrometer evaluation, 13 patients showed a side-to-side difference of 0-2 mm and three a difference of 3-5 mm compared with the contralateral side. Autologous hamstring tendons are a viable option in double-bundle reconstruction of the PCL, with good clinical results at a minimum follow-up of two years.

  20. Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection (PREPRINT)

    DTIC Science & Technology

    2011-01-01

Minimum Error Bounded Efficient ℓ1 Tracker with Occlusion Detection. Xue Mei, Haibin Ling, Yi Wu, Erik Blasch, Li Bai... The proposed BPR-L1 tracker is tested on several challenging benchmark sequences involving challenges such as occlusion and illumination changes. In all... point method depends on the value of the regularization parameter λ. In the experiments, we found that the total number of PCG iterations is a few hundred.

  1. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory-variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable-selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample-size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within the tree-structured methods, the one-standard-error rule was more likely to choose the correct model than the other tree-selection rules: with a strong relationship and equally important explanatory variables; with weaker relationships and equally important explanatory variables; and under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
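The tree-selection rules differ only in which subtree they pick from the pruning path. A sketch of the 'minimum risk' and 'one standard error' rules on sklearn's cost-complexity path, with a unimodal response in two of four variables; all modeling choices here are illustrative assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (300, 4))              # vars 0,1 informative; 2,3 noise
y = np.exp(-4 * X[:, 0]**2) + 0.5 * np.exp(-4 * X[:, 1]**2) \
    + 0.1 * rng.standard_normal(300)

path = DecisionTreeRegressor(random_state=0).cost_complexity_pruning_path(X, y)
alphas = np.unique(np.clip(path.ccp_alphas, 0.0, None))   # candidate subtrees
risk, se = [], []
for a in alphas:
    cv = -cross_val_score(DecisionTreeRegressor(random_state=0, ccp_alpha=a),
                          X, y, scoring='neg_mean_squared_error', cv=5)
    risk.append(cv.mean())
    se.append(cv.std() / np.sqrt(5))
risk, se = np.array(risk), np.array(se)
best = risk.argmin()                                      # 'minimum risk' rule
one_se = alphas[risk <= risk[best] + se[best]].max()      # simplest tree within 1 SE
print(alphas[best], one_se)  # the 1-SE rule prefers a simpler (larger-alpha) tree
```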

  2. Clinical and functional outcome of anterior cruciate ligament reconstruction in the recreational athlete over the age of 35.

    PubMed

    Novak, P J; Bach, B R; Hager, C A

    1996-01-01

This study examined the functional, objective, and subjective outcomes of anterior cruciate ligament (ACL) reconstruction in recreational athletes aged 35 years or older, at a minimum of 2 years of follow-up. Patients aged 35 years or older who underwent ACL reconstruction by a single surgeon were identified from our surgical database. Nineteen knees in 18 patients (62% follow-up) were available for review by an independent examiner. The patients underwent physical examination, radiographs, functional testing, isokinetic strength testing, and instrumented ligament arthrometer testing. All were seen at a minimum of 2 years of follow-up. The average age was 40 years. Five of 19 underwent reconstruction less than 1 month after injury; the remainder underwent reconstruction for chronic injuries. All patients preoperatively had at least a grade 2 Lachman and a positive pivot shift on physical examination. After a minimum of 2 years of follow-up, 17 of 18 patients had a stable knee on objective testing, including a negative Lachman and pivot shift. Seventeen patients (94%) had <3 mm side-to-side difference on maximum manual arthrometric testing. Only one patient had >3 cm prone heel-height difference, and all patients had >125 degrees of flexion. Mean thigh circumference difference was 0.5 cm. Isokinetic testing demonstrated a mean 11%, 7%, and 4% quadriceps asymmetry at 60, 180, and 240 degrees/second, respectively. However, functional testing revealed only a mean 6% asymmetry on vertical jump, single-leg hop, and timed 6-meter hop. Seventeen of 18 patients were satisfied with their results. The mean postoperative Lysholm Rating Scale score was 93. The mean Noyes Sports Activity Scale score was 86, improved from 31 preoperatively. Thirteen of 18 returned to their preinjury level of sports performance. These results indicate that ACL reconstruction in patients over the age of 35 has functional, objective, and subjective results comparable to those of a younger patient population.

  3. Ultralow dose dentomaxillofacial CT imaging and iterative reconstruction techniques: variability of Hounsfield units and contrast-to-noise ratio

    PubMed Central

    Bischel, Alexander; Stratis, Andreas; Kakar, Apoorv; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben

    2016-01-01

Objective: The aim of this study was to evaluate whether ultralow dose protocols and iterative reconstruction technology (IRT) influence quantitative Hounsfield units (HUs) and contrast-to-noise ratio (CNR) in dentomaxillofacial CT imaging. Methods: A phantom with inserts of five types of materials was scanned using protocols for (a) a clinical reference for navigated surgery (CT dose index volume 36.58 mGy), (b) low-dose sinus imaging (18.28 mGy) and (c) four ultralow dose protocols (4.14, 2.63, 0.99 and 0.53 mGy). All images were reconstructed using (i) filtered back projection (FBP); (ii) IRT: adaptive statistical iterative reconstruction-50 (ASIR-50), ASIR-100 and model-based iterative reconstruction (MBIR); and (iii) standard (std) and bone kernels. Mean HU, CNR and average HU error after recalibration were determined. Each combination of protocols was compared using Friedman analysis of variance, followed by Dunn's multiple comparison test. Results: Pearson's sample correlation coefficients were all >0.99. Ultralow dose protocols using FBP showed errors of up to 273 HU. Std kernels had less HU variability than bone kernels. MBIR reduced the error value for the lowest-dose protocol to 138 HU and retained the highest relative CNR. ASIR could not demonstrate significant advantages over FBP. Conclusions: Considering a potential dose reduction to as little as 1.5% of a standard protocol, ultralow dose protocols and IRT should be further tested for clinical dentomaxillofacial CT imaging. Advances in knowledge: HU as a surrogate for bone density may vary significantly in ultralow dose CT imaging. However, use of std kernels and MBIR technology reduces HU error values and may retain the highest CNR. PMID:26859336
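CNR itself is a one-line computation once ROI and background masks are defined. A common definition is sketched below on a synthetic phantom slice (the study may use a variant; all values are illustrative):

```python
import numpy as np

def cnr(img, roi, bg):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| / std(background)."""
    return abs(img[roi].mean() - img[bg].mean()) / img[bg].std()

rng = np.random.default_rng(0)
img = rng.normal(0.0, 30.0, (128, 128))     # ~0 HU background with 30 HU noise
yy, xx = np.mgrid[:128, :128]
r2 = (yy - 64)**2 + (xx - 64)**2
img[r2 < 15**2] += 200.0                    # circular insert at +200 HU
print(cnr(img, r2 < 15**2, r2 > 40**2))
```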

  4. Evaluating video digitizer errors

    NASA Astrophysics Data System (ADS)

    Peterson, C.

    2016-01-01

    Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.

  5. To image analysis in computed tomography

    NASA Astrophysics Data System (ADS)

    Chukalina, Marina; Nikolaev, Dmitry; Ingacheva, Anastasia; Buzmakov, Alexey; Yakimchuk, Ivan; Asadchikov, Victor

    2017-03-01

Errors in a tomographic image may lead to misdiagnosis when computed tomography (CT) is used in medicine, or to wrong decisions about the parameters of technological processes when CT is used in industrial applications. These errors have two main sources. First, errors occur at the measurement step, e.g., through incorrect calibration and estimation of the geometric parameters of the set-up. The second source is the nature of the tomographic reconstruction step: a mathematical model is created to calculate the projection data, and the optimization and regularization methods applied, along with their numerical implementations, have their own specific errors. Many research teams are trying to analyze these errors and to establish the relations between error sources. In this paper, we do not analyze the nature of the final error, but present a new approach for calculating its distribution in the reconstructed volume. We hope that visualizing the error distribution will allow experts to qualify the impressions and summaries they give after analyzing CT results. To illustrate the efficiency of the proposed approach, we present both simulation and real data-processing results.

  6. Theoretical study on the laser-driven ion-beam trace probe in toroidal devices with large poloidal magnetic field

    NASA Astrophysics Data System (ADS)

    Yang, X.; Xiao, C.; Chen, Y.; Xu, T.; Yu, Y.; Xu, M.; Wang, L.; Wang, X.; Lin, C.

    2018-03-01

Recently, a new diagnostic method, the Laser-driven Ion-beam Trace Probe (LITP), has been proposed to reconstruct 2D profiles of the poloidal magnetic field (Bp) and radial electric field (Er) in tokamak devices. A linear assumption and a test-particle model were used in those reconstructions. In some toroidal devices, such as the spherical tokamak and the Reversed Field Pinch (RFP), Bp is not small enough to satisfy the linear assumption. In those cases the reconstruction error increases quickly once Bp exceeds 10% of the toroidal magnetic field (Bt), and the previous test-particle model may cause large errors in the tomography process. Here a nonlinear reconstruction method is proposed for those cases. Preliminary numerical results show that LITP could be applied not only in tokamak devices but also in other toroidal devices, such as the spherical tokamak and the RFP.

  7. Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection

    DOE PAGES

    Gürsoy, Doğa; Hong, Young P.; He, Kuan; ...

    2017-09-18

As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high-quality three-dimensional images. Our approach is based on a joint estimation of the alignment errors and the object, using an iterative refinement procedure. With simulated data, where the alignment error of each projection image is known, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
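One building block of such joint schemes, estimating each projection's misalignment against a reprojection of the current object estimate, can be sketched with sub-pixel phase correlation. The images below are toys; the actual pipeline iterates this jointly with reconstruction:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

reproj = np.zeros((64, 64))
reproj[24:40, 24:40] = 1.0                  # reprojection of the current estimate
measured = nd_shift(reproj, (2.5, -1.25))   # projection with an alignment error

est, _, _ = phase_cross_correlation(reproj, measured, upsample_factor=10)
print(est)  # ~[-2.5, 1.25]: the shift that re-registers the measured projection
```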

  8. Fine-resolution imaging of solar features using Phase-Diverse Speckle

    NASA Technical Reports Server (NTRS)

    Paxman, Richard G.

    1995-01-01

    Phase-diverse speckle (PDS) is a novel imaging technique intended to overcome the degrading effects of atmospheric turbulence on fine-resolution imaging. As its name suggests, PDS is a blend of phase-diversity and speckle-imaging concepts. PDS reconstructions on solar data were validated by simulation, by demonstrating internal consistency of PDS estimates, and by comparing PDS reconstructions with those produced from well accepted speckle-imaging processing. Several sources of error in data collected with the Swedish Vacuum Solar Telescope (SVST) were simulated: CCD noise, quantization error, image misalignment, and defocus error, as well as atmospheric turbulence model error. The simulations demonstrate that fine-resolution information can be reliably recovered out to at least 70% of the diffraction limit without significant introduction of image artifacts. Additional confidence in the SVST restoration is obtained by comparing its spatial power spectrum with previously-published power spectra derived from both space-based images and earth-based images corrected with traditional speckle-imaging techniques; the shape of the spectrum is found to match well the previous measurements. In addition, the imagery is found to be consistent with, but slightly sharper than, imagery reconstructed with accepted speckle-imaging techniques.

  9. Waffle mode error in the AEOS adaptive optics point-spread function

    NASA Astrophysics Data System (ADS)

    Makidon, Russell B.; Sivaramakrishnan, Anand; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Graham, James R.

    2003-02-01

Adaptive optics (AO) systems have improved astronomical imaging capabilities significantly over the last decade, and have the potential to revolutionize the kinds of science done with 4-5 m class ground-based telescopes. Moreover, given sufficiently detailed study and analysis, existing AO systems can be improved beyond their originally specified error budgets. Indeed, modeling AO systems has been a major activity in the past decade: sources of noise in the atmosphere and the wavefront-sensing (WFS) control loop have received a great deal of attention, and many detailed and sophisticated control-theoretic and numerical models predicting AO performance already exist. In terms of AO system performance improvements, however, wavefront reconstruction (WFR) and wavefront calibration techniques have commanded relatively little attention. We elucidate the nature of some of these reconstruction problems and demonstrate their existence in data from the AEOS AO system. We simulate the AO correction of AEOS in the I band, and show that the magnitude of the 'waffle mode' error in the AEOS reconstructor is considerably larger than expected. We suggest ways of reducing the magnitude of this error and, in doing so, open up ways of understanding how wavefront reconstruction might handle bad actuators and partially illuminated WFS subapertures.

  10. Superficial vessel reconstruction with a multiview camera system

    PubMed Central

    Marreiros, Filipe M. M.; Rossitti, Sandro; Karlsson, Per M.; Wang, Chunliang; Gustafsson, Torbjörn; Carleberg, Per; Smedby, Örjan

    2016-01-01

We aim at reconstructing the superficial vessels of the brain. Ultimately, these will serve to guide deformation methods that compensate for brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three monochrome complementary metal-oxide semiconductor (CMOS) cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering in which the 3-D points are compared to surfaces obtained using the thin-plate spline with a decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation on virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean-square error and mean error) are ∼1 mm. PMID:26759814

  11. Optimal noise reduction in 3D reconstructions of single particles using a volume-normalized filter

    PubMed Central

    Sindelar, Charles V.; Grigorieff, Nikolaus

    2012-01-01

    The high noise level found in single-particle electron cryo-microscopy (cryo-EM) image data presents a special challenge for three-dimensional (3D) reconstruction of the imaged molecules. The spectral signal-to-noise ratio (SSNR) and related Fourier shell correlation (FSC) functions are commonly used to assess and mitigate the noise-generated error in the reconstruction. Calculation of the SSNR and FSC usually includes the noise in the solvent region surrounding the particle and therefore does not accurately reflect the signal in the particle density itself. Here we show that the SSNR in a reconstructed 3D particle map is linearly proportional to the fractional volume occupied by the particle. Using this relationship, we devise a novel filter (the “single-particle Wiener filter”) to minimize the error in a reconstructed particle map, if the particle volume is known. Moreover, we show how to approximate this filter even when the volume of the particle is not known, by optimizing the signal within a representative interior region of the particle. We show that the new filter improves on previously proposed error-reduction schemes, including the conventional Wiener filter as well as figure-of-merit weighting, and quantify the relationship between all of these methods by theoretical analysis as well as numeric evaluation of both simulated and experimentally collected data. The single-particle Wiener filter is applicable across a broad range of existing 3D reconstruction techniques, but is particularly well suited to the Fourier inversion method, leading to an efficient and accurate implementation. PMID:22613568
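A sketch of the resulting volume-normalized filter, per resolution shell: the SSNR is derived from an FSC curve, rescaled by the particle's fractional volume, and plugged into the usual Wiener form. The FSC-to-SSNR conversion convention and the toy FSC curve below are assumptions for illustration, not the authors' exact expressions:

```python
import numpy as np

def single_particle_wiener(fsc, fraction):
    fsc = np.clip(fsc, 0.0, 0.999)
    ssnr_map = 2.0 * fsc / (1.0 - fsc)   # assumes a half-map FSC convention
    ssnr = ssnr_map / fraction           # undo dilution by the solvent region
    return ssnr / (ssnr + 1.0)           # Wiener weight per resolution shell

freq = np.linspace(0.0, 0.5, 50)                      # spatial frequency, 1/voxel
fsc = 1.0 / (1.0 + np.exp(40 * (freq - 0.3)))         # toy FSC falling near 0.3
print(single_particle_wiener(fsc, fraction=0.2)[:5])  # near 1 at low frequency
```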

  12. Dual energy approach for cone beam artifacts correction

    NASA Astrophysics Data System (ADS)

    Han, Chulhee; Choi, Shinkook; Lee, Changwoo; Baek, Jongduk

    2017-03-01

    Cone beam computed tomography systems generate 3D volumetric images, which provide further morphological information compared to radiography and tomosynthesis systems. However, images reconstructed by the FDK algorithm contain cone beam artifacts when the cone angle is large. To reduce these artifacts, a two-pass algorithm has been proposed. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by high density materials, and provides an effective method to estimate the error images (i.e., cone beam artifact images) generated by those materials. While this approach is simple and effective for a small cone angle (i.e., 5-7 degrees), the correction performance degrades as the cone angle increases. In this work, we propose a new method to reduce the cone beam artifacts using a dual energy technique. The basic idea of the proposed method is to estimate the error images generated by the high density materials more reliably. To do this, projection data of the high density materials are extracted from dual energy CT projection data using a material decomposition technique, and then reconstructed by iterative reconstruction with total-variation regularization. The reconstructed high density materials are used to estimate the error images from the original FDK images. The performance of the proposed method is compared with the two-pass algorithm using root mean square errors. The results show that the proposed method reduces the cone beam artifacts more effectively, especially with a large cone angle.

  13. Minimal entropy reconstructions of thermal images for emissivity correction

    NASA Astrophysics Data System (ADS)

    Allred, Lloyd G.

    1999-03-01

    Low emissivity with correspondingly low thermal emission is a problem which has long afflicted infrared thermography. The problem is aggravated by reflected thermal energy, which increases as the emissivity decreases, thus reducing the net signal-to-noise ratio and degrading the resulting temperature reconstructions. Additional errors are introduced by the traditional emissivity-correction approaches, wherein one attempts to correct for emissivity either using thermocouples or using one or more baseline images collected at known temperatures. These corrections are numerically equivalent to image differencing. Errors in the baseline images are therefore additive, causing the resulting measurement error to double or triple. The practical application of thermal imagery usually entails coating the objective surface to increase the emissivity to a uniform and repeatable value. While the author recommends that the thermographer still adhere to this practice, he has devised a minimal entropy reconstruction which corrects not only for emissivity variations but also for variations in sensor response, using the baseline images at known temperatures to correct for these values. The minimal entropy reconstruction is based on a modified Hopfield neural network which finds the image that best explains the observed data and baseline data while having minimal entropy change between adjacent pixels. The autocorrelation of temperatures between adjacent pixels is a feature of most close-up thermal images. A surprising result from transient heating data indicates that the resulting corrected thermal images have less measurement error and are closer to the situational truth than the original data.

  14. A suite of global reconstructed precipitation products and their error estimate by multivariate regression using empirical orthogonal functions: 1850-present

    NASA Astrophysics Data System (ADS)

    Shen, S. S.

    2014-12-01

    This presentation describes a suite of global precipitation products reconstructed by a multivariate regression method using an empirical orthogonal function (EOF) expansion. The sampling errors of the reconstruction are estimated for each product datum entry. The maximum temporal coverage is 1850-present and the spatial coverage is quasi-global (75°S-75°N). The temporal resolution ranges from 5-day and monthly to seasonal and annual. The Global Precipitation Climatology Project (GPCP) precipitation data from 1979-2008 are used to calculate the EOFs. The Global Historical Climatology Network (GHCN) gridded data are used to calculate the regression coefficients for the reconstructions. The sampling errors of the reconstruction are analyzed in detail for different EOF modes. Our reconstructed 1900-2011 time series of the global average annual precipitation shows a 0.024 (mm/day)/100a trend, which is very close to the trend derived from the mean of 25 models of the CMIP5 (Coupled Model Intercomparison Project Phase 5). Our reconstruction examples of 1983 El Niño precipitation and 1917 La Niña precipitation (Figure 1) demonstrate that the El Niño and La Niña precipitation patterns are well reflected in the first two EOFs. The validation of our reconstruction results with GPCP makes it possible to use the reconstruction as benchmark data for climate models. This will help the climate modeling community to improve model precipitation mechanisms and reduce the systematic difference between observed global precipitation, which hovers at around 2.7 mm/day for reconstructions and GPCP, and model precipitation, which ranges over 2.6-3.3 mm/day for CMIP5. Our precipitation products are publicly available online, including digital data, precipitation animations, computer codes, readme files, and the user manual. This work is a joint effort between San Diego State University (Sam Shen, Nancy Tafolla, Barbara Sperberg, and Melanie Thorn) and the University of Maryland (Phil Arkin, Tom Smith, Li Ren, and Li Dai) and supported in part by the U.S. National Science Foundation (Awards No. AGS-1015926 and AGS-1015957).
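
    The reconstruction machinery described here is compact enough to sketch: compute EOFs from the densely sampled calibration period, regress the sparse historical observations onto the leading modes, and expand back to the full grid. The following Python sketch uses random stand-in data; array shapes and mode counts are illustrative assumptions, not the study's settings.

        import numpy as np

        rng = np.random.default_rng(0)
        calib = rng.standard_normal((360, 500))   # calibration anomalies (time x grid)
        calib -= calib.mean(axis=0)               # remove climatology

        # Leading EOFs from an SVD of the calibration period
        _, _, vt = np.linalg.svd(calib, full_matrices=False)
        eofs = vt[:10]                            # 10 spatial modes

        obs_idx = rng.choice(500, size=60, replace=False)  # sparse gauge network
        y = calib[5, obs_idx]                     # one "historical" month

        # Regress observations onto the EOFs sampled at gauge locations,
        # then expand the regression coefficients back to the full grid
        A = eofs[:, obs_idx].T                    # (gauges x modes)
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        reconstruction = coeffs @ eofs            # full-field anomaly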

  15. Accuracy analysis for triangulation and tracking based on time-multiplexed structured light.

    PubMed

    Wagner, Benjamin; Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris

    2014-08-01

    The authors' research group is currently developing a new optical head tracking system for intracranial radiosurgery. This tracking system utilizes infrared laser light to measure features of the soft tissue on the patient's forehead. These features are intended to offer highly accurate registration with respect to the rigid skull structure by means of compensating for the soft tissue. In this context, the system also has to be able to quickly generate accurate reconstructions of the skin surface. For this purpose, the authors have developed a laser scanning device which uses time-multiplexed structured light to triangulate surface points. The accuracy of the authors' laser scanning device is analyzed and compared for different triangulation methods, namely the Linear-Eigen method and a nonlinear least squares method. Since Microsoft's Kinect camera represents an alternative for fast surface reconstruction, the authors' results are also compared to the triangulation accuracy of the Kinect device. Moreover, the authors' laser scanning device was used for tracking of a rigid object to determine how this process is influenced by the remaining triangulation errors. For this experiment, the scanning device was mounted to the end-effector of a robot to provide a ground truth for the tracking. The analysis of the triangulation accuracy of the authors' laser scanning device revealed a root mean square (RMS) error of 0.16 mm. In comparison, the analysis of the triangulation accuracy of the Kinect device revealed an RMS error of 0.89 mm. It turned out that the remaining triangulation errors cause only small inaccuracies in the tracking of a rigid object: the tracking accuracy was given by an RMS translational error of 0.33 mm and an RMS rotational error of 0.12°. This paper shows that time-multiplexed structured light can be used to generate highly accurate reconstructions of surfaces. Furthermore, the reconstructed point sets can be used for high-accuracy tracking of objects, meeting the strict requirements of intracranial radiosurgery.
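
    The Linear-Eigen method named above is the classical homogeneous least squares triangulation: stack one pair of equations per view and take the right singular vector with the smallest singular value. A minimal Python sketch follows; the camera geometry in the example is hypothetical.

        import numpy as np

        def triangulate_linear_eigen(P1, P2, x1, x2):
            """Linear-Eigen (DLT) triangulation from two views.
            P1, P2: 3x4 projection matrices; x1, x2: pixel coords (u, v)."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]                  # dehomogenize

        # Two axis-aligned toy cameras and a point at depth 4
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
        X = np.array([0.2, -0.1, 4.0, 1.0])
        x1 = (P1 @ X)[:2] / (P1 @ X)[2]
        x2 = (P2 @ X)[:2] / (P2 @ X)[2]
        print(triangulate_linear_eigen(P1, P2, x1, x2))  # ~[0.2, -0.1, 4.0]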

  16. Experimental scheme and restoration algorithm of block compression sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Linxia; Zhou, Qun; Ke, Jun

    2018-01-01

    Compressed sensing (CS) can exploit the sparseness of a target to obtain its image with much less data than the Nyquist sampling theorem requires. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation (TV) minimization, are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
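
    Of the two solvers, OMP is simple enough to sketch in a few lines: greedily pick the atom most correlated with the residual, then re-fit all selected atoms by least squares. The block sizes and signal below are illustrative, not the paper's settings.

        import numpy as np

        def omp(Phi, y, k):
            """Orthogonal matching pursuit with k iterations."""
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(Phi.T @ residual)))
                support.append(j)
                coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
                residual = y - Phi[:, support] @ coef
            x = np.zeros(Phi.shape[1])
            x[support] = coef
            return x

        # Recover a 5-sparse signal from 60 random measurements
        rng = np.random.default_rng(1)
        Phi = rng.standard_normal((60, 200)) / np.sqrt(60)
        x_true = np.zeros(200)
        x_true[rng.choice(200, 5, replace=False)] = 1.0
        x_hat = omp(Phi, Phi @ x_true, k=5)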

  17. MO-DE-207A-12: Toward Patient-Specific 4DCT Reconstruction Using Adaptive Velocity Binning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, E.D.; Glide-Hurst, C.; Wayne State University, Detroit, MI

    2016-06-15

    Purpose: While 4DCT provides organ/tumor motion information, it often samples data over 10–20 breathing cycles. For patients presenting with compromised pulmonary function, breathing patterns can change over the acquisition time, potentially leading to tumor delineation discrepancies. This work introduces a novel adaptive velocity-modulated binning (AVB) 4DCT algorithm that modulates the reconstruction based on the respiratory waveform, yielding a patient-specific 4DCT solution. Methods: AVB was implemented in a research reconstruction configuration. After filtering the respiratory waveform, the algorithm examines neighboring data to a phase reconstruction point and the temporal gate is widened until the difference between the reconstruction point and waveform exceeds a threshold value—defined as percent difference between maximum/minimum waveform amplitude. The algorithm only impacts reconstruction if the gate width exceeds a set minimum temporal width required for accurate reconstruction. A sensitivity experiment of threshold values (0.5, 1, 5, 10, and 12%) was conducted to examine the interplay between threshold, signal to noise ratio (SNR), and image sharpness for phantom and several patient 4DCT cases using ten-phase reconstructions. Individual phase reconstructions were examined. Subtraction images and regions of interest were compared to quantify changes in SNR. Results: AVB increased signal in reconstructed 4DCT slices for respiratory waveforms that met the prescribed criteria. For the end-exhale phases, where the respiratory velocity is low, patient data revealed a threshold of 0.5% demonstrated increased SNR in the AVB reconstructions. For intermediate breathing phases, threshold values were required to be >10% to notice appreciable changes in CT intensity with AVB. AVB reconstructions exhibited appreciably higher SNR and reduced noise in regions of interest that were photon deprived such as the liver. Conclusion: We demonstrated that patient-specific velocity-based 4DCT reconstruction is feasible. Image noise was reduced with AVB, suggesting potential applications for low-dose acquisitions and to improve 4DCT reconstruction for irregular breathing patients. The submitting institution holds research agreements with Philips Healthcare.
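
    The gating logic can be paraphrased in a few lines of Python: starting from a phase reconstruction point, widen the temporal gate while the waveform stays within a threshold fraction of its amplitude range, and only use the widened gate if it exceeds a minimum width. This is a schematic reading of the abstract; function names, the toy waveform, and the threshold are illustrative assumptions.

        import numpy as np

        def adaptive_gate(waveform, center, threshold, min_width=3):
            """Widen a temporal gate around `center` while the waveform
            stays within `threshold` * (max - min) of its center value."""
            span = threshold * (waveform.max() - waveform.min())
            lo = hi = center
            while lo > 0 and abs(waveform[lo - 1] - waveform[center]) <= span:
                lo -= 1
            while (hi < len(waveform) - 1
                   and abs(waveform[hi + 1] - waveform[center]) <= span):
                hi += 1
            # Only modulate the reconstruction if the gate is wide enough
            return (lo, hi) if hi - lo + 1 >= min_width else (center, center)

        t = np.linspace(0, 2 * np.pi, 200)
        resp = np.cos(t)                         # toy respiratory trace
        print(adaptive_gate(resp, center=100, threshold=0.005))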

  18. Stitching-error reduction in gratings by shot-shifted electron-beam lithography

    NASA Technical Reports Server (NTRS)

    Dougherty, D. J.; Muller, R. E.; Maker, P. D.; Forouhar, S.

    2001-01-01

    Calculations of the grating spatial-frequency spectrum and the filtering properties of multiple-pass electron-beam writing demonstrate a tradeoff between stitching-error suppression and minimum pitch separation. High-resolution measurements of optical-diffraction patterns show a 25-dB reduction in stitching-error side modes.

  19. Fast calculation of tissue optical properties using MC and the experimental evaluation for diagnosis of cervical cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Zhou, Xiaoqing; Qin, Zhuanping; Zhao, Huijuan

    2011-02-01

    This article aims at the development of a fast inverse Monte Carlo (MC) simulation for the reconstruction of the optical properties (absorption coefficient μa and scattering coefficient μs) of cylindrical tissue, such as a cervix, from measurements of near infrared diffuse light in the frequency domain. Frequency domain information (amplitude and phase) is extracted from the time domain MC with a modified method. To shorten the computation time in the reconstruction of optical properties, an efficient and fast forward MC has to be achieved. To do this, firstly, databases of the frequency-domain information under a range of μa and μs were pre-built by combining MC simulation with Lambert-Beer's law. Then, a double polynomial model was adopted to quickly obtain the frequency-domain information for any optical properties. Based on the fast forward MC, the optical properties can be quickly obtained in a nonlinear optimization scheme. Reconstruction from simulated data showed that the developed inverse MC method has advantages in both reconstruction accuracy and computation time. The relative errors in the reconstruction of μa and μs are less than +/-6% and +/-12% respectively, while the other coefficient (μs or μa) is held at a fixed value. When both μa and μs are unknown, the relative errors in the reconstruction of the reduced scattering coefficient and absorption coefficient are mainly less than +/-10% in the range of 45 < μs < 80 cm-1 and 0.25 < μa < 0.55 cm-1. With the rapid reconstruction strategy developed in this article, the computation time for reconstructing one set of optical properties is less than 0.5 second. Endoscopic measurements on two tubular solid phantoms were also carried out to evaluate the system and the inversion scheme. The results demonstrated that a relative error of less than 20% can be achieved.
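
    The "double polynomial model" amounts to a cheap surrogate for the Monte Carlo forward model: fit a low-order polynomial in (μa, μs) to the pre-built database once, then evaluate it inside the inversion loop. Below is a Python sketch with a placeholder database; the analytic stand-in and the quadratic order are assumptions for illustration, not the authors' model.

        import numpy as np

        mua = np.linspace(0.25, 0.55, 7)              # 1/cm
        mus = np.linspace(45.0, 80.0, 8)              # 1/cm
        A, S = np.meshgrid(mua, mus, indexing="ij")
        amp = np.exp(-2.0 * A) / (1.0 + 0.01 * S)     # stand-in for MC output

        # Quadratic design matrix in the two optical coefficients
        X = np.column_stack([np.ones(A.size), A.ravel(), S.ravel(),
                             A.ravel()**2, S.ravel()**2, (A * S).ravel()])
        coef, *_ = np.linalg.lstsq(X, amp.ravel(), rcond=None)

        def fast_forward(mua_q, mus_q):
            """Evaluate the fitted surrogate at arbitrary (mua, mus)."""
            x = np.array([1.0, mua_q, mus_q,
                          mua_q**2, mus_q**2, mua_q * mus_q])
            return x @ coef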

  20. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure for the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
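
    In practice this usually means estimating a variance model of the form Var(e_i) proportional to x_i^(2k) from ordinary least squares residuals and then re-fitting by weighted least squares. A generic Python sketch follows; the simulated data and the linear model are illustrative, not the baldcypress data.

        import numpy as np

        rng = np.random.default_rng(2)
        x = np.linspace(1.0, 10.0, 100)                     # e.g., stump diameter
        y = 2.0 * x * (1 + 0.1 * rng.standard_normal(100))  # multiplicative error

        # Step 1: OLS fit and residuals
        resid = y - np.polyval(np.polyfit(x, y, 1), x)

        # Step 2: estimate the variance power k from log|resid| vs log x
        k = np.polyfit(np.log(x), np.log(np.abs(resid) + 1e-12), 1)[0]

        # Step 3: weighted fit; np.polyfit expects w[i] ~ 1/sigma_i
        b_wls = np.polyfit(x, y, 1, w=x**(-k))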

  1. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-11-15

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is referred to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.

  2. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA

    NASA Astrophysics Data System (ADS)

    Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.

    2015-07-01

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  3. Large radius of curvature measurement based on the evaluation of interferogram-quality metric in non-null interferometry

    NASA Astrophysics Data System (ADS)

    Yang, Zhongming; Dou, Jiantai; Du, Jinyu; Gao, Zhishan

    2018-03-01

    Non-null interferometry can be used to measure the radius of curvature (ROC); we previously presented a virtual quadratic Newton rings phase-shifting moiré-fringes measurement method for large ROC measurement (Yang et al., 2016). In this paper, we propose a large ROC measurement method based on the evaluation of an interferogram-quality metric in a non-null interferometer. With the multi-configuration model of the non-null interferometric system in ZEMAX, the retrace errors and the phase introduced by the test surface are reconstructed. The interferogram-quality metric is obtained from the normalized phase-shifted testing Newton rings with the spherical surface model in the non-null interferometric system. The radius of curvature of the test spherical surface is obtained when the minimum of the interferogram-quality metric is found. Simulations and experimental results verify the feasibility of the proposed method. For a spherical mirror with a ROC of 41,400 mm, the measurement accuracy is better than 0.13%.

  4. Intrinsic coincident full-Stokes polarimeter using stacked organic photovoltaics.

    PubMed

    Yang, Ruonan; Sen, Pratik; O'Connor, B T; Kudenov, M W

    2017-02-20

    An intrinsic coincident full-Stokes polarimeter is demonstrated by using strain-aligned polymer-based organic photovoltaics (OPVs) that preferentially absorb certain polarization states of incident light. The photovoltaic-based polarimeter is capable of measuring all four Stokes parameters by cascading four semitransparent OPVs in series along the same optical axis. This in-line polarimeter concept potentially ensures high temporal and spatial resolution with higher radiometric efficiency compared to existing polarimeter architectures. Two wave plates were incorporated into the system to modulate the S3 Stokes parameter so as to reduce the condition number of the measurement matrix and maximize the measured signal-to-noise ratio. Radiometric calibration was carried out to determine the measurement matrix. The polarimeter presented in this paper demonstrated an average RMS error of 0.84% for reconstructed Stokes vectors after normalization to S0. A theoretical analysis of the minimum condition number of the four-cell OPV design showed that for individually optimized OPV cells, a condition number of 2.4 is possible.
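
    The role of the condition number here is easy to demonstrate: it bounds how detector noise is amplified when the measurement matrix is inverted to recover the Stokes vector. A toy Python sketch follows; the 4x4 matrix below is illustrative, not the calibrated OPV measurement matrix.

        import numpy as np

        W = np.array([[0.5,  0.5,  0.0,  0.0],
                      [0.5, -0.5,  0.0,  0.0],
                      [0.5,  0.0,  0.5,  0.0],
                      [0.5,  0.0,  0.0,  0.5]])
        kappa = np.linalg.cond(W, 2)      # 2-norm condition number

        # Stokes reconstruction from four detector readings d = W s
        s_true = np.array([1.0, 0.3, -0.2, 0.1])
        d = W @ s_true
        s_hat = np.linalg.solve(W, d)     # equals s_true for noiseless d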

  5. Contextual Advantage for State Discrimination

    NASA Astrophysics Data System (ADS)

    Schmid, David; Spekkens, Robert W.

    2018-02-01

    Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.

  6. Optimized tomography of continuous variable systems using excitation counting

    NASA Astrophysics Data System (ADS)

    Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang

    2016-11-01

    We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.

  7. Textureless Macula Swelling Detection with Multiple Retinal Fundus Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul

    2010-01-01

    Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras can be employed by operators with limited training for telemedicine or Point-of-Care applications. We propose a novel technique that uses uncalibrated multiple-view fundus images to analyse the swelling of the macula. This innovation enables the detection and quantitative measurement of swollen areas by remote ophthalmologists. This capability is not available with a single image and is prone to error with stereo fundus cameras. We also present automatic algorithms to measure features from the reconstructed image which are useful in Point-of-Care automated diagnosis of early macular edema, e.g., before the appearance of exudation. The technique presented is divided into three parts: first, a preprocessing technique simultaneously enhances the dark microstructures of the macula and equalises the image; second, all available views are registered using non-morphological sparse features; finally, a dense pyramidal optical flow is calculated for all the images and statistically combined to build a naive height-map of the macula. Results are presented on three sets of synthetic images and two sets of real world images. These preliminary tests show the ability to infer a minimum swelling of 300 microns and to correlate the reconstruction with the swollen location.

  8. Effects of Acids, Bases, and Heteroatoms on Proximal Radial Distribution Functions for Proteins.

    PubMed

    Nguyen, Bao Linh; Pettitt, B Montgomery

    2015-04-14

    The proximal distribution of water around proteins is a convenient method of quantifying solvation. We consider the effect of charged and sulfur-containing amino acid side-chain atoms on the proximal radial distribution function (pRDF) of water molecules around proteins using side-chain analogs. The pRDF represents the relative probability of finding any solvent molecule at a distance from the closest or surface perpendicular protein atom. We consider the near-neighbor distribution. Previously, pRDFs were shown to be universal descriptors of the water molecules around C, N, and O atom types across hundreds of globular proteins. Using averaged pRDFs, a solvent density around any globular protein can be reconstructed with controllable relative error. Solvent reconstruction using the additional information from charged amino acid side-chain atom types from both small models and protein averages reveals the effects of surface charge distribution on solvent density and improves the reconstruction errors relative to simulation. Solvent density reconstructions from the small-molecule models are as effective and less computationally demanding than reconstructions from full macromolecular models in reproducing preferred hydration sites and solvent density fluctuations.
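
    A pRDF differs from an ordinary radial distribution function only in the distance it histograms: each water molecule's distance to its nearest protein atom. The sketch below estimates the proximal-shell volumes by Monte Carlo integration over the box; coordinates, sizes, and the normalization scheme are illustrative assumptions, not the authors' pipeline.

        import numpy as np

        def proximal_rdf(water, protein, edges, n_mc=20000, seed=3):
            """Histogram nearest-protein-atom distances, normalized by
            Monte Carlo estimates of the proximal shell volumes."""
            d = np.linalg.norm(water[:, None] - protein[None, :], axis=-1)
            counts, _ = np.histogram(d.min(axis=1), bins=edges)

            lo, box = water.min(0), water.max(0) - water.min(0)
            pts = np.random.default_rng(seed).random((n_mc, 3)) * box + lo
            dp = np.linalg.norm(pts[:, None] - protein[None, :], axis=-1)
            shell, _ = np.histogram(dp.min(axis=1), bins=edges)
            vol = shell / n_mc * np.prod(box)
            return counts / np.maximum(vol, 1e-12)

        rng = np.random.default_rng(4)
        protein = rng.random((50, 3)) * 20.0
        water = rng.random((2000, 3)) * 20.0
        g = proximal_rdf(water, protein, np.linspace(0.0, 8.0, 33))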

  9. Uncertainty based pressure reconstruction from velocity measurement with generalized least squares

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos

    2017-11-01

    A method using generalized least squares reconstruction of the instantaneous pressure field from velocity measurements and velocity uncertainty is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from the flow acceleration. The variance-covariance matrix of the pressure gradients is evaluated from the velocity uncertainty by approximating the pressure gradient error as a linear combination of velocity errors. An overdetermined system of linear equations which relates the pressure and the computed pressure gradients is formulated and then solved using generalized least squares with the variance-covariance matrix of the pressure gradients. By comparing the reconstructed pressure field against other methods, such as solving the pressure Poisson equation, omni-directional integration, and ordinary least squares reconstruction, the generalized least squares method is found to be more robust to noise in the velocity measurement. The improvement in the pressure result becomes more pronounced as the velocity measurement becomes less accurate and more heteroscedastic. The uncertainty of the reconstructed pressure field is also quantified and compared across the different methods.
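
    The estimator itself is the textbook generalized least squares solution p = (A^T Sigma^-1 A)^-1 A^T Sigma^-1 b, where A maps pressures to pressure gradients and Sigma is the gradient error covariance propagated from the velocity uncertainty. A 1-D Python sketch with a finite-difference gradient operator and a pinned reference pressure; all values are illustrative, not the paper's data.

        import numpy as np

        def gls(A, b, Sigma):
            """Generalized least squares: (A^T W A)^-1 A^T W b, W = Sigma^-1."""
            W = np.linalg.inv(Sigma)
            return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

        n, h = 20, 0.1
        G = (np.eye(n, n + 1, 1) - np.eye(n, n + 1)) / h  # gradient operator
        p_true = np.sin(np.linspace(0, np.pi, n + 1))

        rng = np.random.default_rng(5)
        Sigma = np.diag(0.01 + 0.05 * rng.random(n))      # heteroscedastic noise
        b = G @ p_true + rng.multivariate_normal(np.zeros(n), Sigma)

        # Pin p[0] = 0 as the reference pressure to make the system solvable
        A = np.vstack([G, np.eye(1, n + 1)])
        S = np.block([[Sigma, np.zeros((n, 1))],
                      [np.zeros((1, n)), np.array([[1e-6]])]])
        p_hat = gls(A, np.append(b, 0.0), S)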

  10. Pan-sharpening via compressed superresolution reconstruction and multidictionary learning

    NASA Astrophysics Data System (ADS)

    Shi, Cheng; Liu, Fang; Li, Lingling; Jiao, Licheng; Hao, Hongxia; Shang, Ronghua; Li, Yangyang

    2018-01-01

    In recent compressed sensing (CS)-based pan-sharpening algorithms, pan-sharpening performance is affected by two key problems. One is that there are always errors between the high-resolution panchromatic (HRP) image and the linearly weighted high-resolution multispectral (HRM) image, resulting in a loss of spatial and spectral information. The other is that the dictionary construction process depends on training samples that are not ground truth. These problems have limited the application of CS-based pan-sharpening algorithms. To solve them, we propose a pan-sharpening algorithm via compressed superresolution reconstruction and multidictionary learning. Through a two-stage implementation, the compressed superresolution reconstruction model effectively reduces the error between the HRP and the linearly weighted HRM images. Meanwhile, a multidictionary with ridgelets and curvelets is learned for both stages of the superresolution reconstruction process. Since ridgelets and curvelets can better capture structural and directional characteristics, a better reconstruction result can be obtained. Experiments are done on QuickBird and IKONOS satellite images. The results indicate that the proposed algorithm is competitive compared with recent CS-based pan-sharpening methods and other well-known methods.

  11. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are done to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix; the PSNR of images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, while the PSNR of the PGI and CGI reconstructions decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter, realizing a reconstruction denoising capability higher than that of the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
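
    The deterministic-pattern reconstruction is straightforward to emulate: illuminate with real-valued Fourier (cosine/sine) patterns and apply the pseudo-inverse of the measurement matrix to the bucket signals. A 1-D Python sketch with a toy object; sizes and the real-DFT pattern construction are illustrative assumptions, not the paper's setup.

        import numpy as np

        n = 64                                        # object pixels
        freqs = np.arange(n // 2 + 1)
        ang = 2 * np.pi * np.outer(freqs, np.arange(n)) / n
        Phi = np.vstack([np.cos(ang), np.sin(ang)])   # full-rank real DFT patterns

        x = np.zeros(n)
        x[20:28] = 1.0                                # toy 1-D object
        y = Phi @ x                                   # bucket-detector signals

        x_hat = np.linalg.pinv(Phi) @ y               # pseudo-inverse reconstruction
        print(np.linalg.norm(x_hat - x))              # ~0 for noiseless data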

  12. Network reconstruction via graph blending

    NASA Astrophysics Data System (ADS)

    Estrada, Rolando

    2016-05-01

    Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.

  13. Reconstructing past sea ice cover of the Northern Hemisphere from dinocyst assemblages: status of the approach

    NASA Astrophysics Data System (ADS)

    de Vernal, Anne; Rochon, André; Fréchette, Bianca; Henry, Maryse; Radi, Taoufik; Solignac, Sandrine

    2013-11-01

    Dinocysts occur in a wide range of environmental conditions, including polar areas. We review here their use for the reconstruction of paleo sea ice cover in such environments. In the Arctic Ocean and subarctic seas characterized by dense sea ice cover, Islandinium minutum, Islandinium? cezare, Echinidinium karaense, Polykrikos sp. var. Arctic, Spiniferites elongatus-frigidus and Impagidinium pallidum are common and often occur with more cosmopolitan taxa such as Operculodinium centrocarpum sensu Wall & Dale, cyst of Pentapharsodinium dalei and Brigantedinium spp. Canonical correspondence analyses conducted on dinocyst assemblages illustrate relationships with sea surface parameters such as salinity, temperature, and sea ice cover. The application of the modern analogue technique permits quantitative reconstruction of past sea ice cover, which is expressed in terms of seasonal extent of sea ice cover (months per year with more than 50% of sea ice concentration) or mean annual sea ice concentration (in tenths). The accuracy of reconstructions or root mean square error of prediction (RMSEP) is ±1.1 over 10, which corresponds to perennial sea ice. Such an error is close to the interannual variability (standard deviation) of observed sea ice cover. Mismatch between the time interval of instrumental data used as reference (1953-2000) and the time interval represented by dinocyst populations in surface sediment samples, which may cover decades if not centuries, is another source of error. Despite uncertainties, dinocyst assemblages are useful for making quantitative reconstruction of seasonal sea ice cover.

  14. Radial orbit error reduction and sea surface topography determination using satellite altimetry

    NASA Technical Reports Server (NTRS)

    Engelis, Theodossios

    1987-01-01

    A method is presented in satellite altimetry that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The modeling of the radial orbit error is made using the linearized Lagrangian perturbation theory. Secular and second order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed using the differences of two geopotential models as potential coefficient errors, for a SEASAT orbit. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are developed using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for the orbit generation. The simulation results show that the method can be used to effectively reduce the radial orbit error and recover the sea surface topography.

  15. VizieR Online Data Catalog: Evolution of solar irradiance during Holocene (Vieira+, 2011)

    NASA Astrophysics Data System (ADS)

    Vieira, L. E. A.; Solanki, S. K.; Krivova, N. A.; Usoskin, I.

    2011-05-01

    This is a composite total solar irradiance (TSI) time series for 9495BC to 2007AD constructed as described in Sect. 3.3 of the paper. Since the TSI is the main external heat input into the Earth's climate system, a consistent record covering as long a period as possible is needed for climate models. This was our main motivation for constructing this composite TSI time series. In order to produce a representative time series, we divided the Holocene into four periods according to the available data for each period. Table 4 (see below) summarizes the periods considered and the models available for each period. After the end of the Maunder Minimum we compute daily values, while prior to the end of the Maunder Minimum we compute 10-year averages. For the period for which both solar disk magnetograms and continuum images are available (period 1) we employ the SATIRE-S reconstruction (Krivova et al. 2003A&A...399L...1K; Wenzler et al. 2006A&A...460..583W). The SATIRE-T reconstruction (Krivova et al. 2010JGRA..11512112K) is used from the beginning of the Maunder Minimum (approximately 1640AD) to 1977AD. Prior to 1640AD reconstructions are based on cosmogenic isotopes (this paper). Different models of the Earth's geomagnetic field are available before and after approximately 5000BC. Therefore we treat periods 3 and 4 (before and after 5000BC) separately. Further details can be found in the paper. We emphasize that the reconstructions based on different proxies have different time resolutions. (1 data file).

  16. Force and Directional Force Modulation Effects on Accuracy and Variability in Low-Level Pinch Force Tracking.

    PubMed

    Park, Sangsoo; Spirduso, Waneen; Eakin, Tim; Abraham, Lawrence

    2018-01-01

    The authors investigated how varying the required low-level forces and the direction of force change affect accuracy and variability of force production in a cyclic isometric pinch force tracking task. Eighteen healthy right-handed adult volunteers performed the tracking task over 3 different force ranges. Root mean square error and coefficient of variation were higher at lower force levels and during minimum reversals compared with maximum reversals. Overall, the thumb showed greater root mean square error and coefficient of variation scores than did the index finger during maximum reversals, but not during minimum reversals. The observed impaired performance during minimum reversals might originate from history-dependent mechanisms of force production and highly coupled 2-digit performance.

  17. Radiographic Findings in Revision Anterior Cruciate Ligament Reconstructions from the MARS Cohort

    PubMed Central

    2013-01-01

    The Multicenter ACL (anterior cruciate ligament) Revision Study (MARS) group was developed to investigate revision ACL reconstruction outcomes. An important part of this is obtaining and reviewing radiographic studies. The goal of this radiographic analysis is to establish radiographic findings for a large revision ACL cohort to allow comparison with future studies. The study was designed as a cohort study. Various established radiographic parameters were measured by three readers, including sagittal and coronal femoral and tibial tunnel position, joint space narrowing, and leg alignment. Inter- and intraobserver comparisons were performed. For femoral sagittal tunnel position, 42% were more than 40% anterior to the posterior cortex. For sagittal tibial tunnel position, 49% demonstrated some impingement on full-extension lateral radiographs. Limb alignment averaged 43% medial to the medial edge of the tibial plateau. On the Rosenberg view (45-degree flexion view), the minimum joint space in the medial compartment averaged 106% of the opposite knee, but ranged down to a minimum of 4.6%. Lateral compartment narrowing at its minimum on the Rosenberg view averaged 91.2% of the opposite knee, but ranged down to a minimum of 0.0%. On the coronal view, verticality, as measured by the angle from the center of the tibial tunnel aperture to the center of the femoral tunnel aperture, measured 15.8 ± 6.9 degrees from vertical. This study represents the radiographic findings in the largest revision ACL reconstruction series ever assembled. Findings were generally consistent with those previously demonstrated in the literature. PMID:23404491

  18. A Tree-Ring Temperature Reconstruction from the Wrangell Mountains, Alaska (1593-1992): Evidence for Pronounced Regional Cooling During the Maunder Minimum

    NASA Astrophysics Data System (ADS)

    DArrigo, R.; Davi, N.; Jacoby, G.; Wiles, G.

    2002-05-01

    The Maunder Minimum interval (from the mid-1600s-early 1700s) is believed to have been one of the coldest periods of the past thousand years in the Northern Hemisphere. A maximum latewood density temperature reconstruction for the Wrangell Mountains, southern Alaska (1593-1992) provides information on regional temperature change during the Maunder Minimum and other periods of severe cold over the past four centuries. The Wrangell density record, which reflects warm season (July-September) temperatures, shows an overall cooling over the Maunder Minimum period with annual values reaching as low as -1.8°C below the long-term mean. Ring widths, which can integrate annual as well as summer conditions, also show pronounced cooling at the Wrangell site during this time, as do Arctic and hemispheric-scale temperature reconstructions based on tree rings and other proxy data. Maximum ages of glacial advance based on kill dates from overrun logs (which reflect cooler temperatures) coincide temporally with the cooling seen in the density and ring width records. In contrast, a recent modeling study indicates that during this period there was cold season (November-April) warming over much of Alaska, but cooling over other northern continental regions, as a result of decreased solar irradiance initiating low Arctic Oscillation index conditions. The influence of other forcings on Alaskan climate, the absence of ocean dynamical feedbacks in the model, and the different seasonality represented by the model and the trees may be some of the possible explanations for the different model and proxy results.

  19. Validation of the Kp Geomagnetic Index Forecast at CCMC

    NASA Astrophysics Data System (ADS)

    Frechette, B. P.; Mays, M. L.

    2017-12-01

    The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere such as geomagnetic storms and substorms. In this study, we performed validation of the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. We computed the Kp error for each forecast (average, minimum, maximum) and each synoptic period, and then quantified forecast performance by computing the mean error, mean absolute error, root mean square error, multiplicative bias, and correlation coefficient. A contingency table was made for each forecast and skill scores were computed; the results are compared to the perfect score and the reference forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts within a range of 1, even though persistence beats it.
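
    The continuous and categorical scores used in such a validation are all short computations. A generic Python sketch follows; the event threshold and the metric set are illustrative, not the study's exact configuration.

        import numpy as np

        def verify(f, o, thresh=5.0):
            """Mean error, MAE, RMSE, multiplicative bias, correlation,
            and a 2x2 contingency table for events with Kp >= thresh."""
            f, o = np.asarray(f, float), np.asarray(o, float)
            scores = {
                "ME":   np.mean(f - o),
                "MAE":  np.mean(np.abs(f - o)),
                "RMSE": np.sqrt(np.mean((f - o) ** 2)),
                "bias": f.mean() / o.mean(),
                "corr": np.corrcoef(f, o)[0, 1],
            }
            hit  = np.sum((f >= thresh) & (o >= thresh))
            miss = np.sum((f <  thresh) & (o >= thresh))
            fa   = np.sum((f >= thresh) & (o <  thresh))
            scores["POD"] = hit / max(hit + miss, 1)   # probability of detection
            scores["FAR"] = fa / max(hit + fa, 1)      # false alarm ratio
            return scores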

  20. Image-based metal artifact reduction in x-ray computed tomography utilizing local anatomical similarity

    NASA Astrophysics Data System (ADS)

    Dong, Xue; Yang, Xiaofeng; Rosenfield, Jonathan; Elder, Eric; Dhabaan, Anees

    2017-03-01

    X-ray computed tomography (CT) has been widely used in radiation therapy treatment planning in recent years. However, metal implants such as dental fillings and hip prostheses can cause severe bright and dark streaking artifacts in reconstructed CT images. These artifacts decrease image contrast and degrade HU accuracy, leading to inaccuracies in target delineation and dose calculation. In this work, a metal artifact reduction method is proposed based on the intrinsic anatomical similarity between neighboring CT slices. Neighboring CT slices from the same patient exhibit similar anatomical features. Exploiting this anatomical similarity, a gamma map is calculated as a weighted summation of relative HU error and distance error for each pixel in an artifact-corrupted CT image relative to a neighboring, artifact-free image. The minimum value in the gamma map for each pixel is used to identify an appropriate pixel from the artifact-free CT slice to replace the corresponding artifact-corrupted pixel. With the proposed method, the mean CT HU error was reduced from 360 HU and 460 HU to 24 HU and 34 HU on head and pelvis CT images, respectively. Dose calculation accuracy also improved, as the dose difference was reduced from greater than 20% to less than 4%. Using 3%/3mm criteria, the gamma analysis failure rate was reduced from 23.25% to 0.02%. In summary, an image-based metal artifact reduction method is proposed that replaces corrupted image pixels with pixels from neighboring CT slices free of metal artifacts. This method is shown to be capable of suppressing streaking artifacts, thereby improving HU and dose calculation accuracy.
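
    The pixel replacement rule can be sketched directly from the description: for each corrupted pixel, score nearby pixels of an artifact-free neighboring slice by a weighted sum of HU error and distance error, and copy the pixel with the minimum score. Weights, tolerances, and the search radius below are illustrative assumptions, not the authors' values.

        import numpy as np

        def gamma_replace(corrupted, reference, hu_tol=50.0, dist_tol=3.0, r=5):
            """Replace each pixel with the gamma-minimizing pixel of a
            neighboring artifact-free slice (schematic sketch)."""
            rows, cols = corrupted.shape
            out = corrupted.copy()
            for i in range(rows):
                for j in range(cols):
                    i0, i1 = max(0, i - r), min(rows, i + r + 1)
                    j0, j1 = max(0, j - r), min(cols, j + r + 1)
                    patch = reference[i0:i1, j0:j1]
                    ii, jj = np.mgrid[i0:i1, j0:j1]
                    gamma = (np.abs(patch - corrupted[i, j]) / hu_tol
                             + np.hypot(ii - i, jj - j) / dist_tol)
                    k = np.unravel_index(np.argmin(gamma), gamma.shape)
                    out[i, j] = patch[k]
            return out

        rng = np.random.default_rng(6)
        ref = rng.normal(0.0, 30.0, (32, 32))
        bad = ref + 200.0 * (rng.random((32, 32)) < 0.05)  # streak-like outliers
        fixed = gamma_replace(bad, ref)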

  1. Towards Better Calibration of Modern Palynological Data against Climate: A Case Study in Osaka Bay, Japan

    NASA Astrophysics Data System (ADS)

    Kitaba, I.; Nakagawa, T.; McClymont, E.; Dettman, D. L.; Yamada, K.; Takemura, K.; Hyodo, M.

    2014-12-01

    Many of the difficulties in pollen fossil-based paleoclimate reconstruction in coastal regions derive from the complex sedimentary processes of the near-shore environment. In order to examine this problem, we carried out pollen analysis of surface sediments collected from 35 sites in Osaka Bay, Japan. Using the biomisation method, the surrounding vegetation was accurately reconstructed at all sites. Applying the modern analogue technique to the same data, however, led to reconstructed temperatures that were lower by ca. 5°C and precipitation amounts higher by ca. 5000 mm than the current sea level climate of the region. The range of reconstructed values was larger than the reconstruction error associated with the method. The principal component analysis shows that the surface pollen variation in Osaka Bay reflects sedimentary processes. This significant error in the quantitative climatic reconstruction using pollen data is attributed to the fact that the pollen assemblage is not determined solely by climate but reflects non-climatic influences. The accuracy and precision of climatic reconstruction can be improved significantly by expanding counts of minor taxa. Given this result, we re-examined the reconstructed climate using the Osaka Bay palynological record reported in Kitaba et al. (2013). The new method did not significantly alter the overall variation in the reconstructed climate, and thus we conclude that the reconstruction was generally reliable. However, some intervals were strongly affected by depositional environmental change. In these, a climate signal can be extracted by excluding the patterns that arise from coastal sedimentation.

  2. Stack Number Influence on the Accuracy of Aster Gdem (V2)

    NASA Astrophysics Data System (ADS)

    Mirzadeh, S. M. J.; Alizadeh Naeini, A.; Fatemi, S. B.

    2017-09-01

    In this research, the influence of stack number (STKN) on the accuracy of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM (GDEM) has been investigated. For this purpose, two data sets of ASTER and reference DEMs from two study areas with varied topography (Bomehen and Tazehabad) were used. The results show that in both study areas a STKN of 19 yields the minimum error, although this minimum differs only slightly from the errors at other STKN values. The analysis of slope, STKN, and error values shows that there is no strong correlation between these parameters in either study area. For example, the mean absolute error increases with rougher topography and with larger slope and elevation values, but changes in STKN have no important effect on the error values. Furthermore, at high STKN values the effect of slope on elevation accuracy practically disappears. Also, there is no strong correlation between the residuals and STKN in the ASTER GDEM.

  3. Characterization of the International Linear Collider damping ring optics

    NASA Astrophysics Data System (ADS)

    Shanks, J.; Rubin, D. L.; Sagan, D.

    2014-10-01

    A method is presented for characterizing the emittance dilution and dynamic aperture for an arbitrary closed lattice that includes guide field magnet errors, multipole errors and misalignments. This method, developed and tested at the Cornell Electron Storage Ring Test Accelerator (CesrTA), has been applied to the damping ring lattice for the International Linear Collider (ILC). The effectiveness of beam based emittance tuning is limited by beam position monitor (BPM) measurement errors, number of corrector magnets and their placement, and correction algorithm. The specifications for damping ring magnet alignment, multipole errors, number of BPMs, and precision in BPM measurements are shown to be consistent with the required emittances and dynamic aperture. The methodology is then used to determine the minimum number of position monitors that is required to achieve the emittance targets, and how that minimum depends on the location of the BPMs. Similarly, the maximum tolerable multipole errors are evaluated. Finally, the robustness of each BPM configuration with respect to random failures is explored.

  4. An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.

    PubMed

    Xiao, Bing; Yin, Shen

    2018-02-01

    This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent observer-based approach with a fast reconstruction property is developed; the scheme is capable of precisely reconstructing the actual actuator fault. It is shown by Lyapunov stability analysis that the reconstruction error converges to zero in finite time, so that precise and fast reconstruction performance is provided for the actuator fault. The most important feature of the scheme is that it does not depend on the control law, the dynamic model of the actuator, the fault type, or the fault time-profile. The reconstruction performance and capability of the proposed approach are further validated by simulation and experimental results.

  5. Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.

    PubMed

    Mehranian, Abolfazl; Zaidi, Habib

    2015-04-01

    Time-of-flight (TOF) PET/MR imaging is an emerging imaging technology, with TOF offering great capabilities to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF, each corrected for attenuation using the reference CT-based attenuation correction and the resulting 4-class MRAC maps. The relative errors of the non-TOF and TOF MRAC reconstructions with respect to their reference CT-based attenuation-corrected reconstructions were then compared. The bias was evaluated locally using volumes of interest (VOIs) defined on lesions and normal tissues, and globally using CT-derived tissue classes containing all voxels in a given tissue. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduces the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91). Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and significantly improves the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  6. Precise signal amplitude retrieval for a non-homogeneous diagnostic beam using complex interferometry approach

    NASA Astrophysics Data System (ADS)

    Krupka, M.; Kalal, M.; Dostal, J.; Dudzak, R.; Juha, L.

    2017-08-01

    Classical interferometry has become a widely used method of active optical diagnostics. Its more advanced version, allowing reconstruction of three sets of data from just one specially designed interferogram (a so-called complex interferogram), was developed in the past and became known as complex interferometry. Along with the phase shift, which can also be retrieved using classical interferometry, the amplitude modification of the probing part of the diagnostic beam caused by the object under study (to be called the signal amplitude) as well as the contrast of the interference fringes can be retrieved using the complex interferometry approach. In order to partially compensate for errors in the reconstruction due to imperfections in the diagnostic beam intensity structure, as well as for errors caused by a non-ideal optical setup of the interferometer itself (including the quality of its optical components), a reference interferogram can be put to good use. This method of interferogram analysis of experimental data has been successfully implemented in practice. However, in the majority of interferometer setups (especially those employing wavefront division) the probe and the reference part of the diagnostic beam feature different intensity distributions over their respective cross sections. This introduces an additional error into the reconstruction of the signal amplitude and the fringe contrast, which cannot be resolved using the reference interferogram alone. To deal with this error, it was found that additional separately recorded images of the intensity distribution of the probe and the reference part of the diagnostic beam (with no signal present) are needed. For the best results, sufficient shot-to-shot stability of the whole diagnostic system is required. In this paper, the efficiency of the complex interferometry approach for obtaining the highest possible accuracy of the signal amplitude reconstruction is verified using computer-generated complex and reference interferograms containing artificially introduced intensity variations in the probe and the reference part of the diagnostic beam. These sets of data are subsequently analyzed and the errors of the signal amplitude reconstruction are evaluated.

  7. Accelerated Brain DCE-MRI Using Iterative Reconstruction With Total Generalized Variation Penalty for Quantitative Pharmacokinetic Analysis: A Feasibility Study.

    PubMed

    Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng

    2017-08-01

    To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with a total generalized variation penalty for quantitative pharmacokinetic analysis in clinical brain dynamic contrast-enhanced magnetic resonance imaging. Eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4: (1) a golden ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant K trans in the extended Tofts model and blood flow F B and blood volume V B from the 2-compartment exchange model were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding values calculated from the fully sampled data. To quantify the accuracy of each parameter calculated from the undersampled data, the error in volume mean, the total relative error, and the cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to the ones generated from the original fully sampled data. Within the region of interest, most derived error-in-volume-mean values were about 5% or lower, and the average error in volume mean of all parameter maps generated through either sampling strategy was about 3.54%. The average total relative error of all parameter maps in the region of interest was about 0.115, and the average cross-correlation of all parameter maps in the region of interest was about 0.962. All investigated pharmacokinetic parameters had no significant differences between the results from the original data and the reduced sampling data. With sparsely sampled k-space data simulating an acquisition accelerated by a factor of 4, the total generalized variation-based iterative image reconstruction method can accurately estimate the investigated dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters, supporting reliable clinical application.

  8. Automatic segmentation and reconstruction of the cortex from neonatal MRI.

    PubMed

    Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Joseph V

    2007-11-15

    Segmentation and reconstruction of cortical surfaces from magnetic resonance (MR) images are more challenging for developing neonates than for adults. This is mainly due to the dynamic changes in the contrast between gray matter (GM) and white matter (WM) in both T1- and T2-weighted images (T1w and T2w) during brain maturation. In particular, in neonatal T2w images WM typically has higher signal intensity than GM. This causes mislabeled voxels during cortical segmentation, especially in the cortical regions of the brain and in particular at the interface between GM and cerebrospinal fluid (CSF). We propose an automatic segmentation algorithm that detects these mislabeled voxels and corrects errors caused by partial volume effects. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared with the classic expectation maximization (EM) scheme. Quantitative validation against manual segmentation demonstrates good performance (mean Dice value: 0.758 ± 0.037 for GM and 0.794 ± 0.078 for WM). The inner, central and outer cortical surfaces are then reconstructed using implicit surface evolution. A landmark study is performed to verify the accuracy of the reconstructed cortex (mean surface reconstruction error: 0.73 mm for the inner surface and 0.63 mm for the outer). Both segmentation and reconstruction have been tested on 25 neonates with gestational ages ranging from approximately 27 to 45 weeks. This preliminary analysis confirms previous findings that cortical surface area and curvature increase with age, and that surface area scales with cerebral volume according to a power law, while cortical thickness is not related to age or brain growth.
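
    The Dice value quoted above is the standard overlap measure between a candidate and a reference segmentation; a minimal sketch follows.

    ```python
    import numpy as np

    def dice(seg_a, seg_b):
        """Dice similarity coefficient of two binary segmentation masks."""
        a, b = seg_a.astype(bool), seg_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
    ```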

  9. Frequency-domain optical tomographic image reconstruction algorithm with the simplified spherical harmonics (SP3) light propagation model.

    PubMed

    Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H

    2017-06-01

    We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with nth-order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd-order absorption coefficient (i.e., FD-SP3) is used as a forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and size that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; the average errors in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. From the inverse results we find that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than with the SP1 model. This work therefore shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, both in terms of computational time and accuracy: it requires significantly less CPU time than the FD-ERT (S12) and is more accurate than the FD-SP1.
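
    The numerical workhorses named above, restarted GMRES for the forward solve and L-BFGS for the coefficient recovery, are both available in SciPy. The toy below is only a one-parameter stand-in for the FD-SP3 inverse problem: a tridiagonal "diffusion" operator with an absorption-like diagonal term takes the place of the finite-volume system matrix.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.sparse.linalg import gmres

    n = 50
    rng = np.random.default_rng(0)
    # Tridiagonal "diffusion" operator standing in for the finite-volume system.
    L = np.diag(4.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    b = rng.random(n)

    def forward(mu):
        """Forward solve A(mu) x = b with restarted GMRES; mu plays the role
        of an absorption coefficient entering on the diagonal (assumption)."""
        A = L + mu * np.eye(n)
        x, info = gmres(A, b, restart=20, atol=1e-10)
        return x

    data = forward(0.7)                      # synthetic "measurement"

    def misfit(mu):
        r = forward(mu[0]) - data
        return 0.5 * r @ r

    # Quasi-Newton recovery of the coefficient, mirroring the L-BFGS step.
    res = minimize(misfit, x0=[0.2], method="L-BFGS-B", bounds=[(0.01, 5.0)])
    print(res.x)                             # recovers ~0.7
    ```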

  10. Joint L1 and Total Variation Regularization for Fluorescence Molecular Tomography

    PubMed Central

    Dutta, Joyita; Ahn, Sangtae; Li, Changqing; Cherry, Simon R.; Leahy, Richard M.

    2012-01-01

    Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in vivo in small animals. Owing to the high degree of absorption and scattering of light through tissue, the FMT inverse problem is inherently ill-conditioned, making image reconstruction highly susceptible to the effects of noise and numerical errors. Appropriate priors or penalties are needed to facilitate reconstruction and to restrict the search space to a specific solution set. Typically, fluorescent probes are locally concentrated within specific areas of interest (e.g., inside tumors). The commonly used L2 norm penalty generates the minimum energy solution, which tends to be spread out in space. Instead, we present here an approach involving a combination of the L1 and total variation norm penalties, the former to suppress spurious background signals and enforce sparsity and the latter to preserve local smoothness and piecewise constancy in the reconstructed images. We have developed a surrogate-based optimization method for minimizing the joint penalties. The method was validated using both simulated and experimental data obtained from a mouse-shaped phantom mimicking tissue optical properties and containing two embedded fluorescent sources. Fluorescence data were collected using a 3D FMT setup that uses an EMCCD camera for image acquisition and a conical mirror for full-surface viewing. A range of performance metrics was utilized to evaluate our simulation results and to compare our method with the L1, L2, and total variation norm penalty based approaches. The experimental results were assessed using Dice similarity coefficients computed after co-registration with a CT image of the phantom. PMID:22390906
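
    A sketch of the joint objective described above, with an isotropic discretization of the TV term; the surrogate-based minimization itself is not reproduced here, and the discretization details are assumptions.

    ```python
    import numpy as np

    def tv(x):
        """Isotropic total variation of a 2D image."""
        dx = np.diff(x, axis=0)
        dy = np.diff(x, axis=1)
        return np.sqrt(dx[:, :-1] ** 2 + dy[:-1, :] ** 2).sum()

    def joint_cost(x, A, y, lam_l1, lam_tv):
        """Least-squares data fit plus joint L1 (sparsity) and TV (piecewise
        smoothness) penalties; evaluation only, not the surrogate minimizer."""
        r = A @ x.ravel() - y
        return 0.5 * r @ r + lam_l1 * np.abs(x).sum() + lam_tv * tv(x)
    ```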

  11. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors - a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches - to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) leads to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of the high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to the normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
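
    A minimal sketch of the random walk graph Laplacian underlying LERaG; the construction of the patch-graph edge weights W and the use of its left eigenvectors in the actual prior are not reproduced here.

    ```python
    import numpy as np

    def random_walk_laplacian(W):
        """L_rw = I - D^{-1} W for a non-negative edge-weight matrix W
        (every row must have nonzero degree)."""
        d = W.sum(axis=1)
        return np.eye(W.shape[0]) - W / d[:, None]

    def smoothness(x, L_rw):
        """Graph-signal smoothness x^T L x: small when the patch signal x
        lives mostly in the low graph frequencies, as the prior demands."""
        return x @ (L_rw @ x)
    ```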

  12. Rigorous accuracy assessment for 3D reconstruction using time-series Dual Fluoroscopy (DF) image pairs

    NASA Astrophysics Data System (ADS)

    Al-Durgham, Kaleel; Lichti, Derek D.; Kuntze, Gregor; Ronsky, Janet

    2017-06-01

    High-speed biplanar videoradiography imaging systems, clinically referred to as dual fluoroscopy (DF), are being used increasingly for skeletal kinematics analysis. Typically, a DF system comprises two X-ray sources, two image intensifiers and two high-speed video cameras. The combination of these elements provides time-series image pairs of the articulating bones of a joint, which permits the measurement of bony rotation and translation in 3D at high temporal resolution (e.g., 120-250 Hz). Assessment of the accuracy of 3D measurements derived from DF imaging has been the subject of recent research efforts by several groups, however with methodological limitations. This paper presents a novel and simple accuracy assessment procedure based on precise photogrammetric tools. We address the fundamental photogrammetric principles for the accuracy evaluation of an imaging system. Bundle adjustment with self-calibration is used for the estimation of the system parameters. The bundle adjustment calibration uses an appropriate sensor model and applies free-network constraints and relative orientation stability constraints for a precise estimation of the system parameters. A photogrammetric intersection of time-series image pairs is used for the 3D reconstruction of a rotating planar object. A point-based registration method is used to combine the 3D coordinates from the intersection with independently surveyed coordinates. The final DF accuracy measure is reported as the distance between the 3D coordinates from image intersection and the independently surveyed coordinates. The accuracy assessment procedure is designed to evaluate the accuracy over the full DF image format and a wide range of object rotation. An experiment reconstructing a rotating planar object yielded an average positional error of 0.44 ± 0.2 mm in the derived 3D coordinates (minimum 0.05 and maximum 1.2 mm).
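
    The point-based registration and distance-error reporting described above can be sketched with the classic Kabsch least-squares rigid alignment; this is a generic stand-in, not the authors' implementation.

    ```python
    import numpy as np

    def register_points(P, Q):
        """Least-squares rigid registration (Kabsch) mapping point set P
        onto corresponding points Q; returns rotation R and translation t."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T              # proper rotation (no reflection)
        t = cQ - R @ cP
        return R, t

    def positional_errors(P, Q, R, t):
        """Per-point distances between registered and surveyed coordinates;
        report mean, min and max as in the accuracy measure above."""
        return np.linalg.norm((R @ P.T).T + t - Q, axis=1)
    ```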

  13. 3D bubble reconstruction using multiple cameras and space carving method

    NASA Astrophysics Data System (ADS)

    Fu, Yucheng; Liu, Yang

    2018-07-01

    An accurate measurement of bubble shape and size has significant value in understanding the behavior of bubbles that exist in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume and surface area, among other parameters. The 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information on individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape based on the recorded high-speed images from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm × 1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading. The conventional two-camera system has an error around 10%. The one-camera system has an error greater than 25%. The visualization of a rising 3D bubble demonstrates the wall influence on bubble rotation angle and aspect ratio. This also explains the large error that exists in the single-camera measurement.
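
    A minimal space-carving sketch under simplifying assumptions (calibrated 3x4 projection matrices and binary bubble silhouettes): a voxel survives only if it projects inside the silhouette in every view.

    ```python
    import numpy as np

    def carve(voxels, cameras, silhouettes):
        """Keep voxels whose projection falls inside every silhouette.

        voxels      : (N, 3) voxel centers
        cameras     : list of 3x4 projection matrices (assumed calibrated)
        silhouettes : list of binary images, one per camera
        """
        keep = np.ones(len(voxels), dtype=bool)
        hom = np.hstack([voxels, np.ones((len(voxels), 1))])
        for P, sil in zip(cameras, silhouettes):
            uvw = hom @ P.T
            u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
            v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
            inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
            keep &= inside
            keep[inside] &= sil[v[inside], u[inside]]
        # Bubble volume estimate = surviving voxel count * voxel volume.
        return voxels[keep]
    ```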

  14. High-Resolution Multi-Shot Spiral Diffusion Tensor Imaging with Inherent Correction of Motion-Induced Phase Errors

    PubMed Central

    Truong, Trong-Kha; Guidon, Arnaud

    2014-01-01

    Purpose: To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods: The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results: The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion: The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457

  15. Effective learning strategies for real-time image-guided adaptive control of multiple-source hyperthermia applicators.

    PubMed

    Cheng, Kung-Shan; Dewhirst, Mark W; Stauffer, Paul R; Das, Shiva

    2010-03-01

    This paper investigates overall theoretical requirements for reducing the times required for the iterative learning of a real-time image-guided adaptive control routine for multiple-source heat applicators, as used in hyperthermia and thermal ablative therapy for cancer. Methods for partial reconstruction of the physical system, with and without model reduction, to find solutions within a clinically practical timeframe were analyzed. A mathematical analysis based on the Fredholm alternative theorem (FAT) was used to compactly analyze the existence and uniqueness of the optimal heating vector under two fundamental situations: (1) noiseless partial reconstruction and (2) noisy partial reconstruction. These results were coupled with a method for further acceleration of the solution using virtual source (VS) model reduction. The matrix approximation theorem (MAT) was used to choose the optimal vectors spanning the reduced-order subspace, to reduce the time for system reconstruction, and to determine the associated approximation error. Numerical simulations of the adaptive control of hyperthermia using VS were also performed to test the predictions derived from the theoretical analysis. A thigh sarcoma patient model surrounded by a ten-antenna phased-array applicator was used for this purpose. The impacts of convective cooling from blood flow and of a sudden increase in perfusion in muscle and tumor were also simulated. By FAT, partial system reconstruction conducted directly in the full space of the physical variables, such as the phases and magnitudes of the heat sources, cannot guarantee reconstructing the optimal system for determining the globally optimal setting of the heat sources. A remedy for this limitation is to conduct the partial reconstruction within a reduced-order subspace spanned by the first few maximum eigenvectors of the true system matrix. By MAT, this VS subspace is the optimal one when the goal is to maximize the average tumor temperature. When more than six sources are present, the number of learning steps required by a nonlinear scheme is theoretically smaller than that of a linear one; however, a finite number of iterative corrections is necessary for each learning step of a nonlinear algorithm, so the actual computational workload of a nonlinear algorithm is not necessarily less than that of a linear one. Based on the analysis presented herein, obtaining a unique globally optimal heating vector for a multiple-source applicator within the constraints of real-time clinical hyperthermia treatments and thermal ablative therapies appears attainable using partial reconstruction with a minimum-norm least-squares method with supplemental equations. One way to supplement equations is to include a method of model reduction.
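
    Two building blocks from the analysis above, the eigenvector-based virtual source subspace and the minimum-norm least-squares solution, admit short sketches; the system matrix S is assumed Hermitian for illustration.

    ```python
    import numpy as np

    def virtual_source_basis(S, k):
        """First k maximum eigenvectors of a Hermitian system matrix S,
        spanning the reduced-order virtual-source subspace (per MAT)."""
        w, V = np.linalg.eigh(S)
        order = np.argsort(w)[::-1]          # sort eigenvalues descending
        return V[:, order[:k]]

    def min_norm_ls(A, b):
        """Minimum-norm least-squares solution via the pseudoinverse, the
        remedy suggested above for partially reconstructed systems."""
        return np.linalg.pinv(A) @ b
    ```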

  16. Ghost hunting—an assessment of ghost particle detection and removal methods for tomographic-PIV

    NASA Astrophysics Data System (ADS)

    Elsinga, G. E.; Tokgoz, S.

    2014-08-01

    This paper discusses and compares several methods that aim to remove spurious peaks, i.e. ghost particles, from the volume intensity reconstruction in tomographic-PIV. The assessment is based on numerical simulations of time-resolved tomographic-PIV experiments in linear shear flows. Within the reconstructed volumes, intensity peaks are detected and tracked over time. These peaks are associated with particles (either ghosts or actual particles) and are characterized by their peak intensity, size and track length. Peak intensity and track length are found to be effective in discriminating between most ghosts and the actual particles, although not all ghosts can be detected using only a single threshold. The size of the reconstructed particles does not reveal an important difference between ghosts and actual particles. The joint distribution of peak intensity and track length, however, does under certain conditions allow a complete separation of ghosts and actual particles. The ghosts can have either a high intensity or a long track length, but not both combined, as all the actual particles do. Removing the detected ghosts from the reconstructed volume and performing additional MART iterations can decrease the particle position error at low to moderate seeding densities, but increases the position error, velocity error and tracking errors at higher densities. The observed trends in the joint distribution of peak intensity and track length are confirmed by results from a real experiment in laminar Taylor-Couette flow. This diagnostic plot allows an estimate of the number of ghosts that are indistinguishable from the actual particles.

  17. Simplified Approach Charts Improve Data Retrieval Performance

    PubMed Central

    Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.

    2016-01-01

    The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009

  18. Optimal plane search method in blood flow measurements by magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Bargiel, Pawel; Orkisz, Maciej; Przelaskowski, Artur; Piatkowska-Janko, Ewa; Bogorodzki, Piotr; Wolak, Tomasz

    2004-07-01

    This paper offers an algorithm for determining the blood flow parameters in neck vessel segments using a single (optimal) measurement plane instead of the usual approach involving four planes orthogonal to the artery axis. This new approach aims at significantly shortening the time required to complete measurements using nuclear magnetic resonance techniques. The algorithm scans the solution space for the minimum of a defined error function, and thus determines a single plane, characterized by a minimum measurement error, that allows an accurate measurement of blood flow in the four carotid arteries. The paper also describes a practical implementation of this method (as a module of a larger imaging-measuring system), including preliminary research results.

  19. Methods for calculating the electrode position Jacobian for impedance imaging.

    PubMed

    Boyle, A; Crabb, M G; Jehl, M; Lionheart, W R B; Adler, A

    2017-03-01

    Electrical impedance tomography (EIT) and electrical resistivity tomography (ERT) apply currents and measure voltages at the boundary of a domain through electrodes. The movement or incorrect placement of electrodes may lead to modelling errors that result in significant reconstructed image artifacts. These errors may be accounted for by allowing for electrode position estimates in the model. Movement may be reconstructed through a first-order approximation, the electrode position Jacobian. A reconstruction that incorporates electrode position estimates and conductivity can significantly reduce image artifacts. Conversely, if electrode position is ignored, it can be difficult to distinguish true conductivity changes from reconstruction artifacts, which may increase the risk of a flawed interpretation. In this work, we aim to determine the fastest, most accurate approach for estimating the electrode position Jacobian. Four methods of calculating the electrode position Jacobian were evaluated on a homogeneous halfspace. Results show that the Fréchet derivative and rank-one update methods are competitive in computational efficiency but achieve different solutions for certain values of contact impedance and mesh density.
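
    Of the methods compared above, the simplest baseline is a perturbation (finite-difference) Jacobian; the sketch below illustrates that baseline with a hypothetical toy forward model, not an EIT solver.

    ```python
    import numpy as np

    def electrode_jacobian(forward, positions, h=1e-6):
        """Finite-difference estimate of the electrode position Jacobian
        dV/dp, one column per electrode coordinate; a generic baseline,
        not the paper's Fréchet-derivative or rank-one-update methods."""
        v0 = forward(positions)
        J = np.zeros((v0.size, positions.size))
        flat = positions.ravel()
        for j in range(flat.size):
            p = flat.copy()
            p[j] += h
            J[:, j] = (forward(p.reshape(positions.shape)) - v0) / h
        return J

    # Toy forward model (assumption for illustration): measurements are
    # pairwise electrode distances, standing in for boundary voltages.
    def toy_forward(pos):
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        return d[np.triu_indices(len(pos), k=1)]

    J = electrode_jacobian(toy_forward, np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
    ```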

  20. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-10

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.
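
    The Poisson-likelihood gradient at the heart of the method can be written compactly; the sketch below follows the spirit of truncated Wirtinger flow, and the specific clipping-based truncation rule is an assumption rather than the paper's exact formulation.

    ```python
    import numpy as np

    def poisson_wirtinger_grad(z, y, eps=1e-12, trunc=10.0):
        """Wirtinger gradient of the Poisson negative log-likelihood
        sum(I - y*log I), with intensity model I = |z|^2.

        z : complex field estimate at the detector plane
        y : measured photon counts (low-resolution intensity image)
        """
        I = np.abs(z) ** 2 + eps
        w = 1.0 - y / I                 # per-pixel likelihood weight
        w = np.clip(w, -trunc, trunc)   # truncate gradient outliers
        return w * z                    # derivative w.r.t. conj(z)
    ```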

  1. IPET and FETR: Experimental Approach for Studying Molecular Structure Dynamics by Cryo-Electron Tomography of a Single-Molecule Structure

    PubMed Central

    Zhang, Lei; Ren, Gang

    2012-01-01

    The dynamic personalities and structural heterogeneity of proteins are essential for proper functioning. Structural determination of dynamic/heterogeneous proteins is limited by conventional approaches of X-ray and electron microscopy (EM) single-particle reconstruction, which require an average over thousands to millions of different molecules. Cryo-electron tomography (cryoET) is an approach to determine the three-dimensional (3D) reconstruction of a single and unique biological object, such as a bacterium or cell, by imaging the object from a series of tilt angles. However, conventional reconstruction methods use large-size whole micrographs and are limited in reconstruction resolution (lower than 20 Å), especially for small and low-symmetry molecules (<400 kDa). In this study, we demonstrated that the adverse effects of image distortion and of measuring tilt-errors (including tilt-axis and tilt-angle errors) both play a major role in limiting the reconstruction resolution. Therefore, we developed a "focused electron tomography reconstruction" (FETR) algorithm to improve the resolution by decreasing the reconstructed image size so that it contains only a single-instance protein. FETR can tolerate certain levels of image distortion and measuring tilt-errors, and can also precisely determine the translational parameters via an iterative refinement process that contains a series of automatically generated dynamic filters and masks. To describe this method, a set of simulated cryoET images was employed; to validate this approach, real experimental images from negative-staining and cryoET were used. Since this approach can obtain the structure of a single-instance molecule/particle, we named it individual-particle electron tomography (IPET): a new robust strategy/approach that does not require a pre-given initial model, class averaging of multiple molecules or an extended ordered lattice, but can tolerate small tilt-errors for high-resolution single "snapshot" molecule structure determination. Thus, FETR/IPET provides a completely new opportunity for single-molecule structure determination, and could be used to study the dynamic character and equilibrium fluctuation of macromolecules. PMID:22291925

  2. Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.

    PubMed

    Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo

    2018-01-01

    The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
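
    The LR compression step rests on a singular value decomposition of the fingerprinting dictionary; a minimal sketch of extracting and applying the rank-k temporal subspace follows (names and shapes are illustrative assumptions).

    ```python
    import numpy as np

    def low_rank_subspace(D, k):
        """SVD-based temporal subspace for MR fingerprinting: D is an
        (n_atoms x n_timepoints) dictionary of simulated signal evolutions;
        the first k right singular vectors span the low-rank space in which
        one image per singular value is reconstructed."""
        _, _, Vt = np.linalg.svd(D, full_matrices=False)
        return Vt[:k]                       # (k, n_timepoints) basis

    def compress(signals, basis):
        """Project measured time series onto the rank-k subspace."""
        return signals @ basis.T
    ```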

  3. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyata, Y.; Suzuki, T.; Takechi, M.

    2015-07-15

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters, such as the number of unknown parameters and the shape, in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimize errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  4. Analysis of pitching velocity in major league baseball players before and after ulnar collateral ligament reconstruction.

    PubMed

    Jiang, Jimmy J; Leland, J Martin

    2014-04-01

    Ulnar collateral ligament (UCL) reconstructions are relatively common among professional pitchers in Major League Baseball (MLB). To the authors' knowledge, there has not been a study specifically analyzing pitching velocity after UCL surgery. These measurements were examined in a cohort of MLB pitchers before and after UCL reconstruction. There is no significant loss in pitch velocity after UCL reconstruction in MLB pitchers. Cohort study; Level of evidence, 3. Between the years 2008 to 2010, a total of 41 MLB pitchers were identified as players who underwent UCL reconstruction. Inclusion criteria for this study consisted of a minimum of 1 year of preinjury and 2 years of postinjury pitch velocity data. After implementing exclusion criteria, performance data were analyzed from 28 of the 41 pitchers over a minimum of 4 MLB seasons for each player. A pair-matched control group of pitchers who did not have a known UCL injury were analyzed for comparison. Of the initial 41 players, 3 were excluded for revision UCL reconstruction. Eight of the 38 players who underwent primary UCL reconstruction did not return to pitching at the major league level, and 2 players who met the exclusion criteria were omitted, leaving data on 28 players available for final velocity analysis. The mean percentage change in the velocity of pitches thrown by players who underwent UCL reconstruction was not significantly different compared with that of players in the control group. The mean innings pitched was statistically different only for the year of injury and the first postinjury year. There were also no statistically significant differences between the 2 groups with regard to commonly used statistical performance measurements, including earned run average, batting average against, walks per 9 innings, strikeouts per 9 innings, and walks plus hits per inning pitched. There were no significant differences in pitch velocity and common performance measurements between players who returned to MLB after UCL reconstruction and pair-matched controls.

  5. Autograft Versus Allograft Anterior Cruciate Ligament Reconstruction: A Prospective, Randomized Clinical Study With a Minimum 10-Year Follow-up.

    PubMed

    Bottoni, Craig R; Smith, Eric L; Shaha, James; Shaha, Steven S; Raybin, Sarah G; Tokish, John M; Rowles, Douglas J

    2015-10-01

    The use of allografts for anterior cruciate ligament (ACL) reconstruction in young athletes is controversial. No long-term results have been published comparing tibialis posterior allografts to hamstring autografts. To evaluate the long-term results of primary ACL reconstruction using either an allograft or autograft. Randomized controlled trial; Level of evidence, 1. From June 2002 to August 2003, patients with a symptomatic ACL-deficient knee were randomized to receive either a hamstring autograft or tibialis posterior allograft. All allografts were from a single tissue bank, aseptically processed, and fresh-frozen without terminal irradiation. Graft fixation was identical in all knees. All patients followed the same postoperative rehabilitation protocol, which was blinded to the therapists. Preoperative and postoperative assessments were performed via examination and/or telephone and Internet-based questionnaire to ascertain the functional and subjective status using established knee metrics. The primary outcome measures were graft integrity, subjective knee stability, and functional status. There were 99 patients (100 knees); 86 were men, and 95% were active-duty military. Both groups were similar in demographics and preoperative activity level. The mean and median ages of both groups were identical at 29 and 26 years, respectively. Concomitant meniscal and chondral pathologic abnormalities, microfracture, and meniscal repair performed at the time of reconstruction were similar in both groups. At a minimum of 10 years (range, 120-132 months) from surgery, 96 patients (97 knees) were contacted (2 patients were deceased, and 1 was unable to be located). There were 4 (8.3%) autograft and 13 (26.5%) allograft failures that required revision reconstruction. In the remaining patients whose graft was intact, there was no difference in the mean Single Assessment Numeric Evaluation, Tegner, or International Knee Documentation Committee scores. At a minimum of 10 years after ACL reconstruction in a young athletic population, over 80% of all grafts were intact and had maintained stability. However, those patients who had an allograft failed at a rate over 3 times higher than those with an autograft. © 2015 The Author(s).

  6. Improved parallel image reconstruction using feature refinement.

    PubMed

    Cheng, Jing; Jia, Sen; Ying, Leslie; Liu, Yuanyuan; Wang, Shanshan; Zhu, Yanjie; Li, Ye; Zou, Chao; Liu, Xin; Liang, Dong

    2018-07-01

    The aim of this study was to develop a novel feature refinement MR reconstruction method for highly undersampled multichannel acquisitions that improves image quality and preserves more detailed information. The feature refinement technique, which uses a feature descriptor to pick up useful features from the residual image discarded by sparsity constraints, is applied to preserve the details of the image in compressed sensing and parallel imaging in MRI (CS-pMRI). A texture descriptor and a structure descriptor, recognizing different types of features, are required to form the feature descriptor. Feasibility of the feature refinement was validated using three different multicoil reconstruction methods on in vivo data. Experimental results show that reconstruction methods with feature refinement improve the quality of the reconstructed image and restore image details more accurately than the original methods, which is also verified by lower values of the root mean square error and the high frequency error norm. A simple and effective way to preserve more useful detailed information in CS-pMRI is proposed. This technique can effectively improve reconstruction quality and has superior performance in terms of detail preservation compared with the original version without feature refinement. Magn Reson Med 80:211-223, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
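
    The two quality measures cited above can be sketched directly; the high frequency error norm is commonly defined through a Laplacian-of-Gaussian filter, though the paper's filter width is not stated, so sigma here is an assumption.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def rmse(ref, img):
        """Root mean square error against the reference image."""
        return np.sqrt(np.mean((img - ref) ** 2))

    def hfen(ref, img, sigma=1.5):
        """High frequency error norm: L2 distance between
        Laplacian-of-Gaussian filtered images, normalized by the
        filtered reference (common definition; sigma is assumed)."""
        d = gaussian_laplace(img - ref, sigma)
        return np.linalg.norm(d) / np.linalg.norm(gaussian_laplace(ref, sigma))
    ```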

  7. Deep learning methods to guide CT image reconstruction and reduce metal artifacts

    NASA Astrophysics Data System (ADS)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge

    2017-03-01

    The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.

  8. 45 CFR 286.205 - How will we determine if a Tribe fails to meet the minimum work participation rate(s)?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., financial records, and automated data systems; (ii) The data are free from computational errors and are internally...

  9. Accelerated Optical Projection Tomography Applied to In Vivo Imaging of Zebrafish

    PubMed Central

    Correia, Teresa; Yin, Jun; Ramel, Marie-Christine; Andrews, Natalie; Katan, Matilda; Bugeon, Laurence; Dallman, Margaret J.; McGinty, James; Frankel, Paul; French, Paul M. W.; Arridge, Simon

    2015-01-01

    Optical projection tomography (OPT) provides a non-invasive 3-D imaging modality that can be applied to longitudinal studies of live disease models, including in zebrafish. Current limitations include the requirement of a minimum number of angular projections for reconstruction of reasonable OPT images using filtered back projection (FBP), which is typically several hundred, leading to acquisition times of several minutes. It is highly desirable to decrease the number of required angular projections to decrease both the total acquisition time and the light dose to the sample. This is particularly important to enable longitudinal studies, which involve measurements of the same fish at different time points. In this work, we demonstrate that the use of an iterative algorithm to reconstruct sparsely sampled OPT data sets can provide useful 3-D images with 50 or fewer projections, thereby significantly decreasing the minimum acquisition time and light dose while maintaining image quality. A transgenic zebrafish embryo with fluorescent labelling of the vasculature was imaged to acquire densely sampled (800 projections) and under-sampled data sets of transmitted and fluorescence projection images. The under-sampled OPT data sets were reconstructed using an iterative total variation-based image reconstruction algorithm and compared against FBP reconstructions of the densely sampled data sets. To illustrate the potential for quantitative analysis following rapid OPT data acquisition, a Hessian-based method was applied to automatically segment the reconstructed images to select the vasculature network. Results showed that 3-D images of the zebrafish embryo and its vasculature of sufficient visual quality for quantitative analysis can be reconstructed using the iterative algorithm from only 32 projections—achieving up to 28 times improvement in imaging speed and leading to total acquisition times of a few seconds. PMID:26308086

  10. Choice of word length in the design of a specialized hardware for lossless wavelet compression of medical images

    NASA Astrophysics Data System (ADS)

    Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis

    1997-11-01

    Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical image field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criteria, will also lead to a high-speed implementation with small chip area. In addition, wavelet choice is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components. Thus, we deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we have analyzed the case of increasing the integer part of the numbers while keeping the word length constant as the scale increases.

  11. System calibration method for Fourier ptychographic microscopy

    NASA Astrophysics Data System (ADS)

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It would therefore be difficult to distinguish the dominating error from these degraded reconstructions without any prior knowledge. In addition, in real situations the systematic error is generally a mixture of various error sources that cannot be separated owing to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, that calibrates the mixed systematic errors simultaneously from an overall perspective, based on a simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic.

  12. Enhancing interferometer phase estimation, sensing sensitivity, and resolution using robust entangled states

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-11-01

    With the goal of designing interferometers and interferometer sensors, e.g., LADARs, with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Expressions are provided in closed form for the optimal detection operators; the visibility, a measure of the state's robustness to loss and noise; a resolution measure; and the phase estimation error. The optimal resolution for the maximum visibility and minimum phase error is found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that for a resolution 12 times the classical result, LCMMS has a visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMSs and 29.7 times smaller than that of N00N states.

  13. Robust video super-resolution with registration efficiency adaptation

    NASA Astrophysics Data System (ADS)

    Zhang, Xinfeng; Xiong, Ruiqin; Ma, Siwei; Zhang, Li; Gao, Wen

    2010-07-01

    Super-resolution (SR) is a technique to construct a high-resolution (HR) frame by fusing a group of low-resolution (LR) frames describing the same scene. The effectiveness of conventional super-resolution techniques, when applied to video sequences, strongly relies on the efficiency of the motion alignment achieved by image registration. Unfortunately, this efficiency is limited by the motion complexity in the video and the capability of the adopted motion model. In image regions with severe registration errors, annoying artifacts usually appear in the produced super-resolution video. This paper proposes a robust video super-resolution technique that adapts itself to the spatially varying registration efficiency. The reliability of each reference pixel is measured by the corresponding registration error and incorporated into the optimization objective function of the SR reconstruction. This makes the SR reconstruction highly immune to registration errors, as outliers with higher registration errors are assigned lower weights in the objective function. In particular, we carefully design a mechanism to assign weights according to registration errors. The proposed super-resolution scheme has been tested with various video sequences, and experimental results clearly demonstrate the effectiveness of the proposed method.
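
    A hedged sketch of the kind of reliability weighting described above: the Gaussian mapping from registration error to fusion weight is an illustrative assumption, not the paper's specific mechanism.

    ```python
    import numpy as np

    def reliability_weights(reg_error, sigma):
        """Map per-pixel registration error to a fusion weight in (0, 1]:
        pixels with larger motion-alignment error contribute less to the
        SR objective, down-weighting outliers."""
        return np.exp(-(reg_error / sigma) ** 2)
    ```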

  14. Fault and Error Latency Under Real Workload: an Experimental Study. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram

    1986-01-01

    A practical methodology for the study of fault and error latency is demonstrated under a real workload. This is the first study that measures and quantifies the latency under real workload and fills a major gap in the current understanding of workload-failure relationships. The methodology is based on low level data gathered on a VAX 11/780 during the normal workload conditions of the installation. Fault occurrence is simulated on the data, and the error generation and discovery process is reconstructed to determine latency. The analysis proceeds to combine the low level activity data with high level machine performance data to yield a better understanding of the phenomena. A strong relationship exists between latency and workload and that relationship is quantified. The sampling and reconstruction techniques used are also validated. Error latency in the memory where the operating system resides was studied using data on the physical memory access. Fault latency in the paged section of memory was determined using data from physical memory scans. Error latency in the microcontrol store was studied using data on the microcode access and usage.

  15. Effects of Acids, Bases, and Heteroatoms on Proximal Radial Distribution Functions for Proteins

    PubMed Central

    Nguyen, Bao Linh; Pettitt, B. Montgomery

    2015-01-01

    The proximal distribution of water around proteins is a convenient method of quantifying solvation. We consider the effect of charged and sulfur-containing amino acid side-chain atoms on the proximal radial distribution function (pRDF) of water molecules around proteins using side-chain analogs. The pRDF represents the relative probability of finding any solvent molecule at a distance from the closest or surface perpendicular protein atom. We consider the near-neighbor distribution. Previously, pRDFs were shown to be universal descriptors of the water molecules around C, N, and O atom types across hundreds of globular proteins. Using averaged pRDFs, a solvent density around any globular protein can be reconstructed with controllable relative error. Solvent reconstruction using the additional information from charged amino acid side-chain atom types from both small models and protein averages reveals the effects of surface charge distribution on solvent density and improves the reconstruction errors relative to simulation. Solvent density reconstructions from the small-molecule models are as effective and less computationally demanding than reconstructions from full macromolecular models in reproducing preferred hydration sites and solvent density fluctuations. PMID:26388706

  16. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
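
    A compact sketch of the alternating two-stage loop described above (1D for brevity); the paper's automatic, data-derived step-size rules are replaced here by simple heuristics, so treat this as a structural illustration only.

    ```python
    import numpy as np

    def tv_subgradient(x):
        """Smoothed subgradient of the 1D TV term sum(|x[i+1] - x[i]|);
        the 2D image case is analogous."""
        d = np.diff(x)
        s = d / np.sqrt(d * d + 1e-8)
        g = np.zeros_like(x)
        g[:-1] -= s
        g[1:] += s
        return g

    def pocs_tv(A, b, x0, n_iter=50, n_tv=10):
        """Alternating two-stage reconstruction: an ART sweep (POCS stage)
        for data fidelity plus non-negativity, then a few steepest-descent
        steps on TV, with the TV step tied to the POCS update size."""
        x = x0.copy()
        rows = np.sum(A * A, axis=1) + 1e-12
        for _ in range(n_iter):
            x_prev = x.copy()
            for i in range(A.shape[0]):              # ART sweep
                x += (b[i] - A[i] @ x) / rows[i] * A[i]
            x = np.maximum(x, 0.0)                   # non-negativity
            step = 0.2 * np.linalg.norm(x - x_prev)  # heuristic step size
            for _ in range(n_tv):                    # TV steepest descent
                g = tv_subgradient(x)
                n = np.linalg.norm(g)
                if n > 0:
                    x -= step * g / n
        return x
    ```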

  17. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    ERIC Educational Resources Information Center

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided from the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206 errors, 96 errors and 35 errors per 1000 words. The following section…

  18. Experimental evaluation and basis function optimization of the spatially variant image-space PSF on the Ingenuity PET/MR scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva

    2014-06-15

    Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as an MR compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall the PET subsystem exhibits excellent resolution characteristics, mainly due to the fact that the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and with basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.

  19. Impact of joint statistical dual-energy CT reconstruction of proton stopping power images: Comparison to image- and sinogram-domain material decomposition approaches.

    PubMed

    Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A

    2018-05-01

    The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models including BVM were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms each containing 17 soft and bony tissues (for a total of 34) of known composition were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images were evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS errors under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation by sixfold compared to other methods. A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and sinogram-domain decomposition methods are insensitive to size change. Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes. © 2018 American Association of Physicists in Medicine.

  20. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10^16 Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10^17 Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.
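
    A minimal illustration of the maximum likelihood idea, reduced to a linear-Gaussian toy problem without the semi-Gaussian positivity treatment used in the paper; the matrix sizes, prior, and search grid below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear observation model y = H x + noise (H: source-receptor matrix).
n_obs, n_src = 40, 15
H = rng.random((n_obs, n_src))
x_true = np.abs(rng.normal(1.0, 0.5, n_src))      # positive source term
y = H @ x_true + rng.normal(0.0, 0.3, n_obs)      # noisy observations

def log_marginal_likelihood(sigma_b, sigma_o):
    """Gaussian marginal likelihood of y with prior x ~ N(0, sigma_b^2 I)
    and observation noise ~ N(0, sigma_o^2 I)."""
    cov = sigma_b**2 * (H @ H.T) + sigma_o**2 * np.eye(n_obs)
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + y @ np.linalg.solve(cov, y))

# Grid search over the two error amplitudes.
grid = np.linspace(0.05, 2.0, 40)
best = max(((sb, so) for sb in grid for so in grid),
           key=lambda p: log_marginal_likelihood(*p))
print("ML estimates (sigma_b, sigma_o):", best)
```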

  1. WE-DE-BRA-09: Fast Megavoltage CT Imaging with Rapid Scan Time and Low Imaging Dose in Helical Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magome, T; University of Tokyo Hospital, Tokyo; University of Minnesota, Minneapolis, MN

    Purpose: Megavoltage computed tomography (MVCT) imaging has been widely used for daily patient setup with helical tomotherapy (HT). One drawback of MVCT is its very long imaging time, owing to slow couch speed. The purpose of this study was to develop an MVCT imaging method allowing faster couch speeds, and to assess its accuracy for image guidance for HT. Methods: Three cadavers (closely mimicking the physiological and physical properties of patients) were scanned four times with couch speeds of 1, 2, 3, and 4 mm/s. The resulting MVCT images were reconstructed using an iterative reconstruction (IR) algorithm. The MVCT images were registered with kilovoltage CT images, and the registration errors were compared with the errors of the conventional filtered back projection (FBP) algorithm. Moreover, the fast MVCT imaging was tested in three cases of total marrow irradiation as a clinical trial. Results: Three-dimensional registration errors of the MVCT images reconstructed with the IR algorithm were significantly smaller (p < 0.05) than the errors of images reconstructed with the FBP algorithm at fast couch speeds (3, 4 mm/s). The scan time and imaging dose at a speed of 4 mm/s were reduced to 30% of those from a conventional coarse mode scan. For the patient imaging, a limited number of conventional MVCT (1.2 mm/s) and fast MVCT (3 mm/s) scans showed an acceptable reduction of imaging time and dose while remaining usable for anatomical registration. Conclusion: Fast MVCT with the IR algorithm may be a clinically feasible alternative for rapid 3D patient localization. This technique may also be useful for calculating daily dose distributions or organ motion analyses in HT treatment over a wide area.

  2. Quantitative metrics for evaluating parallel acquisition techniques in diffusion tensor imaging at 3 Tesla.

    PubMed

    Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha

    2006-11-01

    Single-shot echo-planar based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated using 8 human subjects with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess the global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of apparent diffusion coefficient and fractional anisotropy along with the error of fitting the data to the diffusion model (residual error). The larger positive values in mutual information index with increasing R values confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R factor was larger than the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging and among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogenous tissue regions showed that GRAPPA acquired with R = 2 had the least amount of systematic and random noise and of these, significant differences with mSENSE, R = 2 were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.
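
    The mutual information index used above can be computed from a joint intensity histogram. The self-contained sketch below shows the standard computation; the bin count and test images are arbitrary choices, not those of the study.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram-based mutual information between two images (e.g., an EPI
    volume and an undistorted reference); higher = better global agreement."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Example: a reference image versus a misaligned copy of itself.
rng = np.random.default_rng(1)
ref = rng.random((128, 128))
print(mutual_information(ref, ref))                      # self-MI (maximal)
print(mutual_information(ref, np.roll(ref, 8, axis=0)))  # misaligned (lower)
```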

  3. Automatic pedicles detection using convolutional neural network in a 3D spine reconstruction from biplanar radiographs

    NASA Astrophysics Data System (ADS)

    Bakhous, Christine; Aubert, Benjamin; Vazquez, Carlos; Cresson, Thierry; Parent, Stefan; De Guise, Jacques

    2018-02-01

    The 3D analysis of spine deformities (scoliosis) has high potential in clinical diagnosis and treatment. In a biplanar radiographs context, a 3D analysis requires a 3D reconstruction from a pair of 2D X-rays. Whether fully automatic, semi-automatic, or manual, this task is complex because of noise, the superimposition of structures, and the partial information available from a limited number of projections. Because they are involved in the axial vertebral rotation (AVR), a fundamental clinical parameter for scoliosis diagnosis, pedicles are important landmarks for 3D spine modeling and pre-operative planning. In this paper, we focus on the extension of a fully automatic 3D spine reconstruction method in which the Vertebral Body Centers (VBCs) are automatically detected using a Convolutional Neural Network (CNN) and then regularized using a Statistical Shape Model (SSM) framework. In this global process, pedicles are inferred statistically during the SSM regularization. Our contribution is to add a CNN-based regression model for pedicle detection, allowing better pedicle localization and improving the estimation of clinical parameters (e.g., AVR, Cobb angle). Having 476 datasets including healthy patients and Adolescent Idiopathic Scoliosis (AIS) cases with different scoliosis grades (Cobb angles up to 116°), we used 380 for training, 48 for testing, and 48 for validation. Adding the local CNN-based pedicle detection decreases the mean absolute error of the AVR by 10%. The 3D mean Euclidean distance error between detected pedicles and ground truth decreases by 17% and the maximum error by 19%. Moreover, a general improvement is observed in the 3D spine reconstruction and is reflected in lower errors on the Cobb angle estimation.
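
    As a rough illustration of CNN-based landmark regression of this kind, the PyTorch sketch below maps an image patch to 2D coordinates. The architecture, patch size, and training loop are invented for the sketch and are not the authors' network.

```python
import torch
import torch.nn as nn

class PedicleRegressor(nn.Module):
    """Toy CNN that regresses 2D landmark coordinates from a radiograph patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 2),            # (x, y) position of the pedicle center
        )

    def forward(self, x):
        return self.head(self.features(x))

# One training step on a batch of random 64x64 patches with known landmarks.
model = PedicleRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patch = torch.randn(8, 1, 64, 64)         # hypothetical input patches
target = torch.rand(8, 2)                 # normalized (x, y) coordinates
loss = nn.functional.mse_loss(model(patch), target)
loss.backward()
opt.step()
print(float(loss))
```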

  4. Anatomical reconstructions of pediatric airways from endoscopic images: a pilot study of the accuracy of quantitative endoscopy.

    PubMed

    Meisner, Eric M; Hager, Gregory D; Ishman, Stacey L; Brown, David; Tunkel, David E; Ishii, Masaru

    2013-11-01

    To evaluate the accuracy of three-dimensional (3D) airway reconstructions obtained using quantitative endoscopy (QE). We developed this novel technique to reconstruct precise 3D representations of airway geometries from endoscopic video streams. This method, based on machine vision methodologies, uses a post-processing step applied to the standard videos obtained during routine laryngoscopy and bronchoscopy. We hypothesize that this method is precise and will generate assessments of airway size and shape similar to those obtained using computed tomography (CT). This study was approved by the institutional review board (IRB). We analyzed video sequences from pediatric patients receiving rigid bronchoscopy. We generated 3D scaled airway models of the subglottis, trachea, and carina using QE. These models were compared to 3D airway models generated from CT. We used the CT data as the gold standard measure of airway size, and used a mixed linear model to estimate the average error in cross-sectional area and effective diameter for QE. The average error in cross-sectional area (area sliced perpendicular to the long axis of the airway) was 7.7 mm(2) (variance 33.447 mm(4)). The average error in effective diameter was 0.38775 mm (variance 2.45 mm(2)), approximately 9% error. Our pilot study suggests that QE can be used to generate precise 3D reconstructions of airways. This technique is atraumatic, does not require ionizing radiation, and integrates easily into standard airway assessment protocols. We conjecture that this technology will be useful for staging airway disease and assessing surgical outcomes. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  5. Algorithms to eliminate the influence of non-uniform intensity distributions on wavefront reconstruction by quadri-wave lateral shearing interferometers

    NASA Astrophysics Data System (ADS)

    Chen, Xiao-jun; Dong, Li-zhi; Wang, Shuai; Yang, Ping; Xu, Bing

    2017-11-01

    In quadri-wave lateral shearing interferometry (QWLSI), when the intensity distribution of the incident light wave is non-uniform, part of the information of the intensity distribution will couple with the wavefront derivatives to cause wavefront reconstruction errors. In this paper, we propose two algorithms to reduce the influence of a non-uniform intensity distribution on wavefront reconstruction. Our simulation results demonstrate that the reconstructed amplitude distribution (RAD) algorithm can effectively reduce the influence of the intensity distribution on the wavefront reconstruction and that the collected amplitude distribution (CAD) algorithm can almost eliminate it.

  6. NCDOT : bridge policy

    DOT National Transportation Integrated Search

    1994-11-01

    NCDOT's Bridge Policy establishes controlling design elements for new and reconstructed bridges on the state road system. It includes information to address sidewalks and bicycle facilities on bridges, including minimum handrail heights and sidewal...

  7. Sunspot Observations During the Maunder Minimum from the Correspondence of John Flamsteed

    NASA Astrophysics Data System (ADS)

    Carrasco, V. M. S.; Vaquero, J. M.

    2016-11-01

    We compile and analyze the sunspot observations made by John Flamsteed for the period 1672 - 1703, which corresponds to the second part of the Maunder Minimum. They appear in the correspondence of the famous astronomer. We include in an appendix the original texts of the sunspot records kept by Flamsteed. We compute an estimate of the level of solar activity using these records, and compare the results with the latest reconstructions of solar activity during the Maunder Minimum, obtaining values characteristic of a grand solar minimum. Finally, we discuss a phenomenon observed and described by Stephen Gray in 1705 that has been interpreted as a white-light flare.

  8. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    NASA Astrophysics Data System (ADS)

    Rota Kops, Elena; Herzog, Hans

    2013-02-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of the template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternately one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 without brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with that of water (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from one another. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed, revealing very little influence of the skull lesions (less than 3%), while the filled nasal cavity yielded an overestimation in the cerebellum of up to 5%. Conclusions: The present error analysis confirms that our template-based attenuation method provides reliable attenuation correction for PET brain imaging measured in PET/MR scanners.

  9. Prior-knowledge Fitting of Accelerated Five-dimensional Echo Planar J-resolved Spectroscopic Imaging: Effect of Nonlinear Reconstruction on Quantitation.

    PubMed

    Iqbal, Zohaib; Wilson, Neil E; Thomas, M Albert

    2017-07-24

    1H Magnetic Resonance Spectroscopic imaging (SI) is a powerful tool capable of investigating metabolism in vivo from multiple regions. However, SI techniques are time consuming, and are therefore difficult to implement clinically. By applying non-uniform sampling (NUS) and compressed sensing (CS) reconstruction, it is possible to accelerate these scans while retaining key spectral information. One recently developed method that utilizes this type of acceleration is the five-dimensional echo planar J-resolved spectroscopic imaging (5D EP-JRESI) sequence, which is capable of obtaining two-dimensional (2D) spectra from three spatial dimensions. The prior-knowledge fitting (ProFit) algorithm is typically used to quantify 2D spectra in vivo; however, the effects of NUS and CS reconstruction on the quantitation results are unknown. This study utilized a simulated brain phantom to investigate the errors introduced through the acceleration methods. Errors (normalized root mean square error >15%) were found between metabolite concentrations after twelve-fold acceleration for several low-concentration (<2 mM) metabolites. The Cramér-Rao lower bound% (CRLB%) values, which are typically used for quality control, were not reflective of the increased quantitation error arising from acceleration. Finally, occipital white (OWM) and gray (OGM) human brain matter were quantified in vivo using the 5D EP-JRESI sequence with eight-fold acceleration.

  10. Correction of phase-shifting error in wavelength scanning digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolei; Wang, Jie; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-05-01

    Digital holographic microscopy is a promising method for measuring complex micro-structures with high slopes. A quasi-common path interferometric apparatus is adopted to overcome environmental disturbances, and an acousto-optic tunable filter is used to obtain multi-wavelength holograms. However, the phase shifting error caused by the acousto-optic tunable filter reduces the measurement accuracy and, in turn, the reconstructed topographies are erroneous. In this paper, an accurate reconstruction approach is proposed. It corrects the phase-shifting errors by minimizing the difference between the ideal interferograms and the recorded ones. The restriction on the step number and uniformity of the phase shifting is relaxed in the interferometry, and the measurement accuracy for complex surfaces can also be improved. The universality and superiority of the proposed method are demonstrated by practical experiments and comparison to other measurement methods.
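
    The idea of refining phase shifts by minimizing the mismatch between modeled and recorded interferograms can be sketched with an alternating least-squares scheme, similar in spirit to published iterative phase-shift estimation algorithms. This is not the authors' method; the fringe model, frame count, and iteration budget below are assumptions, and the scheme typically converges only when the object phase spans several fringes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate K interferograms I_k = a + b*cos(phi + delta_k) with erroneous steps.
K, N = 5, 256
phi = np.linspace(0, 4 * np.pi, N)                   # object phase (1D for brevity)
deltas_true = np.sort(rng.uniform(0, 2 * np.pi, K))  # actual (non-uniform) steps
frames = 1.0 + 0.8 * np.cos(phi[None, :] + deltas_true[:, None])

deltas = np.linspace(0, 2 * np.pi, K, endpoint=False)  # nominal steps
for _ in range(20):
    # Step 1: per-pixel phase, given the current step estimates.
    A = np.column_stack([np.ones(K), np.cos(deltas), np.sin(deltas)])
    c = np.linalg.lstsq(A, frames, rcond=None)[0]      # 3 x N coefficients
    phi_est = np.arctan2(-c[2], c[1])
    # Step 2: per-frame step, given the current phase estimate.
    B = np.column_stack([np.ones(N), np.cos(phi_est), np.sin(phi_est)])
    d = np.linalg.lstsq(B, frames.T, rcond=None)[0]    # 3 x K coefficients
    deltas = np.arctan2(-d[2], d[1])

# Steps are recovered up to a common offset.
recovered = (deltas - deltas[0]) % (2 * np.pi)
actual = (deltas_true - deltas_true[0]) % (2 * np.pi)
print(np.round(recovered, 3), np.round(actual, 3), sep="\n")
```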

  11. Study of run time errors of the ATLAS pixel detector in the 2012 data taking period

    NASA Astrophysics Data System (ADS)

    Gandrajula, Reddy Pratap

    The high-resolution silicon pixel detector is critical for event vertex reconstruction and particle track reconstruction in the ATLAS detector. During pixel data taking, some modules (silicon pixel sensor + front-end chip + module control chip (MCC)) go into an auto-disabled state, in which the modules do not send data for storage. Modules become operational again after reconfiguration. The source of the problem is not fully understood. One possible source has been traced to the occurrence of single event upsets (SEUs) in the MCC. Such a module goes to either a Timeout or Busy state. This report presents a study of the different types and rates of errors occurring during pixel data taking, including the dependency of the error rate on the pixel detector geometry.

  12. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Inviscid Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes (two node-averaging schemes, with and without clipping, and four schemes that employ different stencils for LSQ gradient reconstruction). The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options of the lowest complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.
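
    A least-squares gradient reconstruction of the kind compared above fits a linear model to differences between a point and its stencil neighbors. The minimal 2D sketch below includes optional inverse-distance weighting; the stencil and weight choice are illustrative, not any of the paper's specific schemes.

```python
import numpy as np

def lsq_gradient(x0, phi0, neighbors, phis, weighted=True):
    """Least-squares gradient at a cell/node from scattered neighbor values.

    Solves min sum_i w_i (phi_i - phi0 - g . (x_i - x0))^2 for g; the
    inverse-distance weights mimic a 'weighted LSQ' option."""
    dx = neighbors - x0                    # (n, 2) offsets
    dphi = phis - phi0                     # (n,) value differences
    w = 1.0 / np.linalg.norm(dx, axis=1) if weighted else np.ones(len(dx))
    A = dx * w[:, None]
    b = dphi * w
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g

# Linear field phi = 2x + 3y: the reconstructed gradient should be (2, 3).
rng = np.random.default_rng(3)
pts = rng.random((6, 2))                   # irregular stencil points
x0 = np.array([0.5, 0.5])
phi = lambda p: 2 * p[..., 0] + 3 * p[..., 1]
print(lsq_gradient(x0, phi(x0), pts, phi(pts)))
```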

  13. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆

    PubMed Central

    López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
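
    For orientation, the classical Minimum Norm estimate mentioned above has a one-line closed form. The sketch below shows it on a toy problem; the lead-field size, regularization value, and source configuration are invented.

```python
import numpy as np

def minimum_norm(L, y, lam=0.1):
    """Classical Minimum Norm source estimate J = L^T (L L^T + lam I)^(-1) y.

    L : (n_sensors, n_sources) lead field, y : (n_sensors,) measurements."""
    n = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n), y)

# Toy example: 32 sensors, 500 candidate sources, two active dipoles.
rng = np.random.default_rng(4)
L = rng.normal(size=(32, 500))
j_true = np.zeros(500)
j_true[[100, 400]] = 1.0
y = L @ j_true + 0.05 * rng.normal(size=32)
j_hat = minimum_norm(L, y)
print(np.argsort(np.abs(j_hat))[-5:])      # strongest reconstructed sources
```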

  14. Accurate low-dose iterative CT reconstruction from few projections by Generalized Anisotropic Total Variation minimization for industrial CT.

    PubMed

    Debatin, Maurice; Hesser, Jürgen

    2015-01-01

    Reducing the amount of time for data acquisition and reconstruction in industrial CT decreases the operation time of the X-ray machine and therefore increases sales. This can be achieved by reducing the dose (and hence the pulse length) of the CT system and the number of projections used for reconstruction. In this paper, a novel generalized Anisotropic Total Variation (GATV) regularization for under-sampled, low-dose iterative CT reconstruction is discussed and compared to the standard methods: Total Variation (TV), Adaptive-weighted Total Variation (AwTV), and Filtered Backprojection. The novel regularization function uses a priori information about the gradient magnitude distribution of the scanned object for the reconstruction. We provide a general parameterization scheme and evaluate the efficiency of our new algorithm for different noise levels and different numbers of projection views. When noise is not present, error-free reconstructions are achievable for AwTV and GATV from 40 projections. In cases where noise is simulated, our strategy achieves a relative root mean square error that is up to 11 times lower than TV-based and up to 4 times lower than AwTV-based iterative statistical reconstruction (e.g., for an SNR of 223 and 40 projections). To obtain the same reconstruction quality as achieved by TV, the number of projections and the pulse length (and hence the acquisition time and the dose, respectively) can be reduced by a factor of approximately 3.5 when AwTV is used and a factor of approximately 6.7 when our proposed algorithm is used.
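
    One way to picture an edge-adaptive TV penalty of this family is to weight the gradient magnitude before taking a descent step. The sketch below is a generic adaptive-weighted TV smoother with lagged weights, a stand-in for the idea rather than the paper's GATV parameterization; delta, the step size, and the phantom are assumptions.

```python
import numpy as np

def awtv_gradient(img, delta=0.2, eps=1e-8):
    """Descent direction for an adaptive-weighted TV penalty: weights
    w = exp(-(|grad|/delta)^2) shrink near strong edges, so edges are
    preserved while low-gradient (noise) regions are smoothed. Weights are
    treated as fixed within each step (a common 'lagged' heuristic)."""
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    w = np.exp(-(mag / delta) ** 2)
    gx, gy = w * dx / mag, w * dy / mag
    div = (gx - np.roll(gx, 1, 0)) + (gy - np.roll(gy, 1, 1))
    return -div

# Smooth a noisy piecewise-constant phantom while keeping its edges.
rng = np.random.default_rng(5)
truth = np.zeros((64, 64))
truth[16:48, 16:48] = 1.0
noisy = truth + rng.normal(0, 0.05, truth.shape)
x = noisy.copy()
for _ in range(100):
    x = x - 0.01 * awtv_gradient(x)
print(float(np.abs(x - truth).mean()), float(np.abs(noisy - truth).mean()))
```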

  15. Virtual reconstruction of very large skull defects featuring partly and completely missing midsagittal planes.

    PubMed

    Senck, Sascha; Coquerelle, Michael; Weber, Gerhard W; Benazzi, Stefano

    2013-05-01

    Despite the development of computer-based methods, cranial reconstruction of very large skull defects remains a challenge, particularly if the damage affects the midsagittal region, hampering the use of mirror-imaging techniques. This pilot study aims to deliver a new method that goes beyond mirror imaging, making it possible to reconstruct crania with large missing areas, which might be useful in the fields of paleoanthropology, bioarcheology, and forensics. We test the accuracy of digital reconstructions in cases where two-thirds or more of a human cranium is missing. A three-dimensional (3D) virtual model of a human cranium was virtually damaged twice to compare two destruction-reconstruction scenarios. In the first case, a small fraction of the midsagittal region was still preserved, allowing the application of mirror-imaging techniques. In the second case, the damage affected the complete midsagittal region, demanding a new approach to estimate the position of the midsagittal plane. Reconstructions were carried out using CT scans from a sample of modern humans (12 males and 13 females), to which 3D digital modeling techniques and geometric morphometric methods were applied. As expected, the second simulation showed larger variability than the first, which underlines the fact that the individual midsagittal plane is preferable in order to minimize the reconstruction error. However, in both simulations the Procrustes mean shape was an effective reference for the reconstruction of the entire cranium, producing models that showed a remarkably low error of about 3 mm, given the extent of missing data. Copyright © 2013 Wiley Periodicals, Inc.

  16. WE-D-BRA-04: Online 3D EPID-Based Dose Verification for Optimum Patient Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spreeuw, H; Rozendaal, R; Olaciregui-Ruiz, I

    2015-06-15

    Purpose: To develop an online 3D dose verification tool based on EPID transit dosimetry to ensure optimum patient safety in radiotherapy treatments. Methods: A new software package was developed which processes EPID portal images online using a back-projection algorithm for the 3D dose reconstruction. The package processes portal images faster than the acquisition rate of the portal imager (∼2.5 fps). After a portal image is acquired, the software searches for “hot spots” in the reconstructed 3D dose distribution. A hot spot is defined in this study as a 4 cm^3 cube in which the average cumulative reconstructed dose exceeds the average total planned dose by at least 20% and 50 cGy. If a hot spot is detected, an alert is generated, resulting in a linac halt. The software was tested by irradiating an Alderson phantom after introducing various types of serious delivery errors. Results: In our first experiment the Alderson phantom was irradiated with two arcs from a 6 MV VMAT H&N treatment having a large leaf position error or a large monitor unit error. For both arcs and both errors the linac was halted before dose delivery was completed. When no error was introduced, the linac was not halted. The complete processing of a single portal frame, including hot spot detection, takes about 220 ms on a dual hexacore Intel Xeon X5650 CPU at 2.66 GHz. Conclusion: A prototype online 3D dose verification tool using portal imaging has been developed and successfully tested for various kinds of gross delivery errors. The detection of hot spots was proven to be effective for the timely detection of these errors. Current work is focused on hot spot detection criteria for various treatment sites and the introduction of a clinical pilot program with online verification of hypo-fractionated (lung) treatments.
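
    The hot-spot criterion described above (mean dose in a roughly 4 cm^3 cube exceeding the plan by both 20% and 50 cGy) maps directly onto a moving-average filter. The sketch below uses invented grid sizes and voxel spacing; it is an illustration of the stated criterion, not the authors' software.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def find_hot_spots(recon_dose, planned_dose, voxel_mm=2.0,
                   rel_excess=0.20, abs_excess_cgy=50.0):
    """Flag voxels whose surrounding ~4 cm^3 cube has a mean reconstructed
    dose exceeding the mean planned dose by both 20% and 50 cGy.

    The cube edge in voxels is chosen so that (edge * voxel_mm)^3 ~ 4000 mm^3."""
    edge = max(1, int(round((4000.0 ** (1.0 / 3.0)) / voxel_mm)))
    mean_recon = uniform_filter(recon_dose, size=edge)
    mean_plan = uniform_filter(planned_dose, size=edge)
    excess = mean_recon - mean_plan
    return (excess > rel_excess * mean_plan) & (excess > abs_excess_cgy)

# Toy 3D dose grids (cGy) with an injected gross delivery error.
rng = np.random.default_rng(6)
plan = np.full((40, 40, 40), 200.0)
recon = plan + rng.normal(0, 2, plan.shape)
recon[10:20, 10:20, 10:20] += 120.0
print(find_hot_spots(recon, plan).any())   # -> True: would trigger a halt
```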

  17. A method to reconstruct long precipitation series using systematic descriptive observations in weather diaries: the example of the precipitation series for Bern, Switzerland (1760-2003)

    NASA Astrophysics Data System (ADS)

    Gimmi, U.; Luterbacher, J.; Pfister, C.; Wanner, H.

    2007-01-01

    In contrast to barometric and thermometric records, early instrumental precipitation series are quite rare. Based on systematic descriptive daily records, a quantitative monthly precipitation series for Bern (Switzerland) was reconstructed back to the year 1760 (reconstruction based on documentary evidence). Since every observer had his own personal style to fill out his diary, the main focus was to avoid observer-specific bias in the reconstruction. An independent statistical monthly precipitation reconstruction was performed using instrumental data from European sites. Over most periods the reconstruction based on documentary evidence lies inside the 2 standard errors of the statistical estimates. The comparison between these two approaches enables an independent verification and a reliable error estimate. The analysis points to below normal rainfall totals in all seasons during the late 18th century and in the 1820s and 1830s. Increased precipitation occurred in the early 1850s and the late 1870s, particularly from spring to autumn. The annual precipitation totals generally tend to be higher in the 20th century than in the late 18th and 19th century. Precipitation changes are discussed in the context of socioeconomic impacts and Alpine glacier dynamics. The conceptual design of the reconstruction procedure is aimed at application for similar descriptive precipitation series, which are known to be abundant from the mid-18th century in Europe and the U.S.

  18. Self-prior strategy for organ reconstruction in fluorescence molecular tomography

    PubMed Central

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-01-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity; the cost function of the inverse problem is then modified by the self-prior information; and lastly, an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (the traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic studies can be aided by this strategy. PMID:29082094

  19. Self-prior strategy for organ reconstruction in fluorescence molecular tomography.

    PubMed

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-10-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity; the cost function of the inverse problem is then modified by the self-prior information; and lastly, an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (the traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic studies can be aided by this strategy.

  20. Ensemble Kalman filter for the reconstruction of the Earth's mantle circulation

    NASA Astrophysics Data System (ADS)

    Bocher, Marie; Fournier, Alexandre; Coltice, Nicolas

    2018-02-01

    Recent advances in mantle convection modeling led to the release of a new generation of convection codes, able to self-consistently generate plate-like tectonics at their surface. Those models physically link mantle dynamics to surface tectonics. Combined with plate tectonic reconstructions, they have the potential to produce a new generation of mantle circulation models that use data assimilation methods and where uncertainties in plate tectonic reconstructions are taken into account. We provided a proof of this concept by applying a suboptimal Kalman filter to the reconstruction of mantle circulation (Bocher et al., 2016). Here, we propose to go one step further and apply the ensemble Kalman filter (EnKF) to this problem. The EnKF is a sequential Monte Carlo method particularly adapted to solve high-dimensional data assimilation problems with nonlinear dynamics. We tested the EnKF using synthetic observations consisting of surface velocity and heat flow measurements on a 2-D spherical annulus model and compared it with the method developed previously. The EnKF performs on average better and is more stable than the former method. Fewer than 300 ensemble members are sufficient to reconstruct an evolution. We use covariance adaptive inflation and localization to correct for sampling errors. We show that the EnKF results are robust over a wide range of covariance localization parameters. The reconstruction is associated with an estimation of the error, and provides valuable information on where the reconstruction is to be trusted or not.

  1. Tomographic reconstruction of tracer gas concentration profiles in a room with the use of a single OP-FTIR and two iterative algorithms: ART and PWLS.

    PubMed

    Park, D Y; Fessler, J A; Yost, M G; Levine, S P

    2000-03-01

    Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 × 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (195 pixels) exceeds that of known data (56 path integral concentrations) in the experimental setting. A new CT algorithm, called the penalized weighted least-squares (PWLS), was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when a fairly high error was embedded into some experimentally measured path integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, is a system applicable to both environmental and industrial settings.
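
    A compact way to see why PWLS handles the underdetermined setting better than unregularized ART is that the penalty term makes the normal equations well posed. The sketch below solves a toy 56-ray, 195-pixel problem by projected gradient descent; the penalty form, weights, and beta are assumptions, not the paper's exact formulation.

```python
import numpy as np

def pwls(A, y, weights, beta=5.0, iters=200):
    """Penalized weighted least squares: minimize
    (y - A x)^T W (y - A x) + beta * ||D x||^2, with a first-difference
    roughness penalty D and a non-negativity constraint on concentrations."""
    n = A.shape[1]
    W = np.diag(weights)
    D = np.eye(n) - np.eye(n, k=1)
    D[-1, :] = 0.0
    H = A.T @ W @ A + beta * (D.T @ D)     # regularized normal matrix
    g = A.T @ W @ y
    step = 1.0 / np.linalg.eigvalsh(H).max()
    x = np.zeros(n)
    for _ in range(iters):
        x = np.clip(x + step * (g - H @ x), 0.0, None)
    return x

# Underdetermined toy problem: 56 path integrals, 195 pixels.
rng = np.random.default_rng(7)
A = rng.random((56, 195))
x_true = np.zeros(195)
x_true[60:80] = 5.0
y = A @ x_true + rng.normal(0, 0.1, 56)
x_hat = pwls(A, y, weights=np.ones(56))
print(float(np.abs(x_hat - x_true).mean()))
```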

  2. Tomographic Reconstruction of Tracer Gas Concentration Profiles in a Room with the Use of a Single OP-FTIR and Two Iterative Algorithms: ART and PWLS.

    PubMed

    Park, Doo Y; Fessler, Jeffrey A; Yost, Michael G; Levine, Steven P

    2000-03-01

    Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 × 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (195 pixels) exceeds that of known data (56 path integral concentrations) in the experimental setting. A new CT algorithm, called the penalized weighted least-squares (PWLS), was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when a fairly high error was embedded into some experimentally measured path integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, is a system applicable to both environmental and industrial settings.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, L; Han, Y; Jin, M

    Purpose: To develop an iterative reconstruction method for X-ray CT in which the reconstruction can quickly converge to the desired solution with a much reduced number of projection views. Methods: The reconstruction is formulated as a convex feasibility problem, i.e., the solution is an intersection of three convex sets: 1) data fidelity (DF) set – the L2 norm of the difference of the observed projections and those from the reconstructed image is no greater than an error bound; 2) non-negativity of image voxels (NN) set; and 3) piecewise constant (PC) set – the total variation (TV) of the reconstructed image is no greater than an upper bound. The solution can be found by applying projection onto convex sets (POCS) sequentially for these three convex sets. Specifically, the algebraic reconstruction technique and setting negative voxels to zero are used for projection onto the DF and NN sets, respectively, while the projection onto the PC set is achieved by solving a standard Rudin, Osher, and Fatemi (ROF) model. The proposed method, named full sequential POCS (FS-POCS), is tested using the Shepp-Logan phantom and the Catphan600 phantom and compared with two similar algorithms, TV-POCS and CP-TV. Results: Using the Shepp-Logan phantom, the root mean square error (RMSE) of the reconstructed images changing along with the number of iterations is used as the convergence measurement. In general, FS-POCS converges faster than TV-POCS and CP-TV, especially with fewer projection views. FS-POCS can also achieve accurate reconstruction of cone-beam CT of the Catphan600 phantom using only 54 views, comparable to that of FDK using 364 views. Conclusion: We developed an efficient iterative reconstruction for sparse-view CT using full sequential POCS. The simulation and physical phantom data demonstrated the computational efficiency and effectiveness of FS-POCS.
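
    The three-set sequence is easy to prototype. Below is a toy full-sequential POCS loop with a Kaczmarz (ART) sweep for the DF set, a clamp for the NN set, and a few TV-descent steps standing in for the ROF solve; system size, relaxation, and step sizes are invented, and the TV step is only an approximation of a true projection.

```python
import numpy as np

def art_sweep(x, A, y, relax=1.0):
    """Move toward the data-fidelity set: one Kaczmarz (ART) sweep."""
    for a, yi in zip(A, y):
        x += relax * (yi - a @ x) / (a @ a) * a
    return x

def tv_shrink(x, shape, iters=5, step=0.02):
    """Move toward the piecewise-constant set via a few steepest-descent
    steps on the total variation (a stand-in for the ROF model)."""
    img = x.reshape(shape)
    for _ in range(iters):
        dx = np.diff(img, axis=0, append=img[-1:, :])
        dy = np.diff(img, axis=1, append=img[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + 1e-8)
        gx, gy = dx / mag, dy / mag
        div = (gx - np.roll(gx, 1, 0)) + (gy - np.roll(gy, 1, 1))
        img = img + step * div
    return img.ravel()

# Full sequential POCS: DF set -> NN set -> PC set, repeated.
n = 24
rng = np.random.default_rng(8)
A = rng.random((120, n * n))               # toy system matrix (120 "rays")
truth = np.zeros((n, n))
truth[6:18, 6:18] = 1.0
y = A @ truth.ravel()
x = np.zeros(n * n)
for _ in range(30):
    x = art_sweep(x, A, y)                 # 1) data fidelity
    x = np.clip(x, 0.0, None)              # 2) non-negativity
    x = tv_shrink(x, (n, n))               # 3) piecewise constancy (TV)
print(float(np.abs(x - truth.ravel()).mean()))
```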

  4. First performance evaluation of software for automatic segmentation, labeling and reformation of anatomical aligned axial images of the thoracolumbar spine at CT.

    PubMed

    Scholtz, Jan-Erik; Wichmann, Julian L; Kaup, Moritz; Fischer, Sebastian; Kerl, J Matthias; Lehnert, Thomas; Vogl, Thomas J; Bauer, Ralf W

    2015-03-01

    To evaluate software for automatic segmentation, labeling and reformation of anatomically aligned axial images of the thoracolumbar spine on CT in terms of accuracy, potential for time savings and workflow improvement. 77 patients (28 women, 49 men, mean age 65.3±14.4 years) with known or suspected spinal disorders (degenerative spine disease n=32; disc herniation n=36; traumatic vertebral fractures n=9) underwent 64-slice MDCT with thin-slab reconstruction. Time for automatic labeling of the thoracolumbar spine and reconstruction of double-angulated axial images of the pathological vertebrae was compared with manually performed reconstruction of anatomically aligned axial images. Reformatted images of both reconstruction methods were assessed by two observers regarding accuracy of symmetric depiction of anatomical structures. In 33 cases double-angulated axial images were created in 1 vertebra, in 28 cases in 2 vertebrae and in 16 cases in 3 vertebrae. Correct automatic labeling was achieved in 72 of 77 patients (93.5%). Errors could be manually corrected in 4 cases. Automatic labeling required 1 min on average. In cases where anatomically aligned axial images of 1 vertebra were created, reconstructions made by hand were significantly faster (p<0.05). Automatic reconstruction was time-saving in cases of 2 or more vertebrae (p<0.05). Both reconstruction methods revealed good image quality with excellent inter-observer agreement. The evaluated software for automatic labeling and anatomically aligned, double-angulated axial image reconstruction of the thoracolumbar spine on CT is time-saving when reconstructions of 2 or more vertebrae are performed. Checking the results of automatic labeling is necessary to prevent labeling errors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. A nonlinear lag correction algorithm for a-Si flat-panel x-ray detectors

    PubMed Central

    Starman, Jared; Star-Lack, Josh; Virshup, Gary; Shapiro, Edward; Fahrig, Rebecca

    2012-01-01

    Purpose: Detector lag, or residual signal, in a-Si flat-panel (FP) detectors can cause significant shading artifacts in cone-beam computed tomography reconstructions. To date, most correction models have assumed a linear, time-invariant (LTI) model and correct lag by deconvolution with an impulse response function (IRF). However, the lag correction is sensitive to both the exposure intensity and the technique used for determining the IRF. Even when the LTI correction that produces the minimum error is found, residual artifact remains. A new non-LTI method was developed to take into account the IRF measurement technique and exposure dependencies. Methods: First, a multiexponential (N = 4) LTI model was implemented for lag correction. Next, a non-LTI lag correction, known as the nonlinear consistent stored charge (NLCSC) method, was developed based on the LTI multiexponential method. It differs from other nonlinear lag correction algorithms in that it maintains a consistent estimate of the amount of charge stored in the FP and it does not require intimate knowledge of the semiconductor parameters specific to the FP. For the NLCSC method, all coefficients of the IRF are functions of exposure intensity. Another nonlinear lag correction method that only used an intensity weighting of the IRF was also compared. The correction algorithms were applied to step-response projection data and CT acquisitions of a large pelvic phantom and an acrylic head phantom. The authors collected rising and falling edge step-response data on a Varian 4030CB a-Si FP detector operating in dynamic gain mode at 15 fps at nine incident exposures (2.0%–92% of the detector saturation exposure). For projection data, 1st and 50th frame lag were measured before and after correction. For the CT reconstructions, five pairs of ROIs were defined and the maximum and mean signal differences within a pair were calculated for the different exposures and step-response edge techniques. Results: The LTI corrections left residual 1st and 50th frame lag up to 1.4% and 0.48%, while the NLCSC lag correction reduced 1st and 50th frame residual lags to less than 0.29% and 0.0052%. For CT reconstructions, the NLCSC lag correction gave an average error of 11 HU for the pelvic phantom and 3 HU for the head phantom, compared to 14–19 HU and 2–11 HU for the LTI corrections and 15 HU and 9 HU for the intensity weighted non-LTI algorithm. The maximum ROI error was always smallest for the NLCSC correction. The NLCSC correction was also superior to the intensity weighting algorithm. Conclusions: The NLCSC lag algorithm corrected for the exposure dependence of lag, provided superior image improvement for the pelvic phantom reconstruction, and gave similar results to the best case LTI results for the head phantom. The blurred ring artifact that is left over in the LTI corrections was better removed by the NLCSC correction in all cases. PMID:23039642
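
    The LTI multi-exponential model that the NLCSC method generalizes admits an exact recursive inverse, since each exponential term's contribution can be tracked with one state variable. The sketch below uses made-up IRF coefficients and is the plain LTI correction, not the authors' nonlinear consistent-stored-charge extension.

```python
import numpy as np

def lag_correct(y, b, tau):
    """Invert a multi-exponential lag impulse response
    h[n] = sum_k b_k exp(-n / tau_k) by recursive deconvolution.

    The states S_k accumulate the decayed history of the corrected signal,
    so each frame's residual signal from earlier frames can be subtracted."""
    a = np.exp(-1.0 / np.asarray(tau))
    h0 = np.sum(b)                         # instantaneous response h[0]
    x = np.zeros_like(y)
    S = np.zeros(len(b))
    for n in range(len(y)):
        x[n] = (y[n] - np.dot(b, S)) / h0
        S = a * (S + x[n])                 # decay the stored-charge history
    return x

# Synthesize a lag-contaminated step response and recover the true step.
b, tau = np.array([0.9, 0.08, 0.02]), np.array([0.5, 5.0, 50.0])
n = np.arange(100)
h = (b[:, None] * np.exp(-np.outer(1.0 / tau, n))).sum(axis=0)
x_true = np.zeros(100)
x_true[10:60] = 1.0
y = np.convolve(x_true, h)[:100]
print(np.abs(lag_correct(y, b, tau) - x_true).max())   # ~ 0 (exact inverse)
```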

  6. Optimized multiple linear mappings for single image super-resolution

    NASA Astrophysics Data System (ADS)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

    Learning piecewise linear regression has been recognized as an effective way to perform example learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions by using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m nearest neighbors in the training set. Thorough experimental results carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.

  7. Serial position effects in semantic memory: reconstructing the order of verses of hymns.

    PubMed

    Maylor, Elizabeth A

    2002-12-01

    Serial position effects (primacy and recency) have been consistently demonstrated in both short- and long-term episodic memory tasks. The search for corresponding effects in semantic memory tasks (e.g., reconstructing the order of U.S. presidents) has been confounded by factors such as differential exposure to stimuli. In the present study, the stimuli were six-verse hymns that would have been sung from the first to the last verse by churchgoers on numerous occasions. Participants were presented with the verses of each hymn in random order and were required to reconstruct the correct order. Primacy and recency effects were significantly more evident for churchgoers than for nonchurchgoers. Moreover, error gradients were steeper than chance for churchgoers but not for nonchurchgoers; in other words, churchgoers' errors were more likely to be close to the correct position than further away. These findings provide the first unequivocal demonstration of serial position effects in semantic memory.

  8. Usefulness of model-based iterative reconstruction in semi-automatic volumetry for ground-glass nodules at ultra-low-dose CT: a phantom study.

    PubMed

    Maruyama, Shuki; Fukushima, Yasuhiro; Miyamae, Yuta; Koizumi, Koji

    2018-06-01

    This study aimed to investigate the effects of parameter presets of the forward projected model-based iterative reconstruction solution (FIRST) on the accuracy of pulmonary nodule volume measurement. A torso phantom with simulated nodules [diameter: 5, 8, 10, and 12 mm; computed tomography (CT) density: - 630 HU] was scanned with a multi-detector CT at tube currents of 10 mA (ultra-low-dose: UL-dose) and 270 mA (standard-dose: Std-dose). Images were reconstructed with filtered back projection [FBP; standard (Std-FBP), ultra-low-dose (UL-FBP)], FIRST Lung (UL-Lung), and FIRST Body (UL-Body), and analyzed with a semi-automatic software. The error in the volume measurement was determined. The errors with UL-Lung and UL-Body were smaller than that with UL-FBP. The smallest error was 5.8% ± 0.3 for the 12-mm nodule with UL-Body (middle lung). Our results indicated that FIRST Body would be superior to FIRST Lung in terms of accuracy of nodule measurement with UL-dose CT.

  9. Effect of Local TOF Kernel Miscalibrations on Contrast-Noise in TOF PET

    NASA Astrophysics Data System (ADS)

    Clementel, Enrico; Mollet, Pieter; Vandenberghe, Stefaan

    2013-06-01

    TOF PET imaging requires specific calibrations: accurate characterization of the system timing resolution and timing offset is required to achieve the full potential image quality. Current system models used in image reconstruction assume a spatially uniform timing resolution kernel. Furthermore, although the timing offset errors are often pre-corrected, this correction becomes less accurate over time because, especially in older scanners, the timing offsets are often calibrated only at installation, as the procedure is time-consuming. In this study, we investigate and compare the effects of local mismatch of timing resolution when a uniform kernel is applied to systems with local variations in timing resolution, and the effects of uncorrected timing offset errors on image quality. A ring-like phantom was acquired on a Philips Gemini TF scanner and timing histograms were obtained from coincidence events to measure timing resolution along all sets of LORs crossing the scanner center. In addition, multiple acquisitions of a cylindrical phantom, 20 cm in diameter with spherical inserts, and a point source were simulated. A location-dependent timing resolution was simulated, with a median value of 500 ps and increasingly large local variations; timing offset errors ranging from 0 to 350 ps were also simulated. Images were reconstructed with TOF MLEM with a uniform kernel corresponding to the effective timing resolution of the data, as well as with purposefully mismatched kernels. CRC vs. noise curves were measured over the simulated cylinder realizations, while the simulated point source was processed to generate timing histograms of the data. Results show that timing resolution is not uniform over the FOV of the considered scanner. The simulated phantom data indicate that CRC is moderately reduced in data sets with locally varying timing resolution reconstructed with a uniform kernel, while still performing better than non-TOF reconstruction. On the other hand, uncorrected offset errors in our setup have a larger potential for decreasing image quality and can lead to a reduction of CRC of up to 15% and an increase in the measured timing resolution kernel of up to 40%. However, in realistic conditions in frequently calibrated systems, using a larger effective timing kernel in image reconstruction can compensate for uncorrected offset errors.

  10. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A square experimental field of 50 m × 50 m was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-covered method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
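
    For reference, the minimum sample size for simple random sampling follows from the usual normal-approximation formula with a finite-population correction. The sketch below uses illustrative parameters and is not expected to reproduce the paper's 300/300/225 figures, which depend on the observed field data.

```python
import math

def min_sample_size(N, p=0.5, rel_error=0.15, z=1.96):
    """Minimum simple-random-sample size for estimating an occurrence rate p
    within a relative error, with finite-population correction for N frames."""
    e = rel_error * p                        # absolute precision
    n0 = z**2 * p * (1 - p) / e**2           # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))

# A 50 m x 50 m field surveyed in 1 m^2 frames (N = 2500), 15% relative error.
print(min_sample_size(2500, p=0.5, rel_error=0.15))
```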

  11. AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.

    PubMed

    Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S

    2017-09-01

    Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost of quality. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Temporal change and its spatial variety on land surface temperature and land use changes in the Red River Delta, Vietnam, using MODIS time-series imagery.

    PubMed

    Van Nguyen, On; Kawamura, Kensuke; Trong, Dung Phan; Gong, Zhe; Suwandana, Endan

    2015-07-01

    Temporal changes in the land surface temperature (LST) in urbanizing areas are important for studying urban heat islands (UHIs) and regional climate change. This study examined the LST trends under different land use categories in the Red River Delta, Vietnam, using the Moderate Resolution Imaging Spectroradiometer (MODIS) LST product (MOD11A2) and land cover type product (MCD12Q1) for 11 years (2002-2012). Smoothed time-series MODIS LST data were reconstructed by the Harmonic Analysis of Time Series (HANTS) algorithm. The reconstructed LST (maximum and minimum temperatures) was assessed using the hourly air temperature dataset from two land-based meteorological stations provided by the National Climatic Data Center (NCDC). Significant correlation was obtained between MODIS LST and the air temperature for the daytime (R(2) = 0.73, root mean square error [RMSE] = 1.66 °C) and nighttime (R(2) = 0.84, RMSE = 1.79 °C). Statistical analysis also showed that LST trends vary strongly depending on the land cover type. Forest, wetland, and cropland had a slight tendency to decline, whereas cropland and urban had sharper increases. In urbanized areas, these increasing trends are even more obvious. This is undeniable evidence of the negative impact of urbanization on the surface urban heat island (SUHI) and global warming.
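
    The core of HANTS is a harmonic least-squares fit to the valid samples of a gappy time series; the full algorithm adds iterative outlier rejection, omitted here. The sketch below uses invented dates, noise, and cloud fraction.

```python
import numpy as np

def harmonic_fit(t, y, valid, n_freq=2, period=365.0):
    """Least-squares harmonic reconstruction of a gappy LST time series."""
    terms = [np.ones_like(t)]
    for k in range(1, n_freq + 1):
        w = 2.0 * np.pi * k / period
        terms += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(terms)
    coef, *_ = np.linalg.lstsq(A[valid], y[valid], rcond=None)
    return A @ coef                          # smooth series over all dates

# 8-day composites over one year with ~30% of values lost to cloud.
rng = np.random.default_rng(9)
t = np.arange(0, 365, 8.0)
truth = 20 + 10 * np.sin(2 * np.pi * t / 365.0)
y = truth + rng.normal(0, 1.0, t.size)
valid = rng.random(t.size) > 0.3             # False = cloudy/low-quality pixel
print(np.abs(harmonic_fit(t, y, valid) - truth).max())
```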

  13. Spatio-temporal reconstruction of air temperature maps and their application to estimate rice growing season heat accumulation using multi-temporal MODIS data*

    PubMed Central

    Zhang, Li-wen; Huang, Jing-feng; Guo, Rui-fang; Li, Xin-xing; Sun, Wen-bo; Wang, Xiu-zhen

    2013-01-01

    The accumulation of thermal time usually represents the local heat resources to drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by the spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (T_a) and of filling in pixels missing due to cloudy and low-quality images in growing degree day (GDD) calculation from remotely sensed data, a novel spatio-temporal algorithm for T_a estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study calculating heat accumulation, expressed in accumulative growing degree days (AGDDs) above 10 °C, from reconstructed T_a based on MODIS land surface temperature (LST) data. The verification results for maximum T_a, minimum T_a, GDD, and AGDD derived from MODIS data against meteorological calculations all showed high correlations, significant at the 0.01 level. Overall, MODIS-derived AGDD was slightly underestimated, with a relative error of almost 10%. The feasibility of employing AGDD anomaly maps to characterize the 2001–2010 spatio-temporal variability of heat accumulation, and of estimating the 2011 heat accumulation distribution using only MODIS data, was finally demonstrated. This study may supply a novel way to calculate AGDD in heat-related studies concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale. PMID:23365013

  14. Spatio-temporal reconstruction of air temperature maps and their application to estimate rice growing season heat accumulation using multi-temporal MODIS data.

    PubMed

    Zhang, Li-wen; Huang, Jing-feng; Guo, Rui-fang; Li, Xin-xing; Sun, Wen-bo; Wang, Xiu-zhen

    2013-02-01

    The accumulation of thermal time usually represents the local heat resources to drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by the spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (T_a) and of filling in pixels missing due to cloudy and low-quality images in growing degree day (GDD) calculation from remotely sensed data, a novel spatio-temporal algorithm for T_a estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study calculating heat accumulation, expressed in accumulative growing degree days (AGDDs) above 10 °C, from reconstructed T_a based on MODIS land surface temperature (LST) data. The verification results for maximum T_a, minimum T_a, GDD, and AGDD derived from MODIS data against meteorological calculations all showed high correlations, significant at the 0.01 level. Overall, MODIS-derived AGDD was slightly underestimated, with a relative error of almost 10%. The feasibility of employing AGDD anomaly maps to characterize the 2001-2010 spatio-temporal variability of heat accumulation, and of estimating the 2011 heat accumulation distribution using only MODIS data, was finally demonstrated. This study may supply a novel way to calculate AGDD in heat-related studies concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale.
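
    The growing-degree-day bookkeeping underlying both versions of this record is simple enough to state in a few lines (illustrative Python; the 10 °C base follows the abstract, the temperature series is an assumption).

```python
import numpy as np

t_max = np.array([18.0, 22.0, 25.0, 9.0, 30.0])   # daily maximum T_a, deg C
t_min = np.array([ 8.0, 12.0, 15.0, 4.0, 18.0])   # daily minimum T_a, deg C
base = 10.0                                       # base temperature, deg C

gdd = np.maximum((t_max + t_min) / 2.0 - base, 0.0)  # daily growing degree days
agdd = np.cumsum(gdd)                                # accumulative GDD (AGDD)

print("GDD :", gdd)    # [ 3.  7. 10.  0. 14.]
print("AGDD:", agdd)   # [ 3. 10. 20. 20. 34.]
```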

  15. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
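
    The dB figures quoted above follow from the fractional MSE reductions via the usual 10·log10 ratio of mean squared errors, as the short check below shows (Python; the conversion formula is standard, the numbers are the paper's).

```python
import math

def mse_reduction_db(fraction_reduced):
    """dB improvement implied by a fractional MSE reduction."""
    return 10.0 * math.log10(1.0 / (1.0 - fraction_reduced))

for frac in (0.3378, 0.4988, 0.4235):
    print(f"{frac:.2%} MSE reduction = {mse_reduction_db(frac):.2f} dB")
# 33.78% -> 1.79 dB, 49.88% -> 3.00 dB, 42.35% -> 2.39 dB
```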

  16. Isobaric Reconstruction of the Baryonic Acoustic Oscillation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Yu, Hao-Ran; Zhu, Hong-Ming; Yu, Yu; Pan, Qiaoyin; Pen, Ue-Li

    2017-06-01

    In this Letter, we report a significant recovery of the linear baryonic acoustic oscillation (BAO) signature by applying the isobaric reconstruction algorithm to the nonlinear matter density field. Assuming that only the longitudinal component of the displacement is cosmologically relevant, this algorithm iteratively solves the coordinate transform between the Lagrangian and Eulerian frames without requiring any specific knowledge of the dynamics. For the dark matter field, it produces the nonlinear displacement potential with very high fidelity. The reconstruction error at the pixel level is within a few percent and is caused only by the emergence of the transverse component after shell-crossing. As it circumvents the strongest nonlinearity of the density evolution, the reconstructed field is well described by linear theory and immune to the bulk-flow smearing of the BAO signature. Therefore, this algorithm could significantly improve the measurement accuracy of the sound horizon scale s. For a perfect large-scale structure survey at redshift zero without Poisson or instrumental noise, the fractional error Δs/s is reduced by a factor of ~2.7, very close to the ideal limit with the linear power spectrum and Gaussian covariance matrix.

  17. Experimental investigations on airborne gravimetry based on compressed sensing.

    PubMed

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-03-18

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.

  18. Experimental Investigations on Airborne Gravimetry Based on Compressed Sensing

    PubMed Central

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-01-01

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements. PMID:24647125
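
    Orthogonal Matching Pursuit itself is a short greedy loop: pick the dictionary atom most correlated with the residual, re-fit all selected coefficients by least squares, repeat. The sketch below (illustrative Python with a random Gaussian sensing matrix; sizes are assumptions, not the paper's gravimetry setup) recovers a synthetic sparse vector.

```python
import numpy as np

def omp(A, y, k):
    """Greedy OMP: select k atoms, re-fitting all coefficients each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
m, n, k = 64, 256, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

x_hat = omp(A, A @ x_true, k)
print("std of reconstruction error:", (x_hat - x_true).std())
```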

  19. Efficient parallel reconstruction for high resolution multishot spiral diffusion data with low rank constraint.

    PubMed

    Liao, Congyu; Chen, Ying; Cao, Xiaozhi; Chen, Song; He, Hongjian; Mani, Merry; Jacob, Mathews; Magnotta, Vincent; Zhong, Jianhui

    2017-03-01

    To propose a novel reconstruction method using parallel imaging with a low rank constraint to accelerate high resolution multishot spiral diffusion imaging. The undersampled high resolution diffusion data were reconstructed based on a low rank (LR) constraint using similarities between the data of different interleaves from a multishot spiral acquisition. Self-navigated phase compensation using the low resolution phase data in the center of k-space was applied to correct shot-to-shot phase variations induced by motion artifacts. The low rank reconstruction was combined with sensitivity encoding (SENSE) for further acceleration. The efficiency of the proposed joint reconstruction framework, dubbed LR-SENSE, was evaluated through error quantifications and compared with an ℓ1 regularized compressed sensing method and the conventional iterative SENSE method using the same datasets. It was shown that with the same acceleration factor, the proposed LR-SENSE method had the smallest normalized sum-of-squares errors among all the compared methods in all diffusion weighted images and DTI-derived index maps, when evaluated with different acceleration factors (R = 2, 3, 4) and for all the acquired diffusion directions. Robust high-resolution diffusion-weighted images can be efficiently reconstructed from highly undersampled multishot spiral data with the proposed LR-SENSE method. Magn Reson Med 77:1359-1366, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  20. Automatic 3D reconstruction of electrophysiology catheters from two-view monoplane C-arm image sequences.

    PubMed

    Baur, Christoph; Milletari, Fausto; Belagiannis, Vasileios; Navab, Nassir; Fallavollita, Pascal

    2016-07-01

    Catheter guidance is a vital task for the success of electrophysiology interventions. It is usually provided through fluoroscopic images that are taken intra-operatively. The cardiologists, who are typically equipped with C-arm systems, scan the patient from multiple views by rotating the fluoroscope around one of its axes. The resulting sequences allow the cardiologists to build a mental model of the 3D positions of the catheters and points of interest from the multiple views. We describe and compare different 3D catheter reconstruction strategies and ultimately propose a novel and robust method for the automatic reconstruction of 3D catheters in non-synchronized fluoroscopic sequences. This approach does not rely purely on triangulation but incorporates prior knowledge about the catheters. In conjunction with an automatic detection method, we demonstrate the performance of our method against ground truth annotations. In our experiments, which include 20 biplane datasets, we achieve an average reprojection error of 0.43 mm and an average reconstruction error of 0.67 mm compared to gold standard annotations. In clinical practice, catheters undergo complex motion due to the combined effect of heartbeat and respiration. As a result, any 3D reconstruction algorithm based purely on triangulation is imprecise. We have proposed a new method that is fully automatic and highly accurate for reconstructing catheters in three dimensions.

  1. Real-time implementing wavefront reconstruction for adaptive optics

    NASA Astrophysics Data System (ADS)

    Wang, Caixia; Li, Mei; Wang, Chunhong; Zhou, Luchun; Jiang, Wenhan

    2004-12-01

    The capability for real-time wavefront reconstruction is important for an adaptive optics (AO) system. The bandwidth of the system and the real-time processing ability of the wavefront processor are mainly determined by the speed of calculation. The system requires a sufficient number of subapertures and a high sampling frequency to compensate for atmospheric turbulence, and the number of reconstruction operations increases accordingly. Since the performance of an AO system improves as the calculation latency decreases, it is necessary to study how to increase the speed of wavefront reconstruction. There are two ways to improve the real-time performance of the reconstruction. One is to transform the wavefront reconstruction matrix, for example by wavelets or the FFT. The other is to enhance the performance of the processing element. Analysis shows that the former method cuts latency at the cost of reconstruction precision. In this article, the latter method is adopted. Based on the characteristics of the wavefront reconstruction algorithm, a systolic array implemented in an FPGA is designed to perform real-time wavefront reconstruction. The system delay is greatly reduced by the use of pipelining and parallel processing. The minimum latency of reconstruction is the reconstruction calculation of one subaperture.
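
    The kernel being accelerated is a fixed matrix-vector product mapping measured slopes to the reconstructed wavefront, one multiply-accumulate per matrix entry, so the latency floor is set by how many of these run in parallel. A minimal sketch (illustrative Python; dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n_subaps, n_modes = 64, 40

R = rng.standard_normal((n_modes, 2 * n_subaps))   # precomputed reconstructor
slopes = rng.standard_normal(2 * n_subaps)         # x/y slope per subaperture

phase = R @ slopes                                 # the operation being pipelined
print("multiply-accumulates per frame:", R.size)   # 40 * 128 = 5120
```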

  2. Surgical outcomes and nipple projection using the modified skate flap for nipple-areolar reconstruction in a series of 422 implant reconstructions.

    PubMed

    Zhong, Toni; Antony, Anu; Cordeiro, Peter

    2009-05-01

    Numerous techniques have been used in an attempt to achieve long-term nipple projection following nipple-areolar reconstruction (NAR). A common setback, however, is the diminution of projection over time; this phenomenon is particularly evident following implant-based breast reconstruction. The purpose of this report was thus to evaluate surgical outcomes and long-term nipple projection with the use of the "modified skate flap" technique in exclusively implant-based postmastectomy reconstructions. A retrospective review was performed for the period between 1993 and 2007. All consecutive patients with 2-staged tissue expander/implant reconstructions followed by NAR using the modified skate flap technique performed by the senior author (P.C.) were identified in a prospectively maintained breast reconstruction database. Only patients with a minimum of 1-year follow-up were included in the study. Patients with a history of irradiation to the breast were excluded from nipple projection assessment. Clinical outcome measurements included long-term nipple projection as well as the incidence of complications from the NAR procedure using the modified skate flap technique. Over the 15-year study period, 475 patients underwent 2-staged tissue expander/implant reconstruction followed by NAR using the modified skate flap technique. Of these, a total of 292 patients met the minimum requirement of 1-year follow-up post NAR (61% follow-up rate). The total number of reconstructed nipple-areolar complexes evaluated in this series was 422 (130 bilateral and 162 unilateral NAR). Forty patients (28 unilateral and 12 bilateral NAR) who received radiation to their breasts were excluded from nipple projection assessment. At a median follow-up of 44 months (range: 12-84 months), mean nipple projection was 2.5 mm (range: 1-4 mm). Minor complications occurred in 7.2% of the patients (n = 292). Skin graft donor site dehiscence was the most common complication (3.1%), followed by partial skin graft nontake of the areola (2.1%). This report documents the largest series of NAR using a single technique in the setting of postmastectomy reconstruction. This technique can be safely performed over breast implants with acceptably low rates of complications and predictable results. Long-term nipple projection over implant reconstructions using this technique is modest, and patients must be forewarned of this when completing the final stage of their implant reconstruction.

  3. The Gulliver Effect: The Impact of Error in an Elephantine Subpopulation on Estimates for Lilliputian Subpopulations

    ERIC Educational Resources Information Center

    Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene

    2009-01-01

    An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect a minimum of three percent random error for even the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…

  4. Understanding reliability and some limitations of the images and spectra reconstructed from a multi-monochromatic x-ray imager

    DOE PAGES

    Nagayama, T.; Mancini, R. C.; Mayes, D.; ...

    2015-11-18

    Temperature and density asymmetry diagnosis is critical to advance inertial confinement fusion (ICF) science. A multi-monochromatic x-ray imager (MMI) is an attractive diagnostic for this purpose. The MMI records the spectral signature from an ICF implosion core with time resolution, 2-D space resolution, and spectral resolution. While narrow-band images and 2-D space-resolved spectra from the MMI data constrain temperature and density spatial structure of the core, the accuracy of the images and spectra depends not only on the quality of the MMI data but also on the reliability of the post-processing tools. In this paper, we synthetically quantify the accuracy of images and spectra reconstructed from MMI data. Errors in the reconstructed images are less than a few percent when the space-resolution effect is applied to the modeled images. The errors in the reconstructed 2-D space-resolved spectra are also less than a few percent except those for the peripheral regions. Spectra reconstructed for the peripheral regions have slightly but systematically lower intensities by ~6% due to the instrumental spatial-resolution effects. However, this does not alter the relative line ratios and widths and thus does not affect the temperature and density diagnostics. We also investigate the impact of the pinhole size variation on the extracted images and spectra. A 10% pinhole size variation could introduce spatial bias to the images and spectra of ~10%. A correction algorithm is developed, and it successfully reduces the errors to a few percent. Finally, it is desirable to perform similar synthetic investigations to fully understand the reliability and limitations of each MMI application.

  5. A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows

    DOE PAGES

    Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...

    2015-03-11

    High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, it took a factor of 24 less wall-time to obtain a given level of error with the fourth-order CENO scheme than to obtain the same error with the second-order scheme.

  6. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.

  7. Understanding reliability and some limitations of the images and spectra reconstructed from a multi-monochromatic x-ray imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagayama, T.; Mancini, R. C.; Mayes, D.

    2015-11-15

    Temperature and density asymmetry diagnosis is critical to advance inertial confinement fusion (ICF) science. A multi-monochromatic x-ray imager (MMI) is an attractive diagnostic for this purpose. The MMI records the spectral signature from an ICF implosion core with time resolution, 2-D space resolution, and spectral resolution. While narrow-band images and 2-D space-resolved spectra from the MMI data constrain temperature and density spatial structure of the core, the accuracy of the images and spectra depends not only on the quality of the MMI data but also on the reliability of the post-processing tools. Here, we synthetically quantify the accuracy of images and spectra reconstructed from MMI data. Errors in the reconstructed images are less than a few percent when the space-resolution effect is applied to the modeled images. The errors in the reconstructed 2-D space-resolved spectra are also less than a few percent except those for the peripheral regions. Spectra reconstructed for the peripheral regions have slightly but systematically lower intensities by ∼6% due to the instrumental spatial-resolution effects. However, this does not alter the relative line ratios and widths and thus does not affect the temperature and density diagnostics. We also investigate the impact of the pinhole size variation on the extracted images and spectra. A 10% pinhole size variation could introduce spatial bias to the images and spectra of ∼10%. A correction algorithm is developed, and it successfully reduces the errors to a few percent. It is desirable to perform similar synthetic investigations to fully understand the reliability and limitations of each MMI application.

  8. Understanding reliability and some limitations of the images and spectra reconstructed from a multi-monochromatic x-ray imager.

    PubMed

    Nagayama, T; Mancini, R C; Mayes, D; Tommasini, R; Florido, R

    2015-11-01

    Temperature and density asymmetry diagnosis is critical to advance inertial confinement fusion (ICF) science. A multi-monochromatic x-ray imager (MMI) is an attractive diagnostic for this purpose. The MMI records the spectral signature from an ICF implosion core with time resolution, 2-D space resolution, and spectral resolution. While narrow-band images and 2-D space-resolved spectra from the MMI data constrain temperature and density spatial structure of the core, the accuracy of the images and spectra depends not only on the quality of the MMI data but also on the reliability of the post-processing tools. Here, we synthetically quantify the accuracy of images and spectra reconstructed from MMI data. Errors in the reconstructed images are less than a few percent when the space-resolution effect is applied to the modeled images. The errors in the reconstructed 2-D space-resolved spectra are also less than a few percent except those for the peripheral regions. Spectra reconstructed for the peripheral regions have slightly but systematically lower intensities by ∼6% due to the instrumental spatial-resolution effects. However, this does not alter the relative line ratios and widths and thus does not affect the temperature and density diagnostics. We also investigate the impact of the pinhole size variation on the extracted images and spectra. A 10% pinhole size variation could introduce spatial bias to the images and spectra of ∼10%. A correction algorithm is developed, and it successfully reduces the errors to a few percent. It is desirable to perform similar synthetic investigations to fully understand the reliability and limitations of each MMI application.

  9. Blob-enhanced reconstruction technique

    NASA Astrophysics Data System (ADS)

    Castrillo, Giusy; Cafiero, Gioacchino; Discetti, Stefano; Astarita, Tommaso

    2016-09-01

    A method to enhance the quality of the tomographic reconstruction and, consequently, the 3D velocity measurement accuracy, is presented. The technique is based on integrating information on the objects to be reconstructed within the algebraic reconstruction process. A first-guess intensity distribution is produced with a standard algebraic method, then the distribution is rebuilt as a sum of Gaussian blobs, based on the location, intensity and size of agglomerates of light intensity surrounding local maxima. The blob substitution regularizes the particle shape, allowing a reduction of the particle discretization errors and of their elongation in the depth direction. The performance of the blob-enhanced reconstruction technique (BERT) is assessed with a 3D synthetic experiment. The results have been compared with those obtained by applying the standard camera simultaneous multiplicative reconstruction technique (CSMART) to the same volume. Several blob-enhanced reconstruction processes have been tested, both substituting the blobs at the end of the CSMART algorithm and during the iterations (i.e. using the blob-enhanced reconstruction as a predictor for the following iterations). The results confirm the enhancement in velocity measurement accuracy, demonstrating a reduction of the bias error due to ghost particles. The improvement is more remarkable at the largest tested seeding densities. Additionally, using the blob distributions as a predictor further improves the convergence of the reconstruction algorithm, the more so when the blobs are substituted more than once during the process. The BERT process is also applied to multi-resolution (MR) CSMART reconstructions, simultaneously achieving remarkable improvements in the flow field measurements and benefiting from the reduction in computational time due to the MR approach. Finally, BERT is also tested on experimental data, yielding an increase in the signal-to-noise ratio of the reconstructed flow field and a higher correlation factor in the velocity measurements with respect to reconstructions in which the particles are not replaced by blobs.

  10. Analyse des erreurs et grammaire generative: La syntaxe de l'interrogation en francais (Error Analysis and Generative Grammar: The Syntax of Interrogation in French).

    ERIC Educational Resources Information Center

    Py, Bernard

    A progress report is presented of a study which applies a system of generative grammar to error analysis. The objective of the study was to reconstruct the grammar of students' interlanguage, using a systematic analysis of errors. (Interlanguage refers to the linguistic competence of a student who possesses a relatively systematic body of rules,…

  11. System calibration method for Fourier ptychographic microscopy.

    PubMed

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore unlikely that the dominating error could be identified from these degraded reconstructions without any prior knowledge. In addition, in real situations the systematic error is generally a mixture of various error sources, which cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  12. Characterizing the impact of model error in hydrologic time series recovery inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.

    Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.

  13. Characterizing the impact of model error in hydrologic time series recovery inverse problems

    DOE PAGES

    Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.

    2017-10-28

    Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.
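
    The forward model in this problem class is a causal convolution, i.e. a lower triangular Toeplitz matrix built from the (approximately known) impulse response. A minimal sketch (illustrative Python; the transfer function and source history are assumptions, and no model error is injected here):

```python
import numpy as np

n = 50
t = np.arange(n) * 0.1
h = 0.1 * np.exp(-t)                     # assumed causal impulse response

# Lower triangular Toeplitz operator: H[i, j] = h[i - j] for i >= j.
i, j = np.indices((n, n))
H = np.where(i >= j, h[np.maximum(i - j, 0)], 0.0)

s_true = np.sin(t) ** 2                  # unknown source history
d = H @ s_true                           # observations

# Naive recovery by solving the triangular system; an error in h would
# bias this result in the way the paper's bounds characterize.
s_hat = np.linalg.solve(H, d)
print("max recovery error:", np.abs(s_hat - s_true).max())
```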

  14. Primordial power spectrum: a complete analysis with the WMAP nine-year data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazra, Dhiraj Kumar; Shafieloo, Arman; Souradeep, Tarun, E-mail: dhiraj@apctp.org, E-mail: arman@apctp.org, E-mail: tarun@iucaa.ernet.in

    2013-07-01

    We have further improved the error-sensitive Richardson-Lucy deconvolution algorithm, making it applicable directly to the un-binned measured angular power spectrum of Cosmic Microwave Background observations to reconstruct the form of the primordial power spectrum. This improvement makes the application of the method significantly more straightforward by removing some intermediate stages of analysis, allowing a reconstruction of the primordial spectrum with higher efficiency and precision and with lower computational expense. Applying the modified algorithm, we fit the WMAP nine-year data using the optimized reconstructed form of the primordial spectrum, with an improvement of more than 300 in χ²_eff with respect to the best-fit power law. This is clearly beyond the reach of other alternative approaches and reflects the efficiency of the proposed method in the reconstruction process, allowing us to look for any possible feature in the primordial spectrum projected in the CMB data. Though the proposed method allows us to consider various possibilities for the form of the primordial spectrum, all having a good fit to the data, proper error analysis is needed to test the consistency of theoretical models since, along with possible physical artefacts, most of the features in the reconstructed spectrum might arise from fitting noise in the CMB data. The reconstructed error band for the form of the primordial spectrum, derived from many bootstrapped realizations of the WMAP nine-year data, shows proper consistency of the power-law form of the primordial spectrum with the WMAP 9 data at all wave numbers. Including WMAP polarization data in the analysis did not improve our results much owing to its low quality, but we expect that Planck data will allow us to perform a full analysis of CMB observations in both temperature and polarization, separately and in combination.
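
    For orientation, the classical Richardson-Lucy iteration that the improved algorithm builds on is a pair of convolutions and a multiplicative update, as in the 1-D sketch below (illustrative Python; the kernel and "spectrum" are assumptions, and none of the paper's error-sensitive modifications are included).

```python
import numpy as np

def richardson_lucy(observed, kernel, iterations):
    """Multiplicative RL updates; all quantities assumed non-negative."""
    estimate = np.full_like(observed, observed.mean())
    flipped = kernel[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, kernel, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, flipped, mode="same")
    return estimate

x = np.zeros(100)
x[30], x[60] = 1.0, 0.5                      # sparse "true" signal
k = np.exp(-0.5 * np.linspace(-3, 3, 15) ** 2)
k /= k.sum()                                 # normalized Gaussian kernel

y = np.convolve(x, k, mode="same")           # smeared observation
x_hat = richardson_lucy(y, k, iterations=200)
print("recovered amplitudes at 30 and 60:", x_hat[30], x_hat[60])
```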

  15. TU-F-18A-06: Dual Energy CT Using One Full Scan and a Second Scan with Very Few Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: The conventional dual energy CT (DECT) requires two full CT scans at different energy levels, resulting in increased dose as well as imaging errors from patient motion between the two scans. To shorten the scan time of DECT and thus overcome these drawbacks, we propose a new DECT algorithm using one full scan and a second scan with very few projections, preserving structural information. Methods: We first reconstruct a CT image from the full scan using a standard filtered-backprojection (FBP) algorithm. We then use a compressed sensing (CS) based iterative algorithm on the second scan for reconstruction from very few projections. The edges extracted from the first scan are used as weights in the objective function of the CS-based reconstruction to substantially improve the image quality of the CT reconstruction. The basis material images are then obtained by an iterative image-domain decomposition method, and an electron density map is finally calculated. The proposed method is evaluated on phantoms. Results: On the Catphan 600 phantom, the CT reconstruction mean errors using the proposed method on 20 and 5 projections are 4.76% and 5.02%, respectively. Compared with conventional iterative reconstruction, the proposed edge weighting preserves object structures and achieves better spatial resolution. With basis materials of iodine and Teflon, our method on 20 projections obtains decomposed material images of similar quality to FBP on a full scan, and the mean error of electron density in the selected regions of interest is 0.29%. Conclusion: We propose an effective method for reducing projections, and therefore scan time, in DECT. We show that a full scan plus a 20-projection scan are sufficient to provide DECT images and electron density maps of similar quality to two full scans. Our future work includes more phantom studies to validate the performance of our method.

  16. Using Perturbative Least Action to Reconstruct Redshift-Space Distortions

    NASA Astrophysics Data System (ADS)

    Goldberg, David M.

    2001-05-01

    In this paper, we present a redshift-space reconstruction scheme that is analogous to and extends the perturbative least action (PLA) method described by Goldberg & Spergel. We first show that this scheme is effective in reconstructing even nonlinear observations. We then suggest that by varying the cosmology to minimize the quadrupole moment of a reconstructed density field, it may be possible to lower the error bars on the redshift distortion parameter, β, as well as to break the degeneracy between the linear bias parameter, b, and ΩM. Finally, we discuss how PLA might be applied to realistic redshift surveys.

  17. Legal consequences of the moral duty to report errors.

    PubMed

    Hall, Jacqulyn Kay

    2003-09-01

    Increasingly, clinicians are under a moral duty to report errors to the patients who are injured by such errors. The sources of this duty are identified, and its probable impact on malpractice litigation and criminal law is discussed. The potential consequences of enforcing this new moral duty as a minimum in law are noted. One predicted consequence is that the trend will be accelerated toward government payment of compensation for errors. The effect of truth-telling on individuals is discussed.

  18. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

    Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.

  19. Time-based Reconstruction of Free-streaming Data in CBM

    NASA Astrophysics Data System (ADS)

    Akishina, Valentina; Kisel, Ivan; Vassiliev, Iouri; Zyzak, Maksym

    2018-02-01

    Traditional latency-limited trigger architectures typical of conventional experiments are inapplicable for the CBM experiment. Instead, CBM will ship and collect time-stamped data into a readout buffer in the form of time-slices of a certain length and deliver them to a large computer farm, where online event reconstruction and selection will be performed. Grouping measurements into physical collisions must be performed in software and requires reconstruction not only in space but also in time, the so-called 4-dimensional track reconstruction and event building. The tracks, reconstructed with the 4D Cellular Automaton track finder, are combined into event-corresponding clusters according to the estimated time at the target position and the errors obtained with the Kalman filter method. The reconstructed events are given as input to the KF Particle Finder package for short-lived particle reconstruction. The results of time-based reconstruction of simulated collisions in CBM are presented and discussed in detail.

  20. Learning feature representations with a cost-relevant sparse autoencoder.

    PubMed

    Längkvist, Martin; Loutfi, Amy

    2015-02-01

    There is an increasing interest in the machine learning community in automatically learning feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task-relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets and the test error rate is compared to a standard sparse autoencoder and other methods, such as the denoising autoencoder and the contractive autoencoder.
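
    The paper's central idea reduces to a per-input weight inside the reconstruction loss. A minimal sketch (illustrative Python; the inverse-variance weights are a stand-in assumption, not the authors' cost-relevant scheme):

```python
import numpy as np

def weighted_reconstruction_error(x, x_hat, w):
    """Mean of weighted squared errors; one weight per input dimension."""
    return float(np.mean(w * (x - x_hat) ** 2))

rng = np.random.default_rng(5)
x = rng.standard_normal((64, 20))                        # a batch of inputs
noise_scale = np.linspace(0.1, 2.0, 20)                  # some dimensions are noisier
x_hat = x + noise_scale * rng.standard_normal((64, 20))  # imperfect reconstruction

w = 1.0 / noise_scale**2                                 # down-weight noisy dimensions
w /= w.sum()

print("plain MSE   :", np.mean((x - x_hat) ** 2))
print("weighted MSE:", weighted_reconstruction_error(x, x_hat, w))
```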

  1. A Review of Depth and Normal Fusion Algorithms

    PubMed Central

    Štolc, Svorad; Pock, Thomas

    2018-01-01

    Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms which deal with the combination of depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV) which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903

  2. Correction of electrode modelling errors in multi-frequency EIT imaging.

    PubMed

    Jehl, Markus; Holder, David

    2016-06-01

    The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.

  3. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.
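
    The Tikhonov baseline that the note argues can be outperformed amounts to solving the regularized normal equations, as in the sketch below (illustrative Python on a deliberately ill-conditioned system; the matrix and noise level are assumptions).

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve (A^T A + lam I) x = A^T b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

rng = np.random.default_rng(6)
A = rng.standard_normal((30, 30))
A[:, -1] = A[:, 0] + 1e-8 * rng.standard_normal(30)   # near-dependent columns
x_true = rng.standard_normal(30)
b = A @ x_true + 1e-3 * rng.standard_normal(30)       # noisy data

# Larger lam stabilizes the solution norm at the cost of data misfit.
for lam in (1e-8, 1e-4, 1e-1):
    x = tikhonov(A, b, lam)
    print(f"lam={lam:g}: residual {np.linalg.norm(A @ x - b):.2e}, "
          f"solution norm {np.linalg.norm(x):.2e}")
```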

  4. Efficiency of the neighbor-joining method in reconstructing deep and shallow evolutionary relationships in large phylogenies.

    PubMed

    Kumar, S; Gadagkar, S R

    2000-12-01

    The neighbor-joining (NJ) method is widely used in reconstructing large phylogenies because of its computational speed and the high accuracy in phylogenetic inference as revealed in computer simulation studies. However, most computer simulation studies have quantified the overall performance of the NJ method in terms of the percentage of branches inferred correctly or the percentage of replications in which the correct tree is recovered. We have examined other aspects of its performance, such as the relative efficiency in correctly reconstructing shallow (close to the external branches of the tree) and deep branches in large phylogenies; the contribution of zero-length branches to topological errors in the inferred trees; and the influence of increasing the tree size (number of sequences), evolutionary rate, and sequence length on the efficiency of the NJ method. Results show that the correct reconstruction of deep branches is no more difficult than that of shallower branches. The presence of zero-length branches in realized trees contributes significantly to the overall error observed in the NJ tree, especially in large phylogenies or slowly evolving genes. Furthermore, the tree size does not influence the efficiency of NJ in reconstructing shallow and deep branches in our simulation study, in which the evolutionary process is assumed to be homogeneous in all lineages.

  5. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  6. Discriminant WSRC for Large-Scale Plant Species Recognition.

    PubMed

    Zhang, Shanwen; Zhang, Chuanlei; Zhu, Yihai; You, Zhuhong

    2017-01-01

    In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, including two stages. Firstly, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Secondly, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and then the leaf category is assigned through the minimum reconstruction error. Different from the traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using the generic overcomplete dictionary on the entire training samples. Thus, the complexity to solving the sparse representation problem is reduced. Moreover, DWSRC is adapted to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and high recognition rate and can be clearly interpreted.
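
    The decision rule shared by SRC, WSRC and DWSRC is assignment by minimum reconstruction error over per-class dictionaries. The sketch below (illustrative Python) uses plain least squares as a stand-in for the weighted sparse solver, so it shows the classification rule, not the paper's DWSRC pipeline; class names and data are assumptions.

```python
import numpy as np

def classify(test, class_dicts):
    """Assign the label whose dictionary reconstructs `test` with least error."""
    errors = {}
    for label, D in class_dicts.items():              # D: (dim, n_train) columns
        coef, *_ = np.linalg.lstsq(D, test, rcond=None)
        errors[label] = float(np.linalg.norm(test - D @ coef))
    return min(errors, key=errors.get), errors

rng = np.random.default_rng(7)
centers = {"speciesA": rng.standard_normal(50),
           "speciesB": rng.standard_normal(50)}
dicts = {name: c[:, None] + 0.1 * rng.standard_normal((50, 8))
         for name, c in centers.items()}              # 8 training samples per class

probe = centers["speciesB"] + 0.1 * rng.standard_normal(50)
label, errs = classify(probe, dicts)
print("assigned:", label, {k: round(v, 3) for k, v in errs.items()})
```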

  7. Designing an Orthotic Insole by Using Kinect® XBOX Gaming Sensor Scanner and Computer Aided Engineering Software

    NASA Astrophysics Data System (ADS)

    Hafiz Burhan, Mohd; Nor, Nik Hisyamudin Muhd; Yarwindran, Mogan; Ibrahim, Mustaffa; Fahrul Hassan, Mohd; Azwir Azlan, Mohd; Turan, Faiz Mohd; Johan, Kartina

    2017-08-01

    Healthcare and medicine are among the most expensive fields in the modern world. To help meet medical requirements, this study aimed to design an orthotic insole by using the Kinect XBOX gaming sensor as a scanner together with CAE software. The Kinect® XBOX 360 gaming sensor is capable of producing 3D reconstructed geometry with maximum and minimum errors of 3.78% (2.78 mm) and 1.74% (0.46 mm), respectively. The orthotic insole design process was carried out using Autodesk Meshmixer 2.6 and Solidworks 2014 software. The designed orthotic insole was capable of reducing foot pressure, especially in the metatarsal area. Overall, the proposed method proved highly promising for insole design, offering low cost, reduced design time, and efficiency, given that the Kinect® XBOX 360 device is inexpensive compared with other digital 3D scanners and the software needed to run the device can be downloaded for free.

  8. Data Driven Performance Evaluation of Wireless Sensor Networks

    PubMed Central

    Frery, Alejandro C.; Ramos, Heitor S.; Alencar-Neto, José; Nakamura, Eduardo; Loureiro, Antonio A. F.

    2010-01-01

    Wireless Sensor Networks are presented as devices for signal sampling and reconstruction. Within this framework, the qualitative and quantitative influence of (i) signal granularity, (ii) spatial distribution of sensors, (iii) sensors clustering, and (iv) signal reconstruction procedure are assessed. This is done by defining an error metric and performing a Monte Carlo experiment. It is shown that all these factors have significant impact on the quality of the reconstructed signal. The extent of such impact is quantitatively assessed. PMID:22294920

  9. Minimum constitutive relation error based static identification of beams using force method

    NASA Astrophysics Data System (ADS)

    Guo, Jia; Takewaki, Izuru

    2017-05-01

    A new static identification approach for beam structures, based on the minimum constitutive relation error (CRE) principle, is introduced. The exact stiffness and the exact bending moment are shown to minimize the CRE for the given displacements of a damaged beam. A two-step substitution algorithm, with a force-method step for the bending moment and a constitutive-relation step for the stiffness, is developed and its convergence is rigorously derived. Identifiability is further discussed, and the stiffness in the undeformed region is found to be unidentifiable. An extra set of static measurements is added to remedy this drawback. Convergence and robustness are finally verified through numerical examples.

  10. Relation between minimum-error discrimination and optimum unambiguous discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu Daowen; SQIG-Instituto de Telecomunicacoes, Departamento de Matematica, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, Avenida Rovisco Pais PT-1049-001, Lisbon; Li Lvjun

    2010-09-15

    In this paper, we investigate the relationship between the minimum-error probability Q_E of ambiguous discrimination and the optimal inconclusive probability Q_U of unambiguous discrimination. It is known that for discriminating two states, the inequality Q_U ≥ 2Q_E has been proved in the literature. The main technical results are as follows: (1) We show that, for discriminating more than two states, Q_U ≥ 2Q_E may not hold, but the infimum of Q_U/Q_E is 1 and there is no supremum of Q_U/Q_E, which implies that the failure probabilities of the two schemes for discriminating some states may be narrowly or widely gapped. (2) We derive two concrete formulas for the minimum-error probability Q_E and the optimal inconclusive probability Q_U, respectively, for ambiguous and unambiguous discrimination among arbitrary m simultaneously diagonalizable mixed quantum states with given prior probabilities. In addition, we show that Q_E and Q_U satisfy the relationship Q_U ≥ (m/(m−1))Q_E.
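
    The two-state inequality quoted above can be checked numerically from the standard closed forms for two equiprobable pure states, Q_E = (1 − √(1 − s²))/2 (Helstrom) and Q_U = s (IDP), with s the state overlap; this sketch assumes those textbook formulas rather than anything specific to the paper.

```python
import numpy as np

for s in np.linspace(0.01, 0.99, 9):        # overlap |<psi1|psi2>|
    q_e = 0.5 * (1.0 - np.sqrt(1.0 - s ** 2))   # Helstrom minimum-error bound
    q_u = s                                      # optimal unambiguous failure
    assert q_u >= 2.0 * q_e - 1e-12              # the two-state inequality
    print(f"overlap {s:.2f}:  Q_U/Q_E = {q_u / q_e:.3f}")
```

    The ratio Q_U/Q_E diverges for nearly orthogonal states and approaches 2 as the states become indistinguishable, consistent with the "narrowly or widely gapped" behavior described above.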

  11. Do patients with ewing's sarcoma continue with sports activities after limb salvage surgery of the lower extremity?

    PubMed

    Hobusch, Gerhard Martin; Lang, Nikolaus; Schuh, Reinhard; Windhager, Reinhard; Hofstaetter, Jochen Gerhard

    2015-03-01

    Limb salvage surgery has evolved to become the standard method of treating sarcomas of the extremities with acceptable oncologic results. However, little information exists about the activity level or ability to participate in sports after tumor reconstructions. The aims of the study were to answer the following questions: (1) Which sports activity levels and what types of sports can be expected in the long term after tumor reconstruction? (2) With what frequency and duration are patients with Ewing's sarcoma able to perform sports in long-term followup after local control? (3) Do surgical complications affect sports activity level? Thirty patients (13 females, 17 males; mean age, 18 ± 8 years; range, 2-36 years at diagnosis; mean followup, 16 ± 6 years [minimum, 5 years]) were included. Tumors were located in the pelvis, femur, tibia, and fibula. Surgical procedures included surgical resection alone (n = 8), surgical resection with biological reconstruction (n = 9), or endoprosthetic reconstruction (n = 13). We assessed UCLA sports activity levels and types of sports as well as the frequency per week and the duration of each training unit at long-term followup (minimum, 5 years). In long-term followup, 83% of patients (25 of 30) were performing athletic activity regularly. The hours/week of sports depended on the type of surgery and were highest after resections in the pelvis and femur (5.8) and lowest after megaprosthetic reconstruction of the pelvis (1.0). Patients undergoing biologic reconstructions were able to perform high-impact sports. UCLA sports activity levels were high after joint-preserving vascularized fibula for tibia reconstruction (7.4) and after megaprosthetic reconstruction of the lower extremity (6.3-6.4), and were low after tumors located in the fibula (4.2). Complications during followup did not significantly influence sports activity in long-term survivors. Long-term survivors can achieve high levels of sports activity in many instances. Tumor sites are associated with postoperative sports activity levels. This information can help surgeons counsel patients in terms of athletic expectations after limb salvage reconstruction for Ewing's sarcoma. Level III, therapeutic study. See Guidelines for Authors for a complete description of levels of evidence.

  12. Climate variability in Andalusia (southern Spain) during the period 1701-1850 based on documentary sources: evaluation and comparison with climate model simulations

    NASA Astrophysics Data System (ADS)

    Rodrigo, F. S.; Gómez-Navarro, J. J.; Montávez Gómez, J. P.

    2012-01-01

    In this work, a reconstruction of climatic conditions in Andalusia (southern Iberian Peninsula) during the period 1701-1850, as well as an evaluation of its associated uncertainties, is presented. This period is interesting because it is characterized by a minimum in solar irradiance (Dalton Minimum, around 1800), as well as intense volcanic activity (for instance, the eruption of Tambora in 1815), at a time when any increase in atmospheric CO2 concentrations was of minor importance. The reconstruction is based on the analysis of a wide variety of documentary data. The reconstruction methodology is based on counting the number of extreme events in the past, and inferring mean value and standard deviation using the assumption of normal distribution for the seasonal means of climate variables. This reconstruction methodology is tested within the pseudoreality of a high-resolution paleoclimate simulation performed with the regional climate model MM5 coupled to the global model ECHO-G. The results show that the reconstructions are influenced by the reference period chosen and the threshold values used to define extreme values. This creates uncertainties which are assessed within the context of climate simulation. An ensemble of reconstructions was obtained using two different reference periods (1885-1915 and 1960-1990) and two pairs of percentiles as threshold values (10-90 and 25-75). The results correspond to winter temperature, and winter, spring and autumn rainfall, and they are compared with simulations of the climate model for the considered period. The mean value of winter temperature for the period 1781-1850 was 10.6 ± 0.1 °C (11.0 °C for the reference period 1960-1990). The mean value of winter rainfall for the period 1701-1850 was 267 ± 18 mm (224 mm for 1960-1990). The mean values of spring and autumn rainfall were 164 ± 11 and 194 ± 16 mm (129 and 162 mm for 1960-1990, respectively). Comparison of the distribution functions corresponding to 1790-1820 and 1960-1990 indicates that during the Dalton Minimum the frequency of dry and warm (wet and cold) winters was lower (higher) than during the reference period: temperatures were up to 0.5 °C lower than the 1960-1990 value, and rainfall was 4% higher.
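
    The counting-based step described above reduces, for two percentile thresholds, to solving two linear equations for the past mean and standard deviation under the normality assumption. The thresholds and documentary counts below are invented for illustration.

```python
from scipy.stats import norm

q10, q90 = 8.2, 13.0                  # reference-period percentile thresholds (deg C)
n_low, n_high, n_total = 24, 6, 150   # counted extreme winters in the documents

# P(X < q10) = n_low/N and P(X > q90) = n_high/N under X ~ Normal(mu, sigma)
z_low = norm.ppf(n_low / n_total)            # (q10 - mu) / sigma
z_high = norm.ppf(1.0 - n_high / n_total)    # (q90 - mu) / sigma
sigma = (q90 - q10) / (z_high - z_low)
mu = q10 - sigma * z_low
print(f"reconstructed mean {mu:.2f} C, std {sigma:.2f} C")
```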

  13. Optimization of the reconstruction parameters in [123I]FP-CIT SPECT

    NASA Astrophysics Data System (ADS)

    Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec

    2018-04-01

    The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). The reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from magnetic resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [−0.5, +0.5] in 87% and 92% of the cases for the caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.

  14. The effect of regularization in motion compensated PET image reconstruction: a realistic numerical 4D simulation study.

    PubMed

    Tsoumpas, C; Polycarpou, I; Thielemans, K; Buerger, C; King, A P; Schaeffter, T; Marsden, P K

    2013-03-21

    Following continuous improvement in PET spatial resolution, respiratory motion correction has become an important task. Two of the most common approaches that utilize all detected PET events to motion-correct PET data are the reconstruct-transform-average method (RTA) and motion-compensated image reconstruction (MCIR). In RTA, separate images are reconstructed for each respiratory frame, subsequently transformed to one reference frame and finally averaged to produce a motion-corrected image. In MCIR, the projection data from all frames are reconstructed by including motion information in the system matrix so that a motion-corrected image is reconstructed directly. Previous theoretical analyses have explained why MCIR is expected to outperform RTA. It has been suggested that MCIR creates less noise than RTA because the images for each separate respiratory frame will be severely affected by noise. However, recent investigations have shown that in the unregularized case RTA images can have fewer noise artefacts, while MCIR images are more quantitatively accurate but have the common salt-and-pepper noise. In this paper, we perform a realistic numerical 4D simulation study to compare the advantages gained by including regularization within reconstruction for RTA and MCIR, in particular using the median-root-prior incorporated in the ordered subsets maximum a posteriori one-step-late algorithm. In this investigation we have demonstrated that MCIR with proper regularization parameters reconstructs lesions with less bias and root mean square error and similar CNR and standard deviation to regularized RTA. This finding is reproducible for a variety of noise levels (25, 50, 100 million counts), lesion sizes (8 mm, 14 mm diameter) and iterations. Nevertheless, regularized RTA can also be a practical solution for motion compensation as a proper level of regularization reduces both bias and mean square error.
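
    A minimal sketch of the reconstruct-transform-average (RTA) idea on a 2D toy problem, assuming known rigid per-frame shifts as the "respiratory" motion and filtered back projection in place of the iterative reconstructions studied in the paper.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

truth = rescale(shepp_logan_phantom(), 0.25)           # reference-frame image
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
motions = [(0, 0), (1, 1), (2, 1), (1, 2)]             # per-frame shifts (pixels)

recons = []
for dy, dx in motions:
    frame = shift(truth, (dy, dx), order=1)            # gate k of the gated data
    sino = radon(frame, theta=angles)                  # per-frame projections
    rec = iradon(sino, theta=angles)                   # reconstruct each frame...
    recons.append(shift(rec, (-dy, -dx), order=1))     # ...transform to reference

rta = np.mean(recons, axis=0)                          # ...and average
print("RTA RMSE:", np.sqrt(np.mean((rta - truth) ** 2)))
```

    MCIR would instead fold the per-frame transforms into the system model and reconstruct a single image from all projections at once; the theoretical comparisons cited above concern exactly that difference.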

  15. The accuracy of tomographic particle image velocimetry for measurements of a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Atkinson, Callum; Coudert, Sebastien; Foucaut, Jean-Marc; Stanislas, Michel; Soria, Julio

    2011-04-01

    To investigate the accuracy of tomographic particle image velocimetry (Tomo-PIV) for turbulent boundary layer measurements, a series of synthetic image-based simulations and practical experiments are performed on a high Reynolds number turbulent boundary layer at Reθ = 7,800. Two different approaches to Tomo-PIV are examined using a full-volume slab measurement and a thin-volume "fat" light sheet approach. Tomographic reconstruction is performed using both the standard MART technique and the more efficient MLOS-SMART approach, showing a 10-fold increase in processing speed. Random and bias errors are quantified under the influence of the near-wall velocity gradient, reconstruction method, ghost particles, seeding density and volume thickness, using synthetic images. Experimental Tomo-PIV results are compared with hot-wire measurements and errors are examined in terms of the measured mean and fluctuating profiles, probability density functions of the fluctuations, distributions of fluctuating divergence through the volume and velocity power spectra. Velocity gradients have a large effect on errors near the wall and also increase the errors associated with ghost particles, which convect at mean velocities through the volume thickness. Tomo-PIV provides accurate experimental measurements at low wave numbers; however, reconstruction introduces high noise levels that reduce the effective spatial resolution. A thinner volume is shown to provide higher measurement accuracy at the expense of the measurement domain, albeit still at a lower effective spatial resolution than planar and Stereo-PIV.

  16. Modeling astronomical adaptive optics performance with temporally filtered Wiener reconstruction of slope data

    NASA Astrophysics Data System (ADS)

    Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.

    2017-10-01

    We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at a few λ/D separations (~1-5 λ/D) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.

  17. 3D equilibrium reconstruction with islands

    NASA Astrophysics Data System (ADS)

    Cianciosa, M.; Hirshman, S. P.; Seal, S. K.; Shafer, M. W.

    2018-04-01

    This paper presents the development of a 3D equilibrium reconstruction tool and the results of the first-ever reconstruction of an island equilibrium. The SIESTA non-nested equilibrium solver has been coupled to the V3FIT 3D equilibrium reconstruction code. Computed from a coupled VMEC and SIESTA model, synthetic signals are matched to measured signals by finding an optimal set of equilibrium parameters. By using the normalized pressure in place of normalized flux, non-equilibrium quantities needed by diagnostic signals can be efficiently mapped to the equilibrium. The effectiveness of this tool is demonstrated by reconstructing an island equilibrium of a DIII-D inner wall limited L-mode case with an n = 1 error field applied. Flat spots in Thomson and ECE temperature diagnostics show the reconstructed islands have the correct size and phase.

  18. Improved human observer performance in digital reconstructed radiograph verification in head and neck cancer radiotherapy.

    PubMed

    Sturgeon, Jared D; Cox, John A; Mayo, Lauren L; Gunn, G Brandon; Zhang, Lifei; Balter, Peter A; Dong, Lei; Awan, Musaddiq; Kocak-Uzel, Esengul; Mohamed, Abdallah Sherif Radwan; Rosenthal, David I; Fuller, Clifton David

    2015-10-01

    Digitally reconstructed radiographs (DRRs) are routinely used as an a priori reference for setup correction in radiotherapy. The spatial resolution of DRRs may be improved to reduce setup error in fractionated radiotherapy treatment protocols. The influence of finer CT slice thickness reconstruction (STR) and the resulting increased-resolution DRRs on physician setup accuracy was prospectively evaluated. Four head and neck patient CT-simulation images were acquired and used to create DRR cohorts by varying STRs at 0.5, 1, 2, 2.5, and 3 mm. DRRs were displaced relative to a fixed isocenter using 0-5 mm random shifts in the three cardinal axes. Physician observers reviewed DRRs of varying STRs and displacements and then aligned reference and test DRRs, replicating the daily kV imaging workflow. A total of 1,064 images were reviewed by four blinded physicians. Observer errors were analyzed using nonparametric statistics (Friedman's test) to determine whether STR cohorts had detectably different displacement profiles. Post hoc bootstrap resampling was applied to evaluate potential generalizability. The observer-based trial revealed a statistically significant difference between cohort means for observer displacement vector error and for single-axis error. Bootstrap analysis suggests a 15% gain in isocenter translational setup error with reduction of STR from 3 mm to ≤2 mm, though interobserver variance was a larger feature than STR-associated measurement variance. Higher resolution DRRs generated using finer CT scan STR resulted in improved observer performance at shift detection and could decrease operator-dependent geometric error. Ideally, CT STRs of ≤2 mm should be utilized for DRR generation in the head and neck.

  19. Error of the modelled peak flow of the hydraulically reconstructed 1907 flood of the Ebro River in Xerta (NE Iberian Peninsula)

    NASA Astrophysics Data System (ADS)

    Lluís Ruiz-Bellet, Josep; Castelltort, Xavier; Carles Balasch, J.; Tuset, Jordi

    2016-04-01

    The estimation of the uncertainty of hydraulic modelling results has been deeply analysed, but no clear methodological procedure for determining it has been formulated when applied to historical hydrology. The main objective of this study was to calculate the uncertainty of the resulting peak flow of a typical historical flood reconstruction. The secondary objective was to identify the input variables that influenced the result the most and their contribution to the total peak flow error. The uncertainty of the 21-23 October 1907 flood of the Ebro River (NE Iberian Peninsula) in the town of Xerta (catchment area 83,000 km²) was calculated with a series of local sensitivity analyses of the main variables affecting the resulting peak flow. Besides, in order to see to what degree the result depended on the chosen model, the HEC-RAS resulting peak flow was compared to the ones obtained with the 2D model Iber and with Manning's equation. The peak flow of the 1907 flood in the Ebro River in Xerta, reconstructed with HEC-RAS, was 11,500 m³·s⁻¹ and its total error was ±31%. The most influential input variable on the HEC-RAS peak flow was water height; however, the one that contributed most to the peak flow error was Manning's n, because its uncertainty was far greater than that of water height. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed. The peak flow was 12,000 m³·s⁻¹ when calculated with the 2D model Iber and 11,500 m³·s⁻¹ when calculated with Manning's equation.
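
    The Manning cross-check mentioned above is a one-line formula, Q = (1/n) A R^(2/3) S^(1/2) with R = A/P; the channel geometry and roughness values below are hypothetical, and the loop over n illustrates why its uncertainty dominates the error budget.

```python
def manning_peak_flow(n, area_m2, wetted_perimeter_m, slope):
    """Q = (1/n) * A * R^(2/3) * S^(1/2), with R = A / P (SI units)."""
    hydraulic_radius = area_m2 / wetted_perimeter_m
    return (1.0 / n) * area_m2 * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical section: 2500 m2 flow area, 350 m wetted perimeter, 0.0005 slope
for n in (0.030, 0.035, 0.040):   # Manning's n dominates the error budget
    print(n, round(manning_peak_flow(n, 2500.0, 350.0, 0.0005)), "m3/s")
```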

  20. WE-AB-207A-04: Random Undersampled Cone Beam CT: Theoretical Analysis and a Novel Reconstruction Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, C; Chen, L; Jia, X

    2016-06-15

    Purpose: Reducing x-ray exposure and speeding up data acquisition have motivated studies on projection data undersampling. For a given undersampling ratio, an important question is what the optimal undersampling approach is. In this study, we propose a new undersampling scheme: random-ray undersampling. We mathematically analyze the properties of its projection matrix and demonstrate its advantages. We also propose a new reconstruction method that simultaneously performs CT image reconstruction and projection-domain data restoration. Methods: By representing the projection operator under the basis of singular vectors of the full projection operator, matrix representations for an undersampling case can be generated and numerical singular value decomposition can be performed. We compared matrix properties among three undersampling approaches: regular-view undersampling, regular-ray undersampling, and the proposed random-ray undersampling. To accomplish CT reconstruction for random undersampling, we developed a novel method that iteratively performs CT reconstruction and missing projection data restoration via regularization approaches. Results: For a given undersampling ratio, random-ray undersampling preserved the mathematical properties of the full projection operator better than the other two approaches. This translates into the advantage of reconstructing CT images with lower errors. Different types of image artifacts were observed depending on the undersampling strategy, which were ascribed to the unique singular vectors of the sampling operators in the image domain. We tested the proposed reconstruction algorithm on a FORBILD phantom with only 30% of the projection data randomly acquired. The reconstructed image error was reduced from 9.4% with a TV method to 7.6% with the proposed method. Conclusion: The proposed random-ray undersampling is mathematically advantageous over other typical undersampling approaches. It may permit better image reconstruction at the same undersampling ratio. The novel algorithm suitable for random-ray undersampling was able to reconstruct high-quality images.
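
    The matrix-property argument can be probed with a small from-scratch experiment: build a crude projection matrix, keep 30% of its rows either regularly or at random, and compare singular-value decay. The pixel-driven projector and sizes below are illustrative, not the authors' operator; one would typically observe the random subset retaining a higher effective rank.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_views, n_rays = 16, 24, 23           # n*n image, n_rays detector bins per view

def projection_matrix():
    # crude pixel-driven projector: rotate pixel centers, bin onto the detector
    ys, xs = np.mgrid[0:n, 0:n]
    pts = np.stack([xs - n / 2 + 0.5, ys - n / 2 + 0.5])
    blocks = []
    for theta in np.linspace(0, np.pi, n_views, endpoint=False):
        s = np.cos(theta) * pts[0] + np.sin(theta) * pts[1]
        bins = np.clip(((s + n / 2) / n * n_rays).astype(int), 0, n_rays - 1)
        A = np.zeros((n_rays, n * n))
        A[bins.ravel(), np.arange(n * n)] = 1.0
        blocks.append(A)
    return np.vstack(blocks)

A = projection_matrix()
m = A.shape[0]
keep = int(0.3 * m)
regular = A[:: m // keep][:keep]                    # regular (every k-th ray) subset
random_ = A[rng.choice(m, keep, replace=False)]     # random-ray subset

for name, M in [("regular", regular), ("random", random_)]:
    sv = np.linalg.svd(M, compute_uv=False)
    print(name, "effective rank:", int((sv > 1e-8 * sv[0]).sum()))
```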

  1. Beam hardening correction in CT myocardial perfusion measurement

    NASA Astrophysics Data System (ADS)

    So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim

    2009-05-01

    This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and of the animals most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings of the correction algorithm, which depend upon the anatomy of the scanned subject, for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
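
    The correction pipeline above maps almost line by line onto code: threshold the image, forward-project the segmented image, apply a polynomial error model, back-project, and subtract a scaled error image. The threshold, polynomial coefficients, and scale factor below are toy assumptions; skimage's radon/iradon stands in for the scanner geometry.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)
angles = np.linspace(0.0, 180.0, 90, endpoint=False)

high = np.where(image > 0.5, image, 0.0)        # "bone/contrast" via a threshold
p_high = radon(high, theta=angles)              # forward-project the segmented image
bh_error = 1e-3 * p_high ** 2                   # polynomial BH error model (toy)
error_img = iradon(bh_error, theta=angles, filter_name=None)  # plain back-projection
corrected = image - 0.05 * error_img            # subtract a scaled error image
print("corrected image range:", float(corrected.min()), float(corrected.max()))
```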

  2. Guided genome halving: hardness, heuristics and the history of the Hemiascomycetes.

    PubMed

    Zheng, Chunfang; Zhu, Qian; Adam, Zaky; Sankoff, David

    2008-07-01

    Some present-day species have incurred a whole genome doubling event in their evolutionary history, and this is reflected today in patterns of duplicated segments scattered throughout their chromosomes. These duplications may be used as data to 'halve' the genome, i.e. to reconstruct the ancestral genome at the moment of doubling, but the solution is often highly nonunique. To resolve this problem, we take account of outgroups, external reference genomes, to guide and narrow down the search. We improve on a previous, computationally costly, 'brute force' method by adapting the genome halving algorithm of El-Mabrouk and Sankoff so that it rapidly and accurately constructs an ancestor close to the outgroups, prior to a local optimization heuristic. We apply this to reconstruct the predoubling ancestor of Saccharomyces cerevisiae and Candida glabrata, guided by the genomes of three other yeasts that diverged before the genome doubling event. We analyze the results in terms of (1) the minimum evolution criterion, (2) how close the genome halving result is to the final (local) minimum and (3) how close the final result is to an ancestor manually constructed by an expert with access to additional information. We also visualize the set of reconstructed ancestors using classic multidimensional scaling to see what aspects of the two doubled and three unduplicated genomes influence the differences among the reconstructions. The experimental software is available on request.

  3. A negentropy minimization approach to adaptive equalization for digital communication systems.

    PubMed

    Choi, Sooyong; Lee, Te-Won

    2004-07-01

    In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve upon a linear equalizer based on the minimum mean squared error (MMSE) criterion. Negentropy includes higher order statistical information and its minimization provides improved convergence, performance and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and an alternative solution, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the alternative solution has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
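
    The criterion above builds on nonpolynomial negentropy approximations of the Hyvärinen type, J(y) ≈ [E G(y) − E G(ν)]² with ν standard Gaussian; the sketch below assumes that generic form and an illustrative contrast G, and simply shows that a non-Gaussian error signal scores higher than a Gaussian one.

```python
import numpy as np

rng = np.random.default_rng(1)

def approx_negentropy(y, n_ref=200_000):
    y = (y - y.mean()) / y.std()             # negentropy is scale-invariant
    G = lambda u: -np.exp(-u ** 2 / 2.0)     # smooth nonpolynomial contrast
    nu = rng.standard_normal(n_ref)          # Gaussian reference ensemble
    return (G(y).mean() - G(nu).mean()) ** 2

gaussian_err = rng.standard_normal(50_000)
impulsive_err = rng.laplace(size=50_000)     # heavier tails, more structure
print(approx_negentropy(gaussian_err))       # ~0: Gaussian has zero negentropy
print(approx_negentropy(impulsive_err))      # larger: non-Gaussian error
```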

  4. Movement trajectory smoothness is not associated with the endpoint accuracy of rapid multi-joint arm movements in young and older adults

    PubMed Central

    Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George

    2013-01-01

    The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101

  5. Solar activity as driver for the Dark Age Grand Solar Minimum

    NASA Astrophysics Data System (ADS)

    Neuhäuser, Ralph; Neuhäuser, Dagmar

    2017-04-01

    We will discuss the role of solar activity in the temperature variability from AD 550 to 840, roughly the last three centuries of the Dark Ages. This time range includes the so-called Dark Age Grand Solar Minimum, whose deep part is dated to about AD 650 to 700, and which is seen in increased radiocarbon but decreased aurora observations (and a lack of naked-eye sunspot sightings). We present historical reports on aurorae from all human cultures with written records, including East Asia, the Near East (Arabia), and Europe. To classify such reports correctly, clear criteria are needed, which are also discussed. We compare our catalog of historical aurorae (and sunspots) as well as C-14 data, i.e. solar activity proxies, with temperature reconstructions (PAGES). After increased solar activity until around AD 600, we see a dearth of aurorae and increased radiocarbon production, in particular in the second half of the 7th century, i.e. a typical Grand Solar Minimum. Then, after about AD 690 (the maximum in radiocarbon, the end of the Dark Age Grand Minimum), we see increased auroral activity, decreasing radiocarbon, and increasing temperature until about AD 775. At around AD 775, we see the well-known strong C-14 variability (solar activity drop), then immediately another dearth of aurorae plus high C-14, indicating another solar activity minimum. This is consistent with a temperature depression from about AD 775 into the beginning of the 9th century. Very high solar activity is then seen in the first four decades of the 9th century, with four aurora clusters and three simultaneous sunspot clusters, and low C-14, again with increasing temperature. The period of increasing solar activity marks the end of the so-called Dark Ages: while auroral activity increased from about AD 793, temperature started to increase almost exactly at AD 800. We can reconstruct the Schwabe cycles from aurorae and C-14 data. In summary, we see a clear correspondence between the variability of solar activity proxies and surface temperature reconstructions, indicating that solar activity is an important climate driver.

  6. An investigation of reports of Controlled Flight Toward Terrain (CFTT)

    NASA Technical Reports Server (NTRS)

    Porter, R. F.; Loomis, J. P.

    1981-01-01

    Some 258 reports from more than 23,000 documents in the files of the Aviation Safety Reporting System (ASRS) were found to relate to the hazard of flight toward terrain with no prior awareness by the crew of impending disaster. Examination of the reports indicates that human error was a causal factor in 64% of the incidents in which some threat of terrain conflict was experienced. Approximately two-thirds of the human errors were attributed to controllers, the most common discrepancy being a radar vector below the Minimum Vector Altitude (MVA). Errors by pilots were of a more diverse nature and included a few instances of gross deviations from assigned altitudes. The ground proximity warning system and the minimum safe altitude warning equipment were the initial recovery factor in some 18 serious incidents and were apparently the sole warning in six reported instances which otherwise would most probably have ended in disaster.

  7. Minimum variance rooting of phylogenetic trees and implications for species tree reconstruction.

    PubMed

    Mai, Uyen; Sayyari, Erfan; Mirarab, Siavash

    2017-01-01

    Phylogenetic trees inferred using commonly-used models of sequence evolution are unrooted, but the root position matters both for interpretation and downstream applications. This issue has been long recognized; however, whether the potential for discordance between the species tree and gene trees impacts methods of rooting a phylogenetic tree has not been extensively studied. In this paper, we introduce a new method of rooting a tree based on its branch length distribution; our method, which minimizes the variance of root to tip distances, is inspired by the traditional midpoint rerooting and is justified when deviations from the strict molecular clock are random. Like midpoint rerooting, the method can be implemented in a linear time algorithm. In extensive simulations that consider discordance between gene trees and the species tree, we show that the new method is more accurate than midpoint rerooting, but its relative accuracy compared to using outgroups to root gene trees depends on the size of the dataset and levels of deviations from the strict clock. We show high levels of error for all methods of rooting estimated gene trees due to factors that include effects of gene tree discordance, deviations from the clock, and gene tree estimation error. Our simulations, however, did not reveal significant differences between two equivalent methods for species tree estimation that use rooted and unrooted input, namely, STAR and NJst. Nevertheless, our results point to limitations of existing scalable rooting methods.
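
    The criterion is simple to state in code: try candidate root positions and keep the one minimizing the variance of root-to-tip distances. The brute-force version below roots only at the nodes of a small invented tree and is quadratic time, whereas the paper's algorithm searches along branches in linear time.

```python
import numpy as np

# toy unrooted tree as an adjacency map {node: {neighbor: branch_length}}
tree = {
    "A": {"u": 1.0}, "B": {"u": 1.2}, "C": {"v": 3.0}, "D": {"v": 0.9},
    "u": {"A": 1.0, "B": 1.2, "v": 2.0}, "v": {"u": 2.0, "C": 3.0, "D": 0.9},
}
leaves = [node for node in tree if len(tree[node]) == 1]

def tip_distances(root):
    # depth-first traversal accumulating path lengths from the root
    out, stack = {root: 0.0}, [root]
    while stack:
        x = stack.pop()
        for y, w in tree[x].items():
            if y not in out:
                out[y] = out[x] + w
                stack.append(y)
    return [out[leaf] for leaf in leaves]

best = min(tree, key=lambda node: np.var(tip_distances(node)))
print("best root:", best, "tip-distance variance:", np.var(tip_distances(best)))
```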

  9. Evaluation of an active magnetic resonance tracking system for interstitial brachytherapy.

    PubMed

    Wang, Wei; Viswanathan, Akila N; Damato, Antonio L; Chen, Yue; Tse, Zion; Pan, Li; Tokuda, Junichi; Seethamraju, Ravi T; Dumoulin, Charles L; Schmidt, Ehud J; Cormack, Robert A

    2015-12-01

    In gynecologic cancers, magnetic resonance (MR) imaging is the modality of choice for visualizing tumors and their surroundings because of superior soft-tissue contrast. Real-time MR guidance of catheter placement in interstitial brachytherapy facilitates target coverage, and would be further improved by providing intraprocedural estimates of dosimetric coverage. A major obstacle to intraprocedural dosimetry is the time needed for catheter trajectory reconstruction. Herein the authors evaluate an active MR tracking (MRTR) system which provides rapid catheter tip localization and trajectory reconstruction. The authors assess the reliability and spatial accuracy of the MRTR system in comparison to standard catheter digitization using magnetic resonance imaging (MRI) and CT. The MRTR system includes a stylet with microcoils mounted on its shaft, which can be inserted into brachytherapy catheters and tracked by a dedicated MRTR sequence. Catheter tip localization errors of the MRTR system and their dependence on catheter locations and orientation inside the MR scanner were quantified with a water phantom. The distances between the tracked tip positions of the MRTR stylet and the predefined ground-truth tip positions were calculated for measurements performed at seven locations and with nine orientations. To evaluate catheter trajectory reconstruction, fifteen brachytherapy catheters were placed into a gel phantom with an embedded catheter fixation framework, with parallel or crossed paths. The MRTR stylet was then inserted sequentially into each catheter. During the removal of the MRTR stylet from within each catheter, a MRTR measurement was performed at 40 Hz to acquire the instantaneous stylet tip position, resulting in a series of three-dimensional (3D) positions along the catheter's trajectory. A 3D polynomial curve was fit to the tracked positions for each catheter, and equally spaced dwell points were then generated along the curve. High-resolution 3D MRI of the phantom was performed followed by catheter digitization based on the catheter's imaging artifacts. The catheter trajectory error was characterized in terms of the mean distance between corresponding dwell points in MRTR-generated catheter trajectory and MRI-based catheter digitization. The MRTR-based catheter trajectory reconstruction process was also performed on three gynecologic cancer patients, and then compared with catheter digitization based on MRI and CT. The catheter tip localization error increased as the MRTR stylet moved further off-center and as the stylet's orientation deviated from the main magnetic field direction. Fifteen catheters' trajectories were reconstructed by MRTR. Compared with MRI-based digitization, the mean 3D error of MRTR-generated trajectories was 1.5 ± 0.5 mm with an in-plane error of 0.7 ± 0.2 mm and a tip error of 1.7 ± 0.5 mm. MRTR resolved ambiguity in catheter assignment due to crossed catheter paths, which is a common problem in image-based catheter digitization. In the patient studies, the MRTR-generated catheter trajectory was consistent with digitization based on both MRI and CT. The MRTR system provides accurate catheter tip localization and trajectory reconstruction in the MR environment. Relative to the image-based methods, it improves the speed, safety, and reliability of the catheter trajectory reconstruction in interstitial brachytherapy. MRTR may enable in-procedural dosimetric evaluation of implant target coverage.
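
    The trajectory-reconstruction step lends itself to a short sketch: parameterize tracked tip positions by cumulative chord length, fit one polynomial per coordinate, and resample equally spaced dwell points. The synthetic 40 Hz track, polynomial degree, and 5 mm spacing are assumptions for illustration.

```python
import numpy as np

tracked = np.column_stack([                   # synthetic tracked tip positions (mm)
    np.linspace(0, 60, 120),                  # x along the catheter
    5 * np.sin(np.linspace(0, 2.0, 120)),     # gentle curvature in y
    0.1 * np.linspace(0, 60, 120),            # slight tilt in z
]) + np.random.default_rng(2).normal(0, 0.3, (120, 3))   # tracking noise

# parameterize by cumulative chord length; fit one polynomial per coordinate
t = np.concatenate([[0.0],
                    np.cumsum(np.linalg.norm(np.diff(tracked, axis=0), axis=1))])
coeffs = [np.polyfit(t, tracked[:, k], deg=3) for k in range(3)]
curve = lambda s: np.column_stack([np.polyval(c, s) for c in coeffs])

# resample dwell points every 5 mm of (approximate) arc length
fine = curve(np.linspace(t[0], t[-1], 2000))
arc = np.concatenate([[0.0],
                      np.cumsum(np.linalg.norm(np.diff(fine, axis=0), axis=1))])
dwell = fine[np.searchsorted(arc, np.arange(0.0, arc[-1], 5.0))]
print(dwell.shape, "dwell points at ~5 mm spacing")
```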

  12. Climate-related relative sea-level changes from Chesapeake Bay, U.S. Atlantic coast

    NASA Astrophysics Data System (ADS)

    Shaw, Timothy; Horton, Benjamin; Kemp, Andrew; Cahill, Niamh; Mann, Michael; Engelhart, Simon; Kopp, Robert; Brain, Matthew; Clear, Jennifer; Corbett, Reide; Nikitina, Daria; Garcia-Artola, Ane; Walker, Jennifer

    2017-04-01

    Proxy-based reconstructions of relative sea level (RSL) from the coastlines of the North Atlantic have revealed spatial and temporal variability in the rates of RSL rise during periods of known Late-Holocene climatic variability. Regional driving mechanisms for such variability include glacial isostatic adjustment, static-equilibrium effects of land-ice changes and/or ocean dynamic effects, as well as more localized factors (e.g. sediment compaction and tidal range change). We present a 4000-year RSL reconstruction from salt-marsh sediments of the Chesapeake Bay using a foraminiferal-based transfer function and a composite chronology. A local contemporary training set of foraminifera was developed to calibrate fossil counterparts and provide estimates of paleo marsh elevation with vertical uncertainties of ±0.06 m. A composite chronology combining 30 radiocarbon dates, pollen chronohorizons, regional pollution histories, and short-lived radionuclides was placed into a Bayesian age-depth framework, yielding low temporal uncertainties averaging 40 years. A compression-only geotechnical model was applied to decompact the RSL record. We coupled the proxy reconstruction with direct observations from nearby tide gauge records before rates of RSL rise were quantified through application of an Errors-In-Variables Integrated Gaussian Process model. The RSL history for Chesapeake Bay shows 6 m of rise since 2000 BCE. Between 2000 BCE and 1300 BCE, rates of RSL rise increased to 1.4 mm/yr before a significant decrease to 0.8 mm/yr at 700 BCE. This minimum coincides with widespread climate cooling identified in multiple paleoclimate archives of the North Atlantic. An increase in the rate of RSL rise to 2.1 mm/yr at 200 CE similarly precedes a decrease in the rate of RSL rise at 1450 CE (1.3 mm/yr) that coincides with the Little Ice Age. Modern rates of RSL rise (3.6 mm/yr) are the fastest observed in the past 4000 years. The temporal length and decadal resolution of the RSL reconstruction further reconcile the response of sea levels to late Holocene climate variability.

  13. Reconstruction of primordial tensor power spectra from B -mode polarization of the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Takashi; Komatsu, Eiichiro; Hazumi, Masashi; Sasaki, Misao

    2018-06-01

    Given observations of the B-mode polarization power spectrum of the cosmic microwave background (CMB), we can reconstruct power spectra of primordial tensor modes from the early Universe without assuming their functional form, such as a power-law spectrum. The shape of the reconstructed spectra can then be used to probe the origin of tensor modes in a model-independent manner. We use the Fisher matrix to calculate the covariance matrix of tensor power spectra reconstructed in bins. We find that the power spectra are best reconstructed at wave numbers in the vicinity of k ≈ 6×10⁻⁴ and 5×10⁻³ Mpc⁻¹, which correspond to the "reionization bump" at ℓ ≲ 6 and the "recombination bump" at ℓ ≈ 80 of the CMB B-mode power spectrum, respectively. The error bar between these two wave numbers is larger because of the lack of signal between the reionization and recombination bumps. The error bars increase sharply toward smaller (larger) wave numbers because of the cosmic variance (CMB lensing and instrumental noise). To demonstrate the utility of the reconstructed power spectra, we investigate whether we can distinguish between various sources of tensor modes, including those from the vacuum metric fluctuation and SU(2) gauge fields during single-field slow-roll inflation, open inflation, and massive gravity inflation. The results depend on the model parameters, but we find that future CMB experiments are sensitive to differences in these models. We make our calculation tool available online.

  14. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data are excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, the CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  15. Penalized maximum likelihood simultaneous longitudinal PET image reconstruction with difference-image priors.

    PubMed

    Ellis, Sam; Reader, Andrew J

    2018-04-26

    Many clinical contexts require the acquisition of multiple positron emission tomography (PET) scans of a single subject, for example, to observe and quantitate changes in functional behaviour in tumors after treatment in oncology. Typically, the datasets from each of these scans are reconstructed individually, without exploiting the similarities between them. We have recently shown that sharing information between longitudinal PET datasets by penalizing voxel-wise differences during image reconstruction can improve reconstructed images by reducing background noise and increasing the contrast-to-noise ratio of high-activity lesions. Here, we present two additional novel longitudinal difference-image priors and evaluate their performance using two-dimensional (2D) simulation studies and a three-dimensional (3D) real dataset case study. We have previously proposed a simultaneous difference-image-based penalized maximum likelihood (PML) longitudinal image reconstruction method that encourages sparse difference images (DS-PML), and in this work we propose two further novel prior terms. The priors are designed to encourage longitudinal images with corresponding differences which have (a) low entropy (DE-PML), and (b) high sparsity in their spatial gradients (DTV-PML). These two new priors and the originally proposed longitudinal prior were applied to 2D-simulated treatment response [18F]fluorodeoxyglucose (FDG) brain tumor datasets and compared to standard maximum likelihood expectation-maximization (MLEM) reconstructions. These 2D simulation studies explored the effects of penalty strengths, tumor behaviour, and interscan coupling on reconstructed images. Finally, a real two-scan longitudinal data series acquired from a head and neck cancer patient was reconstructed with the proposed methods and the results compared to standard reconstruction methods. Using any of the three priors with an appropriate penalty strength produced images with noise levels equivalent to those seen when using standard reconstructions with increased count levels. In tumor regions, each method produces subtly different results in terms of preservation of tumor quantitation and reconstruction root mean-squared error (RMSE). In particular, in the two-scan simulations, the DE-PML method produced tumor means in close agreement with MLEM reconstructions, while the DTV-PML method produced the lowest errors due to noise reduction within the tumor. Across a range of tumor responses and different numbers of scans, similar results were observed, with DTV-PML producing the lowest errors of the three priors and DE-PML producing the lowest bias. Similar improvements were observed in the reconstructions of the real longitudinal datasets, although imperfect alignment of the two PET images resulted in additional changes in the difference image that affected the performance of the proposed methods. Reconstruction of longitudinal datasets by penalizing difference images between pairs of scans from a data series allows for noise reduction in all reconstructed images. An appropriate choice of penalty term and penalty strength allows for this noise reduction to be achieved while maintaining reconstruction performance in regions of change, either in terms of quantitation of mean intensity via DE-PML, or in terms of tumor RMSE via DTV-PML.
Overall, improving the image quality of longitudinal datasets via simultaneous reconstruction has the potential to improve upon currently used methods, allow dose reduction, or reduce scan time while maintaining image quality at current levels.
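
    The three difference-image penalties can be written as standalone functions of a pair of longitudinal images; the L1, histogram-entropy, and total-variation forms below are common instantiations and may differ in detail from the paper's exact terms and weights.

```python
import numpy as np

def ds_penalty(img1, img2):
    """DS-PML-style: encourage sparse difference images (L1 norm)."""
    return np.abs(img1 - img2).sum()

def de_penalty(img1, img2, bins=64):
    """DE-PML-style: encourage low-entropy difference images."""
    hist, _ = np.histogram(img1 - img2, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log(p)).sum()

def dtv_penalty(img1, img2):
    """DTV-PML-style: encourage sparse spatial gradients of the difference."""
    gy, gx = np.gradient(img1 - img2)
    return np.sqrt(gx ** 2 + gy ** 2).sum()

rng = np.random.default_rng(3)
base = rng.random((64, 64))
follow_up = base.copy()
follow_up[20:30, 20:30] += 0.5          # a focal "treatment response" region
for penalty in (ds_penalty, de_penalty, dtv_penalty):
    print(penalty.__name__, penalty(base, follow_up))
```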

  16. Tunable output-frequency filter algorithm for imaging through scattering media under LED illumination

    NASA Astrophysics Data System (ADS)

    Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli

    2018-03-01

    We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iterative process to reduce corruption from experimental noise and to steer the search toward a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, spatial resolution and distinctive features are retained in the reconstruction since the filter is applied only to the region outside the object. The feasibility of the proposed method is demonstrated by experimental results.
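
    A minimal sketch of the core idea, a stage-dependent Gaussian frequency filter applied outside the object support during iterative retrieval (the names and the filter schedule are illustrative assumptions, not the authors' code):

        import numpy as np

        def tof_step(field, support, sigma):
            """Apply a Gaussian low-pass filter of width sigma (frequency-space
            pixels) and keep the filtered values only outside the object
            support; the modulus constraint of the retrieval loop is assumed
            to be enforced elsewhere."""
            F = np.fft.fftshift(np.fft.fft2(field))
            ky, kx = np.indices(F.shape) - np.array(F.shape)[:, None, None] // 2
            F *= np.exp(-(kx**2 + ky**2) / (2.0 * sigma**2))
            filtered = np.fft.ifft2(np.fft.ifftshift(F))
            return np.where(support, field, filtered)

        # strong smoothing early, relaxed as the iterations proceed
        sigmas = np.linspace(5.0, 50.0, num=200)   # hypothetical schedule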

  17. ["Long-branch Attraction" artifact in phylogenetic reconstruction].

    PubMed

    Li, Yi-Wei; Yu, Li; Zhang, Ya-Ping

    2007-06-01

    Phylogenetic reconstruction among various organisms not only helps us understand their evolutionary history but also bears on several fundamental evolutionary questions. Understanding the evolutionary relationships among organisms establishes the foundation for investigations in other biological disciplines. However, almost all widely used phylogenetic methods have limitations and fail to eliminate systematic errors effectively, which can prevent reconstruction of the true organismal relationships. The "Long-branch Attraction" (LBA) artifact is one of the most disturbing factors in phylogenetic reconstruction. In this review, the concept of LBA, methods for detecting it, and strategies for avoiding it are summarized, and several typical examples are provided. Approaches to avoiding and resolving the LBA artifact are discussed.

  18. High-speed reconstruction of compressed images

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system, thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described, showing that error-free reconstruction of the original 10-bit CR images can be achieved.

  19. Recording 360 Degree Holograms in the Undergraduate Laboratory

    ERIC Educational Resources Information Center

    Stirn, Bradley A.

    1975-01-01

    Describes an experiment for recording holograms using a minimum of costly apparatus. Includes a description of apparatus and materials, the procedure for recording the hologram, the processing of the hologram, and the reconstruction of the image. (GS)

  20. Weighted minimum-norm source estimation of magnetoencephalography utilizing the temporal information of the measured data

    NASA Astrophysics Data System (ADS)

    Iwaki, Sunao; Ueno, Shoogo

    1998-06-01

    The weighted minimum-norm estimation (wMNE) is a popular method to obtain the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique, with the weighting factors defined by a simplified multiple signal classification (MUSIC) prescanning. In this method, in addition to the conventional depth normalization technique, the weighting factors of the wMNE were determined by cost values previously calculated by a simplified MUSIC scan, which incorporates the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of current distributions from noisy data.
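
    For reference, the weighted minimum-norm estimate itself has a closed form; the sketch below uses a standard Tikhonov-regularized formulation (the MUSIC-derived weights w are assumed given, and the variable names are illustrative):

        import numpy as np

        def weighted_mne(L, b, w, lam=1e-2):
            """Weighted minimum-norm source estimate.

            L : (n_sensors, n_sources) lead-field matrix
            b : (n_sensors,) measured field
            w : (n_sources,) penalty weights (e.g. depth norm times MUSIC cost)
            Minimizes ||b - L j||^2 + lam * sum_i w_i * j_i**2.
            """
            Winv = np.diag(1.0 / w)     # inverse of the weighting matrix
            G = L @ Winv @ L.T + lam * np.eye(L.shape[0])
            return Winv @ L.T @ np.linalg.solve(G, b)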

  1. DART: a practical reconstruction algorithm for discrete tomography.

    PubMed

    Batenburg, Kees Joost; Sijbers, Jan

    2011-09-01

    In this paper, we present an iterative reconstruction algorithm for discrete tomography, called discrete algebraic reconstruction technique (DART). DART can be applied if the scanned object is known to consist of only a few different compositions, each corresponding to a constant gray value in the reconstruction. Prior knowledge of the gray values for each of the compositions is exploited to steer the current reconstruction towards a reconstruction that contains only these gray values. Based on experiments with both simulated CT data and experimental μCT data, it is shown that DART is capable of computing more accurate reconstructions from a small number of projection images, or from a small angular range, than alternative methods. It is also shown that DART can deal effectively with noisy projection data and that the algorithm is robust with respect to errors in the estimation of the gray values.
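
    The DART iteration alternates segmentation with algebraic updates restricted to the boundary pixels. A schematic numpy version (the ART/SIRT update routine art_update is a hypothetical placeholder, kept abstract for brevity):

        import numpy as np

        def dart_step(recon, gray_values, art_update):
            """One simplified DART iteration on a 2D reconstruction."""
            # 1. segment: snap each pixel to the nearest known gray value
            g = np.asarray(gray_values, dtype=float)
            seg = g[np.abs(recon[..., None] - g).argmin(axis=-1)]
            # 2. free pixels = boundary pixels of the segmentation
            h, w = seg.shape
            pad = np.pad(seg, 1, mode='edge')
            free = np.zeros((h, w), dtype=bool)
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                free |= pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] != seg
            # 3. fix the interior at its segmented values, then run a few
            #    ART iterations that update only the free (boundary) pixels
            recon = np.where(free, recon, seg)
            return art_update(recon, free)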

  2. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

    The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients, where the modeling of coefficients is enabled by Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.

  3. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.

  4. Continuous slope-area discharge records in Maricopa County, Arizona, 2004–2012

    USGS Publications Warehouse

    Wiele, Stephen M.; Heaton, John W.; Bunch, Claire E.; Gardner, David E.; Smith, Christopher F.

    2015-12-29

    Analyses of error sources, and of the impact that errors in stage data have on calculated discharge time series, are considered, along with issues in data reduction. Steeper, longer stream reaches are generally less sensitive to measurement error. Other issues considered are pressure transducer drawdown, capture of flood peaks with discrete stage data, selection of the stage record for development of rating curves, and minimum stages for the calculation of discharge.

  5. Errors due to the truncation of the computational domain in static three-dimensional electrical impedance tomography.

    PubMed

    Vauhkonen, P J; Vauhkonen, M; Kaipio, J P

    2000-02-01

    In electrical impedance tomography (EIT), an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. The currents spread out in three dimensions and therefore off-plane structures have a significant effect on the reconstructed images. A question arises: how far from the current carrying electrodes should the discretized model of the object be extended? If the model is truncated too near the electrodes, errors are produced in the reconstructed images. On the other hand if the model is extended very far from the electrodes the computational time may become too long in practice. In this paper the model truncation problem is studied with the extended finite element method. Forward solutions obtained using so-called infinite elements, long finite elements and separable long finite elements are compared to the correct solution. The effects of the truncation of the computational domain on the reconstructed images are also discussed and results from the three-dimensional (3D) sensitivity analysis are given. We show that if the finite element method with ordinary elements is used in static 3D EIT, the dimension of the problem can become fairly large if the errors associated with the domain truncation are to be avoided.

  6. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.

    PubMed

    López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost function in terms of the variational Free energy, an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. © 2013. Published by Elsevier Inc. All rights reserved.

  7. Kernel temporal enhancement approach for LORETA source reconstruction using EEG data.

    PubMed

    Torres-Valencia, Cristian A; Santamaria, M Claudia Joana; Alvarez, Mauricio A

    2016-08-01

    Reconstruction of brain sources from magnetoencephalography and electroencephalography (M/EEG) data is a well-known problem in the neuroengineering field. An inverse problem must be solved, and several methods have been proposed. Low Resolution Electromagnetic Tomography (LORETA) and its variants, standardized LORETA (sLORETA) and standardized weighted LORETA (swLORETA), solve the inverse problem following a non-parametric approach, that is, by setting dipoles over the whole brain domain and estimating the dipole positions from the M/EEG data under some spatial priors. Errors in the reconstruction of sources arise from the low spatial resolution of the LORETA framework and the influence of noise in the observed data. In this work, a kernel temporal enhancement (kTE) is proposed as a preprocessing stage that, in combination with the swLORETA method, improves source reconstruction. The results are quantified in terms of three dipole error localization metrics, and the swLORETA + kTE strategy obtained the best results across different signal-to-noise ratios (SNR) in simulations with random dipoles and synthetic EEG data.

  8. Monte Carlo simulation of errors in the anisotropy of magnetic susceptibility - A second-rank symmetric tensor. [for grains in sedimentary and volcanic rocks

    NASA Technical Reports Server (NTRS)

    Lienert, Barry R.

    1991-01-01

    Monte Carlo perturbations of synthetic tensors are used to evaluate the Hext/Jelinek elliptical confidence regions for anisotropy of magnetic susceptibility (AMS) eigenvectors. When the perturbations are 33 percent of the minimum anisotropy, both the shapes and probability densities of the resulting eigenvector distributions agree with the elliptical distributions predicted by the Hext/Jelinek equations. When the perturbation size is increased to 100 percent of the minimum eigenvalue difference, the major axis of the 95 percent confidence ellipse underestimates the observed eigenvector dispersion by about 10 deg. The observed distributions of the principal susceptibilities (eigenvalues) are close to being normal, with standard errors that agree well with the calculated Hext/Jelinek errors. The Hext/Jelinek ellipses are also able to describe the AMS dispersions due to instrumental noise and provide reasonable limits for the AMS dispersions observed in two Hawaiian basaltic dikes. It is concluded that the Hext/Jelinek method provides a satisfactory description of the errors in AMS data and should be a standard part of any AMS data analysis.
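
    The perturbation experiment is easy to reproduce in outline; the sketch below perturbs a synthetic symmetric tensor and collects the dispersion of the maximum-susceptibility eigenvector (the values are illustrative, not those of the study):

        import numpy as np

        rng = np.random.default_rng(0)
        T = np.diag([1.05, 1.00, 0.95])      # synthetic AMS tensor
        scale = 0.33 * (1.00 - 0.95)         # 33% of the minimum anisotropy

        angles = []
        for _ in range(5000):
            E = rng.normal(0.0, scale, (3, 3))
            E = (E + E.T) / 2.0              # symmetric random perturbation
            w, V = np.linalg.eigh(T + E)
            v = V[:, -1]                     # maximum-susceptibility axis
            angles.append(np.degrees(np.arccos(min(abs(v[0]), 1.0))))
        print(np.percentile(angles, 95))     # compare to the Hext/Jelinek ellipse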

  9. A constrained-gradient method to control divergence errors in numerical MHD

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-10-01

    In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the ∇·B errors; however they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems, we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.

  10. SINOMA - A new iterative statistical approach for the identification of linear relationships between noisy time series

    NASA Astrophysics Data System (ADS)

    Thees, Barnim; Buras, Allan; Jetschke, Gottfried; Kutzbach, Lars; Zorita, Eduardo; Wilmking, Martin

    2014-05-01

    In paleoclimatology, reconstructions of environmental conditions play a significant role. Such reconstructions rely on the relationship between proxies (e.g. tree-rings, lake sediments) and the processes that are to be reconstructed (e.g. temperature, precipitation, solar activity). However, both of these variable types are in general noisy. For instance, ring width is only a proxy for tree growth and is further determined by several other environmental signals (e.g. precipitation, length of growing season, competition). On the other hand, records of the process data that are to be reconstructed are mostly available only for periods too short for calibration at the particular site at which the proxy data have been sampled. The resulting 'spatial' noise (e.g. from using climate station data not recorded at the proxy site) causes additional errors in the relationship between measured proxy data and available process data (e.g. Kutzbach et al., 2011). Thees et al. (2009) and Kutzbach et al. (2011) showed, among other things, that when models are derived from such noisy data, model slopes (the factor by which one variable is multiplied to predict the other) are in most cases misestimated, depending on the ratio of the variances of the respective variable noises. Despite these facts, many recent reconstructions are based on ordinary least squares regression, which underestimates model slopes because it does not account for noise in the predictor variable (Kutzbach et al., 2011). This is partly because few methodological approaches are as yet available for modeling noisy data, and those that exist require additional information (e.g. a good estimate of the noise variance ratio) that is often impossible to acquire. Here we introduce the Sequential Iterative NOise Matching Algorithm (SINOMA), with which we are able to derive good estimates of model slopes between noisy time series. The mathematical background of SINOMA is described, accompanied by a successful application to a pseudo-proxy dataset for which the noise conditions and true model parameters are known. Further examples of its application are presented in another contribution to this EGU session (Buras et al., 2014), which addresses SINOMA's range of applicability rather than its theoretical background, the focus of the present contribution. Given that published paleoclimatological reconstructions rely mostly on ordinary least squares regression, and given the generally noisy character of process and proxy data, SINOMA has the potential to change our understanding of past climate variability: the magnitude of amplitudes in reconstructed climate parameters may change significantly as soon as comparably precise slope estimates (as acquired by SINOMA) are used for reconstructions. SINOMA therefore has the potential to reframe our picture of the past. References: Kutzbach, L., Thees, B., and Wilmking, M., 2011: Identification of linear relationships from noisy data using errors-in-variables models: relevance for reconstruction of past climate from tree-ring and other proxy information. Climatic Change 105, 155-177. Thees, B., Kutzbach, L., Wilmking, M., and Zorita, E., 2009: Ein Bewertungsmaß für die amplitudentreue regressive Abbildung von verrauschten Daten im Rahmen einer iterativen "Errors in Variables" Modellierung (EVM). GKSS Reports 2009/8, GKSS-Forschungszentrum Geesthacht GmbH, Geesthacht, Germany, 20 pp (in German). Buras, A., Thees, B., Czymzik, M., Dräger, N., Kienel, U., Neugebauer, I., Ott, F., Scharnweber, T., Simard, S., Slowinski, M., Slowinski, S., Zawiska, I., and Wilmking, M., 2014: SINOMA - a better tool for proxy-based reconstructions? Abstract submitted to EGU session CL6.1.
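
    The slope attenuation that motivates SINOMA is easy to demonstrate numerically. The sketch below compares ordinary least squares on a noisy predictor with the classical Deming errors-in-variables estimator (shown here as a stand-in; SINOMA itself estimates the noise conditions iteratively rather than assuming a known noise variance ratio):

        import numpy as np

        rng = np.random.default_rng(1)
        n, true_slope = 500, 2.0
        x = rng.normal(0.0, 1.0, n)                    # true process (unobserved)
        y = true_slope * x + rng.normal(0.0, 1.0, n)   # noisy proxy record
        x_obs = x + rng.normal(0.0, 1.0, n)            # noisy process record

        # OLS on the noisy predictor: the slope is attenuated toward zero
        b_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

        # Deming estimator: lam = var(noise in y) / var(noise in x), assumed known
        lam = 1.0
        sxx, syy = np.var(x_obs, ddof=1), np.var(y, ddof=1)
        sxy = np.cov(x_obs, y)[0, 1]
        b_dem = ((syy - lam * sxx) +
                 np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)

        print(b_ols, b_dem)   # roughly 1.0 vs 2.0 for these settings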

  11. 3D equilibrium reconstruction with islands

    DOE PAGES

    Cianciosa, M.; Hirshman, S. P.; Seal, S. K.; ...

    2018-02-15

    This study presents the development of a 3D equilibrium reconstruction tool and the results of the first-ever reconstruction of an island equilibrium. The SIESTA non-nested equilibrium solver has been coupled to the V3FIT 3D equilibrium reconstruction code. Computed from a coupled VMEC and SIESTA model, synthetic signals are matched to measured signals by finding an optimal set of equilibrium parameters. By using the normalized pressure in place of normalized flux, non-equilibrium quantities needed by diagnostic signals can be efficiently mapped to the equilibrium. The effectiveness of this tool is demonstrated by reconstructing an island equilibrium of a DIII-D inner wall limited L-mode case with an n = 1 error field applied. Finally, flat spots in Thomson and ECE temperature diagnostics show the reconstructed islands have the correct size and phase.

  13. A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei

    2018-01-01

    Image signals acquired by a wireless visual sensor network can be used for specific event capture, realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement and joint reconstruction algorithm for these image signals is proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used for the evaluation of reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compression rate than the independent reconstruction algorithm.

  14. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    PubMed Central

    Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi

    2015-01-01

    Reliable data transmission over lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmissions over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal will be reconstructed from the lossy transmission results using the CS-based reconstruction method at the receiving end. The impacts of packet lengths on transmission efficiency under different channel conditions have been discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
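
    The key idea, treating packet loss as a random sampling operator and recovering the signal by sparse reconstruction, can be sketched in a few lines. The solver below is a generic iterative soft-thresholding (ISTA) routine, not necessarily the reconstruction method used in the paper:

        import numpy as np
        from scipy.fft import dct, idct

        rng = np.random.default_rng(2)
        n = 256
        c_true = np.zeros(n)
        c_true[rng.choice(n, 8, replace=False)] = rng.normal(0, 1, 8)
        x = idct(c_true, norm='ortho')        # signal sparse in the DCT domain

        keep = rng.random(n) > 0.5            # ~50% packet loss = random sampling
        y = x[keep]

        # ISTA for min 0.5*||y - idct(c)[keep]||^2 + lam*||c||_1
        lam, c = 1e-3, np.zeros(n)
        for _ in range(500):
            r = np.zeros(n)
            r[keep] = idct(c, norm='ortho')[keep] - y
            c -= dct(r, norm='ortho')         # unit step: the operator norm is <= 1
            c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)   # soft threshold

        print(np.abs(idct(c, norm='ortho') - x).max())   # small residual error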

  15. Accuracy and Precision of a Surgical Navigation System: Effect of Camera and Patient Tracker Position and Number of Active Markers.

    PubMed

    Gundle, Kenneth R; White, Jedediah K; Conrad, Ernest U; Ching, Randal P

    2017-01-01

    Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247cm), 2) the distance from the grid to the patient tracker device (range 20 to 40cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance from each measured point to the mean three-dimensional coordinate of the six points for each cluster. Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (p=0.36) or precision (p=0.97). In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system.

  16. Anatomical frame identification and reconstruction for repeatable lower limb joint kinematics estimates.

    PubMed

    Donati, Marco; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio

    2008-07-19

    The quantitative description of joint mechanics during movement requires the reconstruction of the position and orientation of selected anatomical axes with respect to a laboratory reference frame. These anatomical axes are identified through an ad hoc anatomical calibration procedure, and their position and orientation are reconstructed relative to bone-embedded frames normally derived from photogrammetric marker positions and used to describe movement. The repeatability of anatomical calibration, both within and between subjects, is crucial for kinematic and kinetic end results. This paper illustrates an anatomical calibration approach which does not require manual palpation of anatomical landmarks, described in the literature to be prone to great indeterminacy. This approach allows for the estimate of subject-specific bone morphology and automatic anatomical frame identification. The experimental procedure consists of digitization through photogrammetry of superficial points selected over the areas of the bone covered with a thin layer of soft tissue. Information concerning the location of internal anatomical landmarks, such as a joint center obtained using a functional approach, may also be added. The data thus acquired are matched with the digital model of a deformable template bone. Consequently, the repeatability of pelvis, knee and hip joint angles is determined. Five volunteers, each of whom performed five walking trials, and six operators, with no specific knowledge of anatomy, participated in the study. Descriptive statistics analysis was performed during upright posture, showing a limited dispersion of all angles (less than 3 deg) except for hip and knee internal-external rotation (6 deg and 9 deg, respectively). During level walking, the ratio of inter-operator to inter-trial error and an absolute subject-specific repeatability were assessed. For pelvic and hip angles, and knee flexion-extension, the inter-operator error was equal to the inter-trial error, with the absolute error ranging from 0.1 deg to 0.9 deg. Knee internal-external rotation and ab-adduction showed, on average, inter-operator errors which were 8% and 28% greater than the relevant inter-trial errors, respectively. The absolute error was in the range 0.9-2.9 deg.

  17. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

    In real-world applications, images of faces vary with illumination, facial expression, and pose. More training samples can reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generated the mirror faces from the original training samples and put both kinds of samples into a new training set. The face recognition experiments show that our method obtains high classification accuracy.

  18. Clinical implementation of an exit detector-based dose reconstruction tool for helical tomotherapy delivery quality assurance.

    PubMed

    Deshpande, Shrikant; Xing, Aitang; Metcalfe, Peter; Holloway, Lois; Vial, Philip; Geurts, Mark

    2017-10-01

    The aim of this study was to validate the accuracy of an exit detector-based dose reconstruction tool for helical tomotherapy (HT) delivery quality assurance (DQA). An exit detector-based DQA tool was developed for patient-specific HT treatment verification. The tool performs a dose reconstruction on the planning image using the sinogram measured by the HT exit detector with no objects in the beam (i.e., static couch), and compares the reconstructed dose to the planned dose. Three vendor-supplied "TomoPhant" plans with a cylindrical solid-water ("cheese") phantom were used for validation. Each "TomoPhant" plan was modified with intentional multileaf collimator leaf open time (MLC LOT) errors to assess the sensitivity and robustness of this tool. Four scenarios were tested: leaf 32 "stuck open," leaf 42 "stuck open," and random leaf LOTs reduced by mean values of first 2% and then 4%. A static couch DQA procedure was then run five times (once with the unmodified sinogram and four times with modified sinograms) for each of the three "TomoPhant" treatment plans. First, the original optimized delivery plan was compared with the original machine-agnostic delivery plan; then the original optimized plans with a known modification applied (intentional MLC LOT error) were compared to the corresponding error-plan exit detector measurements. An absolute dose comparison between the calculated dose and ion chamber (A1SL, Standard Imaging, Inc., WI, USA) measurements was performed for the unmodified "TomoPhant" plans. A 3D gamma evaluation (2%/2 mm global) was performed by comparing the planned dose ("original planned dose" for unmodified plans and "adjusted planned dose" for each intentional error) to the exit detector-reconstructed dose for all three "TomoPhant" plans. Finally, DQA for 119 clinical plans (treatment length <25 cm) and three craniospinal irradiation (CSI) plans was performed with both the ArcCHECK phantom (Sun Nuclear Corp., Melbourne, FL, USA) and the exit detector DQA tool to assess the time required for DQA and the similarity between the two methods. The measured ion chamber dose agreed to within 1.5% of the reconstructed dose computed by the exit detector DQA tool on the cheese phantom for all unmodified "TomoPhant" plans. Excellent agreement in gamma pass rate (>95%) was observed between the planned and reconstructed dose for all "TomoPhant" plans considered using the tool. The gamma pass rate from the 119 clinical plan DQA measurements was 94.9% ± 1.5% and 91.9% ± 4.37% for the exit detector DQA tool and ArcCHECK phantom measurements (P = 0.81), respectively. For the clinical plans (treatment length <25 cm), the average time required to perform DQA was 24.7 ± 3.5 and 39.5 ± 4.5 min using the exit detector QA tool and ArcCHECK phantom, respectively, whereas the average time required for the three CSI treatments was 35 ± 3.5 and 90 ± 5.2 min, respectively. The exit detector tool has been demonstrated to be faster for performing DQA, with equivalent sensitivity for detecting MLC LOT errors relative to a conventional phantom-based QA method. In addition, the comprehensive MLC performance evaluation and the features of the reconstructed dose provide additional insight into understanding DQA failures and the clinical relevance of DQA results. © 2017 American Association of Physicists in Medicine.

  19. Towards Complete, Geo-Referenced 3d Models from Crowd-Sourced Amateur Images

    NASA Astrophysics Data System (ADS)

    Hartmann, W.; Havlena, M.; Schindler, K.

    2016-06-01

    Despite a lot of recent research, photogrammetric reconstruction from crowd-sourced imagery is plagued by a number of recurrent problems. (i) The resulting models are chronically incomplete, because even touristic landmarks are photographed mostly from a few "canonical" viewpoints. (ii) Man-made constructions tend to exhibit repetitive structure and rotational symmetries, which lead to gross errors in the 3D reconstruction and aggravate the problem of incomplete reconstruction. (iii) The models are normally not geo-referenced. In this paper, we investigate the possibility of using sparse GNSS geo-tags from digital cameras to address these issues and push the boundaries of crowd-sourced photogrammetry. A small proportion of the images in Internet collections (≈10%) do possess geo-tags. While the individual geo-tags are very inaccurate, they nevertheless can help to address the problems above. By providing approximate geo-reference for partial reconstructions, they make it possible to fuse those pieces into more complete models; the capability to fuse partial reconstructions opens up the possibility to be more restrictive in the matching phase and avoid errors due to repetitive structure; and collectively, the redundant set of low-quality geo-tags can provide reasonably accurate absolute geo-reference. We show that even few, noisy geo-tags can help to improve architectural models, compared to pure structure-from-motion based only on image correspondence.

  20. High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla

    PubMed Central

    Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.

    2012-01-01

    The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941

  1. White light-informed optical properties improve ultrasound-guided fluorescence tomography of photoactive protoporphyrin IX

    NASA Astrophysics Data System (ADS)

    Flynn, Brendan P.; DSouza, Alisha V.; Kanick, Stephen C.; Davis, Scott C.; Pogue, Brian W.

    2013-04-01

    Subsurface fluorescence imaging is desirable for medical applications, including protoporphyrin-IX (PpIX)-based skin tumor diagnosis, surgical guidance, and dosimetry in photodynamic therapy. While tissue optical properties and heterogeneities make true subsurface fluorescence mapping an ill-posed problem, ultrasound-guided fluorescence tomography (USFT) provides regional fluorescence mapping. Here USFT is implemented with spectroscopic decoupling of fluorescence signals (auto-fluorescence, PpIX, photoproducts) and bulk optical properties determined by white light spectroscopy. Segmented US images provide a priori spatial information for fluorescence reconstruction using region-based, diffuse FT. The method was tested in simulations, in homogeneous and inclusion tissue phantoms, and in an injected-inclusion animal model. Reconstructed fluorescence yield was linear with PpIX concentration, including the lowest concentration used, 0.025 μg/ml. The white light spectroscopy-informed optical properties improved fluorescence reconstruction accuracy compared to the use of fixed, literature-based optical properties, reducing reconstruction error and reconstructed fluorescence standard deviation by factors of 8.9 and 2.0, respectively. Recovered contrast-to-background error was 25% and 74% for inclusion phantoms without and with a 2-mm skin-like layer, respectively. Preliminary mouse-model imaging demonstrated system feasibility for subsurface fluorescence measurement in vivo. These data suggest that this implementation of USFT is capable of regional PpIX mapping in human skin tumors during photodynamic therapy, to be used in dosimetric evaluations.

  2. Characterization of dynamic changes of current source localization based on spatiotemporal fMRI constrained EEG source imaging

    NASA Astrophysics Data System (ADS)

    Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun

    2018-06-01

    Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.

  3. Fast alternating projection methods for constrained tomographic reconstruction

    PubMed Central

    Liu, Li; Han, Yongxin

    2017-01-01

    The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error, and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of a constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than by empirical trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing superior performance in reconstruction speed, image quality, and quantification. PMID:28253298
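
    A minimal sketch of sequential alternating projections for two of the constraint sets, data fidelity via Kaczmarz hyperplane projections and nonnegativity (the bounded-TV projection used in FS-POCS would be inserted as a third step and is omitted here):

        import numpy as np

        def alternating_projections(A, b, n_iter=100, relax=1.0):
            """Project sequentially onto the data hyperplanes a_i . x = b_i
            (one Kaczmarz sweep) and then onto the nonnegative orthant."""
            m, n = A.shape
            x = np.zeros(n)
            row_norm2 = (A * A).sum(axis=1)
            for _ in range(n_iter):
                for i in range(m):
                    x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
                x = np.maximum(x, 0.0)   # projection onto x >= 0
            return x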

  4. Three-dimensional brain reconstruction of in vivo electrode tracks for neuroscience and neural prosthetic applications

    PubMed Central

    Markovitz, Craig D.; Tang, Tien T.; Edge, David P.; Lim, Hubert H.

    2012-01-01

    The brain is a densely interconnected network that relies on populations of neurons within and across multiple nuclei to code for features leading to perception and action. However, the neurophysiology field is still dominated by the characterization of individual neurons, rather than simultaneous recordings across multiple regions, without consistent spatial reconstruction of their locations for comparisons across studies. There are sophisticated histological and imaging techniques for performing brain reconstructions. However, what is needed is a method that is relatively easy and inexpensive to implement in a typical neurophysiology lab and provides consistent identification of electrode locations to make it widely used for pooling data across studies and research groups. This paper presents our initial development of such an approach for reconstructing electrode tracks and site locations within the guinea pig inferior colliculus (IC) to identify its functional organization for frequency coding relevant for a new auditory midbrain implant (AMI). Encouragingly, the spatial error associated with different individuals reconstructing electrode tracks for the same midbrain was less than 65 μm, corresponding to an error of ~1.5% relative to the entire IC structure (~4–5 mm diameter sphere). Furthermore, the reconstructed frequency laminae of the IC were consistently aligned across three sampled midbrains, demonstrating the ability to use our method to combine location data across animals. Hopefully, through further improvements in our reconstruction method, it can be used as a standard protocol across neurophysiology labs to characterize neural data not only within the IC but also within other brain regions to help bridge the gap between cellular activity and network function. Clinically, correlating function with location within and across multiple brain regions can guide optimal placement of electrodes for the growing field of neural prosthetics. PMID:22754502

  5. Non-iterative volumetric particle reconstruction near moving bodies

    NASA Astrophysics Data System (ADS)

    Mendelson, Leah; Techet, Alexandra

    2017-11-01

    When multi-camera 3D PIV experiments are performed around a moving body, the body often obscures visibility of regions of interest in the flow field in a subset of cameras. We evaluate the performance of non-iterative particle reconstruction algorithms used for synthetic aperture PIV (SAPIV) in these partially-occluded regions. We show that when partial occlusions are present, the quality and availability of 3D tracer particle information depends on the number of cameras and reconstruction procedure used. Based on these findings, we introduce an improved non-iterative reconstruction routine for SAPIV around bodies. The reconstruction procedure combines binary masks, already required for reconstruction of the body's 3D visual hull, and a minimum line-of-sight algorithm. This approach accounts for partial occlusions without performing separate processing for each possible subset of cameras. We combine this reconstruction procedure with three-dimensional imaging on both sides of the free surface to reveal multi-fin wake interactions generated by a jumping archer fish. Sufficient particle reconstruction in near-body regions is crucial to resolving the wake structures of upstream fins (i.e., dorsal and anal fins) before and during interactions with the caudal tail.
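
    The combination of binary body masks with a minimum line-of-sight reconstruction can be sketched simply: occluded cameras are excluded from the per-pixel minimum rather than contributing zeros (array shapes and names are illustrative assumptions):

        import numpy as np

        def refocus_min_los(views, body_masks):
            """Minimum line-of-sight refocusing with occlusion handling.

            views      : (n_cams, H, W) images mapped to one refocus plane
            body_masks : (n_cams, H, W) True where the body blocks a camera
            Occluded cameras are excluded from the minimum, so a particle
            visible to the remaining cameras is not zeroed out."""
            masked = np.where(body_masks, np.inf, views)
            refocused = masked.min(axis=0)
            refocused[np.isinf(refocused)] = 0.0   # occluded in every camera
            return refocused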

  6. Short version of the Depression Anxiety Stress Scale-21: is it valid for Brazilian adolescents?

    PubMed Central

    da Silva, Hítalo Andrade; dos Passos, Muana Hiandra Pereira; de Oliveira, Valéria Mayaly Alves; Palmeira, Aline Cabral; Pitangui, Ana Carolina Rodarti; de Araújo, Rodrigo Cappato

    2016-01-01

    ABSTRACT Objective To evaluate the interday reproducibility, agreement and construct validity of the short version of the Depression Anxiety Stress Scale-21 applied to adolescents. Methods The sample consisted of adolescents of both sexes, aged between 10 and 19 years, who were recruited from schools and sports centers. Construct validity was assessed by exploratory factor analysis, and reliability was calculated for each construct using the intraclass correlation coefficient, the standard error of measurement and the minimum detectable change. Results The factor analysis combining the items corresponding to anxiety and stress into a single factor, and depression into a second factor, showed a better fit of all 21 items, with higher factor loadings on their respective constructs. The reproducibility values for depression were an intraclass correlation coefficient of 0.86, a standard error of measurement of 0.80, and a minimum detectable change of 2.22; for anxiety/stress, an intraclass correlation coefficient of 0.82, a standard error of measurement of 1.80, and a minimum detectable change of 4.99. Conclusion The short version of the Depression Anxiety Stress Scale-21 showed excellent reliability values and strong internal consistency. The two-factor model, with the constructs anxiety and stress condensed into a single factor, was the most acceptable for the adolescent population. PMID:28076595
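
    The reliability statistics quoted here follow the standard formulas SEM = SD·sqrt(1 - ICC) and MDC95 = 1.96·sqrt(2)·SEM; note that 1.96·sqrt(2)·0.80 ≈ 2.22 and 1.96·sqrt(2)·1.80 ≈ 4.99, matching the reported values. A small sketch:

        import numpy as np

        def sem_mdc(day1, day2, icc):
            """Standard error of measurement and 95% minimum detectable
            change from test-retest scores, using the usual formulas."""
            sd = np.std(np.concatenate([day1, day2]), ddof=1)
            sem = sd * np.sqrt(1.0 - icc)
            mdc95 = 1.96 * np.sqrt(2.0) * sem
            return sem, mdc95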

  7. Wave-optics description of self-healing mechanism in Bessel beams.

    PubMed

    Aiello, Andrea; Agarwal, Girish S

    2014-12-15

    Bessel beams' great importance in optics lies in the fact that they propagate without spreading and can reconstruct themselves behind an obstruction placed across their path. However, a rigorous wave-optics explanation of the latter property has been missing. In this work, we study the reconstruction mechanism by means of a wave-optics description. We obtain expressions for the minimum distance beyond the obstruction at which the beam reconstructs itself, which are in close agreement with the traditional one determined from geometrical optics. Our results show that the physics underlying the self-healing mechanism can be entirely explained in terms of the propagation of plane waves with radial wave vectors lying on a ring.
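
    For orientation, the traditional geometric-optics estimate referred to here is z_min ≈ a·k_z/k_r for an obstruction of radius a: the shadow is refilled by the beam's conical plane-wave components. A quick numerical check with illustrative values:

        import numpy as np

        wavelength = 633e-9                # He-Ne, for illustration
        a = 100e-6                         # obstruction radius (m)
        k = 2 * np.pi / wavelength
        k_r = 2e4                          # radial wavevector of the beam (1/m)
        k_z = np.sqrt(k**2 - k_r**2)
        z_min = a * k_z / k_r              # = a / tan(theta), cone half-angle theta
        print(z_min)                       # ~5 cm for these values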

  8. Error Characterization of Flight Trajectories Reconstructed Using Structure from Motion

    DTIC Science & Technology

    2015-03-27

    ...adjustment using IMU rotation information, the accuracy of the yaw, pitch and roll is limited and numerical errors can be as high as 1e-4 depending on... ...due to either zero-mean Gaussian noise and/or bias in the IMU-measured yaw, pitch and roll angles. It is possible that when errors in these... ...requires both the information on how the camera is mounted to the IMU/aircraft and the measured yaw, pitch and roll at the time of the first image...

  9. An intersecting chord method for minimum circumscribed sphere and maximum inscribed sphere evaluations of sphericity error

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Xu, Guanghua; Zhang, Qing; Liang, Lin; Liu, Dan

    2015-11-01

    As one of the Geometrical Product Specifications that are widely applied in industrial manufacturing and measurement, sphericity error synthetically characterizes a 3D structure and reflects the machining quality of a spherical workpiece. With increasing demands on the motion performance of spherical parts, sphericity error is becoming an indispensable component in the evaluation of form error. However, the evaluation of sphericity error is still considered a complex mathematical problem, and research on the development of available models is lacking. In this paper, an intersecting chord method is first proposed to solve the minimum circumscribed sphere and maximum inscribed sphere evaluations of sphericity error. This new modelling method leverages chord relationships to replace the characteristic points, thereby significantly reducing the computational complexity and improving the computational efficiency. Using the intersecting chords to generate a virtual centre, the reference sphere of the two concentric spheres is simplified as a space intersecting structure. The position of the virtual centre on the space intersecting structure is determined by characteristic chords, which may reduce the deviation between the virtual centre and the centre of the reference sphere. In addition, two experiments are used to verify the effectiveness of the proposed method with real datasets in Cartesian coordinates. The results indicate that the estimated errors are in excellent agreement with those of published methods, while the computational efficiency is improved, a notable gain for sphericity error evaluation.
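
    As a point of comparison (a brute-force baseline, not the authors' intersecting chord method), the two reference spheres can be evaluated by direct numerical optimization over the candidate centre:

        import numpy as np
        from scipy.optimize import minimize

        def sphericity_mcs_mis(points):
            """Minimum circumscribed and maximum inscribed sphere evaluations
            of sphericity error by direct optimization (points: (n, 3))."""
            r = lambda c: np.linalg.norm(points - c, axis=1)
            c0 = points.mean(axis=0)
            c_mcs = minimize(lambda c: r(c).max(), c0, method='Nelder-Mead').x
            c_mis = minimize(lambda c: -r(c).min(), c0, method='Nelder-Mead').x
            # sphericity error = width of the radial zone about each centre
            return (r(c_mcs).max() - r(c_mcs).min(),
                    r(c_mis).max() - r(c_mis).min())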

  10. Variance in age-specific sex composition of Pacific halibut catches, and comparison of statistical and genetic methods for reconstructing sex ratios

    NASA Astrophysics Data System (ADS)

    Loher, Timothy; Woods, Monica A.; Jimenez-Hidalgo, Isadora; Hauser, Lorenz

    2016-01-01

    Declines in size at age of Pacific halibut Hippoglossus stenolepis, in concert with sexually-dimorphic growth and a constant minimum commercial size limit, have led to the expectation that the sex composition of commercial catches should be increasingly female-biased. Sensitivity analyses suggest that variance in sex composition of landings may be the most influential source of uncertainty affecting current understanding of spawning stock biomass. However, there is no reliable way to determine sex at landing because all halibut are eviscerated at sea. In 2014, a statistical method based on survey data was developed to estimate the probability that fish of any given length at age (LAA) would be female, derived from the fundamental observation that large, young fish are likely female whereas small, old fish have a high probability of being male. Here, we examine variability in age-specific sex composition using at-sea commercial and closed-season survey catches, and compare the accuracy of the survey-based LAA technique to genetic markers for reconstructing the sex composition of catches. Sexing by LAA performed best for summer-collected samples, consistent with the hypothesis that the ability to characterize catches can be influenced by seasonal demographic shifts. Additionally, differences between survey and commercial selectivity that allow fishers to harvest larger fish within cohorts may generate important mismatch between survey and commercial datasets. Length-at-age-based estimates ranged from 4.7% underestimation of female proportion to 12.0% overestimation, with mean error of 5.8 ± 1.5%. Ratios determined by genetics were closer to true sample proportions and displayed less variability; estimation to within < 1% of true ratios was limited to genetics. Genetic estimation of female proportions ranged from 4.9% underestimation to 2.5% overestimation, with a mean absolute error of 1.2 ± 1.2%. Males were generally more difficult to assign than females: 6.7% of males and 3.4% of females were incorrectly assigned. Although nuclear microsatellites proved more consistent at partitioning catches by sex, we recommend that SNP assays be developed to allow for rapid, cost-effective, and accurate sex identification.

  11. Haplotype assembly in polyploid genomes and identical by descent shared tracts.

    PubMed

    Aguiar, Derek; Istrail, Sorin

    2013-07-01

    Genome-wide haplotype reconstruction from sequence data, or haplotype assembly, is at the center of major challenges in molecular biology and life sciences. For complex eukaryotic organisms like humans, the genome is vast and the population samples are growing so rapidly that algorithms processing high-throughput sequencing data must scale favorably in terms of both accuracy and computational efficiency. Furthermore, current models and methodologies for haplotype assembly (i) do not consider individuals sharing haplotypes jointly, which reduces the size and accuracy of assembled haplotypes, and (ii) are unable to model genomes having more than two sets of homologous chromosomes (polyploidy). Polyploid organisms are increasingly becoming the target of many research groups interested in the genomics of disease, phylogenetics, botany and evolution but there is an absence of theory and methods for polyploid haplotype reconstruction. In this work, we present a number of results, extensions and generalizations of compass graphs and our HapCompass framework. We prove the theoretical complexity of two haplotype assembly optimizations, thereby motivating the use of heuristics. Furthermore, we present graph theory-based algorithms for the problem of haplotype assembly using our previously developed HapCompass framework for (i) novel implementations of haplotype assembly optimizations (minimum error correction), (ii) assembly of a pair of individuals sharing a haplotype tract identical by descent and (iii) assembly of polyploid genomes. We evaluate our methods on 1000 Genomes Project, Pacific Biosciences and simulated sequence data. HapCompass is available for download at http://www.brown.edu/Research/Istrail_Lab/. Supplementary data are available at Bioinformatics online.

  12. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

    The estimation of depth is critically important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset for reconstructing the 3D face depth information greatly reduces the computational complexity. First, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Second, the K-means method is applied to divide the training 3D database into several subsets. Third, the Euclidean distance between the 83 feature points of the image to be estimated and the feature-point information of each cluster center before dimension reduction is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimates at greatly reduced computational complexity. Compared with the traditional traversal search estimation method, the error rate of the proposed method is reduced by 0.49, and the number of searches varies with the category. To validate our approach, we use a public database to mimic the task of estimating the depth of faces from 2D images; the average number of searches decreased by 83.19%.
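
    The subset-selection pipeline maps onto standard library calls; a schematic sketch with hypothetical data shapes (400 training faces, 83 feature points × 3 coordinates = 249 values each):

        import numpy as np
        from sklearn.manifold import TSNE
        from sklearn.cluster import KMeans

        X = np.random.rand(400, 249)             # stand-in training features

        # 1. reduce each model's key-feature vector from 1x249 to 1x2
        emb = TSNE(n_components=2, random_state=0).fit_transform(X)

        # 2. partition the training database into subsets
        km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(emb)

        # 3. for a query face, pick the cluster whose centre (in the
        #    original feature space) is nearest; depth estimation then
        #    searches only inside that subset
        centers = np.array([X[km.labels_ == k].mean(axis=0) for k in range(8)])
        query = np.random.rand(249)
        best = np.linalg.norm(centers - query, axis=1).argmin()
        subset = X[km.labels_ == best]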

  13. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    NASA Astrophysics Data System (ADS)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
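
    The minimum-length solution of Am = d mentioned above is what the Moore-Penrose pseudoinverse returns for an underdetermined or ill-conditioned system. The toy kernel below (a running average standing in for the amelogenesis/sampling matrix) is an assumption for illustration, not the paper's measured averaging geometry.

```python
# Sketch: recover a step-change dietary input from a time-averaged profile via
# the minimum-norm least-squares solution m = pinv(A) @ d.
import numpy as np

n = 40
A = np.zeros((n, n))
for i in range(n):
    j0 = max(0, i - 4)
    A[i, j0:i + 1] = 1.0 / (i - j0 + 1)   # each sample averages a trailing window

m_true = np.where(np.arange(n) < 20, -8.0, -2.0)   # step change in isotope input
d = A @ m_true + 0.05 * np.random.default_rng(1).normal(size=n)

m_est = np.linalg.pinv(A) @ d                      # minimum-length solution
print(f"max recovery error: {np.abs(m_est - m_true).max():.2f} per mil")
```

    As in the paper, the accuracy of the recovered input degrades as the measurement noise added to d grows, since inverting the averaging amplifies noise.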

  14. Brief Report: Investigating Uncertainty in the Minimum Mortality Temperature: Methods and Application to 52 Spanish Cities.

    PubMed

    Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio

    2017-01-01

The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but the ability to characterize it has been limited by the absence of a method for describing uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of the confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to the nominal value (95%) in the simulated datasets, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising at almost exactly the same rate as annual mean temperature. The proposed method for computing CIs and SEs for minimums from spline curves allows minimum mortality temperatures in different cities to be compared, and their associations with climate investigated, while properly allowing for estimation uncertainty.
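
    A minimal sketch of the kind of parametric bootstrap described above, with a cubic polynomial standing in for the spline and simulated data in place of city mortality series; the numbers and the curve are assumptions for illustration only.

```python
# Approximate parametric bootstrap for the temperature at which a fitted curve
# is minimal: resample coefficients from their estimated covariance, locate
# each resampled curve's minimum, and summarize the spread.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 30, 200)                              # daily mean temperature
y = 0.02 * (t - 21.0) ** 2 + rng.normal(0, 0.3, t.size)  # toy log-mortality data

beta, cov = np.polyfit(t, y, 3, cov=True)
grid = np.linspace(t.min(), t.max(), 1000)
mmt_hat = grid[np.polyval(beta, grid).argmin()]

draws = rng.multivariate_normal(beta, cov, size=2000)    # coefficient resamples
mmt = np.array([grid[np.polyval(b, grid).argmin()] for b in draws])
lo, hi = np.percentile(mmt, [2.5, 97.5])
print(f"MMT = {mmt_hat:.1f} C, 95% CI ({lo:.1f}, {hi:.1f}), SE = {mmt.std():.2f}")
```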

  15. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

We propose several LDPC code constructions that simultaneously achieve good threshold and error-floor performance. Minimum distance is shown to grow linearly with block size (as for regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected-graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold; a code with too many tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, whose minimum distance increases linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
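
    To make the protograph idea concrete, the sketch below lifts a small base matrix into a full parity-check matrix by replacing each edge with a circularly shifted identity block. The base matrix, lift size and shifts are arbitrary illustrations (and parallel protograph edges are omitted for simplicity); this is not the authors' construction.

```python
# Lifting a protograph base matrix into an LDPC parity-check matrix.
import numpy as np

def lift(base, Z, shifts):
    """Expand each nonzero base entry into a Z x Z circulant permutation."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                block = np.roll(np.eye(Z, dtype=int), shifts[i][j], axis=1)
                H[i * Z:(i + 1) * Z, j * Z:(j + 1) * Z] = block
    return H

base = np.array([[1, 1, 1, 0],
                 [1, 1, 0, 1]])
H = lift(base, Z=4, shifts=[[0, 1, 2, 0], [3, 0, 0, 1]])
print(H.shape, H.sum(axis=0))  # (8, 16); column weights follow the protograph
```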

  16. The 'double headed slug flap': a simple technique to reconstruct large helical rim defects.

    PubMed

    Masud, D; Tzafetta, K

    2012-10-01

Reconstructing partial defects of the ear is challenging, balancing recreation of the details of the ear against scarring, morbidity and the number of surgical stages. Common causes of ear defects are human bites, tumour excision and burn injuries. Reconstructing defects of the ear with tube pedicled flaps and other local flaps requires accurate measurement of the defect size, with little room for error, particularly underestimation. We present a simple method of reconstruction for partial defects of the ear using a two-stage technique with post-auricular transposition flaps. It tolerates under- or overestimation of defect size, permitting accurate tissue usage and good aesthetic outcomes. Copyright © 2012 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  17. Quantifying and correcting motion artifacts in MRI

    NASA Astrophysics Data System (ADS)

    Bones, Philip J.; Maclaren, Julian R.; Millane, Rick P.; Watts, Richard

    2006-08-01

    Patient motion during magnetic resonance imaging (MRI) can produce significant artifacts in a reconstructed image. Since measurements are made in the spatial frequency domain ('k-space'), rigid-body translational motion results in phase errors in the data samples while rotation causes location errors. A method is presented to detect and correct these errors via a modified sampling strategy, thereby achieving more accurate image reconstruction. The strategy involves sampling vertical and horizontal strips alternately in k-space and employs phase correlation within the overlapping segments to estimate translational motion. An extension, also based on correlation, is employed to estimate rotational motion. Results from simulations with computer-generated phantoms suggest that the algorithm is robust up to realistic noise levels. The work is being extended to physical phantoms. Provided that a reference image is available and the object is of limited extent, it is shown that a measure related to the amount of energy outside the support can be used to objectively compare the severity of motion-induced artifacts.
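
    A minimal sketch of the phase-correlation principle the method relies on, under simplified assumptions (a noiseless toy image and a pure cyclic translation): a rigid shift multiplies k-space by a linear phase, so the normalized cross-power spectrum peaks at the shift.

```python
# Estimate a rigid translation between two 2D acquisitions by phase correlation.
import numpy as np

img = np.zeros((64, 64))
img[20:40, 25:45] = 1.0                         # toy phantom
moved = np.roll(img, (3, -5), axis=(0, 1))      # translation between acquisitions

F1, F2 = np.fft.fft2(img), np.fft.fft2(moved)
cross = F2 * np.conj(F1)
corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))  # normalized cross-power
dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
print(dy, dx)  # 3 59, i.e. (3, -5) modulo the 64-pixel image size
```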

  18. 3DNOW: Image-Based 3d Reconstruction and Modeling via Web

    NASA Astrophysics Data System (ADS)

    Tefera, Y.; Poiesi, F.; Morabito, D.; Remondino, F.; Nocerino, E.; Chippendale, P.

    2018-05-01

    This paper presents a web-based 3D imaging pipeline, namely 3Dnow, that can be used by anyone without the need of installing additional software other than a browser. By uploading a set of images through the web interface, 3Dnow can generate sparse and dense point clouds as well as mesh models. 3D reconstructed models can be downloaded with standard formats or previewed directly on the web browser through an embedded visualisation interface. In addition to reconstructing objects, 3Dnow offers the possibility to evaluate and georeference point clouds. Reconstruction statistics, such as minimum, maximum and average intersection angles, point redundancy and density can also be accessed. The paper describes all features available in the web service and provides an analysis of the computational performance using servers with different GPU configurations.

  19. Time-domain wavefield reconstruction inversion

    NASA Astrophysics Data System (ADS)

    Li, Zhen-Chun; Lin, Yu-Zhao; Zhang, Kai; Li, Yuan-Yuan; Yu, Zhen-Nan

    2017-12-01

Wavefield reconstruction inversion (WRI) is an improved full waveform inversion approach proposed in recent years. WRI expands the search space by introducing the wave equation into the objective function and reconstructing the wavefield to update the model parameters, thereby improving computational efficiency and mitigating the influence of local minima. However, frequency-domain WRI is difficult to apply to real seismic data because of its high memory demand and the additional computational cost of time-frequency transformation. In this paper, wavefield reconstruction inversion is extended to the time domain: the augmented wave equation of WRI is derived in the time domain, and the model gradient is modified according to numerical tests with anomalies. Synthetic-data examples illustrate the accuracy of time-domain WRI and its low dependence on low-frequency information.
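
    For reference, WRI of this kind is generally built on a penalty-form objective in which the wavefield is only required to satisfy the wave equation approximately; in generic notation (not necessarily the authors' exact formulation),

\[
\min_{\mathbf{u},\,\mathbf{m}} \;\tfrac{1}{2}\,\lVert P\mathbf{u}-\mathbf{d}\rVert_2^2 \;+\; \tfrac{\lambda^2}{2}\,\lVert A(\mathbf{m})\,\mathbf{u}-\mathbf{q}\rVert_2^2,
\]

    where u is the reconstructed wavefield, P samples it at the receivers, d is the observed data, A(m) is the discretized wave operator for model m, q is the source, and λ balances data fit against wave-equation fidelity. Relaxing the wave-equation constraint in this way is what enlarges the search space and mitigates local minima.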

  20. Real-time position reconstruction with hippocampal place cells.

    PubMed

    Guger, Christoph; Gener, Thomas; Pennartz, Cyriel M A; Brotons-Mas, Jorge R; Edlinger, Günter; Bermúdez I Badia, S; Verschure, Paul; Schaffelhofer, Stefan; Sanchez-Vives, Maria V

    2011-01-01

Brain-computer interfaces (BCI) use the electroencephalogram, the electrocorticogram and trains of action potentials as inputs to analyze brain activity for communication purposes and/or the control of external devices. Thus far it is not known whether a BCI system can be developed that utilizes the states of brain structures situated well below the cortical surface, such as the hippocampus. To address this question we used the activity of hippocampal place cells (PCs) to predict the position of a rodent in real-time. First, spike activity was recorded from the hippocampus during foraging and analyzed off-line to optimize the spike-sorting and position-reconstruction algorithms. Then the spike activity was recorded and analyzed in real-time. The rat ran in a box of 80 cm × 80 cm and its locomotor movement was captured with a video tracking system. Data were acquired to calculate the rat's trajectories and to identify place fields, and a Bayesian classifier was trained to predict the position of the rat given its neural activity. This information was used in subsequent trials to predict the rat's position in real-time. The real-time experiments were successfully performed and yielded an error between 12.2 and 17.4% using 5-6 neurons. Note that the encoding step was done with data recorded before the real-time experiment, and comparable accuracies between off-line (mean error of 15.9% for three rats) and real-time experiments (mean error of 14.7%) were achieved. The experiment provides proof of principle that position reconstruction can be done in real-time, that PCs were stable, and that spike sorting was robust enough to generalize from the training run to the real-time reconstruction phase of the experiment. Real-time reconstruction may be used for a variety of purposes, including creating behavioral-neuronal feedback loops or implementing neuroprosthetic control.
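
    The Bayesian decoding step can be sketched compactly. The memoryless Poisson decoder below, with toy place fields, is a standard formulation assumed for illustration; it is not necessarily the authors' exact classifier.

```python
# Memoryless Bayesian position decoder from place-cell spike counts,
# assuming Poisson firing and a flat prior over position bins.
import numpy as np

def decode(counts, rate_maps, dt=0.25):
    """counts: (n_cells,) spikes in one window; rate_maps: (n_cells, n_bins) Hz."""
    lam = rate_maps * dt + 1e-9
    log_post = (counts[:, None] * np.log(lam) - lam).sum(axis=0)
    return int(np.argmax(log_post))          # most probable position bin

rate_maps = np.array([[8.0, 1.0, 0.5],       # cell 1 prefers bin 0
                      [0.5, 9.0, 1.0],       # cell 2 prefers bin 1
                      [1.0, 0.5, 7.0]])      # cell 3 prefers bin 2
print(decode(np.array([0, 1, 2]), rate_maps))  # -> 2
```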

  2. Can GRACE Explain Some of the Main Interannual Polar Motion Signatures?

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Ivins, E. R.; Larour, E. Y.

    2016-12-01

GRACE has provided a series of monthly solutions for water mass transport that now span a 14-year period. A natural question is how much of this mass-transport information might be used to reconstruct, theoretically, the non-tidal and non-Chandlerian polar motion at interannual time scales. Reconstruction of the pole position at interannual time scales since 2002 has been performed by Chen et al. (2013, GRL) and Adhikari and Ivins (2016, Science Advances). (The main feature of polar motion that has been evolving since the mid-1990s is the increasing dominance of Greenland ice mass loss.) Here we discuss this reconstruction and the level of error that arises from missing information about the spherical harmonic degree 1 and 2 terms and from the lack of terms associated with angular momentum transfer in the Liouville equations. Using GRACE observations and complementary solutions of the self-attraction/loading problem on an elastically compressible rotating earth, we show that ice mass losses from polar ice sheets, combined with changes in continental hydrology, explain nearly the entire amplitude (83±23%) and mean directional shift (within 5.9±7.6°) of the recently observed eastward polar motion. We also show that decadal-scale pole variations are directly linked to global changes in continental hydrology. The energy sources for such motions are likely associated with decadal-scale ocean and atmospheric oscillations that also drive 20th-century continental wet-dry variability. Interannual variability in pole position therefore offers a tool for assessing the past stability of our climate, and for the future, now faced with an intensified water cycle and greater vulnerability to ice-sheet instability. Figure caption: Observed and reconstructed mean annual pole positions with respect to the 2003-2015 mean position. The blue error band is associated with the reconstructed solution; red signifies additional errors related to uncertainty in the long-term linear trend. Note the interannual variability during the GRACE period.

  3. Assessing Seasonal and Inter-Annual Variations of Lake Surface Areas in Mongolia during 2000-2011 Using Minimum Composite MODIS NDVI

    PubMed Central

    Kang, Sinkyu; Hong, Suk Young

    2016-01-01

A minimum composite method was applied to produce a 15-day interval normalized difference vegetation index (NDVI) dataset from Moderate Resolution Imaging Spectroradiometer (MODIS) daily 250 m reflectance in the red and near-infrared bands. This dataset was applied to determine lake surface areas in Mongolia. A total of 73 lakes greater than 6.25 km2 in area were selected, and 28 of these lakes were used to evaluate detection errors. The minimum composite NDVI showed better detection performance on lake water pixels than did the official MODIS 16-day 250 m NDVI based on a maximum composite method. The overall lake area detection based on the 15-day minimum composite NDVI showed -2.5% error relative to the Landsat-derived lake area for the 28 evaluated lakes. The errors increased with the perimeter-to-area ratio but decreased with lake size over 10 km2. The lake area decreased by -9.3%, at an annual rate of -53.7 km2 yr-1, during 2000 to 2011 for the 73 lakes. However, considerable spatial variations, such as slight-to-moderate lake area reductions in semi-arid regions and rapid reductions in arid regions, were also detected. This study demonstrated the applicability of MODIS 250 m reflectance data for biweekly monitoring of lake area change and diagnosed considerable lake area reduction, and its spatial variability, in arid and semi-arid regions of Mongolia. Future studies are required to explain the causes of lake area changes and their spatial variability. PMID:27007233
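
    The compositing rule itself is a one-liner, as the hedged sketch below shows with made-up reflectances: water absorbs in the near infrared, so its NDVI is low (often negative), and taking the per-pixel minimum over the 15-day stack favours water detection, whereas a maximum composite favours vegetation.

```python
# Minimum-composite NDVI over a 15-day stack, with a simple water threshold.
import numpy as np

rng = np.random.default_rng(5)
red = rng.uniform(0.05, 0.18, (15, 4, 4))             # land red reflectance
nir = rng.uniform(0.20, 0.60, (15, 4, 4))             # land NIR: vegetation bright
nir[:, :2, :2] = rng.uniform(0.01, 0.10, (15, 2, 2))  # lake corner: NIR absorbed

ndvi = (nir - red) / (nir + red)
composite = ndvi.min(axis=0)        # per-pixel 15-day minimum composite
water = composite < 0.0             # illustrative water-mask threshold
print(int(water.sum()), "water pixels detected")  # the 4 lake pixels
```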

  5. Ring-widths of the above tree-line shrub Rhododendron reveal the change of minimum winter temperature over the past 211 years in Southwestern China

    NASA Astrophysics Data System (ADS)

    Bi, Yingfeng; Xu, Jianchu; Yang, Jinchao; Li, Zongshan; Gebrekirstos, Aster; Liang, Eryuan; Zhang, Shibao; Yang, Yang; Yang, Yongping; Yang, Xuefei

    2017-06-01

    Changes in minimum winter temperature (MWT) and their potential effects on plant growth and development have been gaining increased scientific attention. To better understand these changes across long temporal scales, the present study used dendroclimatological techniques to assess variations in MWT in Southwestern China. Using data from Rhododendron species distributed in areas above the tree-line, a regional composite chronology was generated for a 341-year period. Based on the significant negative correlation between MWT values and ring-width, the most reliable parts of this chronological data were then used to reconstruct MWT values for the past 211 years. This reconstructed MWT series showed decadal to multi-decadal fluctuations. Three distinct cold periods prevailed during 1823-1858, 1882-1891 and 1922-1965, while four warm intervals occurred in 1800-1822, 1858-1881, 1892-1921 and 1966-2011. Our reconstructed MWT reveals a warming trend over the most recent eight decades, which is in agreement with instrumental observations. However, the MWT values and rate of warming over the past seven decades did not exceed those found in the reconstructed temperature data for the past 211 years. Spatial correlations reveal that the MWT in Southwest China is strongly associated with regional temperatures in the Eastern and Central Himalaya, Northern China, and the Indian Peninsula. Larger scale climate oscillations of the Western Pacific and Northern Indian Ocean as well as the North Atlantic Oscillation probably influenced the region's temperature in the past.

  6. Static resistivity image of a cubic saline phantom in magnetic resonance electrical impedance tomography (MREIT).

    PubMed

    Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyeong; Kwon, Ohin; Seo, Jin Keun; Baek, Woon Sik

    2003-05-01

In magnetic resonance electrical impedance tomography (MREIT) we inject currents through electrodes placed on the surface of a subject and try to reconstruct cross-sectional resistivity (or conductivity) images using internal magnetic flux density as well as boundary voltage measurements. In this paper we present a static resistivity image of a cubic saline phantom (50 x 50 x 50 mm3) containing a cylindrical sausage object with an average resistivity value of 123.7 ohms cm. Our current MREIT system is based on an experimental 0.3 T MRI scanner and a current injection apparatus. We captured MR phase images of the phantom while injecting currents of 28 mA through two pairs of surface electrodes. We computed current density images from magnetic flux density images, which are proportional to the MR phase images. From the current density images and boundary voltage data we reconstructed a cross-sectional resistivity image within a central region of 38.5 x 38.5 mm2 at the middle of the phantom using the J-substitution algorithm. The spatial resolution of the reconstructed image was 64 x 64 and the reconstructed average resistivity of the sausage was 117.7 ohms cm. Even though the error in the reconstructed average resistivity value was small, the relative L2-error of the reconstructed image was 25.5% due to the noise in the measured MR phase images. We expect improvements in accuracy by utilizing an MRI scanner with higher SNR and by increasing the voxel size at the expense of spatial resolution.

  7. Galaxy Bias and its Effects on the Baryon Acoustic Oscillations Measurements

    NASA Astrophysics Data System (ADS)

    Mehta, Kushal; Seo, H.; Eckel, J.; Eisenstein, D.; Metchnik, M.; Pinto, P.; Xu, X.

    2011-05-01

    The baryon acoustic oscillation (BAO) feature in the clustering of matter in the universe serves as a robust standard ruler and hence can be used to map the expansion history of the universe. We use high force resolution simulations to analyze the effects of galaxy bias on the measurements of the BAO signal. We apply a variety of Halo Occupation Distributions (HODs) and produce biased mass tracers to mimic different galaxy populations. We investigate whether galaxy bias changes the non-linear shifts on the acoustic scale relative to the underlying dark matter distribution presented by Seo et al (2010). For the less biased HOD models (b < 3), we do not detect any shift in the acoustic scale relative to the no-bias case, typically 0.10% ± 0.10%. However, the most biased HOD models (b > 3) show a shift at moderate significance (0.79% ± 0.31% for the most extreme case). We test the one-step reconstruction technique introduced by Eisenstein et al. (2007) in the case of realistic galaxy bias and shot noise. The reconstruction scheme increases the correlation between the initial and final (z = 1) density fields achieving an equivalent level of correlation at nearly twice the wavenumber after reconstruction. Reconstruction reduces the shifts and errors on the shifts. We find that after reconstruction the shifts from the galaxy cases and the dark matter case are consistent with each other and with no shift. The 1σ systematic errors on the distance measurements inferred from our BAO measurements with various HODs after reconstruction are about 0.07% - 0.15%.

  9. Improving Visibility of Stereo-Radiographic Spine Reconstruction with Geometric Inferences.

    PubMed

    Kumar, Sampath; Nayak, K Prabhakar; Hareesha, K S

    2016-04-01

Complex deformities of the spine, such as scoliosis, are evaluated more precisely using stereo-radiographic 3D reconstruction techniques. Primarily, these use six stereo-corresponding points available on the vertebral body for the 3D reconstruction of each vertebra. The wireframe structure obtained in this process has poor visibility and is hence difficult to use for diagnosis. In this paper, a novel method is proposed to improve the visibility of this wireframe structure by deforming a generic spine model in accordance with the 3D-reconstructed corresponding points. Geometric inferences such as vertebral orientations are then automatically extracted from the radiographs to improve the visibility of the 3D model. Biplanar radiographs were acquired from five scoliotic subjects on a specifically designed calibration bench. The stereo-corresponding point reconstruction method is used to build six-point wireframe vertebral structures and thus the entire spine model. Using the 3D spine midline and automatically extracted vertebral orientation features, a more realistic 3D spine model is generated. To validate the method, the 3D spine model is back-projected onto the biplanar radiographs and the error difference is computed. This difference is within the error limits reported in the literature, and the proposed approach is simple and economical. The proposed method does not require additional corresponding points or image features to improve the visibility of the model, which reduces the computational complexity. Expensive 3D digitizers and vertebral CT-scan models are also excluded from this study. Thus, the visibility of stereo-corresponding point reconstruction is improved to obtain a low-cost spine model for better diagnosis of spinal deformities.

  10. Dynamic 2D self-phase-map Nyquist ghost correction for simultaneous multi-slice echo planar imaging.

    PubMed

    Yarach, Uten; Tung, Yi-Hang; Setsompop, Kawin; In, Myung-Ho; Chatnuntawech, Itthi; Yakupov, Renat; Godenschweger, Frank; Speck, Oliver

    2018-02-09

To develop a reconstruction pipeline that intrinsically accounts for both simultaneous multislice echo planar imaging (SMS-EPI) reconstruction and dynamic slice-specific Nyquist ghosting correction in time-series data. After 1D slice-group average phase correction, the separate-polarity (i.e., even and odd echo) SMS-EPI data were unaliased by slice-GRAPPA (GeneRalized Autocalibrating Partially Parallel Acquisition). The slice-unaliased even and odd echoes were then jointly reconstructed using a model-based framework, extended for SMS-EPI reconstruction, that estimates a 2D self-phase map, corrects dynamic slice-specific phase errors, and combines data from all coils and echoes to obtain the final images. The percentage ghost-to-signal ratios (%GSRs) and their temporal variations for MB3Ry2 (multiband factor 3, in-plane acceleration 2) with a field-of-view/4 shift in a human brain, obtained by the proposed dynamic 2D and standard 1D phase corrections, were 1.37 ± 0.11 and 2.66 ± 0.16, respectively. Even with a large regularization parameter λ applied in the proposed reconstruction, the smoothing effect in fMRI activation maps was comparable to a very small Gaussian kernel of size 1 × 1 × 1 mm3. The proposed reconstruction pipeline reduced slice-specific phase errors in SMS-EPI, resulting in a reduction of GSR. It is applicable to functional MRI studies because the smoothing effect caused by the regularization parameter can be kept minimal in a blood-oxygen-level-dependent activation map. © 2018 International Society for Magnetic Resonance in Medicine.

  11. Tracking and shape errors measurement of concentrating heliostats

    NASA Astrophysics Data System (ADS)

    Coquand, Mathieu; Caliot, Cyril; Hénault, François

    2017-09-01

In solar tower power plants, factors such as tracking accuracy, facet misalignment and surface shape errors of concentrating heliostats are of prime importance to the efficiency of the system. At industrial scale, one critical issue is the time and effort required to adjust the different mirrors of the faceted heliostats, which can take several months using current techniques. Methods enabling quick adjustment of a field with a huge number of heliostats are therefore essential for the rise of solar tower technology. This communication describes a new method for heliostat characterization that makes use of four cameras located near the solar receiver that simultaneously record images of the sun reflected by the optical surfaces. From knowledge of a measured sun profile, processing of the acquired images allows reconstruction of the slope and shape errors of the heliostats, including tracking and canting errors. The mathematical basis of this shape reconstruction process is explained comprehensively. Numerical simulations demonstrate that the measurement accuracy of this "backward-gazing method" is compliant with the requirements of solar concentrating optics. Finally, we present our first experimental results obtained at the THEMIS experimental solar tower plant in Targasonne, France.

  12. Quantitative reconstruction of refractive index distribution and imaging of glucose concentration by using diffusing light.

    PubMed

    Liang, Xiaoping; Zhang, Qizhi; Jiang, Huabei

    2006-11-10

    We show that a two-step reconstruction method can be adapted to improve the quantitative accuracy of the refractive index reconstruction in phase-contrast diffuse optical tomography (PCDOT). We also describe the possibility of imaging tissue glucose concentration with PCDOT. In this two-step method, we first use our existing finite-element reconstruction algorithm to recover the position and shape of a target. We then use the position and size of the target as a priori information to reconstruct a single value of the refractive index within the target and background regions using a region reconstruction method. Due to the extremely low contrast available in the refractive index reconstruction, we incorporate a data normalization scheme into the two-step reconstruction to combat the associated low signal-to-noise ratio. Through a series of phantom experiments we find that this two-step reconstruction method can considerably improve the quantitative accuracy of the refractive index reconstruction. The results show that the relative error of the reconstructed refractive index is reduced from 20% to within 1.5%. We also demonstrate the possibility of PCDOT for recovering glucose concentration using these phantom experiments.

  13. A Reassessment of the Precision of Carbonate Clumped Isotope Measurements: Implications for Calibrations and Paleoclimate Reconstructions

    NASA Astrophysics Data System (ADS)

    Fernandez, Alvaro; Müller, Inigo A.; Rodríguez-Sanz, Laura; van Dijk, Joep; Looser, Nathan; Bernasconi, Stefano M.

    2017-12-01

    Carbonate clumped isotopes offer a potentially transformational tool to interpret Earth's history, but the proxy is still limited by poor interlaboratory reproducibility. Here, we focus on the uncertainties that result from the analysis of only a few replicate measurements to understand the extent to which unconstrained errors affect calibration relationships and paleoclimate reconstructions. We find that highly precise data can be routinely obtained with multiple replicate analyses, but this is not always done in many laboratories. For instance, using published estimates of external reproducibilities we find that typical clumped isotope measurements (three replicate analyses) have margins of error at the 95% confidence level (CL) that are too large for many applications. These errors, however, can be systematically reduced with more replicate measurements. Second, using a Monte Carlo-type simulation we demonstrate that the degree of disagreement on published calibration slopes is about what we should expect considering the precision of Δ47 data, the number of samples and replicate analyses, and the temperature range covered in published calibrations. Finally, we show that the way errors are typically reported in clumped isotope data can be problematic and lead to the impression that data are more precise than warranted. We recommend that uncertainties in Δ47 data should no longer be reported as the standard error of a few replicate measurements. Instead, uncertainties should be reported as margins of error at a specified confidence level (e.g., 68% or 95% CL). These error bars are a more realistic indication of the reliability of a measurement.
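
    The replication argument can be made concrete with a two-line calculation: for an assumed single-analysis external reproducibility, the 95% margin of error of a mean of n replicates shrinks as t(0.975, n-1)·sd/√n. The reproducibility value below is an assumption for illustration, not a recommended laboratory figure.

```python
# 95% margin of error of a mean of n replicate analyses.
import numpy as np
from scipy import stats

sd = 0.030  # assumed external reproducibility of one analysis (per mil)
for n in (3, 10, 30):
    moe = stats.t.ppf(0.975, n - 1) * sd / np.sqrt(n)
    print(f"n = {n:2d}: 95% CI = mean +/- {moe:.4f} per mil")
```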

  14. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

In this article, a novel approach for 2-channel linear-phase quadrature mirror filter (QMF) bank design is introduced, based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints. Recently proposed nature-inspired optimization techniques, namely cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO), are explored for the design of the QMF bank; the 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, the stopband and the transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its use in optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with existing algorithms; the proposed method gives the best result in terms of peak reconstruction error and transition band error while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of the nature-inspired optimization techniques is also presented, and singles out CS as the best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
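
    As a minimal sketch of what the peak reconstruction error (PRE) measures, the snippet below forms a 2-channel QMF pair from a linear-phase lowpass prototype and reports the peak deviation of the overall amplitude response from unity (one common PRE definition); the prototype here is a plain windowed design, not one of the optimized filters from the paper.

```python
# Peak reconstruction error of a 2-channel QMF bank: for a linear-phase
# prototype h0 and mirror filter h1(n) = (-1)^n h0(n), perfect amplitude
# reconstruction requires |H0|^2 + |H1|^2 to be flat across frequency.
import numpy as np
from scipy.signal import firwin, freqz

h0 = firwin(32, 0.5)                        # lowpass prototype, cutoff at pi/2
h1 = h0 * (-1.0) ** np.arange(h0.size)      # highpass mirror filter
w, H0 = freqz(h0, worN=1024)
_, H1 = freqz(h1, worN=1024)

T = np.abs(H0) ** 2 + np.abs(H1) ** 2       # overall amplitude response
pre = np.max(np.abs(10.0 * np.log10(T)))    # peak deviation from 0 dB
print(f"PRE = {pre:.3f} dB")
```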

  15. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  16. Estimation of the Cesium-137 Source Term from the Fukushima Daiichi Power Plant Using Air Concentration and Deposition Data

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2013-04-01

A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge of the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in a situation of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed an estimation method for the error amplitudes based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique and to Desroziers's scheme. In addition to the estimated released activities, we provided related uncertainties (12 PBq with a std. of 15-20% for cesium-137 and 190-380 PBq with a std. of 5-10% for iodine-131). We also showed that, because of the low number of available observations (a few hundred), the reconstructed activities depended significantly on the method used to estimate the prior errors, even though the orders of magnitude were consistent. In order to use more data, we propose to extend the methods to several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one. Using the activity concentration measurements, together with daily fallout data from prefectures and cumulated deposition data over a region lying approximately 150 km around the nuclear power plant, we can use a few thousand data points in our inverse modeling algorithm to reconstruct the cesium-137 source term. To improve the parameterization of removal processes, rainfall fields have also been corrected using outputs from the mesoscale meteorological model WRF and ground-station rainfall data. As expected, the different methods yield closer results as the number of data increases. Reference: Winiarek, V., M. Bocquet, O. Saunier, A. Mathieu (2012), Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, J. Geophys. Res., 117, D05122, doi:10.1029/2011JD016932.

  17. SU-E-T-202: Comparison of 4D-Measurement-Guided Dose Reconstructions (MGDR) with COMPASS and OCTAVIUS 4D System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leung, R; Wong, M; Lee, V

    2015-06-15

Purpose: To cross-validate the MGDR of COMPASS (IBA Dosimetry GmbH, Germany) and the OCTAVIUS 4D system (PTW, Freiburg, Germany). Methods: Volumetric-modulated arc plans (5 head-and-neck and 3 prostate) collapsed to 40° gantry on the OCTAVIUS 4D phantom in QA mode in Monaco v5.0 (Elekta, CMS, Maryland Heights, MO) were delivered on an Elekta Agility linac. The study was divided into two parts: (1) error-free measurements by the gantry-mounted EvolutionXX 2D array were reconstructed in COMPASS, and by the OCTAVIUS 1500 array in VeriSoft v6.1 (PTW, Freiburg, Germany), to obtain the 3D doses (COM4D and OCTA4D). COM4D and OCTA4D were compared to the raw measurement (OCTA3D) at the same detector plane, for which the OCTAVIUS 1500 was perpendicular to the 0° gantry axis while the plans were delivered at gantry 40°; (2) beam steering errors of energy (Hump = -2%) and symmetry (2T = +2%) were introduced during the delivery of 5 plans to compare the MGDR doses COM4D-Hump (COM4D-2T) and OCTA4D-Hump (OCTA4D-2T) with the raw doses OCTA3D-Hump (OCTA3D-2T) and with OCTA3D, to assess the error reconstruction and detection ability of the MGDR tools. All comparisons used γ-criteria of 2% (local dose)/2 mm and 3%/3 mm. Results: Averaged γ passing rates were 85% and 96% for COM4D, and 94% and 99% for OCTA4D, at the 2%/2mm and 3%/3mm criteria respectively. For error reconstruction, COM4D-Hump (COM4D-2T) showed 81% (93%) at 2%/2mm and 94% (98%) at 3%/3mm, while OCTA4D-Hump (OCTA4D-2T) showed 96% (96%) at 2%/2mm and 99% (99%) at 3%/3mm. For error detection, OCTA3D doses were compared to COM4D-Hump (COM4D-2T), showing γ passing rates of 93% (93%) at 2%/2mm and 98% (98%) at 3%/3mm, and to OCTA4D-Hump (OCTA4D-2T), showing 94% (99%) at 2%/2mm and 81% (96%) at 3%/3mm, respectively. Conclusion: OCTAVIUS MGDR showed better agreement with raw measurements in both error and error-free comparisons. COMPASS MGDR deviated from the raw measurements, possibly owing to beam-modeling uncertainty.

  18. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    2016-06-15

Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurements for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method for the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods, which are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology between the two manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or the presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error introduced by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated with respect to root-mean-squared error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method for point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
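
    A hedged sketch of the SR idea as stated (a Lasso stands in for whichever sparse solver the authors use; all data and names are illustrative): solve for a sparse combination of training point clouds, then propagate the same weights to the corresponding training surfaces.

```python
# Sparse linear representation of a new point cloud over a training set,
# with the weights reused on the surface (reconstruction) manifold.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
train_clouds = rng.normal(size=(20, 300))         # 20 training clouds, flattened
target = 0.6 * train_clouds[2] + 0.4 * train_clouds[7] \
         + 0.01 * rng.normal(size=300)            # new acquisition

model = Lasso(alpha=1e-3, fit_intercept=False).fit(train_clouds.T, target)
w = model.coef_                                   # sparse combination weights
train_surfaces = rng.normal(size=(20, 5000))      # matching training surfaces
surface = w @ train_surfaces                      # propagated reconstruction
print(np.flatnonzero(np.abs(w) > 0.05))           # roughly [2 7]
```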

  19. Effects of refractive index mismatch in optical CT imaging of polymer gel dosimeters.

    PubMed

    Manjappa, Rakesh; Makki S, Sharath; Kumar, Rajesh; Kanhirodan, Rajan

    2015-02-01

We propose an image reconstruction technique, the algebraic reconstruction technique with refraction correction (ART-rc). The proposed method accounts for refractive index mismatches present at the boundary of a gel dosimeter scanner, and also corrects for interior ray refraction. Polymer gel dosimeters with high-dose regions have a higher refractive index and optical density than the background medium; these changes in refractive index at high dose result in interior ray bending. The inclusion of the effects of refraction is an important step in the reconstruction of optical density in gel dosimeters. The proposed ray tracing algorithm models the interior multiple refraction at the inhomogeneities. Jacob's ray tracing algorithm has been modified to calculate the pathlengths of the rays that traverse the higher-dose regions. The algorithm computes the length of the ray in each pixel along its path, which is used as the weight matrix. The algebraic reconstruction technique and pixel-based reconstruction algorithms are used to solve the reconstruction problem. The proposed method is tested with numerical phantoms for various noise levels, and experimental dosimetric results are also presented. The results show that the proposed scheme, ART-rc, reconstructs the optical density inside the dosimeter better than filtered backprojection and conventional algebraic reconstruction approaches. The quantitative improvement using ART-rc is evaluated using the gamma-index. The refraction errors due to regions of different refractive indices are discussed, and the effects of modeling interior refraction in the dose region are presented. The errors propagated by multiple refraction effects have been modeled, and the improvements in reconstruction using the proposed model are presented. The refractive index of the dosimeter is mismatched with the surrounding medium (for dry air or water scanning). The algorithm reconstructs the dose profiles by estimating the refractive indices of multiple inhomogeneities, having different refractive indices and optical densities, embedded in the dosimeter. This is achieved by tracking the path of the ray as it traverses the dosimeter. Extensive simulation studies have been carried out, and the results are found to match the experimental results.
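
    For context, the ART core that ART-rc builds on is a row-action (Kaczmarz) sweep over the ray equations; the system matrix below is an illustrative stand-in (Gaussian for good conditioning, whereas real ART uses nonnegative refraction-corrected pathlength weights).

```python
# Algebraic reconstruction technique: cyclic Kaczmarz updates over ray equations.
import numpy as np

def art(A, b, n_sweeps=100, relax=1.0):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += relax * (b[i] - a @ x) / (a @ a + 1e-12) * a
    return x

rng = np.random.default_rng(7)
A = rng.normal(size=(60, 25))     # toy ray-to-pixel system matrix
x_true = rng.random(25)           # optical density per pixel
b = A @ x_true                    # noiseless projections
print(f"max error: {np.abs(art(A, b) - x_true).max():.2e}")
```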

  20. Low dose CBCT reconstruction via prior contour based total variation (PCTV) regularization: a feasibility study

    NASA Astrophysics Data System (ADS)

    Chen, Yingxuan; Yin, Fang-Fang; Zhang, Yawei; Zhang, You; Ren, Lei

    2018-04-01

Purpose: Compressed sensing reconstruction using total variation (TV) tends to over-smooth edge information by uniformly penalizing the image gradient. The goal of this study is to develop a novel prior contour based TV (PCTV) method to enhance edge information in compressed sensing reconstruction for CBCT. Methods: The edge information is extracted from the prior planning-CT via edge detection. The prior CT is first registered with the on-board CBCT reconstructed with the TV method through rigid or deformable registration. The edge contours in the prior CT are then mapped to the CBCT and used as the weight map for the TV regularization to enhance edge information in the CBCT reconstruction. The PCTV method was evaluated using the extended-cardiac-torso (XCAT) phantom, a physical CatPhan phantom and brain patient data. Results were compared with both the TV and edge-preserving TV (EPTV) methods, which are commonly used for limited-projection CBCT reconstruction. Relative error was used to quantify pixel-value differences, and edge cross-correlation was defined as the similarity of edge information between the reconstructed images and the ground truth. Results: Compared with TV and EPTV, PCTV enhanced the edge information of bone, lung vessels and tumor in the XCAT reconstruction and of complex bony structures in the brain patient CBCT. In the XCAT study using 45 half-fan CBCT projections, relative errors against ground truth were 1.5%, 0.7% and 0.3%, and edge cross-correlations were 0.66, 0.72 and 0.78, for TV, EPTV and PCTV, respectively. PCTV is more robust to reduction of the projection number. Edge enhancement was reduced slightly with noisy projections, but PCTV remained superior to the other methods. PCTV can maintain resolution while reducing noise in the low-mAs CatPhan reconstruction, and low-contrast edges were preserved better with PCTV than with TV and EPTV. Conclusion: PCTV preserved edge information as well as reducing streak artifacts and noise in low-dose CBCT reconstruction. PCTV is superior to the TV and EPTV methods in edge enhancement, which can potentially improve localization accuracy in radiation therapy.
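
    The core of the approach, as described, is a spatially weighted TV penalty in which prior-contour pixels receive a smaller smoothing weight. The sketch below shows such a penalty on a toy image; the weighting scheme and values are illustrative assumptions, not the paper's exact regularizer.

```python
# Contour-weighted total variation: relax the gradient penalty on prior edges.
import numpy as np

def weighted_tv(img, edge_mask, edge_weight=0.2):
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    w = np.where(edge_mask, edge_weight, 1.0)  # smaller penalty on contours
    return float(np.sum(w * np.sqrt(gx**2 + gy**2)))

img = np.zeros((8, 8)); img[:, 4:] = 1.0        # toy image with one vertical edge
edges = np.zeros((8, 8), dtype=bool); edges[:, 3:5] = True

print(weighted_tv(img, edges))                  # 1.6: edge lightly penalized
print(weighted_tv(img, np.zeros((8, 8), bool))) # 8.0: uniform TV penalty
```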

  2. A software tool of digital tomosynthesis application for patient positioning in radiotherapy.

    PubMed

    Yan, Hui; Dai, Jian-Rong

    2016-03-08

Digital tomosynthesis (DTS) is an imaging modality that reconstructs tomographic images from two-dimensional kV projections covering a narrow scan angle. Compared with conventional cone-beam CT (CBCT), it requires less time and radiation dose for data acquisition, and it is feasible to apply this technique to patient positioning in radiotherapy. To facilitate its clinical application, a software tool was developed and the reconstruction processes were accelerated by a graphics processing unit (GPU). Two reconstruction and two registration processes are required for the DTS application, in contrast to the conventional CBCT application, which requires one image reconstruction process and one image registration process. The reconstruction stage consists of the production of two types of DTS. One type is reconstructed from cone-beam (CB) projections covering a narrow scan angle and is named onboard DTS (ODTS); it represents the real patient position in the treatment room. The other type is reconstructed from digitally reconstructed radiographs (DRRs) and is named reference DTS (RDTS); it represents the ideal patient position in the treatment room. Prior to the reconstruction of the RDTS, the DRRs are reconstructed from the planning CT using the same acquisition settings as the CB projections. The registration stage consists of two matching processes between ODTS and RDTS: the target shifts in the lateral and longitudinal axes are obtained from the match between ODTS and RDTS in the coronal view, while the target shifts in the longitudinal and vertical axes are obtained from the match in the sagittal view. In this software, both the DRR and DTS reconstruction algorithms were implemented in GPU environments for acceleration. A comprehensive evaluation of this software tool was performed, including geometric accuracy, image quality, registration accuracy, and reconstruction efficiency. The average correlation coefficient between DRR/DTS generated by the GPU-based and CPU-based algorithms is 0.99. Based on measurements of a cube phantom on DTS, the geometric errors are within 0.5 mm in all three axes. For both the cube phantom and a pelvic phantom, the registration errors are within 0.5 mm in all three axes. Compared with the performance of the CPU-based algorithms, the DRR and DTS reconstructions are faster by a factor of 15 to 20. A GPU-based software tool was thus developed for the DTS application in patient positioning for radiotherapy. The geometric and registration accuracy meet the clinical requirements of patient setup, and the high performance of the DRR and DTS reconstruction algorithms was achieved through the GPU-based computation environment. It is a useful software tool for researchers and clinicians evaluating DTS for patient positioning in radiotherapy.

  3. Image Reconstruction for Interferometric Imaging of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    DeSantis, Zachary J.

    Imaging distant objects at high resolution has always presented a challenge due to the diffraction limit. Larger apertures improve the resolution, but at some point the cost of engineering, building, and correcting phase aberrations of large apertures becomes prohibitive. Interferometric imaging uses the Van Cittert-Zernike theorem to form an image from measurements of spatial coherence. This effectively allows the synthesis of a large aperture from two or more smaller telescopes to improve the resolution. We apply this method to imaging geosynchronous satellites with a ground-based system. Imaging a dim object from the ground presents unique challenges. The atmosphere creates errors in the phase measurements. The measurements are taken simultaneously across a large bandwidth of light; the atmospheric piston error therefore manifests as a linear phase error across the spectral measurements. Because the objects are faint, many of the measurements are expected to have a poor signal-to-noise ratio (SNR). This rules out commonly used techniques such as closure phase, a standard technique in astronomical interferometric imaging for making partial phase measurements in the presence of atmospheric error. The bulk of our work has focused on forming an image from sub-Nyquist sampled data in the presence of these linear phase errors without relying on closure phase techniques. We present an image reconstruction algorithm that successfully forms an image in the presence of these linear phase errors, and we demonstrate its success in both simulation and laboratory experiments.

  4. Land surveys show regional variability of historical fire regimes and dry forest structure of the western United States.

    PubMed

    Baker, William L; Williams, Mark A

    2018-03-01

    An understanding of how historical fire and structure in dry forests (ponderosa pine, dry mixed conifer) varied across the western United States remains incomplete. Yet, fire strongly affects ecosystem services, and forest restoration programs are underway. We used General Land Office survey reconstructions from the late 1800s across 11 landscapes covering ~1.9 million ha in four states to analyze spatial variation in fire regimes and forest structure. We first synthesized the state of validation of our methods using 20 modern validations, 53 historical cross-validations, and corroborating evidence. These show our method creates accurate reconstructions with low errors. One independent modern test reported high error, but did not replicate our method and made many calculation errors. Using reconstructed parameters of historical fire regimes and forest structure from our validated methods, forests were found to be non-uniform across the 11 landscapes, but grouped into three geographical areas. Each had a mixture of fire severities, but was dominated by low-severity fire and low median tree density in Arizona, mixed-severity fire and intermediate to high median tree density in Oregon-California, and high-severity fire and intermediate median tree density in Colorado. Programs to restore fire and forest structure could benefit from regional frameworks, rather than one size fits all. © 2018 by the Ecological Society of America.

  5. 1996-2007 Interannual Spatio-Temporal Variability in Snowmelt in Two Montane Watersheds

    NASA Astrophysics Data System (ADS)

    Jepsen, S. M.; Molotch, N. P.; Rittger, K. E.

    2009-12-01

    Snowmelt is a primary water source for ecosystems within, and urban/agricultural centers near, mountain regions. Stream chemistry from montane catchments is controlled by the flowpaths of water from snowmelt and the timing and duration of snow coverage. A process-level understanding of the variability in these processes requires an understanding of the effect of changing climate and anthropogenic loading on spatio-temporal snowmelt patterns. With this as our objective, we are applying a snow reconstruction model to two well-studied montane watersheds, Tokopah Basin (TOK), California and Green Lakes Valley (GLV), Colorado, to examine interannual variability in the timing and location of snowmelt in response to variable climate conditions during the period from 1996 to 2007. The reconstruction model back-solves for snowmelt by combining surface energy fluxes, inferred from meteorological data, with sequences of melt-season snow images derived from satellite data (i.e., snowmelt depletion curves). Preliminary model results for 2002 were tested against measured snow water equivalent (SWE) and hydrograph data for the two watersheds. The computed maximum SWE values averaged over TOK and GLV were 94 cm (~+17% error) and 50.2 cm (~+1% error), respectively. We present an analysis of the interannual variability in these errors, in addition to reconstructed snowmelt maps over different land cover types under the changing climate conditions between 1996 and 2007, focusing on how snowmelt patterns vary with interannual variation in climate.

  6. Conditions that influence the accuracy of anthropometric parameter estimation for human body segments using shape-from-silhouette

    NASA Astrophysics Data System (ADS)

    Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.

    2005-01-01

    Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments depending on the number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 or more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with an increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
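
    A toy voxel-carving sketch of shape-from-silhouette under stated assumptions: the camera models are hypothetical stand-in functions mapping world points to pixel coordinates, and the silhouettes are binary masks. It also makes the concavity limitation above concrete, since a voxel survives only if every camera sees it inside the silhouette:

    ```python
    import numpy as np

    def carve_visual_hull(silhouettes, projections, voxels):
        """Shape-from-silhouette by voxel carving (illustrative sketch).

        silhouettes : list of (H, W) boolean masks, one per camera.
        projections : list of functions mapping (N, 3) world points to
                      (N, 2) integer (column, row) pixel coordinates;
                      hypothetical stand-ins for calibrated cameras.
        voxels      : (N, 3) array of voxel centre coordinates.
        """
        keep = np.ones(len(voxels), dtype=bool)
        for mask, project in zip(silhouettes, projections):
            uv = project(voxels)
            h, w = mask.shape
            inside = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
                      (uv[:, 1] >= 0) & (uv[:, 1] < h))
            keep &= inside  # outside any view -> carved away
            keep[inside] &= mask[uv[inside, 1], uv[inside, 0]]
        return voxels[keep]  # voxels consistent with every silhouette
    ```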

  7. Forensic Facial Reconstruction: The Final Frontier.

    PubMed

    Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-09-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. Through this article, we attempt to review the different methods of facial reconstruction reported in the literature. There are several techniques of facial reconstruction, varying from two-dimensional drawings to three-dimensional clay models. With the advancement of 3D technology, a rapid, efficient and cost-effective computerized 3D forensic facial reconstruction method has been developed, which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction, but the combination Manchester method has been reported to be the best and most accurate for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down and a positive identification may be given by the more conventional methods of forensic medicine. Facial reconstruction makes visual identification by the individual's family and associates easier and more definite.

  8. Maximum entropy reconstruction of poloidal magnetic field and radial electric field profiles in tokamaks

    NASA Astrophysics Data System (ADS)

    Chen, Yihang; Xiao, Chijie; Yang, Xiaoyi; Wang, Tianbo; Xu, Tianchao; Yu, Yi; Xu, Min; Wang, Long; Lin, Chen; Wang, Xiaogang

    2017-10-01

    The Laser-driven Ion beam trace probe (LITP) is a new diagnostic method for measuring the poloidal magnetic field (Bp) and radial electric field (Er) in tokamaks. LITP injects a laser-driven ion beam into the tokamak, and Bp and Er profiles can be reconstructed using tomography methods. A reconstruction code has been developed to validate the LITP theory, and both 2D reconstruction of Bp and simultaneous reconstruction of Bp and Er have been attained. To reconstruct from experimental data with noise, Maximum Entropy and Gaussian-Bayesian tomography methods were applied and improved according to the characteristics of the LITP problem. With these improved methods, a reconstruction error level below 15% has been attained with a data noise level of 10%. These methods will be further tested and applied in the following LITP experiments. Supported by the ITER-CHINA program 2015GB120001, CHINA MOST under 2012YQ030142, and the National Natural Science Foundation of China under 11575014 and 11375053.

  9. Joint Dictionary Learning for Multispectral Change Detection.

    PubMed

    Lu, Xiaoqiang; Yuan, Yuan; Zheng, Xiangtao

    2017-04-01

    Change detection is one of the most important applications of remote sensing technology. It is a challenging task due to the obvious variations in the radiometric value of spectral signatures and the limited capability of utilizing spectral information. In this paper, an improved sparse coding method for change detection is proposed. The intuition of the proposed method is that unchanged pixels in different images can be well reconstructed by the joint dictionary, which corresponds to knowledge of unchanged pixels, while changed pixels cannot. First, a query image pair is projected onto the joint dictionary to constitute the knowledge of unchanged pixels. Then the reconstruction error is obtained to discriminate between the changed and unchanged pixels in the different images. To select proper thresholds for determining changed regions, an automatic threshold selection strategy is presented by minimizing the reconstruction errors of the changed pixels. Extensive experiments on multispectral data have been conducted, and comparisons with state-of-the-art methods demonstrate the superiority of the proposed method. The contributions of the proposed method can be summarized as follows: 1) joint dictionary learning is proposed to explore the intrinsic information of different images for change detection, so that change detection can be transformed into a sparse representation problem; to the authors' knowledge, few publications utilize joint dictionary learning in change detection; 2) an automatic threshold selection strategy is presented, which minimizes the reconstruction errors of the changed pixels without prior assumptions about the spectral signature; as a result, the threshold value provided by the proposed method can adapt to different data due to the characteristics of joint dictionary learning; and 3) the proposed method makes no prior assumption about the modeling and handling of the spectral signature, and so can be adapted to different data.
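
    A simplified sketch of the reconstruction-error test at the core of this approach, with plain least squares standing in for the sparse coding step and an externally supplied threshold standing in for the automatic selection strategy:

    ```python
    import numpy as np

    def change_map(x1, x2, D, thresh):
        """Flag changed pixels by joint-dictionary reconstruction error.

        x1, x2 : (n_pixels, n_bands) spectra of two co-registered images.
        D      : (2*n_bands, n_atoms) joint dictionary, assumed learned
                 from unchanged pixels and stacked band-wise for both dates.
        """
        X = np.hstack([x1, x2]).T                    # (2*n_bands, n_pixels)
        codes, *_ = np.linalg.lstsq(D, X, rcond=None)
        err = np.linalg.norm(X - D @ codes, axis=0)  # per-pixel residual
        return err > thresh                          # poorly fit -> changed
    ```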

  10. Probabilistic evaluation of earthquake detection and location capability for Illinois, Indiana, Kentucky, Ohio, and West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauk, F.J.; Christensen, D.H.

    1980-09-01

    Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978) which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum m_b magnitude contour maps. Regional 90% confidence error ellipsoids are included for m_b magnitude events from 2.0 through 5.0 at 0.5 m_b unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggests that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.
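
    A small sketch of the network-detection calculation behind such maps, assuming per-station detection probabilities for a given magnitude and epicentre have already been computed (NETWORTH-style codes derive them from station noise levels and distance attenuation, which is not reproduced here); the Poisson-binomial recursion then gives the probability that enough stations detect the event:

    ```python
    import numpy as np

    def prob_network_detection(p_station, k_min=4):
        """P(at least k_min stations detect an event).

        p_station : per-station detection probabilities (assumed given).
        Uses the Poisson-binomial recursion over stations.
        """
        dist = np.zeros(len(p_station) + 1)  # dist[k] = P(k detections)
        dist[0] = 1.0
        for p in p_station:
            dist[1:] = dist[1:] * (1 - p) + dist[:-1] * p
            dist[0] *= 1 - p
        return dist[k_min:].sum()

    # Five stations at 0.9 each: prob_network_detection([0.9] * 5) ~= 0.92
    ```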

  11. Little Boy replication: justification and construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malenfant, R.E.

    A reconstruction of the Little Boy weapon allowed experiments to evaluate yield, leakage measurements for comparison with calculations, and phenomenological measurements to evaluate various in-situ dosimeters. The reconstructed weapon was operated at sustained delayed critical at the Los Alamos Critical Assembly Facility. The present experiments provide a wealth of information to benchmark calculations and demonstrate that the 1965 measurements on the Ichiban assembly (a spherical mockup of Little Boy) were in error.

  12. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    PubMed Central

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have therefore developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem was highest with a 28-mm lens at the first and third camera positions, which also yielded the largest number of reconstructed fine-scale 3D surface details of leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system captures high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  13. Fourier transform profilometry (FTP) using an innovative band-pass filter for accurate 3-D surface reconstruction

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Ho, Hsuan-Wei; Nguyen, Xuan-Loc

    2010-02-01

    This article presents a novel band-pass filter for Fourier transform profilometry (FTP) for accurate 3-D surface reconstruction. FTP can obtain 3-D surface profiles from one-shot images to achieve high-speed measurement. However, its measurement accuracy is significantly influenced by the spectrum filtering process required to extract the phase information representing various surface heights. Using the commonly applied 2-D Hanning filter, the measurement errors can be up to 5-10% of the overall measuring height, which is unacceptable for many industrial applications. To resolve this issue, the article proposes an elliptical band-pass filter for extracting the spectral region possessing the essential phase information for reconstructing accurate 3-D surface profiles. The elliptical band-pass filter was developed and optimized to reconstruct 3-D surface models with improved measurement accuracy. Experimental results verify that the accuracy can be effectively enhanced by using the elliptical filter. Accuracy improvements of 44.1% and 30.4% are achieved in 3-D and sphericity measurement, respectively, when the elliptical filter replaces the traditional filter as the band-pass filtering method. Employing the developed method, the maximum measured error can be kept within 3.3% of the overall measuring range.
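
    A minimal sketch of the FTP filtering step with an elliptical pass band, assuming the carrier (fundamental lobe) location and the semi-axes are supplied by the user; the optimization of those axes described above is not reproduced:

    ```python
    import numpy as np

    def ftp_phase(fringe, carrier, a, b):
        """Extract the wrapped FTP phase with an elliptical band-pass.

        fringe  : (H, W) one-shot fringe image.
        carrier : (fy, fx) centre of the fundamental lobe, in FFT bins
                  relative to the centred spectrum.
        a, b    : semi-axes of the elliptical pass band along fy and fx.
        """
        H, W = fringe.shape
        F = np.fft.fftshift(np.fft.fft2(fringe))
        fy, fx = np.ogrid[-(H // 2):H - H // 2, -(W // 2):W - W // 2]
        ellipse = (((fy - carrier[0]) / a) ** 2 +
                   ((fx - carrier[1]) / b) ** 2) <= 1.0
        band = np.where(ellipse, F, 0.0)  # keep only the elliptical region
        return np.angle(np.fft.ifft2(np.fft.ifftshift(band)))
    ```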

  14. Stereovision-based integrated system for point cloud reconstruction and simulated brain shift validation.

    PubMed

    Yang, Xiaochen; Clements, Logan W; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C; Dawant, Benoit M; Miga, Michael I

    2017-07-01

    Intraoperative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery navigation systems in neurosurgery. A computational model driven by sparse data has been proposed as a cost-effective method to compensate for cortical surface and volumetric displacements. We present a mock environment developed to acquire stereo images from a tracked operating microscope and to reconstruct three-dimensional point clouds from these images. A reconstruction error of 1 mm is estimated by using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached tracking rigid body that facilitates the recording of the position of the microscope via a commercial optical tracking system as it moves during the procedure. Point clouds, reconstructed under different microscope positions, are registered into the same space to compute the feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. When comparing our tracked microscope stereo-pair measure of mock vessel displacements to the measurement determined by the independent optically tracked stylus marking, the displacement error was [Formula: see text] on average. These results demonstrate the practicality of using a tracked stereoscopic microscope as an alternative to laser range scanners to collect sufficient intraoperative information for brain shift correction.

  15. Structure Calculation and Reconstruction of Discrete-State Dynamics from Residual Dipolar Couplings.

    PubMed

    Cole, Casey A; Mukhopadhyay, Rishi; Omar, Hanin; Hennig, Mirko; Valafar, Homayoun

    2016-04-12

    Residual dipolar couplings (RDCs) acquired by nuclear magnetic resonance (NMR) spectroscopy are an indispensable source of information in investigation of molecular structures and dynamics. Here, we present a comprehensive strategy for structure calculation and reconstruction of discrete-state dynamics from RDC data that is based on the singular value decomposition (SVD) method of order tensor estimation. In addition to structure determination, we provide a mechanism of producing an ensemble of conformations for the dynamical regions of a protein from RDC data. The developed methodology has been tested on simulated RDC data with ±1 Hz of error from an 83 residue α protein (PDB ID 1A1Z ) and a 213 residue α/β protein DGCR8 (PDB ID 2YT4 ). In nearly all instances, our method reproduced the structure of the protein including the conformational ensemble to within less than 2 Å. On the basis of our investigations, arc motions with more than 30° of rotation are identified as internal dynamics and are reconstructed with sufficient accuracy. Furthermore, states with relative occupancies above 20% are consistently recognized and reconstructed successfully. Arc motions with a magnitude of 15° or relative occupancy of less than 10% are consistently unrecognizable as dynamical regions within the context of ±1 Hz of error.

  16. A model-based 3D patient-specific pre-treatment QA method for VMAT using the EPID

    NASA Astrophysics Data System (ADS)

    McCowan, P. M.; Asuni, G.; van Beek, T.; van Uytven, E.; Kujanpaa, K.; McCurdy, B. M. C.

    2017-02-01

    This study reports the development and validation of a model-based, 3D patient dose reconstruction method for pre-treatment quality assurance using EPID images. The method is also investigated for sensitivity to potential MLC delivery errors. Each cine-mode EPID image acquired during plan delivery was processed using a previously developed back-projection dose reconstruction model providing a 3D dose estimate on the CT simulation data. Validation was carried out using 24 SBRT-VMAT patient plans by comparing: (1) ion chamber point dose measurements in a solid water phantom, (2) the treatment planning system (TPS) predicted 3D dose to the EPID reconstructed 3D dose in a solid water phantom, and (3) the TPS predicted 3D dose to the EPID and our forward predicted reconstructed 3D dose in the patient (CT data). AAA and AcurosXB were used for TPS predictions. Dose distributions were compared using 3%/3 mm (95% tolerance) and 2%/2 mm (90% tolerance) γ-tests in the planning target volume (PTV) and 20% dose volumes. The average percentage point dose differences between the ion chamber and the EPID, AcurosXB, and AAA were 0.73 ± 1.25%, 0.38 ± 0.96% and 1.06 ± 1.34%, respectively. For the patient (CT) dose comparisons, seven (3%/3 mm) and nine (2%/2 mm) plans failed the EPID versus AAA comparison. All plans passed the EPID versus AcurosXB and the EPID versus forward model γ-comparisons. Four types of MLC errors (opening, shifting, stuck, and retracting), of varying magnitude (0.2, 0.5, 1.0, 2.0 mm), were introduced into six different SBRT-VMAT plans. γ-comparisons of the erroneous EPID dose and the original predicted dose were carried out using the same criteria as above. For all plans, the sensitivity testing using a 3%/3 mm γ-test in the PTV successfully detected MLC errors on the order of 1.0 mm, except for the single-leaf retraction-type error. A 2%/2 mm criterion produced similar results with two additional detected errors.
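
    For orientation, a naive sketch of the global γ-test used in these comparisons; real implementations interpolate finely and handle borders properly, whereas this version searches integer grid shifts only and wraps at the edges:

    ```python
    import numpy as np

    def gamma_map(dose_eval, dose_ref, spacing, dd=0.03, dta=3.0, search=5):
        """Naive global gamma analysis (e.g. 3%/3 mm) on matching 2D grids.

        spacing : grid spacing in mm; dd is a fraction of the maximum
        reference dose; dta is the distance-to-agreement criterion in mm.
        """
        norm = dd * dose_ref.max()
        gamma = np.full(dose_ref.shape, np.inf)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                dist2 = ((dy * spacing) ** 2 + (dx * spacing) ** 2) / dta ** 2
                shifted = np.roll(dose_eval, (dy, dx), axis=(0, 1))
                diff2 = (shifted - dose_ref) ** 2 / norm ** 2
                gamma = np.minimum(gamma, np.sqrt(diff2 + dist2))
        return gamma  # pass rate: np.mean(gamma <= 1)
    ```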

  17. Application of a Laplace transform pair model for high-energy x-ray spectral reconstruction.

    PubMed

    Archer, B R; Almond, P R; Wagner, L K

    1985-01-01

    A Laplace transform pair model, previously shown to accurately reconstruct x-ray spectra at diagnostic energies, has been applied to megavoltage energy beams. The inverse Laplace transforms of 2-, 6-, and 25-MV attenuation curves were evaluated to determine the energy spectra of these beams. The 2-MV data indicate that the model can reliably reconstruct spectra in the low megavoltage range. Experimental limitations in acquiring the 6-MV transmission data demonstrate the sensitivity of the model to systematic experimental error. The 25-MV data result in a physically realistic approximation of the present spectrum.

  18. Does press-fit technique reduce tunnel volume enlargement after anterior cruciate ligament reconstruction with autologous hamstring tendons? A prospective randomized computed tomography study.

    PubMed

    Hwang, Dae-Hee; Shetty, Gautam M; Kim, Jong In; Kwon, Jae Ho; Song, Jae-Kwang; Muñoz, Michael; Lee, Jun Seop; Nha, Kyung-Wook

    2013-01-01

    The purpose of this prospective, randomized, computed tomography-based study was to investigate whether the press-fit technique reduces tunnel volume enlargement (TVE) and improves the clinical outcome after anterior cruciate ligament reconstruction at a minimum follow-up of 1 year compared with conventional technique. Sixty-nine patients undergoing primary ACL reconstruction using hamstring autografts were randomly allocated to either the press-fit technique group (group A) or conventional technique group (group B). All patients were evaluated for TVE and tunnel widening using computed tomography scanning, for functional outcome using International Knee Documentation Committee and Lysholm scores, for rotational stability using the pivot-shift test, and for anterior laxity using the KT-2000 arthrometer at a minimum of 1-year follow-up. There were no significant differences in TVE between the 2 groups. In group A, in which the press-fit technique was used, mean volume enlargement in the femoral tunnel was 65% compared with 71.5% in group B (P = .84). In group A, 57% (20 of 35) of patients developed femoral TVE compared with 67% (23 of 34) of patients in group B (P = .27). Both groups showed no significant difference for functional outcome (mean Lysholm score P = .73, International Knee Documentation Committee score P = .15), or knee laxity (anterior P = .78, rotational P = .22) at a minimum follow-up of 1 year. In a comparison of press-fit and conventional techniques, there were no significant differences in TVE and clinical outcome at short-term follow-up. Level II, therapeutic study, prospective randomized clinical trial. Copyright © 2013 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  19. Orbital Floor Reconstruction with Free Flaps after Maxillectomy

    PubMed Central

    Sampathirao, Leela Mohan C. S. R.; Thankappan, Krishnakumar; Duraisamy, Sriprakash; Hedne, Naveen; Sharma, Mohit; Mathew, Jimmy; Iyer, Subramania

    2013-01-01

    Background The purpose of this study is to evaluate the outcome of orbital floor reconstruction with free flaps after maxillectomy. Methods This was a retrospective analysis of 34 consecutive patients who underwent maxillectomy with orbital floor removal for malignancies, reconstructed with free flaps. A cross-sectional survey to assess the functional and esthetic outcome was done in 28 patients who were alive and disease-free, with a minimum of 6 months of follow-up. Results Twenty-six patients had bony reconstruction, and eight had soft tissue reconstruction. Free fibula flap was the commonest flap used (n = 14). Visual acuity was normal in 86%. Eye movements were normal in 92%. Abnormal globe position resulted in nine patients. Esthetic satisfaction was good in 19 patients (68%). Though there was no statistically significant difference in outcome of visual acuity, eye movement, and patient esthetic satisfaction between patients with bony and soft tissue reconstruction, more patients without bony reconstruction had abnormal globe position (p = 0.040). Conclusion Free tissue transfer has improved the results of orbital floor reconstruction after total maxillectomy, preserving the eye. Good functional and esthetic outcome was achieved. Though our study favors a bony orbital reconstruction, a larger study with adequate power and equal distribution of patients among the groups would be needed to determine this. Free fibula flap remains the commonest choice when a bony reconstruction is contemplated. PMID:24436744

  1. We introduce an algorithm for the simultaneous reconstruction of faults and slip fields. We prove that the minimum of a related regularized functional converges to the unique solution of the fault inverse problem. We consider a Bayesian approach. We use a parallel multi-core platform and we discuss techniques to save on computational time.

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction and prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation, we developed an algorithm for the numerical solution of the stochastic minimization problem that can be easily implemented on a parallel multi-core platform, and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data recorded during a slow slip event in Guerrero, Mexico.

  2. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos; Pina, Robert

    2005-05-17

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data, and thus the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  3. Feature-constrained surface reconstruction approach for point cloud data acquired with 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai

    2008-04-01

    Surface reconstruction is an important task in the fields of 3D GIS, computer aided design and computer graphics (CAD & CG), virtual simulation, and so on. Building on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud using the rules of curvature extrema and a minimum spanning tree. By projecting local sample points onto fitted tangent planes and using the extracted features to guide and constrain the process of local triangulation and surface propagation, the topological relationships among sample points can be established. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that a correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods while avoiding improper propagation of normals across sharp edges, which greatly improves the applicability of incremental surface reconstruction. Moreover, an appropriate k-neighborhood helps to recognize insufficiently sampled areas and boundary parts, so the presented approach can be used to reconstruct both open and closed surfaces without additional interference.

  4. In Search of Grid Converged Solutions

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2010-01-01

    Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
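
    A short sketch of how the observed order of accuracy is typically computed from three systematically refined grids, assuming monotone convergence and a constant refinement ratio; this is the standard Richardson-style check, not the truncation-error evaluation used in the study:

    ```python
    import numpy as np

    def observed_order(f_coarse, f_medium, f_fine, r=2.0):
        """Observed order p from a scalar functional on three meshes
        with constant refinement ratio r (assumes monotone convergence)."""
        p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
        # Richardson-extrapolated estimate of the grid-converged value:
        f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
        return p, f_exact
    ```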

  5. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

    The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As an output the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.

  6. A new enhanced index tracking model in portfolio optimization with sum weighted approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng

    2017-04-01

    Index tracking is a portfolio management approach that aims to construct an optimal portfolio achieving a return similar to the benchmark index return at minimum tracking error, without purchasing all the stocks that make up the index. Enhanced index tracking is an improved approach that aims to generate a higher portfolio return than the benchmark index return while minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate a higher mean return than the benchmark index at minimum tracking error. Moreover, the proposed model outperforms the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach which yields a 67% improvement in portfolio mean return compared to the existing model.
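
    A hedged sketch of an enhanced index tracking optimisation: the paper's sum weighted formulation is not reproduced, so a generic objective (tracking error minus a reward on mean excess return) with long-only, fully-invested weights stands in:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def track_index(R_stocks, r_index, w_excess=0.0):
        """Long-only weights minimising tracking error vs a benchmark.

        R_stocks : (T, n) stock return matrix; r_index : (T,) index returns.
        w_excess > 0 also rewards beating the index (enhanced tracking).
        """
        T, n = R_stocks.shape

        def objective(w):
            diff = R_stocks @ w - r_index  # period-by-period excess return
            return np.sqrt(np.mean(diff ** 2)) - w_excess * np.mean(diff)

        res = minimize(objective, np.full(n, 1.0 / n),
                       bounds=[(0.0, 1.0)] * n, method='SLSQP',
                       constraints=({'type': 'eq',
                                     'fun': lambda w: w.sum() - 1.0},))
        return res.x
    ```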

  7. The Power of the Spectrum: Combining Numerical Proxy System Models with Analytical Error Spectra to Better Understand Timescale Dependent Proxy Uncertainty

    NASA Astrophysics Data System (ADS)

    Dolman, A. M.; Laepple, T.; Kunz, T.

    2017-12-01

    Understanding the uncertainties associated with proxy-based reconstructions of past climate is critical if they are to be used to validate climate models and contribute to a comprehensive understanding of the climate system. Here we present two related and complementary approaches to quantifying proxy uncertainty. The proxy forward model (PFM) "sedproxy" (bitbucket.org/ecus/sedproxy) numerically simulates the creation, archiving and observation of marine sediment archived proxies such as Mg/Ca in foraminiferal shells and the alkenone unsaturation index UK'37. It includes the effects of bioturbation, bias due to seasonality in the rate of proxy creation, aliasing of the seasonal temperature cycle into lower frequencies, and error due to cleaning, processing and measurement of samples. Numerical PFMs have the advantage of being very flexible, allowing many processes to be modelled and assessed for their importance. However, as more and more proxy-climate data become available, their use in advanced data products necessitates rapid estimates of uncertainties for both the raw reconstructions and their smoothed/derived products, where individual measurements have been aggregated to coarser time scales or time-slices. To address this, we derive closed-form expressions for the power spectral density of the various error sources. The power spectra describe both the magnitude and autocorrelation structure of the error, allowing timescale-dependent proxy uncertainty to be estimated from a small number of parameters describing the nature of the proxy, and some simple assumptions about the variance of the true climate signal. We demonstrate and compare both approaches for time-series of the last millennium, the Holocene, and the deglaciation. While the numerical forward model can create pseudoproxy records driven by climate model simulations, the analytical model of proxy error allows for a comprehensive exploration of parameter space and mapping of climate signal reconstructability, conditional on the climate and sampling conditions.

  8. Reconstruction of the mass distribution of galaxy clusters from the inversion of the thermal Sunyaev-Zel'dovich effect

    NASA Astrophysics Data System (ADS)

    Majer, C. L.; Meyer, S.; Konrad, S.; Sarli, E.; Bartelmann, M.

    2016-07-01

    This paper continues a series in which we intend to show how all observables of galaxy clusters can be combined to recover the two-dimensional, projected gravitational potential of individual clusters. Our goal is to develop a non-parametric algorithm for joint cluster reconstruction taking all cluster observables into account. For this reason we focus on the line-of-sight projected gravitational potential, proportional to the lensing potential, in order to extend existing reconstruction algorithms. In this paper, we begin with the relation between the Compton-y parameter and the Newtonian gravitational potential, assuming hydrostatic equilibrium and a polytropic stratification of the intracluster gas. Extending our first publication we now consider a spheroidal rather than a spherical cluster symmetry. We show how a Richardson-Lucy deconvolution can be used to convert the intensity change of the CMB due to the thermal Sunyaev-Zel'dovich effect into an estimate for the two-dimensional gravitational potential. We apply our reconstruction method to a cluster based on an N-body/hydrodynamical simulation processed with the characteristics (resolution and noise) of the ALMA interferometer for which we achieve a relative error of ≲20 per cent for a large fraction of the virial radius. We further apply our method to an observation of the galaxy cluster RXJ1347 for which we can reconstruct the potential with a relative error of ≲20 per cent for the observable cluster range.
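
    A generic sketch of the Richardson-Lucy iteration invoked above, under the usual assumptions (non-negative float data, known point spread function); the cluster-specific kernel and geometry are not reproduced:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, n_iter=50):
        """Richardson-Lucy deconvolution of a 2D map."""
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full(observed.shape, observed.mean(), dtype=float)
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode='same')
            ratio = observed / np.maximum(blurred, 1e-12)  # avoid /0
            estimate *= fftconvolve(ratio, psf_mirror, mode='same')
        return estimate
    ```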

  9. Quantitative susceptibility mapping: Report from the 2016 reconstruction challenge.

    PubMed

    Langkammer, Christian; Schweser, Ferdinand; Shmueli, Karin; Kames, Christian; Li, Xu; Guo, Li; Milovic, Carlos; Kim, Jinsuh; Wei, Hongjiang; Bredies, Kristian; Buch, Sagar; Guo, Yihao; Liu, Zhe; Meineke, Jakob; Rauscher, Alexander; Marques, José P; Bilgic, Berkin

    2018-03-01

    The aim of the 2016 quantitative susceptibility mapping (QSM) reconstruction challenge was to test the ability of various QSM algorithms to recover the underlying susceptibility from phase data faithfully. Gradient-echo images of a healthy volunteer were acquired at 3T in a single orientation with 1.06 mm isotropic resolution. A reference susceptibility map was provided, which was computed using the susceptibility tensor imaging algorithm on data acquired at 12 head orientations. Susceptibility maps calculated from the single orientation data were compared against the reference susceptibility map. Deviations were quantified using the following metrics: root mean squared error (RMSE), structure similarity index (SSIM), high-frequency error norm (HFEN), and the error in selected white and gray matter regions. Twenty-seven submissions were evaluated. Most of the best scoring approaches estimated the spatial frequency content in the ill-conditioned domain of the dipole kernel using compressed sensing strategies. The top 10 maps in each category had similar error metrics but substantially different visual appearance. Because QSM algorithms were optimized to minimize error metrics, the resulting susceptibility maps suffered from over-smoothing and conspicuity loss in fine features such as vessels. As such, the challenge highlighted the need for better numerical image quality criteria. Magn Reson Med 79:1661-1673, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
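
    Two of the challenge metrics are easy to state concretely; a sketch assuming both maps are already co-registered and masked, with HFEN computed as the RMSE of Laplacian-of-Gaussian responses (the kernel width here is an assumption):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def rmse_pct(recon, ref):
        """Root-mean-squared error as a percentage of the reference norm."""
        return 100.0 * np.linalg.norm(recon - ref) / np.linalg.norm(ref)

    def hfen_pct(recon, ref, sigma=1.5):
        """High-frequency error norm: RMSE of LoG-filtered maps, which
        emphasises fine features (e.g. vessels) that plain RMSE
        under-weights -- the issue the challenge highlighted."""
        return rmse_pct(gaussian_laplace(recon, sigma),
                        gaussian_laplace(ref, sigma))
    ```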

  10. Initial evaluation of discrete orthogonal basis reconstruction of ECT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, E.B.; Donohue, K.D.

    1996-12-31

    Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrants further investigation of DOBR as a means of ECT image reconstruction.

  11. Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications

    DTIC Science & Technology

    2005-04-01

    Genetic algorithms evolved coefficient sets describing inverse transforms and matched forward/inverse transform pairs that consistently outperform wavelets for image compression and reconstruction applications under conditions subject to quantization error.

  12. Color image generation for screen-scanning holographic display.

    PubMed

    Takaki, Yasuhiro; Matsumoto, Yuji; Nakajima, Tatsumi

    2015-10-19

    Horizontally scanning holography using a microelectromechanical system spatial light modulator (MEMS-SLM) can provide reconstructed images with an enlarged screen size and an increased viewing zone angle. Herein, we propose techniques to enable color image generation for a screen-scanning display system employing a single MEMS-SLM. Higher-order diffraction components generated by the MEMS-SLM for R, G, and B laser lights were coupled by providing proper illumination angles on the MEMS-SLM for each color. An error diffusion technique to binarize the hologram patterns was developed, in which the error diffusion directions were determined for each color. Color reconstructed images with a screen size of 6.2 in. and a viewing zone angle of 10.2° were generated at a frame rate of 30 Hz.
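
    A sketch of hologram binarisation by error diffusion, using the classic Floyd-Steinberg weights with a serpentine scan; the per-colour choice of diffusion direction described above is the paper's refinement and is only approximated here by a direction flag:

    ```python
    import numpy as np

    def error_diffusion_binarize(pattern, serpentine=True):
        """Binarise a hologram pattern in [0, 1] by error diffusion."""
        img = pattern.astype(float).copy()
        h, w = img.shape
        out = np.zeros_like(img)
        for y in range(h):
            rev = serpentine and (y % 2 == 1)
            xs = range(w - 1, -1, -1) if rev else range(w)
            s = -1 if rev else 1  # scan direction on this row
            for x in xs:
                out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
                err = img[y, x] - out[y, x]
                if 0 <= x + s < w:
                    img[y, x + s] += err * 7 / 16
                if y + 1 < h:
                    img[y + 1, x] += err * 5 / 16
                    if 0 <= x - s < w:
                        img[y + 1, x - s] += err * 3 / 16
                    if 0 <= x + s < w:
                        img[y + 1, x + s] += err * 1 / 16
        return out
    ```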

  13. Assessment of errors in static electrical impedance tomography with adjacent and trigonometric current patterns.

    PubMed

    Kolehmainen, V; Vauhkonen, M; Karjalainen, P A; Kaipio, J P

    1997-11-01

    In electrical impedance tomography (EIT), difference imaging is often preferred over static imaging. This is because of the many unknowns in the forward modelling which make it difficult to obtain reliable absolute resistivity estimates. However, static imaging and absolute resistivity values are needed in some potential applications of EIT. In this paper we demonstrate by simulation the effects of different error components that are included in the reconstruction of static EIT images. All simulations are carried out in two dimensions with the so-called complete electrode model. Errors that are considered are the modelling error in the boundary shape of an object, errors in the electrode sizes and localizations and errors in the contact impedances under the electrodes. Results using both adjacent and trigonometric current patterns are given.

  14. Measurement uncertainty evaluation of conicity error inspected on CMM

    NASA Astrophysics Data System (ADS)

    Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang

    2016-01-01

    The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately directly influences assembly accuracy and working performance. According to the new generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated using quasi-random sequences and two kinds of affinities are calculated. Then, antibody clones are generated and self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. Cone parts were machined on a CK6140 lathe and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experimental results confirm that the proposed method not only searches for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also evaluates measurement uncertainty and gives control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% smaller than those computed by the NC454 CMM software, and the evaluation accuracy improves significantly.
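
    A bare-bones sketch of adaptive Monte Carlo uncertainty evaluation in the spirit of the AMCM step, assuming independent Gaussian input quantities for simplicity (the paper's point is precisely that the inputs need not be independent, so this is only illustrative); measure_fn is a hypothetical stand-in for the minimum-zone conicity evaluation:

    ```python
    import numpy as np

    def mc_uncertainty(measure_fn, x_nominal, x_std,
                       tol=1e-4, block=10000, max_blocks=100, seed=None):
        """Monte Carlo uncertainty: add blocks of trials until the standard
        error of the estimated standard uncertainty drops below tol."""
        rng = np.random.default_rng(seed)
        samples = []
        for _ in range(max_blocks):
            draws = rng.normal(x_nominal, x_std,
                               size=(block, len(x_nominal)))
            samples.extend(measure_fn(d) for d in draws)
            u = np.std(samples, ddof=1)  # standard uncertainty estimate
            if u / np.sqrt(2 * (len(samples) - 1)) < tol:
                break
        return np.mean(samples), u
    ```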

  15. Accuracy of modal wavefront estimation from eye transverse aberration measurements

    NASA Astrophysics Data System (ADS)

    Chyzh, Igor H.; Sokurenko, Vyacheslav M.

    2001-01-01

    The influence of random errors in the measurement of eye transverse aberrations on the accuracy of reconstructing the wave aberration, as well as ametropia and astigmatism parameters, is investigated. The dependence of these errors on the ratio between the number of measurement points and the number of polynomial coefficients is found for different pupil locations of the measurement points. Recommendations are proposed for setting these ratios.

  16. A study of the material in the ATLAS inner detector using secondary hadronic interactions

    DOE PAGES

    None, None

    2012-01-13

    The ATLAS inner detector is used to reconstruct secondary vertices due to hadronic interactions of primary collision products, thus probing the location and amount of material in the inner region of ATLAS. Data collected in 7 TeV pp collisions at the LHC, with a minimum bias trigger, are used for comparisons with simulated events. The reconstructed secondary vertices have spatial resolutions ranging from ~200 μm to 1 mm. The overall material description in the simulation is validated to within an experimental uncertainty of about 7%. This will lead to a better understanding of the reconstruction of various objects such as tracks, leptons, jets, and missing transverse momentum.

  17. Fast half-sibling population reconstruction: theory and algorithms.

    PubMed

    Dexter, Daniel; Brown, Daniel G

    2013-07-12

    Kinship inference is the task of identifying genealogically related individuals. Kinship information is important for determining mating structures, notably in endangered populations. Although many solutions exist for reconstructing full sibling relationships, few exist for half-siblings. We consider the problem of determining whether a proposed half-sibling population reconstruction is valid under Mendelian inheritance assumptions. We show that this problem is NP-complete and provide a 0/1 integer program that identifies the minimum number of individuals that must be removed from a population in order for the reconstruction to become valid. We also present SibJoin, a heuristic-based clustering approach based on Mendelian genetics, which is strikingly fast. The software is available at http://github.com/ddexter/SibJoin.git+. Our SibJoin algorithm is reasonably accurate and thousands of times faster than existing algorithms. The heuristic is used to infer a half-sibling structure for a population which was, until recently, too large to evaluate.

  18. Experimental study on an FBG strain sensor

    NASA Astrophysics Data System (ADS)

    Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng

    2018-01-01

    Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. The real-time, early-warning monitoring of landslides has important significance in reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of fiber Bragg gratings (FBGs), an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor was regarded as a cantilever beam with one end fixed. According to the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor was established using elastic mechanics principles. The accuracy of the established formula was verified through laboratory calibration testing and model slope monitoring experiments. The displacement of the landslide can be calculated from the established theoretical formula using the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, and its corresponding variance was 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, and its corresponding variance was 0.50. The maximum error between the theoretical and the measured displacement decreases gradually, and the variance of the error also decreases gradually. This indicates that the theoretical results are increasingly reliable. It also shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision, early-warning monitoring of slopes.

  19. Subsurface Grain Morphology Reconstruction by Differential Aperture X-ray Microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisenlohr, Philip; Shanthraj, Pratheek; Vande Kieft, Brendan R.

    A multistep, non-destructive grain morphology reconstruction methodology that is applicable to near-surface volumes is developed and tested on synthetic grain structures. This approach probes the subsurface crystal orientation using differential aperture X-ray microscopy (DAXM) on a sparse grid across the microstructure volume of interest. Resulting orientation data is clustered according to proximity in physical and orientation space and used as seed points for an initial Voronoi tessellation to (crudely) approximate the grain morphology. Curvature-driven grain boundary relaxation, simulated by means of the Voronoi Implicit Interface Method (VIIM), progressively improves the reconstruction accuracy. The similarity between the bulk and the readily accessible surface reconstruction error provides an objective termination criterion for boundary relaxation.

  20. MMSE Estimator for Children’s Speech with Car and Weather Noise

    NASA Astrophysics Data System (ADS)

    Sayuthi, V.

    2018-04-01

    Previous research has noted that, now and in the future, most people need and use vehicles as a means of traveling for various purposes. Many activities can be carried out in a vehicle, such as enjoying entertainment and doing work, so vehicles are no longer just a means of traveling. In this study, we examine the speech of a child (a girl) in a vehicle affected by noise disturbances from two sources: car noise and the surrounding weather, in this case rain. Vehicle sounds may come from the car engine or the air conditioner. The minimum mean square error (MMSE) estimator is used to recover the child's clear speech, with the simulation representing the signals as random processes characterized by the autocorrelations of both the child's voice and the disturbance noise. This MMSE estimator can be considered a Wiener filter, as the clear sound is reconstructed. We expect that the results of this study can serve as a basis for the development of entertainment or communication technology for vehicle passengers in the future, particularly using MMSE estimators.
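
    A minimal frequency-domain Wiener (MMSE) gain sketch, assuming the noise power spectrum has been estimated from a speech-free stretch of the car/rain noise; frame overlap, windowing, and the study's autocorrelation formulation are omitted:

    ```python
    import numpy as np

    def wiener_denoise(noisy, noise_power, frame=512):
        """Per-frame spectral Wiener filter for a 1-D noisy speech signal.

        noise_power : scalar or (frame // 2 + 1,) per-bin noise power,
                      assumed estimated from a noise-only segment.
        """
        out = np.zeros(len(noisy))
        for start in range(0, len(noisy) - frame + 1, frame):
            X = np.fft.rfft(noisy[start:start + frame])
            sig_power = np.maximum(np.abs(X) ** 2 - noise_power, 0.0)
            gain = sig_power / (sig_power + noise_power + 1e-12)
            out[start:start + frame] = np.fft.irfft(gain * X, n=frame)
        return out
    ```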

  1. Data analysis in emission tomography using emission-count posteriors

    NASA Astrophysics Data System (ADS)

    Sitek, Arkadiusz

    2012-11-01

    A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.

  2. Two-step reconstruction method using global optimization and conjugate gradient for ultrasound-guided diffuse optical tomography.

    PubMed

    Tavakoli, Behnoosh; Zhu, Quing

    2013-01-01

    Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than those of the benign cases, respectively.
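
    A compact sketch of the two-step idea, with SciPy's differential evolution standing in for the genetic algorithm (a swapped-in global optimizer, not the authors' implementation) and a conjugate gradient refinement started from its result:

    ```python
    from scipy.optimize import differential_evolution, minimize

    def two_step_fit(misfit, bounds):
        """Global search seeds a conjugate gradient refinement.

        misfit : callable mapping a parameter vector (e.g. absorption and
                 reduced scattering coefficients) to the data mismatch.
        bounds : [(lo, hi), ...] physical bounds for each parameter.
        """
        coarse = differential_evolution(misfit, bounds, maxiter=50)
        fine = minimize(misfit, coarse.x, method='CG')  # local refinement
        return fine.x
    ```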

  3. Bayesian Inference for Source Reconstruction: A Real-World Application

    PubMed Central

    Yee, Eugene; Hoffman, Ian; Ungar, Kurt

    2014-01-01

    This paper applies a Bayesian probabilistic inferential methodology for the reconstruction of the location and emission rate from an actual contaminant source (emission from the Chalk River Laboratories medical isotope production facility) using a small number of activity concentration measurements of a noble gas (Xenon-133) obtained from three stations that form part of the International Monitoring System radionuclide network. The sampling of the resulting posterior distribution of the source parameters is undertaken using a very efficient Markov chain Monte Carlo technique that utilizes a multiple-try differential evolution adaptive Metropolis algorithm with an archive of past states. It is shown that the principal difficulty in the reconstruction lay in the correct specification of the model errors (both scale and structure) for use in the Bayesian inferential methodology. In this context, two different measurement models for incorporation of the model error of the predicted concentrations are considered. The performance of both of these measurement models with respect to their accuracy and precision in the recovery of the source parameters is compared and contrasted. PMID:27379292
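
    A toy version of the inference, using a plain random-walk Metropolis sampler in place of the paper's multiple-try differential evolution adaptive Metropolis, and an assumed distance-decay forward model with log-normal measurement error (all functions and numbers below are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      receptors = np.array([[0., 10.], [8., -3.], [-6., 5.]])

      def predicted(src, q):
          """Toy dispersion model: concentration decays with squared distance."""
          return q / (1.0 + np.sum((receptors - src) ** 2, axis=1))

      obs = predicted(np.array([2., 1.]), 5.0) * np.exp(rng.normal(0, 0.2, 3))

      def log_post(theta):                      # flat priors assumed
          x, y, log_q = theta
          pred = predicted(np.array([x, y]), np.exp(log_q))
          return -0.5 * np.sum((np.log(obs) - np.log(pred)) ** 2 / 0.2 ** 2)

      theta, chain = np.zeros(3), []
      for _ in range(20000):                    # random-walk Metropolis
          prop = theta + rng.normal(0, 0.3, 3)
          if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
              theta = prop
          chain.append(theta.copy())
      print(np.mean(chain[5000:], axis=0))      # posterior mean of (x, y, log q)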

  4. Methods of evaluating the effects of coding on SAR data

    NASA Technical Reports Server (NTRS)

    Dutkiewicz, Melanie; Cumming, Ian

    1993-01-01

    It is recognized that mean square error (MSE) is not a sufficient criterion for determining the acceptability of an image reconstructed from data that have been compressed and decompressed using an encoding algorithm. In the case of Synthetic Aperture Radar (SAR) data, it is also deemed insufficient to display the reconstructed image (and perhaps the error image) alongside the original and make a (subjective) judgment as to the quality of the reconstructed data. In this paper we suggest a number of additional evaluation criteria which we feel should be included as evaluation metrics in SAR data encoding experiments. These criteria have been specifically chosen to provide a means of ensuring that the important information in the SAR data is preserved. The paper also presents the results of an investigation into the effects of coding on SAR data fidelity when the coding is applied in (1) the signal data domain, and (2) the image domain. An analysis of the results highlights the shortcomings of the MSE criterion, and shows which of the suggested additional criteria are most important.

  5. Three-dimensional modeling and animation of two carpal bones: a technique.

    PubMed

    Green, Jason K; Werner, Frederick W; Wang, Haoyu; Weiner, Marsha M; Sacks, Jonathan M; Short, Walter H

    2004-05-01

    The objectives of this study were to (a) create 3D reconstructions of two carpal bones from single CT data sets and animate these bones with experimental in vitro motion data collected during dynamic loading of the wrist joint, (b) develop a technique to calculate the minimum interbone distance between the two carpal bones, and (c) validate the interbone distance calculation process. This method utilized commercial software to create the animations and an in-house program to interface with three-dimensional CAD software to calculate the minimum distance between the irregular geometries of the bones. This interbone minimum distance provides quantitative information regarding the motion of the bones studied and may help to understand and quantify the effects of ligamentous injury.
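
    The minimum interbone distance can be approximated from the two surfaces with a nearest-neighbour query; the sketch below uses vertex-to-vertex distances via a KD-tree (a simplification of the paper's CAD-based surface computation; the point clouds are stand-ins):

      import numpy as np
      from scipy.spatial import cKDTree

      def min_interbone_distance(verts_a, verts_b):
          """Approximate minimum surface separation from mesh vertices."""
          d, _ = cKDTree(verts_b).query(verts_a)  # nearest point in B per vertex of A
          return d.min()

      # Toy usage: two point clouds standing in for carpal bone surfaces
      a = np.random.rand(1000, 3)
      b = np.random.rand(1000, 3) + np.array([1.5, 0.0, 0.0])
      print(min_interbone_distance(a, b))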

  6. Quantitative evaluation for accumulative calibration error and video-CT registration errors in electromagnetic-tracked endoscopy.

    PubMed

    Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug

    2011-05-01

    Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported the results of calibrating the EM-guided endoscope; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this gap, we propose a novel system to perform this evaluation and use a 3D metric to reflect the intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an electromagnetic (EM)-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for estimating the accuracy of this system using eight internal fiducials as the targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in the CT space, EM space and endoscopy image space. To obtain 3D error estimation, the 3D locations of the targets in the endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. Thus, the accumulated errors are evaluated in a controlled environment, where the ground truth information is present and systematic performance (including the calibration error) can be assessed. We obtain a mean in-plane error on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, the target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets of endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error. We present in this paper our calibration method and a virtual navigation evaluation system for quantifying the overall errors of the intra-operative data integration. We believe this phantom not only offers good insight into the systematic errors encountered in all phases of an EM-tracked endoscopy procedure but also provides quality control of laboratory experiments for endoscopic procedures before they are transferred from the laboratory to human subjects.
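
    The 3D TRE defined here is simply a per-target Euclidean distance between corresponding points expressed in a common frame; a minimal sketch (with simulated target positions) is:

      import numpy as np

      def target_registration_error(p_video, p_ct):
          """3D TRE per fiducial: ||reconstructed - CT-identified|| (both Nx3)."""
          return np.linalg.norm(p_video - p_ct, axis=1)

      ct = np.random.rand(8, 3) * 100                   # eight fiducials (mm)
      video = ct + np.random.normal(0, 1.5, ct.shape)   # simulated accumulated error
      tre = target_registration_error(video, ct)
      print(tre.mean(), tre.max())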

  7. Research on compressive sensing reconstruction algorithm based on total variation model

    NASA Astrophysics Data System (ADS)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures that use compressed-sensing theory, not only is the required storage space reduced, but the demand on detector resolution is also greatly relaxed. By exploiting the sparsity of the image signal and solving the mathematical model of the inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and largely determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and preserves edge information well. To verify the performance and stability of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes, and we compare it against typical reconstruction algorithms under the same coding mode. Building on the minimum-total-variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has clear advantages and can recover the target image quickly and accurately at low measurement rates.
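
    A small numerical sketch of TV-regularized compressive-sensing recovery: plain gradient descent on ||Ax - b||² + λ·TV(x) with a smoothed TV penalty, rather than the augmented-Lagrangian/alternating-direction solver described above (matrix sizes, λ, and step size are toy assumptions):

      import numpy as np

      rng = np.random.default_rng(1)
      n = 16
      x_true = np.zeros((n, n)); x_true[4:12, 4:12] = 1.0   # piecewise-constant image
      A = rng.normal(size=(120, n * n)) / np.sqrt(n * n)    # M < N measurements
      b = A @ x_true.ravel()

      def grad_tv(img, eps=1e-6):
          """Gradient of a smoothed total-variation penalty."""
          dx = np.diff(img, axis=1, append=img[:, -1:])
          dy = np.diff(img, axis=0, append=img[-1:, :])
          mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
          px, py = dx / mag, dy / mag
          return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

      x, lam, step = np.zeros(n * n), 0.05, 0.1
      for _ in range(3000):
          g = 2 * A.T @ (A @ x - b) + lam * grad_tv(x.reshape(n, n)).ravel()
          x -= step * g
      print(np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true))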

  8. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    PubMed Central

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-01-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378
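
    The core of the OSEM-TV reconstruction is the classical (ordered-subset) EM update; the sketch below shows the plain MLEM iteration it builds on, without subsets, the TV term, or the motion-compensated projectors (the system matrix and data are random stand-ins):

      import numpy as np

      def mlem(A, y, n_iter=50):
          """MLEM for emission tomography: x <- x * A^T(y / Ax) / A^T 1."""
          x = np.ones(A.shape[1])
          sens = A.T @ np.ones(A.shape[0])           # voxel sensitivities
          for _ in range(n_iter):
              ratio = y / np.maximum(A @ x, 1e-12)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x

      rng = np.random.default_rng(2)
      A = rng.uniform(0, 1, (200, 64))               # toy system matrix
      x_true = rng.uniform(0, 5, 64)
      y = rng.poisson(A @ x_true).astype(float)      # Poisson projection data
      print(np.corrcoef(mlem(A, y), x_true)[0, 1])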

  9. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
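
    For reference, the triple-energy-window scatter estimate interpolates trapezoidally between two narrow side windows flanking the photopeak; a short sketch (the counts and window widths below are toy values, not the study's acquisition settings):

      def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
          """TEW estimate of scatter counts inside the photopeak window."""
          return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

      # Toy usage for an I-131 364 keV photopeak with 6 keV side windows
      scatter = tew_scatter(1200, 500, w_lower=6, w_upper=6, w_peak=58)
      primary = 25000 - scatter          # scatter-corrected photopeak counts
      print(scatter, primary)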

  10. Reconstruction of reflectance data using an interpolation technique.

    PubMed

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as a source space is preferable to the CIELAB color space. Besides, the colorimetric position of a desired sample is a key factor in the success of the approach: because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra, as well as CIELAB color differences under the other light sources, in comparison with those obtained from the standard PCA technique.
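
    A compact sketch of the LUT idea: build a scattered-data linear interpolant from tristimulus values to reflectance spectra, which also exposes the gamut limitation noted above (queries outside the convex hull of the dataset return NaN). The data here are random stand-ins, not Munsell or ColorChecker measurements:

      import numpy as np
      from scipy.interpolate import LinearNDInterpolator

      rng = np.random.default_rng(3)
      spectra = rng.uniform(0.1, 0.9, (500, 31))  # stand-in reflectances, 400-700 nm
      M = rng.uniform(0, 1, (3, 31))              # stand-in observer/illuminant weights
      xyz = spectra @ M.T                          # corresponding tristimulus values

      lut = LinearNDInterpolator(xyz, spectra)     # XYZ -> reflectance LUT

      target = xyz[:5] + rng.normal(0, 1e-3, (5, 3))
      recon = lut(target)                          # NaN if outside the color gamut
      rms = np.sqrt(np.mean((recon - spectra[:5]) ** 2, axis=1))
      print(rms)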

  11. Knee joint position sense ability in elite athletes who have returned to international level play following ACL reconstruction: A cross-sectional study.

    PubMed

    Relph, Nicola; Herrington, Lee

    2016-12-01

    Following an ACL injury, reconstruction (ACL-R) and rehabilitation, athletes may return to play with a proprioceptive deficit. However, literature is lacking to support this hypothesis in elite athletic groups who have returned to international levels of performance. It is possible that the potentially heightened proprioceptive ability evidenced in athletes may negate a deficit following injury. The purpose of this study was to consider the effects of ACL injury, reconstruction and rehabilitation on knee joint position sense (JPS) in a group of elite athletes who had returned to international performance. Using a cross-sectional design, ten elite athletes with ACL-R and ten controls were evaluated. JPS was tested into knee extension and flexion using absolute error scores. Average data with 95% confidence intervals between the reconstructed, contralateral and uninjured control knees were analyzed using t-tests and effect sizes. The reconstructed knee of the injured group demonstrated significantly greater error scores than the contralateral and uninjured control knees into knee flexion (p=0.0001, r=0.98) and knee extension (p=0.0001, r=0.91). There were no significant differences between the contralateral uninjured knee of the injured group and the uninjured control group. Elite athletes who have had an ACL injury, reconstruction and rehabilitation, and have returned to international play, demonstrate lower JPS ability compared to control groups. It is unclear whether this deficiency affects long-term performance or contributes to secondary injury and re-injury. In the future, physical therapists should monitor athletes longitudinally when they return to play. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    NASA Astrophysics Data System (ADS)

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-08-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.

  13. Parameter Estimation for Gravitational-wave Bursts with the BayesWave Pipeline

    NASA Technical Reports Server (NTRS)

    Becsy, Bence; Raffai, Peter; Cornish, Neil; Essick, Reed; Kanner, Jonah; Katsavounidis, Erik; Littenberg, Tyson B.; Millhouse, Margaret; Vitale, Salvatore

    2017-01-01

    We provide a comprehensive multi-aspect study of the performance of a pipeline used by the LIGO-Virgo Collaboration for estimating parameters of gravitational-wave bursts. We add simulated signals with four different morphologies (sine-Gaussians (SGs), Gaussians, white-noise bursts, and binary black hole signals) to simulated noise samples representing noise of the two Advanced LIGO detectors during their first observing run. We recover them with the BayesWave (BW) pipeline to study its accuracy in sky localization, waveform reconstruction, and estimation of model-independent waveform parameters. BW localizes sources with a level of accuracy comparable for all four morphologies, with the median separation of actual and estimated sky locations ranging from 25.1° to 30.3°. This is a reasonable accuracy in the two-detector case, and is comparable to accuracies of other localization methods studied previously. As BW reconstructs generic transient signals with SG wavelets, it is unsurprising that BW performs best in reconstructing SG and Gaussian waveforms. The BW accuracy in waveform reconstruction increases steeply with the network signal-to-noise ratio (S/N_net), reaching an 85% and 95% match between the reconstructed and actual waveform above S/N_net ≈ 20 and S/N_net ≈ 50, respectively, for all morphologies. The BW accuracy in estimating central moments of waveforms is only limited by statistical errors in the frequency domain, and is also affected by systematic errors in the time domain as BW cannot reconstruct low-amplitude parts of signals that are overwhelmed by noise. The figures of merit we introduce can be used in future characterizations of parameter estimation pipelines.
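
    The waveform-reconstruction figure of merit is a noise-weighted match; a minimal sketch with a flat noise PSD and toy sine-Gaussian spectra (the waveforms and PSD are assumptions, not BayesWave internals):

      import numpy as np

      def match(h1, h2, psd, df):
          """<h1,h2>/sqrt(<h1,h1><h2,h2>), <a,b> = 4 df Re sum(a conj(b)/psd)."""
          inner = lambda a, b: 4 * df * np.real(np.sum(a * np.conj(b) / psd))
          return inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))

      f = np.linspace(20, 512, 2000)
      sg = lambda f0, q: np.exp(-((f - f0) * q / f0) ** 2).astype(complex)
      print(match(sg(150, 9), sg(155, 9), psd=np.ones_like(f), df=f[1] - f[0]))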

  14. Preoperative implant selection for unilateral breast reconstruction using 3D imaging with the Microsoft Kinect sensor.

    PubMed

    Pöhlmann, Stefanie T L; Harkness, Elaine; Taylor, Christopher J; Gandhi, Ashu; Astley, Susan M

    2017-08-01

    This study aimed to investigate whether breast volume measured preoperatively using a Kinect 3D sensor could be used to determine the most appropriate implant size for reconstruction. Ten patients underwent 3D imaging before and after unilateral implant-based reconstruction. Imaging used seven configurations, varying patient pose and Kinect location, which were compared regarding suitability for volume measurement. Four methods of defining the breast boundary for automated volume calculation were compared, and repeatability assessed over five repetitions. The most repeatable breast boundary annotation used an ellipse to track the inframammary fold and a plane describing the chest wall (coefficient of repeatability: 70 ml). The most reproducible imaging position comparing pre- and postoperative volume measurement of the healthy breast was achieved for the sitting patient with elevated arms and Kinect centrally positioned (coefficient of repeatability: 141 ml). Optimal implant volume was calculated by correcting used implant volume by the observed postoperative asymmetry. It was possible to predict implant size using a linear model derived from preoperative volume measurement of the healthy breast (coefficient of determination R² = 0.78, standard error of prediction 120 ml). Mastectomy specimen weight and experienced surgeons' choice showed similar predictive ability (both: R² = 0.74, standard error: 141/142 ml). A leave-one-out validation showed that in 61% of cases, 3D imaging could predict implant volume to within 10%; however for 17% of cases it was >30%. This technology has the potential to facilitate reconstruction surgery planning and implant procurement to maximise symmetry after unilateral reconstruction. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
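
    The prediction step is a one-variable linear model validated leave-one-out; a sketch with fabricated example volumes (clearly not the study's patient data) illustrates how a standard error of prediction is obtained:

      import numpy as np

      vol = np.array([310., 420., 505., 280., 600., 350., 455., 390., 530., 470.])
      implant = 0.55 * vol + 40 + np.random.default_rng(4).normal(0, 30, 10)

      errors = []
      for i in range(len(vol)):                    # leave-one-out validation
          mask = np.arange(len(vol)) != i
          slope, intercept = np.polyfit(vol[mask], implant[mask], 1)
          errors.append(slope * vol[i] + intercept - implant[i])
      print(np.sqrt(np.mean(np.square(errors))))   # LOO standard error of prediction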

  15. Little Boy replication: justification and construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malenfant, R.E.

    A reconstruction of the Little Boy weapon allowed experiments to evaluate yield, leakage measurements for comparison with calculations, and phenomenological measurements to evaluate various in-situ dosimeters. The reconstructed weapon was operated at sustained delayed critical at the Los Alamos Critical Assembly Facility. The present experiments provide a wealth of information to benchmark calculations and demonstrate that the 1965 measurements on the Ichiban assembly (a spherical mockup of Little Boy) were in error. 5 references, 2 figures.

  16. Restoration of the Patient-Specific Anatomy of the Proximal and Distal Parts of the Humerus: Statistical Shape Modeling Versus Contralateral Registration Method.

    PubMed

    Vlachopoulos, Lazaros; Lüthi, Marcel; Carrillo, Fabio; Gerber, Christian; Székely, Gábor; Fürnstahl, Philipp

    2018-04-18

    In computer-assisted reconstructive surgeries, the contralateral anatomy is established as the best available reconstruction template. However, existing intra-individual bilateral differences or a pathological, contralateral humerus may limit the applicability of the method. The aim of the study was to evaluate whether a statistical shape model (SSM) has the potential to predict accurately the pretraumatic anatomy of the humerus from the posttraumatic condition. Three-dimensional (3D) triangular surface models were extracted from the computed tomographic data of 100 paired cadaveric humeri without a pathological condition. An SSM was constructed, encoding the characteristic shape variations among the individuals. To predict the patient-specific anatomy of the proximal (or distal) part of the humerus with the SSM, we generated segments of the humerus of predefined length excluding the part to predict. The proximal and distal humeral prediction (p-HP and d-HP) errors, defined as the deviation of the predicted (bone) model from the original (bone) model, were evaluated. For comparison with the state-of-the-art technique, i.e., the contralateral registration method, we used the same segments of the humerus to evaluate whether the SSM or the contralateral anatomy yields a more accurate reconstruction template. The p-HP error (mean and standard deviation, 3.8° ± 1.9°) using 85% of the distal end of the humerus to predict the proximal humeral anatomy was significantly smaller (p = 0.001) compared with the contralateral registration method. The difference between the d-HP error (mean, 5.5° ± 2.9°), using 85% of the proximal part of the humerus to predict the distal humeral anatomy, and the contralateral registration method was not significant (p = 0.61). The restoration of the humeral length was not significantly different between the SSM and the contralateral registration method. SSMs accurately predict the patient-specific anatomy of the proximal and distal aspects of the humerus. The prediction errors of the SSM depend on the size of the healthy part of the humerus. The prediction of the patient-specific anatomy of the humerus is of fundamental importance for computer-assisted reconstructive surgeries.
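
    The prediction from a partial bone can be sketched as fitting shape-model coefficients to the observed part only and evaluating the model everywhere; below is a toy PCA-based version (randomly generated "shapes", 85% of coordinates observed), not the authors' SSM pipeline:

      import numpy as np

      rng = np.random.default_rng(6)
      n_train, n_pts = 100, 60
      mean_shape = rng.normal(0, 10, n_pts * 3)
      modes = rng.normal(0, 1, (5, n_pts * 3))
      train = mean_shape + rng.normal(0, 1, (n_train, 5)) @ modes

      mu = train.mean(axis=0)
      _, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
      P = Vt[:5]                                   # retained principal modes

      target = mean_shape + rng.normal(0, 1, 5) @ modes
      obs = np.arange(n_pts * 3) < int(0.85 * n_pts) * 3   # "85% distal" observed

      # Least-squares fit of coefficients to the observed coordinates only
      b, *_ = np.linalg.lstsq(P[:, obs].T, (target - mu)[obs], rcond=None)
      pred = mu + b @ P                            # full-shape prediction
      print(np.sqrt(np.mean((pred[~obs] - target[~obs]) ** 2)))  # RMS on unseen part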

  17. RECONSTRUCTING REDSHIFT DISTRIBUTIONS WITH CROSS-CORRELATIONS: TESTS AND AN OPTIMIZED RECIPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, Daniel J.; Newman, Jeffrey A., E-mail: djm70@pitt.ed, E-mail: janewman@pitt.ed

    2010-09-20

    Many of the cosmological tests to be performed by planned dark energy experiments will require extremely well-characterized photometric redshift measurements. Current estimates for cosmic shear are that the true mean redshift of the objects in each photo-z bin must be known to better than 0.002(1 + z), and the width of the bin must be known to ~0.003(1 + z) if errors in cosmological measurements are not to be degraded significantly. A conventional approach is to calibrate these photometric redshifts with large sets of spectroscopic redshifts. However, at the depths probed by Stage III surveys (such as DES), let alone Stage IV (LSST, JDEM, and Euclid), existing large redshift samples have all been highly (25%-60%) incomplete, with a strong dependence of success rate on both redshift and galaxy properties. A powerful alternative approach is to exploit the clustering of galaxies to perform photometric redshift calibrations. Measuring the two-point angular cross-correlation between objects in some photometric redshift bin and objects with known spectroscopic redshift, as a function of the spectroscopic z, allows the true redshift distribution of a photometric sample to be reconstructed in detail, even if it includes objects too faint for spectroscopy or if spectroscopic samples are highly incomplete. We test this technique using mock DEEP2 Galaxy Redshift Survey light cones constructed from the Millennium Simulation semi-analytic galaxy catalogs. From this realistic test, which incorporates the effects of galaxy bias evolution and cosmic variance, we find that the true redshift distribution of a photometric sample can, in fact, be determined accurately with cross-correlation techniques. We also compare the empirical error in the reconstruction of redshift distributions to previous analytic predictions, finding that additional components must be included in error budgets to match the simulation results. This extra error contribution is small for surveys that sample large areas of sky (≳10°-100°), but dominant for ~1 deg² fields. We conclude by presenting a step-by-step, optimized recipe for reconstructing redshift distributions from cross-correlation information using standard correlation measurements.

  18. A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.

    PubMed

    Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang

    2014-07-31

    In the last few years, rotary encoders based on two-dimensional complementary metal oxide semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed to measure contactless absolute angle. Various error factors influence the measuring accuracy, and they are difficult to locate after the encoder has been assembled. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to compare the minimum residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the computational cost. The simulation and experimental results show that this diagnosis method is feasible for quantifying the causes of the error and reduces the iteration count significantly.
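
    A basic PSO loop of the kind used for such model fitting is sketched below; the residual-angle-error objective is a placeholder (the real one would come from the encoder's error model), and the swarm hyperparameters are common defaults:

      import numpy as np

      rng = np.random.default_rng(5)
      hidden = np.array([0.12, -0.05, 0.30])       # hidden error-factor magnitudes

      def residual_angle_error(p):                 # placeholder objective
          return np.sum((p - hidden) ** 2)

      n, dim = 30, 3
      x = rng.uniform(-1, 1, (n, dim)); v = np.zeros_like(x)
      pbest = x.copy()
      pbest_f = np.array([residual_angle_error(p) for p in x])
      gbest = pbest[pbest_f.argmin()].copy()

      for _ in range(200):                         # inertia + cognitive + social terms
          r1, r2 = rng.uniform(size=(2, n, dim))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          x += v
          f = np.array([residual_angle_error(p) for p in x])
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
          gbest = pbest[pbest_f.argmin()].copy()
      print(gbest)                                 # recovered error factors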

  19. Respiratory gating based on internal electromagnetic motion monitoring during stereotactic liver radiation therapy: First results.

    PubMed

    Poulsen, Per Rugaard; Worm, Esben Schjødt; Hansen, Rune; Larsen, Lars Peter; Grau, Cai; Høyer, Morten

    2015-01-01

    Intrafraction motion may compromise the target dose in stereotactic body radiation therapy (SBRT) of tumors in the liver. Respiratory gating can improve the treatment delivery, but gating based on an external surrogate signal may be inaccurate. This is the first paper reporting on respiratory gating based on internal electromagnetic monitoring during liver SBRT. Two patients with solitary liver metastases were treated with respiratory-gated SBRT guided by three implanted electromagnetic transponders. The treatment was delivered in end-exhale with beam-on when the centroid of the three transponders deviated less than 3 mm [left-right (LR) and anterior-posterior (AP) directions] and 4 mm [cranio-caudal (CC)] from the planned position. For each treatment fraction, log files were used to determine the transponder motion during beam-on in the actual gated treatments and in simulated treatments without gating. The motion was used to reconstruct the dose to the clinical target volume (CTV) with and without gating. The reduction in D95 (minimum dose to 95% of the CTV) relative to the plan was calculated for both treatment courses. With gating the maximum course mean (standard deviation) geometrical error in any direction was 1.2 mm (1.8 mm). Without gating the course mean error would mainly increase for Patient 1 [to -2.8 mm (1.6 mm) (LR), 7.1 mm (5.8 mm) (CC), -2.6 mm (2.8 mm) (AP)] due to a large systematic cranial baseline drift at each fraction. The errors without gating increased only slightly for Patient 2. The reduction in CTV D95 was 0.5% (gating) and 12.1% (non-gating) for Patient 1 and 0.3% (gating) and 1.7% (non-gating) for Patient 2. The mean duty cycle was 55%. Respiratory gating based on internal electromagnetic motion monitoring was performed for two liver SBRT patients. The gating added robustness to the dose delivery and ensured a high CTV dose even in the presence of large intrafraction motion.
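
    The gating rule itself reduces to a per-sample threshold test on the transponder-centroid deviation; a sketch with a simulated motion trace (including a CC baseline drift like the one reported for Patient 1, all numbers illustrative) shows how beam-on time and duty cycle follow:

      import numpy as np

      t = np.linspace(0, 120, 3000)                # seconds within a fraction
      dev = np.column_stack([0.5 * np.sin(0.2 * t),            # LR (mm)
                             2.0 * np.sin(1.6 * t) + 0.03 * t, # CC: breathing + drift
                             0.8 * np.sin(1.6 * t)])           # AP

      limits = np.array([3.0, 4.0, 3.0])           # thresholds (LR, CC, AP), mm
      beam_on = np.all(np.abs(dev) < limits, axis=1)
      print(beam_on.mean())                        # duty cycle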

  20. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    NASA Astrophysics Data System (ADS)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    T-Method is one of the techniques governed by the Mahalanobis Taguchi System, developed specifically for multivariate data prediction. Prediction with the T-Method is possible even with very limited sample sizes. Users of the T-Method must understand the trend of the population data clearly, since the method does not account for the effect of outliers. Outliers can create apparent non-normality, under which classical methods break down. There exist robust parameter estimates that provide satisfactory results both when the data contain outliers and when they are free of them; among these are the robust location and scale estimators known as Shamos-Bickel (SB) and Hodges-Lehmann (HL), which serve as robust counterparts to the classical mean and standard deviation. Embedding these estimators into the normalization stage of the T-Method may enhance its accuracy and allows the robustness of the T-Method itself to be analyzed. In the higher-sample-size case study, the T-Method gave the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB gave the lowest error percentage (4.67%) on data without extreme outliers, with minimal differences from the T-Method. The trend in prediction error was reversed in the lower-sample-size case study. The results show that with minimal sample sizes, where outliers pose little risk, the T-Method performs better, and with larger samples containing extreme outliers the T-Method likewise predicts better than the alternatives. For the case studies conducted in this research, the standard T-Method normalization gives satisfactory results, and adapting HL and SB (or the classical mean and standard deviation) into it is not worthwhile, since doing so changes the error percentages only minimally; normalization using the T-Method is still considered to carry lower risk with respect to outlier effects.
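
    The two robust estimators are easy to state in code: the Hodges-Lehmann location is the median of pairwise (Walsh) averages, and the Shamos scale is the median absolute pairwise difference, scaled by about 1.048 for consistency with the normal standard deviation. The sketch below (toy data with one extreme outlier) contrasts them with the classical mean and standard deviation used in the standard T-Method normalization:

      import numpy as np
      from itertools import combinations

      def hodges_lehmann(x):
          """Median of pairwise averages (Walsh averages, including the points)."""
          pairs = [(a + b) / 2 for a, b in combinations(x, 2)]
          return np.median(np.concatenate([pairs, x]))

      def shamos(x):
          """Median absolute pairwise difference, normal-consistent scaling."""
          return 1.0483 * np.median([abs(a - b) for a, b in combinations(x, 2)])

      x = np.array([10.2, 9.8, 10.5, 9.9, 10.1, 10.3, 35.0])  # one extreme outlier
      print(np.mean(x), np.std(x, ddof=1))       # classical estimates, outlier-skewed
      print(hodges_lehmann(x), shamos(x))        # robust counterparts
      z_robust = (x - hodges_lehmann(x)) / shamos(x)   # robust normalization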
