MRF energy minimization and beyond via dual decomposition.
Komodakis, Nikos; Paragios, Nikos; Tziritas, Georgios
2011-03-01
This paper introduces a new rigorous theoretical framework for discrete MRF-based optimization in computer vision, built on the powerful technique of dual decomposition. The framework is based on a projected subgradient scheme that solves an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems and then combining their solutions in a principled way. To determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the generality and flexibility of the approach. We show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations of MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as graph-cut-based methods. Theoretical analysis of the bounds associated with the different algorithms derived from our framework, along with experimental results and comparisons on synthetic and real data for a variety of computer vision tasks, demonstrates the strong potential of our approach.
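The projected subgradient scheme can be illustrated on a toy instance. The sketch below is my own illustration under simplifying assumptions, not the paper's implementation: a 3-node binary chain MRF is decomposed into two single-edge subproblems that duplicate the middle node, each subproblem is solved exactly by enumeration, and subgradient updates on the duplicated unary potentials drive the two copies toward agreement.

```python
import itertools

import numpy as np

# Toy 3-node binary chain MRF (nodes 0-1-2); all potentials are invented.
unary = np.array([[0.0, 1.0], [0.6, 0.4], [1.0, 0.0]])
pair01 = np.array([[0.0, 1.0], [1.0, 0.0]])   # Potts term on edge (0, 1)
pair12 = np.array([[0.0, 1.0], [1.0, 0.0]])   # Potts term on edge (1, 2)

def solve_edge(u_a, u_b, pw):
    # Exact minimization of one single-edge subproblem by enumeration.
    def cost(s):
        return u_a[s[0]] + u_b[s[1]] + pw[s[0], s[1]]
    best = min(itertools.product((0, 1), repeat=2), key=cost)
    return best, cost(best)

# Dual decomposition: node 1 is duplicated across the two edge subproblems,
# each copy starting with half of its unary potential.
u1_a = unary[1] / 2.0
u1_b = unary[1] / 2.0
step = 0.5
for it in range(50):
    (x0, x1a), e_a = solve_edge(unary[0], u1_a, pair01)
    (x1b, x2), e_b = solve_edge(u1_b, unary[2], pair12)
    dual = e_a + e_b                  # lower bound on the optimal MRF energy
    if x1a == x1b:                    # copies agree: a primal labeling is recovered
        break
    # Projected subgradient step on the duplicated unary potentials.
    avg = (np.eye(2)[x1a] + np.eye(2)[x1b]) / 2.0
    u1_a += step / (1 + it) * (np.eye(2)[x1a] - avg)
    u1_b += step / (1 + it) * (np.eye(2)[x1b] - avg)

labels = (x0, x1a, x2)
```

On this instance the loop stops once the dual bound equals the energy of the recovered labeling, which certifies optimality of that labeling.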
MR fingerprinting reconstruction with Kalman filter.
Zhang, Xiaodi; Zhou, Zechen; Chen, Shiyang; Chen, Shuo; Li, Rui; Hu, Xiaoping
2017-09-01
Magnetic resonance fingerprinting (MR fingerprinting or MRF) is a newly introduced quantitative magnetic resonance imaging technique, which enables simultaneous multi-parameter mapping in a single acquisition with improved time efficiency. The current MRF reconstruction method is based on dictionary matching, which may be limited by the discrete and finite nature of the dictionary and the computational cost associated with dictionary construction, storage, and matching. In this paper, we describe a Kalman-filter-based reconstruction method for MRF, which avoids the use of a dictionary and yields continuous MR parameter measurements. Within this Kalman filter framework, the Bloch equation of the inversion-recovery balanced steady-state free precession (IR-bSSFP) MRF sequence was derived to predict the signal evolution, and the acquired signal was used to update the prediction. The algorithm gradually converges to accurate MR parameters during the recursive calculation. Single-pixel and numerical brain phantom simulations were implemented with the Kalman filter, and the results were compared with those from the dictionary matching reconstruction algorithm to demonstrate the feasibility and assess the performance of the Kalman filter algorithm. The results demonstrate that the Kalman filter algorithm is applicable to MRF reconstruction, eliminating the need for a pre-defined dictionary and yielding continuous MR parameters, in contrast to the dictionary matching algorithm.
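The dictionary-free idea can be sketched with a scalar extended Kalman filter. The toy below is an assumption-laden illustration: a one-parameter inversion-recovery model s(t) = 1 - 2·exp(-t/T1) stands in for the full IR-bSSFP Bloch prediction, and all noise levels and initial values are invented.

```python
import numpy as np

# Scalar EKF sketch: estimate T1 (s) from noisy inversion-recovery samples
# s(t) = 1 - 2*exp(-t / T1), a stand-in for the IR-bSSFP Bloch prediction.
rng = np.random.default_rng(0)
t1_true = 1.0
times = np.linspace(0.1, 3.0, 30)
signal = 1 - 2 * np.exp(-times / t1_true) + rng.normal(0, 0.01, times.size)

t1, P = 2.0, 0.25          # initial T1 guess and state variance
Q, R = 0.01, 1e-4          # process and measurement noise variances
for t, z in zip(times, signal):
    P = P + Q                                  # predict (T1 modeled as constant)
    pred = 1 - 2 * np.exp(-t / t1)             # predicted signal from the model
    H = -2 * np.exp(-t / t1) * t / t1**2       # Jacobian d(signal)/d(T1)
    K = P * H / (H * P * H + R)                # Kalman gain
    t1 = t1 + K * (z - pred)                   # update with the acquired sample
    t1 = max(t1, 0.05)                         # keep the state physical
    P = (1 - K * H) * P
```

The recursion refines a continuous T1 estimate directly from the samples, with no parameter grid anywhere.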
Fast group matching for MR fingerprinting reconstruction.
Cauley, Stephen F; Setsompop, Kawin; Ma, Dan; Jiang, Yun; Ye, Huihui; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L
2015-08-01
MR fingerprinting (MRF) is a technique for quantitative tissue mapping using pseudorandom measurements. To estimate tissue properties such as T1, T2, proton density, and B0, the rapidly acquired data are compared against a large dictionary of Bloch simulations. This matching process can be a very computationally demanding portion of MRF reconstruction. We introduce a fast group matching algorithm (GRM) that exploits inherent correlation within MRF dictionaries to create highly clustered groupings of the elements. During matching, a group-specific signature is first used to remove poor matching possibilities. Group principal component analysis (PCA) is then used to evaluate all remaining tissue types. In vivo 3 Tesla brain data were used to validate the accuracy of our approach. For a trueFISP sequence with over 196,000 dictionary elements, 1,000 MRF samples, and an image matrix of 128 × 128, GRM was able to map MR parameters within 2 s using standard vendor computational resources. This is an order of magnitude faster than global PCA and nearly two orders of magnitude faster than direct matching, with comparable accuracy (1-2% relative error). The proposed GRM method is a highly efficient model reduction technique for MRF matching and should enable clinically relevant reconstruction accuracy and time on standard vendor computational resources.
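The two-stage group matching idea, prune clusters by a group signature and then match exhaustively only within the surviving groups, can be sketched as follows. The toy dictionary of exponential decays, the cosine k-means grouping, and all sizes are my assumptions, not the paper's settings.

```python
import numpy as np

# Toy dictionary of normalized exponential decays over a hypothetical T2 grid.
rng = np.random.default_rng(1)
T = 50
t2_values = np.linspace(0.02, 0.5, 400)
D = np.exp(-np.outer(1.0 / t2_values, np.linspace(0.0, 0.3, T)))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# Grouping step: a few rounds of cosine k-means over dictionary atoms.
k = 8
centers = D[rng.choice(len(D), k, replace=False)]
for _ in range(10):
    labels = np.argmax(D @ centers.T, axis=1)
    centers = np.stack([D[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)

def grm_match(x, keep=3):
    # Stage 1: group signatures (cluster means) prune unlikely groups.
    best_groups = np.argsort(np.abs(centers @ x))[-keep:]
    cand = np.flatnonzero(np.isin(labels, best_groups))
    # Stage 2: exhaustive inner-product match within surviving groups only.
    return cand[np.argmax(np.abs(D[cand] @ x))]

query = D[137] + rng.normal(0, 0.01, T)   # noisy copy of dictionary atom 137
brute = np.argmax(np.abs(D @ query))      # direct matching over everything
fast = grm_match(query)                   # pruned two-stage matching
```

The candidate set typically covers only a fraction of the dictionary, which is where the speedup comes from; the paper additionally compresses each group with PCA before the second stage.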
Markov random field model-based edge-directed image interpolation.
Li, Min; Nguyen, Truong Q
2008-07-01
This paper presents an edge-directed image interpolation algorithm in which edge directions are implicitly estimated with a statistical approach. Rather than using explicit edge directions, local edge directions are indicated by length-16 weighting vectors. The weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), which is imposed on the interpolated image through a Markov random field (MRF) model. Under the maximum a posteriori MRF (MAP-MRF) framework, the desired interpolated image corresponds to the minimal-energy state of a 2-D random field given the low-resolution image. Simulated annealing is used to search the state space for this minimal-energy state. To lower the computational complexity of the MRF optimization, a single-pass implementation is designed that performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared with traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
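The annealing search over MRF states can be sketched generically. The toy below is my own illustration: a plain Potts smoothness term stands in for the paper's GR constraint, and a small binary label field is annealed toward the minimum of a data-plus-smoothness energy.

```python
import numpy as np

# Toy MAP-MRF search: recover a smooth binary field from noisy labels by
# simulated annealing on energy = data term + beta * smoothness term.
rng = np.random.default_rng(0)
true = np.zeros((16, 16), dtype=int)
true[:, 8:] = 1                                        # vertical edge
obs = np.where(rng.random(true.shape) < 0.15, 1 - true, true)  # 15% label noise

beta = 1.5

def energy(x):
    data = np.sum(x != obs)
    smooth = np.sum(x[:, 1:] != x[:, :-1]) + np.sum(x[1:, :] != x[:-1, :])
    return data + beta * smooth

x = obs.copy()
temp = 2.0
for sweep in range(60):
    for i in range(16):
        for j in range(16):
            old = energy(x)
            x[i, j] = 1 - x[i, j]                      # propose a single-site flip
            delta = energy(x) - old
            # Metropolis rule: always keep downhill moves, keep uphill
            # moves with probability exp(-delta / temp).
            if delta > 0 and rng.random() >= np.exp(-delta / temp):
                x[i, j] = 1 - x[i, j]                  # reject: undo the flip
    temp *= 0.9                                        # cooling schedule
```

The slow cooling schedule is what distinguishes annealing from the greedy single-pass approximation the paper designs for speed.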
Multiscale Reconstruction for Magnetic Resonance Fingerprinting
Pierre, Eric Y.; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A.
2015-01-01
Purpose: To reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. Methods: An iterative denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template, then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with the desired spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in-vivo data using a highly undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold-standard parameter maps were used to quantify the performance of each method. Results: The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD) and B0 field variations in the brain was achieved in vivo for a 256×256 matrix in a total acquisition time of 10.2 s, representing a 3-fold reduction in acquisition time. Conclusions: The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy.
Change Detection of Remote Sensing Images by DT-CWT and MRF
NASA Astrophysics Data System (ADS)
Ouyang, S.; Fan, K.; Wang, H.; Wang, Z.
2017-05-01
To address the loss of high-frequency information during noise reduction and the assumption of pixel independence in change detection of multi-scale remote sensing images, an unsupervised algorithm is proposed that combines the Dual-Tree Complex Wavelet Transform (DT-CWT) with a Markov random field (MRF) model. The method first performs multi-scale decomposition of the difference image with the DT-CWT and extracts change characteristics in the high-frequency regions using an MRF-based segmentation algorithm. It then estimates the final maximum a posteriori (MAP) solution with an iterated conditional modes (ICM) segmentation algorithm based on fuzzy c-means (FCM), after reconstructing the high-frequency and low-frequency sub-bands of each layer. Finally, the segmentation results of all layers are fused with the proposed fusion rule to obtain the mask of the final change detection result. Experimental results show that the proposed method achieves higher precision and strong robustness.
2015-04-01
Current routine MRI examinations rely on the acquisition of qualitative images whose contrast is "weighted" by a mixture of (magnetic) tissue properties. Recently, a novel approach, MR Fingerprinting (MRF), was introduced with a completely different approach to data acquisition, post-processing, and visualization. Instead of using a repeated, serial acquisition of data for the characterization of individual parameters of interest, MRF uses a pseudorandomized acquisition that causes the signals from different tissues to have a unique signal evolution, or 'fingerprint', that is simultaneously a function of the multiple material properties under investigation. Processing after acquisition involves a pattern recognition algorithm that matches the fingerprints to a predefined dictionary of predicted signal evolutions. These matches can then be translated into quantitative maps of the magnetic parameters of interest. MRF could theoretically be applied to most traditional qualitative MRI methods, replacing them with the acquisition of truly quantitative tissue measures. MRF is thereby expected to be much more accurate and reproducible than traditional MRI, and should improve multi-center studies and significantly reduce reader bias in diagnostic imaging. Key Points: • MR fingerprinting (MRF) is a new approach to data acquisition, post-processing and visualization. • MRF provides highly accurate quantitative maps of T1, T2, proton density, and diffusion. • MRF may offer multiparametric imaging with high reproducibility and high potential for multicenter/multivendor studies.
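The pattern-recognition step at the core of MRF reduces, in its simplest form, to a maximum normalized inner product search over the dictionary. A minimal sketch with a hypothetical exponential-decay dictionary (real MRF dictionaries hold Bloch-simulated evolutions over combinations of T1, T2, and B0):

```python
import numpy as np

# Toy fingerprint matching: maximum normalized inner product against a
# dictionary of hypothetical exponential decays.
rng = np.random.default_rng(0)
T = 200
t2_grid = np.linspace(0.02, 0.4, 300)
dictionary = np.exp(-np.outer(1.0 / t2_grid, np.linspace(0.0, 0.25, T)))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

# "Acquired" fingerprint: a scaled, noisy copy of one dictionary entry.
fingerprint = 3.0 * dictionary[123] + rng.normal(0, 0.02, T)

scores = np.abs(dictionary @ fingerprint)   # pattern-recognition step
best = int(np.argmax(scores))
t2_estimate = t2_grid[best]                 # quantitative parameter for this voxel
m0 = scores[best]                           # proton-density-like scale factor
```

Repeating this match per voxel is what turns the fingerprint image series into quantitative parameter maps.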
Ostenson, Jason; Robison, Ryan K; Zwart, Nicholas R; Welch, E Brian
2017-09-01
Magnetic resonance fingerprinting (MRF) pulse sequences often employ spiral trajectories for data readout. Spiral k-space acquisitions are vulnerable to blurring in the spatial domain in the presence of static field off-resonance. This work describes a blurring correction algorithm for use in spiral MRF and demonstrates its effectiveness in phantom and in vivo experiments. Results show that image quality of T1 and T2 parametric maps is improved by application of this correction. This MRF correction has negligible effect on the concordance correlation coefficient and improves coefficient of variation in regions of off-resonance relative to uncorrected measurements.
Multiscale reconstruction for MR fingerprinting.
Pierre, Eric Y; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A
2016-06-01
To reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with desirable spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in vivo data using the highly undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD), and B0 field variations in the brain was achieved in vivo for a 256 × 256 matrix for a total acquisition time of 10.2 s, representing a three-fold reduction in acquisition time. The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. Magn Reson Med 75:2481-2492, 2016.
An open source multivariate framework for n-tissue segmentation with evaluation on public data.
Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C
2011-12-01
We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
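The EM core of such an n-class segmentation can be sketched for the parametric (Gaussian finite mixture) case. The MRF and spatial-prior terms that Atropos adds are omitted here for brevity, and the three "tissue" intensity distributions below are invented.

```python
import numpy as np

# Toy EM for a 3-class Gaussian mixture over 1-D "tissue" intensities.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(m, 5.0, 500) for m in (30.0, 80.0, 150.0)])

k = 3
mu = np.array([20.0, 90.0, 140.0])        # rough initial class means
sigma = np.full(k, 20.0)
pi = np.full(k, 1.0 / k)
for _ in range(50):
    # E-step: posterior class responsibilities for every intensity.
    like = pi * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2) / sigma
    resp = like / like.sum(axis=1, keepdims=True)
    # M-step: re-estimate means, variances, and mixing weights.
    n_k = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / n_k)
    pi = n_k / len(data)

labels = resp.argmax(axis=1)              # hard n-tissue segmentation
```

In the full tool the E-step posteriors would additionally be modulated by spatial prior probability maps and/or an MRF smoothness term.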
NASA Astrophysics Data System (ADS)
Zhou, Lifan; Chai, Dengfeng; Xia, Yu; Ma, Peifeng; Lin, Hui
2018-01-01
Phase unwrapping (PU) is one of the key steps in reconstructing the digital elevation model of a scene from its interferometric synthetic aperture radar (InSAR) data. It is known that two-dimensional (2-D) PU problems can be formulated as maximum a posteriori estimation of Markov random fields (MRFs). However, because the traditional MRF algorithm is usually defined on a rectangular grid, it fails easily if large parts of the wrapped data are dominated by noise caused by large low-coherence areas or rapid topography variation. A PU solution based on a sparse MRF is presented to extend the traditional MRF algorithm to sparse data, which allows the unwrapping of InSAR data dominated by high phase noise. To speed up the graph cuts algorithm for the sparse MRF, we designed dual elementary graphs and merged them to obtain the Delaunay triangle graph, which is used to minimize the energy function efficiently. Experiments on simulated and real data, compared with other existing algorithms, confirm the effectiveness of the proposed MRF approach, which suffers less from decorrelation effects caused by large low-coherence areas or rapid topography variation.
Object-based change detection method using refined Markov random field
NASA Astrophysics Data System (ADS)
Peng, Daifeng; Zhang, Yongjun
2017-01-01
In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and texture histogram features are extracted, and the G-statistic is used to measure the distance between different histogram distributions; object heterogeneity is calculated by combining the spectral and texture histogram distances with adaptive weights. Third, an expectation-maximization algorithm is applied to determine the change category of each object, generating the initial change map. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods compared, which confirms its validity and effectiveness for OBCD.
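The G-statistic distance between two histograms can be written down directly: it is a likelihood-ratio statistic on the 2 × bins contingency table formed by the two histograms, with expectations taken from the pooled marginals. A small sketch (the bin counts are arbitrary examples, not data from the paper):

```python
import numpy as np

# G-statistic between two histograms, treated as a 2 x bins contingency
# table: G = 2 * sum(O * ln(O / E)), E from the pooled marginals.
def g_statistic(h1, h2):
    obs = np.stack([np.asarray(h1, float), np.asarray(h2, float)])
    expected = (obs.sum(axis=1, keepdims=True)
                * obs.sum(axis=0, keepdims=True)) / obs.sum()
    mask = obs > 0                         # convention: 0 * ln(0) = 0
    return 2.0 * np.sum(obs[mask] * np.log(obs[mask] / expected[mask]))

same = g_statistic([10, 20, 30], [20, 40, 60])   # proportional histograms -> 0
diff = g_statistic([10, 20, 30], [60, 40, 20])   # shifted distribution -> large
```

Proportional histograms give G = 0, so an unchanged object scores low heterogeneity regardless of overall brightness scaling.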
Jiang, Yun; Ma, Dan; Bhat, Himanshu; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L; Setsompop, Kawin; Griswold, Mark A
2017-11-01
The purpose of this study is to accelerate an MR fingerprinting (MRF) acquisition by using a simultaneous multislice method. A multiband radiofrequency (RF) pulse was designed to excite two slices with different flip angles and phases. The signals of two slices were driven to be as orthogonal as possible. The mixed and undersampled MRF signal was matched to two dictionaries to retrieve T1 and T2 maps of each slice. Quantitative results from the proposed method were validated with the gold-standard spin echo methods in a phantom. T1 and T2 maps of in vivo human brain from two simultaneously acquired slices were also compared to the results of fast imaging with steady-state precession based MRF method (MRF-FISP) with a single-band RF excitation. The phantom results showed that the simultaneous multislice imaging MRF-FISP method quantified the relaxation properties accurately compared to the gold-standard spin echo methods. T1 and T2 values of in vivo brain from the proposed method also matched the results from the normal MRF-FISP acquisition. T1 and T2 values can be quantified at a multiband acceleration factor of two using our proposed acquisition even in a single-channel receive coil. Further acceleration could be achieved by combining this method with parallel imaging or iterative reconstruction. Magn Reson Med 78:1870-1876, 2017.
Brain tissue segmentation in MR images based on a hybrid of MRF and social algorithms.
Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza
2012-05-01
Effective abnormality detection and diagnosis in magnetic resonance images (MRIs) requires a robust segmentation strategy. Since manual segmentation is a time-consuming task that engages valuable human resources, automatic MRI segmentation has received an enormous amount of attention, and various techniques have been applied to this goal. Markov random field (MRF) based algorithms have produced reasonable results in noisy images compared with other methods. An MRF seeks a label field that minimizes an energy function. The traditional minimization method, simulated annealing (SA), uses Monte Carlo simulation to reach the minimum solution, with a heavy computational burden; for this reason, MRFs are rarely used in real-time processing environments. This paper proposes a novel method based on an MRF and a hybrid of social algorithms, comprising ant colony optimization (ACO) and a gossiping algorithm, which can be used for segmenting single and multispectral MRIs in real-time environments. Combining ACO with the gossiping algorithm helps find better paths using neighborhood information, so the algorithm converges to an optimal solution faster. Several experiments on phantom and real images were performed. Results indicate that the proposed algorithm outperforms the traditional MRF and the hybrid MRF-ACO in speed and accuracy.
Ye, Huihui; Ma, Dan; Jiang, Yun; Cauley, Stephen F.; Du, Yiping; Wald, Lawrence L.; Griswold, Mark A.; Setsompop, Kawin
2015-01-01
Purpose: We incorporate simultaneous multislice (SMS) acquisition into MR fingerprinting (MRF) to accelerate the MRF acquisition. Methods: The t-Blipped SMS-MRF method is achieved by adding a Gz blip before each data acquisition window and balancing it with a Gz blip of opposing polarity at the end of each TR. Thus, the signals from the simultaneously excited slices are encoded with different phases without disturbing the signal evolution. Further, by varying the Gz blip area and/or polarity as a function of TR, the slices' differential phase can also be made to vary as a function of time. For reconstruction of t-Blipped SMS-MRF data, we demonstrate a combined slice-direction SENSE and modified dictionary matching method. Results: In Monte Carlo simulation, the parameter mapping from multiband factor (MB)=2 t-Blipped SMS-MRF shows good accuracy and precision when compared to results from reference conventional MRF data, with concordance correlation coefficients (CCC) of 0.96 for T1 estimates and 0.90 for T2 estimates. For in vivo experiments, T1 and T2 maps from MB=2 t-Blipped SMS-MRF agree well with those from conventional MRF. Conclusions: The MB=2 t-Blipped SMS-MRF acquisition/reconstruction method has been demonstrated and validated to provide more rapid parameter mapping in the MRF framework.
Ye, Huihui; Ma, Dan; Jiang, Yun; Cauley, Stephen F; Du, Yiping; Wald, Lawrence L; Griswold, Mark A; Setsompop, Kawin
2016-05-01
We incorporate simultaneous multislice (SMS) acquisition into MR fingerprinting (MRF) to accelerate the MRF acquisition. The t-Blipped SMS-MRF method is achieved by adding a Gz blip before each data acquisition window and balancing it with a Gz blip of opposing polarity at the end of each TR. Thus, the signals from the simultaneously excited slices are encoded with different phases without disturbing the signal evolution. Furthermore, by varying the Gz blip area and/or polarity as a function of repetition time, the slices' differential phase can also be made to vary as a function of time. For reconstruction of t-Blipped SMS-MRF data, we demonstrate a combined slice-direction SENSE and modified dictionary matching method. In Monte Carlo simulation, the parameter mapping from multiband factor (MB) = 2 t-Blipped SMS-MRF shows good accuracy and precision when compared with results from reference conventional MRF data, with concordance correlation coefficients (CCC) of 0.96 for T1 estimates and 0.90 for T2 estimates. For in vivo experiments, T1 and T2 maps from MB=2 t-Blipped SMS-MRF agree well with those from conventional MRF. The MB=2 t-Blipped SMS-MRF acquisition/reconstruction method has been demonstrated and validated to provide more rapid parameter mapping in the MRF framework.
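The role of the alternating blip phase can be seen in a stripped-down single-coil model: if slice 2's phase alternates between 0 and π across TRs and the evolutions vary slowly, consecutive TR pairs form an invertible 2 × 2 mixture. The sketch below uses invented smooth curves in place of Bloch-simulated evolutions, and this pairwise inversion is only a didactic stand-in for the paper's slice-SENSE plus dictionary matching reconstruction.

```python
import numpy as np

# Two invented smooth "signal evolutions" stand in for the two slices.
t = np.linspace(0.0, 1.0, 200)
s1 = np.exp(-t / 0.3)
s2 = np.cos(2 * np.pi * t) * np.exp(-t / 0.8)

# The Gz blip gives slice 2 an alternating phase of 0, pi across TRs,
# so a single-coil acquisition measures s1 +/- s2.
phase = np.where(np.arange(t.size) % 2 == 0, 1.0, -1.0)
y = s1 + phase * s2

# If the evolutions vary slowly, each consecutive TR pair gives an
# invertible 2x2 mixture of the two slice signals.
y_pairs = y.reshape(-1, 2)
A = np.array([[1.0, 1.0], [1.0, -1.0]])
est = np.linalg.solve(A, y_pairs.T)
s1_est, s2_est = est[0], est[1]           # per-pair slice estimates
```

The recovery error here is set by how much each evolution changes within a TR pair, which is why the phase modulation must be fast relative to the signal dynamics.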
Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors.
Qu, Chen; Bi, Du-Yan; Sui, Ping; Chao, Ai-Nong; Wang, Yun-Fei
2017-09-22
The CMOS (complementary metal-oxide-semiconductor) sensor is a type of solid-state image sensor widely used in object tracking, object recognition, intelligent navigation, and related fields. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing reduced image contrast, color distortion, and other degradations. In view of this, we propose a novel dehazing approach based on a locally consistent Markov random field (MRF) framework. The neighboring clique in the traditional MRF is extended to a non-neighboring clique defined on locally consistent blocks, based on the observation that both the atmospheric light and the transmission map are locally consistent. In this framework, our model can strengthen constraints across the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power; this effectively resolves inadequate detail recovery and alleviates color distortion. Moreover, the locally consistent MRF framework recovers details while maintaining good dehazing results, effectively improving the quality of images captured by the CMOS image sensor. Experimental results verify that the proposed method combines the advantages of detail recovery and color preservation.
MR fingerprinting for rapid quantification of myocardial T1, T2, and proton spin density.
Hamilton, Jesse I; Jiang, Yun; Chen, Yong; Ma, Dan; Lo, Wei-Ching; Griswold, Mark; Seiberlich, Nicole
2017-04-01
To introduce a two-dimensional MR fingerprinting (MRF) technique for quantification of T1, T2, and M0 in myocardium. An electrocardiograph-triggered MRF method is introduced for mapping myocardial T1, T2, and M0 during a single breath-hold in as short as four heartbeats. The pulse sequence uses variable flip angles, repetition times, inversion recovery times, and T2 preparation dephasing times. A dictionary of possible signal evolutions is simulated for each scan that incorporates the subject's unique variations in heart rate. Aspects of the sequence design were explored in simulations, and the accuracy and precision of cardiac MRF were assessed in a phantom study. In vivo imaging was performed at 3 Tesla in 11 volunteers to generate native parametric maps. T1 and T2 measurements from the proposed cardiac MRF sequence correlated well with standard spin echo measurements in the phantom study (R2 > 0.99). A Bland-Altman analysis revealed good agreement for myocardial T1 measurements between MRF and MOLLI (bias, 1 ms; 95% limits of agreement, -72 to 72 ms) and T2 measurements between MRF and T2-prepared balanced steady-state free precession (bias, -2.6 ms; 95% limits of agreement, -8.5 to 3.3 ms). MRF can provide quantitative single-slice T1, T2, and M0 maps in the heart within a single breath-hold. Magn Reson Med 77:1446-1458, 2017.
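The Bland-Altman agreement analysis quoted above is straightforward to reproduce: the bias is the mean of the paired differences, and the 95% limits of agreement are bias ± 1.96 SD of the differences. A sketch with made-up paired T1 measurements (all values are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical paired T1 values (ms): a reference method vs. a second
# method with a small additive bias and spread.
rng = np.random.default_rng(0)
reference = rng.uniform(800.0, 1400.0, 40)
mrf = reference + rng.normal(1.0, 36.0, 40)

diff = mrf - reference
bias = diff.mean()                          # mean difference
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
```

Plotting `diff` against `(mrf + reference) / 2` with horizontal lines at `bias` and `loa` gives the standard Bland-Altman plot.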
Goetz, Laurent; Piallat, Brigitte; Bhattacharjee, Manik; Mathieu, Hervé; David, Olivier; Chabardès, Stéphan
2016-05-04
The mesencephalic reticular formation (MRF) is formed by the pedunculopontine and cuneiform nuclei, two neuronal structures thought to be key elements in the supraspinal control of locomotion, muscle tone, waking, and REM sleep. The MRF has also been implicated in modulating the state of arousal, leading to the transition from wakefulness to sleep, and it is further considered a main player in the pathophysiology of the gait disorders seen in Parkinson's disease. However, the existence of a mesencephalic locomotor region and of an arousal center has not yet been demonstrated in primates. Here, we provide the first extensive electrophysiological mapping of the MRF using extracellular recordings at rest and during locomotion in a nonhuman primate (NHP) (Macaca fascicularis) model of bipedal locomotion. We found different neuronal populations that discharged according to a phasic or a tonic mode in response to locomotion, supporting the existence of a locomotor neuronal circuit within the MRF in behaving primates. Altogether, these data constitute the first electrophysiological characterization of a locomotor neuronal system within the MRF in behaving NHPs under normal conditions, in accordance with several studies done in different experimental animal models. Significance: We provide the first extensive electrophysiological mapping of the two major components of the mesencephalic reticular formation (MRF), namely the pedunculopontine and cuneiform nuclei, exploiting an NHP model of bipedal locomotion with extracellular recordings in behaving NHPs at rest and during locomotion. Different MRF neuronal groups were found to respond to locomotion with phasic or tonic patterns of response. These data constitute the first electrophysiological evidence of a locomotor neuronal system within the MRF in behaving NHPs.
Fast dictionary generation and searching for magnetic resonance fingerprinting.
Xie, Jun; Lyu, Mengye; Zhang, Jian; Hui, Edward S; Wu, Ed X; Wang, Ze
2017-07-01
A super-fast dictionary generation and searching (DGS) algorithm was developed for MR parameter quantification using magnetic resonance fingerprinting (MRF). MRF is a new technique for simultaneously quantifying multiple MR parameters with one temporally resolved MR scan, but it has a multiplicative computational complexity, making dictionary generation, storage, and retrieval a heavy burden that can easily become intractable even for state-of-the-art computers. Based on a retrospective analysis of the dictionary-matching objective function, a multi-scale, zoom-like DGS algorithm, dubbed MRF-ZOOM, was proposed. MRF-ZOOM is quasi-parameter-separable, so the multiplicative computational complexity is reduced to an additive one. Evaluations showed that MRF-ZOOM was hundreds to thousands of times faster than the original MRF parameter quantification method, even without counting the dictionary generation time. Using real data, it yielded nearly the same results as the original method. MRF-ZOOM provides a super-fast solution for MR parameter quantification.
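The multi-scale, zoom-like flavor of the search can be illustrated with a coarse-to-fine grid refinement: generate a small dictionary on a coarse parameter grid, shrink the search box around the best match, and repeat. This is my own simplified reading of a ZOOM-style search, on an invented two-parameter signal model rather than a Bloch simulation.

```python
import numpy as np

# Invented two-parameter signal model standing in for a Bloch simulation.
times = np.linspace(0.05, 3.0, 40)

def model(t1, t2):
    s = (1 - np.exp(-times / t1)) * np.exp(-times / t2)
    return s / np.linalg.norm(s)

target = model(1.1, 0.24)                 # the "acquired" fingerprint

lo = np.array([0.2, 0.02])                # (T1, T2) search box
hi = np.array([3.0, 0.5])
for level in range(6):
    t1s = np.linspace(lo[0], hi[0], 9)
    t2s = np.linspace(lo[1], hi[1], 9)
    # Generate a small dictionary on the fly for this zoom level only.
    scores = np.array([[model(a, b) @ target for b in t2s] for a in t1s])
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    best = np.array([t1s[i], t2s[j]])
    span = (hi - lo) / 4                  # shrink the box around the best match
    lo = np.maximum(lo, best - span)
    hi = np.minimum(hi, best + span)

score = model(best[0], best[1]) @ target  # final match quality
```

Each level evaluates only 81 signal curves, so no full dictionary ever has to be generated or stored.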
A Part-Of-Speech term weighting scheme for biomedical information retrieval.
Wang, Yanshan; Wu, Stephen; Li, Dingcheng; Mehrabi, Saeed; Liu, Hongfang
2016-10-01
In the era of digitalization, information retrieval (IR), which retrieves and ranks documents from large collections according to users' search queries, has been popularly applied in the biomedical domain. Building patient cohorts using electronic health records (EHRs) and searching literature for topics of interest are some IR use cases. Meanwhile, natural language processing (NLP), such as tokenization or Part-Of-Speech (POS) tagging, has been developed for processing clinical documents or biomedical literature. We hypothesize that NLP can be incorporated into IR to strengthen the conventional IR models. In this study, we propose two NLP-empowered IR models, POS-BoW and POS-MRF, which incorporate automatic POS-based term weighting schemes into bag-of-word (BoW) and Markov Random Field (MRF) IR models, respectively. In the proposed models, the POS-based term weights are iteratively calculated by utilizing a cyclic coordinate method where golden section line search algorithm is applied along each coordinate to optimize the objective function defined by mean average precision (MAP). In the empirical experiments, we used the data sets from the Medical Records track in Text REtrieval Conference (TREC) 2011 and 2012 and the Genomics track in TREC 2004. The evaluation on TREC 2011 and 2012 Medical Records tracks shows that, for the POS-BoW models, the mean improvement rates for IR evaluation metrics, MAP, bpref, and P@10, are 10.88%, 4.54%, and 3.82%, compared to the BoW models; and for the POS-MRF models, these rates are 13.59%, 8.20%, and 8.78%, compared to the MRF models. Additionally, we experimentally verify that the proposed weighting approach is superior to the simple heuristic and frequency based weighting approaches, and validate our POS category selection. 
Using the optimal weights calculated in this experiment, we tested the proposed models on the TREC 2004 Genomics track and obtained average improvement rates of 8.63% and 10.04% for POS-BoW and POS-MRF, respectively. These significant improvements verify the effectiveness of leveraging POS tagging for biomedical IR tasks. Copyright © 2016 Elsevier Inc. All rights reserved.
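The weight-tuning loop described above — cyclic coordinate ascent with a golden-section line search along each coordinate — can be sketched generically. The objective `f` here is a hypothetical stand-in for the MAP retrieval score as a function of the POS-category weights (the paper's actual objective requires running retrieval, which is not reproduced here):

```python
import math

def golden_section_max(f, a, b, tol=1e-6):
    """Golden-section line search for the maximum of a unimodal f on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):                       # maximum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # maximum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

def cyclic_coordinate_max(f, x, bounds, sweeps=3):
    """Cyclic coordinate ascent: optimize one weight at a time,
    holding the others fixed, for a few sweeps over all coordinates."""
    x = list(x)
    for _ in range(sweeps):
        for i, (lo, hi) in enumerate(bounds):
            x[i] = golden_section_max(
                lambda v: f(x[:i] + [v] + x[i + 1:]), lo, hi)
    return x
```

The golden-section search needs no derivatives, which matters here because a rank-based metric like MAP is piecewise constant and non-differentiable in the weights.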
NASA Astrophysics Data System (ADS)
Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong
2016-03-01
In this paper, we propose a low-dose computed tomography (LdCT) image reconstruction method that draws on prior knowledge learned from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least squares (PWLS) algorithm was adopted for image reconstruction, with the penalty term formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was first segmented into different tissue types by a feature vector quantization (FVQ) approach. Then, for each tissue type, a set of tissue-specific coefficients for the gMRF penalty was statistically learned from the NdCT image via multiple linear regression analysis. We also propose a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific gMRF patterns learned from the NdCT image were finally used to form an adaptive MRF penalty for the PWLS reconstruction of the LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm was shown to be more effective in preserving image textures than the conventional PWLS algorithm, and we further demonstrated the gain of high-order MRF modeling for texture-preserving LdCT PWLS image reconstruction.
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory required to calculate a low rank approximation of large MRF dictionaries. We further relax this requirement by exploiting the structure of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF fast imaging with steady-state precession sequence and more than 15-fold for the MRF balanced steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory-efficient low rank approximation methods that can benefit the use of MRF in clinical settings. They also have great potential for large scale MRF problems, such as those considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
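The core idea — compute a low-rank basis of the dictionary with a randomized SVD, then match in the compressed space — can be sketched on a toy exponential-decay dictionary. This is a generic Halko-style randomized SVD, not the paper's exact implementation, and the signal model is a hypothetical stand-in:

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Randomized range finder + SVD sketch. Only an (m x (rank+oversample))
    sketch of A is ever decomposed, which is the source of the memory
    savings for very large dictionaries."""
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(Y)                 # orthonormal basis for range(A)
    B = Q.T @ A                            # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

def compressed_match(signals, D, Ur):
    """Project dictionary and measured signals onto the rank-r left
    singular subspace, then take the best normalized inner product,
    as in standard MRF template matching."""
    Dc = Ur.T @ D                          # compressed dictionary
    Dc /= np.linalg.norm(Dc, axis=0)
    Sc = Ur.T @ signals
    return np.argmax(np.abs(Dc.T @ Sc), axis=0)
```

Because smooth relaxation-signal families have rapidly decaying singular values, a rank far below the number of dictionary atoms typically preserves the matching result.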
A prior feature SVM-MRF based method for mouse brain segmentation
Wu, Teresa; Bae, Min Hyeok; Zhang, Min; Pan, Rong; Badea, Alexandra
2012-01-01
We introduce an automated method, called prior feature Support Vector Machine-Markov Random Field (pSVMRF), to segment three-dimensional mouse brain Magnetic Resonance Microscopy (MRM) images. Our earlier work, extended MRF (eMRF), integrated Support Vector Machine (SVM) and Markov Random Field (MRF) approaches, leading to improved segmentation accuracy; however, the computation of eMRF is very expensive, which may limit its segmentation performance and robustness. In this study pSVMRF reduces training and testing time for SVM, while boosting segmentation performance. Unlike the eMRF approach, where MR intensity information and location priors are linearly combined, pSVMRF combines this information in a nonlinear fashion, enhancing the discriminative ability of the algorithm. We validate the proposed method using MR imaging of unstained and actively stained mouse brain specimens, and compare segmentation accuracy with two existing methods: eMRF and MRF. C57BL/6 mice are used for training and testing, using cross validation. For formalin-fixed C57BL/6 specimens, pSVMRF outperforms both eMRF and MRF. The segmentation accuracy for C57BL/6 brains, stained or not, was similar for larger structures like the hippocampus and caudate putamen (~87%), but increased substantially for smaller regions like the substantia nigra (from 78.36% to 91.55%) and anterior commissure (from ~50% to ~80%). To test segmentation robustness against increased anatomical variability we add two strains, BXD29 and a transgenic mouse model of Alzheimer's disease. Segmentation accuracy for the new strains is 80% for the hippocampus and caudate putamen, indicating that pSVMRF is a promising approach for phenotyping mouse models of human brain disorders. PMID:21988893
Ye, Huihui; Cauley, Stephen F; Gagoski, Borjan; Bilgic, Berkin; Ma, Dan; Jiang, Yun; Du, Yiping P; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin
2017-05-01
To develop a reconstruction method to improve SMS-MRF, in which slice acceleration is used in conjunction with highly undersampled in-plane acceleration to speed up MRF acquisition. In this work, two methods are employed to efficiently perform simultaneous multislice magnetic resonance fingerprinting (SMS-MRF) data acquisition and direct-spiral slice-GRAPPA (ds-SG) reconstruction. First, the lengthy training data acquisition is shortened by employing the through-time/through-k-space approach, in which similar k-space locations within and across spiral interleaves are grouped and associated with a single set of kernels. Second, inversion recovery preparation (IR-prepped), variable flip angle (FA), and variable repetition time (TR) are used for the acquisition of the training data, to increase signal variation and improve the conditioning of the kernel fitting. The grouping of k-space locations enables a large reduction in the number of kernels required, and the IR-prepped training data with variable FA and TR provide improved ds-SG kernels and reconstruction performance. With direct-spiral slice-GRAPPA, tissue parameter maps comparable to those of conventional MRF were obtained at multiband (MB) = 3 acceleration using a t-blipped SMS-MRF acquisition with a 32-channel head coil at 3 Tesla (T). The proposed reconstruction scheme allows MB = 3 accelerated SMS-MRF imaging with high-quality T1, T2, and off-resonance maps, and can be used to significantly shorten MRF acquisition and aid its adoption in neuroscientific and clinical settings. Magn Reson Med 77:1966-1974, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm of Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated, and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms of Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.
Li, Yuhong; Jia, Fucang; Qin, Jing
2016-10-01
Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to address the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by assigning each voxel the label with maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing sparse representation into the likelihood probability and an MRF into the prior probability. Because MAP estimation is NP-hard, we convert it into a minimum energy optimization problem and employ graph cuts to find the solution. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranked 2nd among the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
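The MAP-as-energy-minimization formulation above can be illustrated on a tiny example. The paper solves the resulting energy with graph cuts; the sketch below uses Iterated Conditional Modes (ICM) instead, as a much simpler stand-in that minimizes the same kind of unary-plus-Potts energy (all arrays here are synthetic toy data):

```python
import numpy as np

def icm_segment(unary, beta=1.0, iters=5):
    """Approximately minimize
        E(x) = sum_p U_p(x_p) + beta * sum_{p~q} [x_p != x_q]
    by Iterated Conditional Modes (a greedy stand-in for graph cuts).

    unary: (H, W, L) array of per-pixel, per-label costs
           (e.g. negative log-likelihoods).
    """
    H, W, L = unary.shape
    labels = np.argmin(unary, axis=2)           # maximum-likelihood start
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                costs = unary[i, j].copy()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts penalty for disagreeing with a neighbor
                        costs += beta * (np.arange(L) != labels[ni, nj])
                labels[i, j] = int(np.argmin(costs))
    return labels
```

The Potts prior smooths away isolated voxels whose likelihood term is noisy, which is exactly the role the MRF prior plays in the paper's model; graph cuts find much stronger optima of the same energy than this greedy sweep.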
Robust sliding-window reconstruction for accelerating the acquisition of MR fingerprinting.
Cao, Xiaozhi; Liao, Congyu; Wang, Zhixing; Chen, Ying; Ye, Huihui; He, Hongjian; Zhong, Jianhui
2017-10-01
To develop a method for accelerated and robust MR fingerprinting (MRF) with improved image reconstruction and parameter matching processes. A sliding-window (SW) strategy was applied to MRF, in which signal and dictionary matching was conducted between fingerprints consisting of mixed-contrast image series, reconstructed from consecutive data frames segmented by a sliding window, and a precalculated mixed-contrast dictionary. The effectiveness and performance of this new method, dubbed SW-MRF, was evaluated both in phantom and in vivo. Error quantifications were conducted on results obtained with various settings of the SW reconstruction parameters. Compared with the original MRF strategy, the results of both phantom and in vivo experiments demonstrate that the proposed SW-MRF strategy either provides similar accuracy with reduced acquisition time, or improved accuracy with equal acquisition time. Parametric maps of T1, T2, and proton density of comparable quality could be achieved with a two-fold or greater reduction in acquisition time. The effect of sliding-window width on dictionary sensitivity was also estimated. The novel SW-MRF recovers high quality image frames from highly undersampled MRF data, which enables more robust dictionary matching with a reduced number of data frames. This time efficiency may facilitate MRF applications in time-critical clinical settings. Magn Reson Med 78:1579-1588, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Brain tumor segmentation in 3D MRIs using an improved Markov random field model
NASA Astrophysics Data System (ADS)
Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza
2011-10-01
Markov Random Field (MRF) models have recently been suggested for MRI brain segmentation by a large number of researchers. By exploiting Markovianity, a local property, MRF models are able to solve a global optimization problem locally. But they still carry a heavy computational burden, especially when they use stochastic relaxation schemes such as Simulated Annealing (SA). In this paper, a new 3D-MRF model is put forward to speed up convergence. Although the search procedure of SA is fairly localized, it prevents exploring a wide diversity of solutions and suffers from several limitations. In comparison, a Genetic Algorithm (GA) has a good capability for global search but is weak at hill climbing. Our proposed algorithm combines SA with an improved GA (IGA) to optimize the solution, which speeds up the computation. Moreover, the proposed algorithm outperforms the traditional 2D-MRF in solution quality.
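The complementary roles of the two optimizers — GA for global exploration, SA for hill climbing and escape from local minima — can be sketched generically. The hybrid below is a toy illustration on a continuous test function, not the paper's IGA (whose operators and MRF energy are not detailed in the abstract): GA crossover/mutation proposes candidates, and an SA-style acceptance rule decides whether each offspring replaces its better parent:

```python
import math
import random

def hybrid_ga_sa(f, dim, pop_size=20, gens=60, T0=1.0, seed=1):
    """Toy GA/SA hybrid minimizer: crossover + mutation for global search,
    simulated-annealing acceptance for local refinement/escape."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    T = T0
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)
            # uniform crossover, then a Gaussian mutation on one gene
            child = [(x if rng.random() < 0.5 else y) for x, y in zip(a, b)]
            child[rng.randrange(dim)] += rng.gauss(0, 0.5)
            parent = min((a, b), key=f)
            d = f(child) - f(parent)
            # SA acceptance: always keep improvements; occasionally keep a
            # worse child (more often while the temperature T is high)
            if d < 0 or rng.random() < math.exp(-d / max(T, 1e-9)):
                nxt.append(child)
            else:
                nxt.append(parent)
        pop = nxt
        best = min(pop + [best], key=f)
        T *= 0.95                              # geometric cooling schedule
    return best
```

In the paper's setting, the individuals would be candidate label fields and `f` the 3D-MRF energy; the same accept/reject logic applies unchanged.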
NASA Astrophysics Data System (ADS)
Zhang, Yunfei; Huang, Wen; Zheng, Yongcheng; Ji, Fang; Xu, Min; Duan, Zhixin; Luo, Qing; Liu, Qian; Xiao, Hong
2016-03-01
Zinc sulfide is a typical infrared optical material, commonly produced using single point diamond turning (SPDT). SPDT can efficiently produce zinc sulfide aspheric surfaces with acceptable figure error, but the tool marks left by the diamond turning process cause high micro-roughness that degrades optical performance when the optic is used in the visible region of the spectrum. Magnetorheological finishing (MRF) is a deterministic, sub-aperture polishing technology that is very helpful in improving both surface micro-roughness and surface figure. This paper mainly investigates MRF of large-aperture off-axis aspheric zinc sulfide optical surfaces. The topological structure and coordinate transformation of the MRF machine tool PKC1200Q2 are analyzed and its kinematics is calculated; the post-processing algorithm model of MRF for an optical lens is then established. Taking the post-processing of an off-axis aspheric surface as an example, a post-processing algorithm for a raster tool path is deduced and the errors produced by the approximate treatment are analyzed. A polishing algorithm for trajectory planning and dwell time, based on a matrix equation and optimization theory, is also presented. Using this algorithm, an experiment was performed to machine a large-aperture off-axis aspheric surface on our in-house MRF machine. After several polishing runs, the figure accuracy improved from PV 3.3λ to 2.0λ and from RMS 0.451λ to 0.327λ. The algorithm can also be used to polish other shapes, including spheres, aspheres, and prisms.
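The dwell-time computation mentioned above is, at its core, a constrained linear inverse problem: desired removal ≈ (removal-function matrix) × (dwell times), with dwell times nonnegative. The sketch below solves this with projected gradient descent on synthetic data; it is a generic stand-in for the paper's matrix-equation/optimization solver (production codes typically use NNLS or quadratic programming):

```python
import numpy as np

def dwell_times(A, target, iters=2000, step=None):
    """Solve target ~= A @ d subject to d >= 0 by projected gradient descent.

    A[i, j] = material removed at surface point i per unit dwell time at
    tool position j (the removal-function influence matrix);
    d[j]     = dwell time at tool position j.
    """
    d = np.zeros(A.shape[1])
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the LS gradient
    for _ in range(iters):
        grad = A.T @ (A @ d - target)
        d = np.maximum(d - step * grad, 0.0)     # project onto d >= 0
    return d
```

A usage example with a hypothetical Gaussian removal footprint: build `A` from the footprint sampled at the raster positions, then recover the dwell map that produces a prescribed removal profile.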
Fast magnetic resonance fingerprinting for dynamic contrast-enhanced studies in mice.
Gu, Yuning; Wang, Charlie Y; Anderson, Christian E; Liu, Yuchi; Hu, He; Johansen, Mette L; Ma, Dan; Jiang, Yun; Ramos-Estebanez, Ciro; Brady-Kalnay, Susann; Griswold, Mark A; Flask, Chris A; Yu, Xin
2018-05-09
The goal of this study was to develop a fast MR fingerprinting (MRF) method for simultaneous T1 and T2 mapping in DCE-MRI studies in mice. MRF sequences based on balanced SSFP and fast imaging with steady-state precession were implemented and evaluated on a 7T preclinical scanner. The readout used a zeroth-moment-compensated variable-density spiral trajectory that fully sampled the entire k-space and the inner 10 × 10 k-space with 48 and 4 interleaves, respectively. In vitro and in vivo studies of mouse brain were performed to evaluate the accuracy of MRF measurements with both fully sampled and undersampled data. The application of MRF to dynamic T1 and T2 mapping in DCE-MRI studies was demonstrated in a mouse model of heterotopic glioblastoma using gadolinium-based and dysprosium-based contrast agents. The T1 and T2 measurements in the phantom showed strong agreement between the MRF and conventional methods. MRF with spiral encoding allowed up to 8-fold undersampling without loss of measurement accuracy. This enabled simultaneous T1 and T2 mapping with 2-minute temporal resolution in DCE-MRI studies. Magnetic resonance fingerprinting provides the opportunity for dynamic quantification of contrast agent distribution in preclinical tumor models on high-field MRI scanners. © 2018 International Society for Magnetic Resonance in Medicine.
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is especially pronounced when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
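The subspace idea can be sketched in a fully-sampled toy setting: take the temporal subspace from an SVD of the dictionary, fit each voxel's time series in that subspace by least squares, then run conventional dictionary matching on the result. This is a simplified stand-in for the paper's method (with undersampled k-space the same model yields a larger, still linear, least-squares problem, which is not reproduced here), and the exponential dictionary is hypothetical:

```python
import numpy as np

def temporal_subspace(D, rank):
    """Temporal subspace from an SVD of the dictionary D (time x atoms):
    the leading left singular vectors span the magnetization dynamics."""
    U, _, _ = np.linalg.svd(D, full_matrices=False)
    return U[:, :rank]                      # (T, r) orthonormal basis

def subspace_recon(Y, Ur):
    """Least-squares fit of each measured time series (columns of Y) in
    the temporal subspace: C minimizes ||Y - Ur @ C||_F. With orthonormal
    Ur this reduces to a projection, giving a low-rank, denoised series."""
    C = Ur.T @ Y                            # (r, voxels) coefficient images
    return Ur @ C

def match(Yhat, D):
    """Conventional dictionary matching on the reconstructed series."""
    Dn = D / np.linalg.norm(D, axis=0)
    return np.argmax(Dn.T @ Yhat, axis=0)
```

Constraining the reconstruction to a low-dimensional temporal subspace is what suppresses aliasing and noise components that do not resemble any physically plausible signal evolution.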
A modified method for MRF segmentation and bias correction of MR image with intensity inhomogeneity.
Xie, Mei; Gao, Jingjing; Zhu, Chongjin; Zhou, Yan
2015-01-01
The Markov random field (MRF) model is an effective method for brain tissue classification and has been applied in MR image segmentation for decades. However, it falls short of the expected classification in MR images with intensity inhomogeneity because the bias field is not considered in the formulation. In this paper, we propose an interleaved method joining a modified MRF classification and bias field estimation in an energy minimization framework, whose initial estimation is based on the k-means algorithm in view of prior information on MRI. The proposed method has the salient advantage of overcoming the misclassifications produced by non-interleaved MRF classification on MR images with intensity inhomogeneity. In contrast to other baseline methods, experimental results have also demonstrated the effectiveness and advantages of our algorithm via applications to real and synthetic MR images.
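The k-means initialization step can be sketched in a few lines. This is a plain 1-D k-means on voxel intensities with quantile initialization, a generic stand-in for the paper's initializer (the tissue-prior details are not reproduced here); its labels and class centers would seed the interleaved MRF-classification/bias-estimation loop:

```python
import numpy as np

def kmeans_1d(x, k, iters=20):
    """Plain 1-D k-means on intensities with quantile initialization.
    Returns per-sample labels and the k class centers."""
    # quantile init spreads the initial centers across the intensity range
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        # assign each sample to its nearest center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep old center if class empties
                centers[j] = x[labels == j].mean()
    return labels, centers
```

A good initialization matters here because the joint classification/bias-field energy is non-convex; starting from intensity clusters keeps the interleaved minimization away from poor local minima.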
Hoppe, Elisabeth; Körzdörfer, Gregor; Würfl, Tobias; Wetzl, Jens; Lugauer, Felix; Pfeuffer, Josef; Maier, Andreas
2017-01-01
The purpose of this work is to evaluate methods from deep learning for application to Magnetic Resonance Fingerprinting (MRF). MRF is a recently proposed measurement technique for generating quantitative parameter maps. In MRF, a non-steady-state signal is generated by a pseudo-random excitation pattern. A comparison of the measured signal in each voxel with the physical model yields quantitative parameter maps. Currently, the comparison is done by matching a dictionary of simulated signals to the acquired signals. To accelerate the computation of quantitative maps, we train a Convolutional Neural Network (CNN) on simulated dictionary data. As a proof of principle, we show that the neural network implicitly encodes the dictionary and can replace the matching process.
Optimal mapping of neural-network learning on message-passing multicomputers
NASA Technical Reports Server (NTRS)
Chu, Lon-Chan; Wah, Benjamin W.
1992-01-01
This study seeks optimal mappings of the learning process in multilayer feed-forward artificial neural networks (ANNs) onto message-passing multicomputers, minimizing learning-algorithm completion time. A novel approximation algorithm for mappings of this kind is derived from observations of the dominance of a parallel ANN algorithm's computation time over its communication time. Attention is given to both static and dynamic mapping schemes for systems with static and dynamic background workloads, as well as to experimental results obtained for simulated mappings on multicomputers with dynamic background workloads.
Aircraft target detection algorithm based on high resolution spaceborne SAR imagery
NASA Astrophysics Data System (ADS)
Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing
2018-03-01
In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model with an MRF: the Gamma mixture model produces the initial classification result, which is then optimized by the MRF technique using the spatial correlation of pixels. Additionally, morphology methods are employed to extract the airport region of interest (ROI), in which suspected aircraft target samples are screened to reduce false alarms and increase detection performance. Finally, aircraft target detection is presented and verified by simulation tests.
Force modeling for incisions into various tissues with MRF haptic master
NASA Astrophysics Data System (ADS)
Kim, Pyunghwa; Kim, Soomin; Park, Young-Dai; Choi, Seung-Bok
2016-03-01
This study proposes a new model to predict the reaction force that occurs in incisions during robot-assisted minimally invasive surgery. The reaction force is fed back to the manipulator by a magneto-rheological fluid (MRF) haptic master, which features a bi-directional clutch actuator. The reaction-force feedback provides sensations similar to laparotomy that a conventional surgical master cannot provide. This advantage shortens the training period for robot-assisted minimally invasive surgery and can improve the accuracy of operations. The reaction force model for incisions can also be utilized in a surgical simulator that provides a virtual reaction force. In this work, to model the reaction force during incisions, the energy aspect of the incision process is adopted and analyzed. Each mode of the incision process is classified by the tendency of the energy change and modeled for realistic real-time application. The reaction force model uses actual reaction force information from three types of tissue: hard, medium, and soft. The modeled force is realized by the MRF haptic master through an algorithm based on the position and velocity of the scalpel, using two different control methods: an open-loop algorithm and a closed-loop algorithm. The reaction forces obtained from the proposed model are compared with a desired force in the time domain.
Magnetic Resonance Fingerprinting with short relaxation intervals.
Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter
2017-09-01
The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening the relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two while maintaining acceptable accuracy: the largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved while maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method for improving the scan efficiency of MRF sequences.
The method enables fast implementations of 3D spatially resolved MRF. Copyright © 2017 Elsevier Inc. All rights reserved.
Music-based magnetic resonance fingerprinting to improve patient comfort during MRI examinations.
Ma, Dan; Pierre, Eric Y; Jiang, Yun; Schluchter, Mark D; Setsompop, Kawin; Gulani, Vikas; Griswold, Mark A
2016-06-01
Unpleasant acoustic noise is a drawback of almost every MRI scan. Instead of reducing acoustic noise to improve patient comfort, we propose a technique that mitigates the noise problem by producing musical sounds directly from the switching magnetic fields while simultaneously quantifying multiple important tissue properties. MP3 music files were converted to arbitrary encoding gradients, which were then used with varying flip angles and repetition times in two- and three-dimensional magnetic resonance fingerprinting (MRF) examinations. This new acquisition method, named MRF-Music, was used to quantify T1, T2, and proton density maps simultaneously while providing pleasing sounds to the patients. MRF-Music scans improved patient comfort significantly during MRI examinations. The T1 and T2 values measured in a phantom are in good agreement with those from standard spin echo measurements. T1 and T2 values from the brain scan are also close to previously reported values. The MRF-Music sequence provides a significant improvement in patient comfort compared with the MRF scan and other fast imaging techniques such as echo planar imaging and turbo spin echo scans. It is also a fast and accurate quantitative method that quantifies multiple relaxation parameters simultaneously. Magn Reson Med 75:2303-2314, 2016. © 2015 Wiley Periodicals, Inc.
Rieger, Benedikt; Akçakaya, Mehmet; Pariente, José C; Llufriu, Sara; Martinez-Heras, Eloy; Weingärtner, Sebastian; Schad, Lothar R
2018-04-27
Magnetic resonance fingerprinting (MRF) is a promising method for fast simultaneous quantification of multiple tissue parameters. The objective of this study is to improve the coverage of MRF based on echo-planar imaging (MRF-EPI) by using a slice-interleaved acquisition scheme. For this, the MRF-EPI is modified to acquire several slices in a randomized interleaved manner, increasing the effective repetition time of the spoiled gradient echo readout acquisition in each slice. Per-slice matching of the signal trace to a precomputed dictionary allows the generation of T1 and T2* maps with integrated B1+ correction. Subsequent compensation for the coil sensitivity profile and normalization to the cerebrospinal fluid additionally allow for quantitative proton density (PD) mapping. Numerical simulations are performed to optimize the number of interleaved slices. Quantification accuracy is validated in phantom scans and feasibility is demonstrated in vivo. Numerical simulations suggest the acquisition of four slices as a trade-off between quantification precision and scan time. Phantom results indicate good agreement with reference measurements (difference T1: -2.4 ± 1.1%; T2*: -0.5 ± 2.5%; PD: -0.5 ± 7.2%). In-vivo whole-brain coverage of T1, T2*, and PD with 32 slices was acquired within 3:36 minutes, resulting in parameter maps of high visual quality and performance comparable to single-slice MRF-EPI at a 4-fold scan-time reduction.
NASA Astrophysics Data System (ADS)
Zhong, Xianyun; Fan, Bin; Wu, Fan
2017-08-01
The corrective calibration of the removal function plays an important role in high-accuracy magnetorheological finishing (MRF) processes. This paper investigates the asymmetrical characteristic of the MRF removal function shape and analyzes its influence on the surface residual error by means of an iteration algorithm and simulations. By comparing the ripple errors and convergence ratios obtained with the ideal MRF tool function and the deflected tool function, mathematical models for calibrating the deviation in the horizontal and flow directions are presented. Revised mathematical models for the coordinate transformation of an MRF machine are also established. A Ø140 mm fused silica flat and a Ø196 mm, f/1:1 fused silica concave sphere are used as experimental samples. After two runs, the flat mirror's final surface error reaches PV 17.7 nm, RMS 1.75 nm, with a total polishing time of 16 min; after three runs, the sphere mirror's final surface error reaches RMS 2.7 nm with a total polishing time of 70 min. The convergence ratios are 96.2% and 93.5%, respectively. The spherical simulation error and the polishing result are nearly consistent, which fully validates the efficiency and feasibility of the calibration method for the MRF removal function error in high-accuracy subaperture optical manufacturing.
History of magnetorheological finishing
NASA Astrophysics Data System (ADS)
Harris, Daniel C.
2011-06-01
Magnetorheological finishing (MRF) is a deterministic method for producing complex optics with figure accuracy <50 nm and surface roughness <1 nm. MRF was invented at the Luikov Institute of Heat and Mass Transfer in Minsk, Belarus in the late 1980s by a team led by William Kordonski. When the Soviet Union opened up, New York businessman Lowell Mintz was invited to Minsk in 1990 to explore possibilities for technology transfer. Mintz was told of the potential for MRF, but did not understand whether it had value. Mintz was referred to Harvey Pollicove at the Center for Optics Manufacturing of the University of Rochester. As a result of their conversation, they sent Prof. Steve Jacobs to visit Minsk and evaluate MRF. From Jacobs' positive findings, and with support from Lowell Mintz, Kordonski and his colleagues were invited in 1993 to work at the Center for Optics Manufacturing with Jacobs and Don Golini to refine MRF technology. A "preprototype" finishing machine was operating by 1994. Prof. Greg Forbes and doctoral student Paul Dumas developed algorithms for deterministic control of MRF. In 1996, Golini recognized the commercial potential of MRF, secured investment capital from Lowell Mintz, and founded QED Technologies. The first commercial MRF machine was unveiled in 1998. It was followed by more advanced models and by groundbreaking subaperture stitching interferometers for metrology. In 2006, QED was acquired by and became a division of Cabot Microelectronics. This paper recounts the history of the development of MRF and the founding of QED Technologies.
Design and multi-physics optimization of rotary MRF brakes
NASA Astrophysics Data System (ADS)
Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan
2018-03-01
Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the computation per particle becomes excessive as the number of particles and the complexity of the problem increase, and execution then becomes too slow to reach the optimized solution. This paper therefore proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. It differs from conventional PSO in splitting the original single population into several subpopulations according to a division of labor; the distribution of tasks and the transfer of information between parties are inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm overcomes the heavy computational burden of multi-physics problems while improving accuracy. Wire type, MR fluid type, magnetic core material, and ideal current inputs were determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method outperformed conventional PSO and produced small, lightweight, high-impedance rotary MRF brake designs.
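The division-of-labor idea can be loosely illustrated by a PSO whose swarm is split into subpopulations that periodically exchange their best-known position. Everything below (parameters, the sharing rule, the test function) is illustrative and not the paper's actual scheme:

```python
import numpy as np

def subpop_pso(f, dim, n_subpops=4, pop_per_sub=10, iters=200,
               w=0.7, c1=1.5, c2=1.5, share_every=20, seed=0):
    """Toy PSO with subpopulations that periodically share their overall
    best position (a loose analogue of a division-of-labor scheme)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_subpops, pop_per_sub, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 2, x)  # personal-best costs
    for it in range(iters):
        for s in range(n_subpops):
            gbest = pbest[s, np.argmin(pbest_f[s])]  # subpopulation best
            r1, r2 = rng.random((2, pop_per_sub, dim))
            v[s] = w * v[s] + c1 * r1 * (pbest[s] - x[s]) + c2 * r2 * (gbest - x[s])
            x[s] += v[s]
            fx = np.apply_along_axis(f, 1, x[s])
            improved = fx < pbest_f[s]
            pbest[s][improved] = x[s][improved]
            pbest_f[s][improved] = fx[improved]
        if (it + 1) % share_every == 0:
            # broadcast the overall best into every subpopulation's worst slot
            bs, bi = np.unravel_index(np.argmin(pbest_f), pbest_f.shape)
            for s in range(n_subpops):
                wi = np.argmax(pbest_f[s])
                pbest[s, wi] = pbest[bs, bi]
                pbest_f[s, wi] = pbest_f[bs, bi]
    bs, bi = np.unravel_index(np.argmin(pbest_f), pbest_f.shape)
    return pbest[bs, bi], pbest_f[bs, bi]
```

In the paper the expensive evaluation `f` would be a multi-physics (e.g. magnetostatic plus thermal) simulation of the brake; here any cheap test function stands in for it.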
Differential mesodermal expression of two amphioxus MyoD family members (AmphiMRF1 and AmphiMRF2)
NASA Technical Reports Server (NTRS)
Schubert, Michael; Meulemans, Daniel; Bronner-Fraser, Marianne; Holland, Linda Z.; Holland, Nicholas D.
2003-01-01
To explore the evolution of myogenic regulatory factors in chordates, we isolated two MyoD family genes (AmphiMRF1 and AmphiMRF2) from amphioxus. AmphiMRF1 is first expressed at the late gastrula in the paraxial mesoderm. As the first somites form, expression is restricted to their myotomal region. In the early larva, expression is strongest in the most anterior and most posterior somites. AmphiMRF2 transcription begins at mid/late gastrula in the paraxial mesoderm, but never spreads into its most anterior region. Through much of the neurula stage, AmphiMRF2 expression is strong in the myotomal region of all somites except the most anterior pair; by late neurula expression is downregulated except in the most posterior somites forming just rostral to the tail bud. These two MRF genes of amphioxus have partly overlapping patterns of mesodermal expression and evidently duplicated independently of the diversification of the vertebrate MRF family.
Slice profile and B1 corrections in 2D magnetic resonance fingerprinting.
Ma, Dan; Coppo, Simone; Chen, Yong; McGivney, Debra F; Jiang, Yun; Pahwa, Shivani; Gulani, Vikas; Griswold, Mark A
2017-11-01
The goal of this study is to characterize and improve the accuracy of 2D magnetic resonance fingerprinting (MRF) scans in the presence of slice profile (SP) and B1 imperfections, which are two main factors that affect quantitative results in MRF. The SP and B1 imperfections are characterized and corrected separately. The SP effect is corrected by simulating the radiofrequency pulse in the dictionary, and the B1 is corrected by acquiring a B1 map using the Bloch-Siegert method before each scan. The accuracy, precision, and repeatability of the proposed method are evaluated in phantom studies. The effects of both SP and B1 imperfections are also illustrated and corrected in the in vivo studies. The SP and B1 corrections improve the accuracy of the T1 and T2 values, independent of the shape of the radiofrequency pulse. The T1 and T2 values obtained from different excitation patterns become more consistent after corrections, which leads to an improvement of the robustness of the MRF design. This study demonstrates that MRF is sensitive to both SP and B1 effects, and that corrections can be made to improve the accuracy of MRF with only a 2-s increase in acquisition time. Magn Reson Med 78:1781-1789, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
Magnetic resonance fingerprinting based on realistic vasculature in mice
Pouliot, Philippe; Gagnon, Louis; Lam, Tina; Avti, Pramod K.; Bowen, Chris; Desjardins, Michèle; Kakkar, Ashok K.; Thorin, E.; Sakadzic, Sava; Boas, David A.; Lesage, Frédéric
2017-01-01
Magnetic resonance fingerprinting (MRF) was recently proposed as a novel strategy for MR data acquisition and analysis. A variant of MRF called vascular MRF (vMRF) followed, which extracts maps of three parameters of physiological importance: cerebral oxygen saturation (SatO2), mean vessel radius and cerebral blood volume (CBV). However, this estimation was based on idealized 2-dimensional simulations of vascular networks using random cylinders and the empirical Bloch equations convolved with a diffusion kernel. Here we focus on studying the vascular MR fingerprint using real mouse angiograms and physiological values as the substrate for the MR simulations. The MR signal is calculated ab initio with a Monte Carlo approximation, by tracking the accumulated phase from a large number of protons diffusing within the angiogram. We first study the identifiability of parameters in simulations, showing that parameters are fully estimable at realistically high signal-to-noise ratios (SNR) when the same angiogram is used for dictionary generation and parameter estimation, but that large biases in the estimates persist when the angiograms are different. Despite these biases, simulations show that differences in parameters remain estimable. We then applied this methodology to data acquired using the GESFIDE sequence with SPIONs injected into nine young wild-type and nine old atherosclerotic mice. Both the pre-injection signal and the ratio of post-to-pre-injection signals were modeled, using 5-dimensional dictionaries. The vMRF methodology extracted significant differences in SatO2, mean vessel radius and CBV between the two groups, consistent across brain regions and dictionaries. Further validation work is essential before vMRF can gain wider application. PMID:28043909
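The ab-initio signal computation can be sketched in a toy 2-D setting: protons random-walk across a map of local field offsets, each accumulates phase from the field it visits, and the signal is the magnitude of the population-average phase factor. Field values, step sizes and units below are illustrative only, and a flat map stands in for the angiogram-derived field:

```python
import numpy as np

def monte_carlo_signal(field_map, n_protons=2000, n_steps=100,
                       dt=1e-4, diff_step=1.0, gamma=2.675e8, seed=0):
    """Toy Monte Carlo MR signal: track the phase accumulated by protons
    random-walking through a 2-D field-offset map (illustrative analogue
    of an angiogram-based simulation; parameter values are arbitrary)."""
    rng = np.random.default_rng(seed)
    ny, nx = field_map.shape
    pos = rng.uniform(0, [ny, nx], (n_protons, 2))
    phase = np.zeros(n_protons)
    signal = []
    for _ in range(n_steps):
        # diffusion: Gaussian random step, clamped at the map boundaries
        pos += rng.normal(0, diff_step, (n_protons, 2))
        pos = np.clip(pos, 0, [ny - 1, nx - 1])
        # accumulate phase from the local field offset (nearest-pixel lookup)
        i, j = pos[:, 0].astype(int), pos[:, 1].astype(int)
        phase += gamma * field_map[i, j] * dt
        signal.append(np.abs(np.mean(np.exp(1j * phase))))
    return np.array(signal)
```

A spatially uniform field leaves the ensemble in phase (no decay), while a heterogeneous field dephases the protons and the signal magnitude drops, which is the contrast mechanism the vascular dictionary encodes.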
AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.
Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S
2017-09-01
Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost of quality. Copyright © 2017 Elsevier Inc. All rights reserved.
Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field
NASA Astrophysics Data System (ADS)
Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen
2017-10-01
Traffic video is a dynamic image sequence whose background and foreground change continuously, producing occlusions that make accurate segmentation difficult for general-purpose methods. A segmentation algorithm based on Bayesian inference and a spatio-temporal Markov random field (ST-MRF) is put forward. It builds energy-function models of the observation field and the label field for the motion image sequence, which has the Markov property. According to Bayes' rule, the interaction of the label field and the observation field (that is, the relationship between the label field's prior probability and the observation field's likelihood probability) yields the maximum a posteriori estimate of the label field; the ICM model is then used to extract the moving object, completing the segmentation. Finally, segmentation by ST-MRF alone and by Bayesian inference combined with ST-MRF is compared. Experimental results show that the Bayesian method combined with ST-MRF segments faster than ST-MRF alone with a smaller computational workload, and achieves a better segmentation effect, especially in heavy-traffic dynamic scenes.
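The ICM step used to extract the moving object can be sketched for a two-label MRF: each pixel greedily takes the label minimizing a Gaussian data energy plus a Potts smoothness energy over its 4-neighbourhood. The red-black sweep order, class means and weights below are illustrative, not the paper's exact model:

```python
import numpy as np

def icm_segment(image, mu=(0.0, 1.0), beta=0.5, n_iter=10):
    """Iterated Conditional Modes for a two-label MRF with Gaussian data
    terms and a Potts prior. Red-black sweeps update each colour class
    while its neighbours are held fixed, so the energy never increases.
    Edge padding treats out-of-grid neighbours as copies of the border."""
    labels = (np.abs(image - mu[1]) < np.abs(image - mu[0])).astype(int)
    ii, jj = np.indices(image.shape)
    parity = (ii + jj) % 2
    d0 = (image - mu[0]) ** 2  # data energy for label 0
    d1 = (image - mu[1]) ** 2  # data energy for label 1
    for _ in range(n_iter):
        prev = labels.copy()
        for p in (0, 1):  # red-black sweep
            pad = np.pad(labels, 1, mode='edge')
            # number of 4-neighbours currently carrying label 1, per pixel
            ones = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
            e0 = d0 + beta * ones        # Potts: pay beta per disagreeing neighbour
            e1 = d1 + beta * (4 - ones)
            upd = parity == p
            labels[upd] = (e1 < e0)[upd].astype(int)
        if np.array_equal(labels, prev):
            break
    return labels
```

In a tracking pipeline the data terms would come from the observation field (e.g. frame-difference statistics) rather than raw intensities.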
SAR Image Change Detection Based on Fuzzy Markov Random Field Model
NASA Astrophysics Data System (ADS)
Zhao, J.; Huang, G.; Zhao, Z.
2018-04-01
Most existing SAR image change detection algorithms consider only single-pixel information from the two images and ignore the spatial dependencies among image pixels, so the detection results are susceptible to image noise and the detection effect is not ideal. A Markov random field (MRF) can make full use of the spatial dependence of image pixels and improve detection accuracy. When segmenting the difference image, different categories of regions are highly similar at their junctions, making it difficult to clearly assign labels to pixels near the boundaries of the decision regions. The traditional MRF method gives each pixel a hard label during each iteration; this hard decision causes a loss of information. This paper applies a combination of fuzzy theory and MRF to change detection in SAR images. Experimental results show that the proposed method achieves a better detection effect than the traditional MRF method.
SAR-based change detection using hypothesis testing and Markov random field modelling
NASA Astrophysics Data System (ADS)
Cao, W.; Martinis, S.
2015-04-01
The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result in the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF into an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed on two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
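The graph construction behind the min-cut step can be sketched for a binary Potts MRF: unary costs become terminal-link capacities, the pairwise smoothness becomes neighbour-link capacities, and any max-flow solver then returns the exact MAP labelling. A sketch using networkx rather than a specialised max-flow implementation (energies and sizes are illustrative):

```python
import networkx as nx
import numpy as np

def graphcut_binary(unary0, unary1, beta):
    """Exact MAP labelling of a binary Potts MRF via s-t min-cut.
    unary0/unary1: per-pixel costs of labels 0 and 1; beta: Potts weight.
    Pixels on the source side of the cut take label 0, sink side label 1."""
    h, w = unary0.shape
    G = nx.DiGraph()
    node = lambda i, j: i * w + j
    for i in range(h):
        for j in range(w):
            p = node(i, j)
            # t-links: cutting s->p pays the label-1 cost, p->t the label-0 cost
            G.add_edge('s', p, capacity=float(unary1[i, j]))
            G.add_edge(p, 't', capacity=float(unary0[i, j]))
            # n-links: beta is paid whenever 4-neighbours take different labels
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    q = node(ni, nj)
                    G.add_edge(p, q, capacity=float(beta))
                    G.add_edge(q, p, capacity=float(beta))
    cut_value, (s_side, t_side) = nx.minimum_cut(G, 's', 't')
    labels = np.zeros((h, w), dtype=int)
    for p in t_side:
        if p != 't':
            labels[p // w, p % w] = 1
    return labels, cut_value
```

The cut value equals the energy of the returned labelling, which is why the optimum over all labellings is attained; dedicated solvers (e.g. Boykov-Kolmogorov) do the same thing much faster on image-sized grids.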
NASA Astrophysics Data System (ADS)
Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo
2015-08-01
In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because non-uniformity drifts over time, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not require, so SBNUC algorithms have become an essential part of infrared imaging systems. However, the poor robustness of SBNUC algorithms often leads to two defects, artifacts and over-correction; moreover, their complicated calculation processes and large storage consumption make hardware implementation difficult, especially on Field Programmable Gate Array (FPGA) platforms. The THP and GM algorithm proposed in this paper can eliminate non-uniformity without causing these defects. Its hardware implementation, based solely on an FPGA, has two advantages: (1) low resource consumption and (2) a small hardware delay of less than 20 lines. It can be ported to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe non-uniformity and ripple non-uniformity.
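The temporal high-pass core of such an algorithm can be sketched as a per-pixel recursive low-pass estimate of the static fixed pattern, subtracted from each incoming frame. The grayscale-mapping refinement of THP and GM is omitted here, and the class name and parameters are illustrative:

```python
import numpy as np

class TemporalHighPassNUC:
    """Per-pixel temporal high-pass non-uniformity correction sketch:
    a first-order recursive filter estimates each detector element's
    (static) offset, which is subtracted from the frame. Adding back the
    spatial mean of the estimate preserves the overall scene level."""

    def __init__(self, shape, alpha=0.05):
        self.mean = np.zeros(shape)  # running per-pixel low-pass estimate
        self.alpha = alpha           # recursion constant (temporal cutoff)

    def correct(self, frame):
        # update the low-pass estimate, then high-pass = frame - low-pass
        self.mean += self.alpha * (frame - self.mean)
        return frame - self.mean + self.mean.mean()
```

This is exactly the kind of recursion that maps well to an FPGA: one multiply-accumulate and one frame-sized buffer per pixel, with no scene statistics to store.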
Multiparametric estimation of brain hemodynamics with MR fingerprinting ASL.
Su, Pan; Mao, Deng; Liu, Peiying; Li, Yang; Pinho, Marco C; Welch, Babu G; Lu, Hanzhang
2017-11-01
Assessment of brain hemodynamics without exogenous contrast agents is of increasing importance in clinical applications. This study aims to develop an MR perfusion technique that can provide noncontrast and multiparametric estimation of hemodynamic markers. We devised an arterial spin labeling (ASL) method based on the principle of MR fingerprinting (MRF), referred to as MRF-ASL. By taking advantage of the rich information contained in the MRF sequence, up to seven hemodynamic parameters can be estimated concomitantly. Feasibility demonstration, flip angle optimization, comparison with Look-Locker ASL, a reproducibility test, sensitivity to a hypercapnia challenge, and initial clinical application in an intracranial steno-occlusive process, Moyamoya disease, were performed to evaluate this technique. Magnetic resonance fingerprinting ASL provided estimation of up to seven parameters, including B1+, tissue T1, cerebral blood flow (CBF), tissue bolus arrival time (BAT), pass-through arterial BAT, pass-through blood volume, and pass-through blood travel time. Coefficients of variation of the estimated parameters ranged from 0.2 to 9.6%. Hypercapnia resulted in an increase in CBF by 57.7%, and a decrease in BAT by 13.7 and 24.8% in tissue and vessels, respectively. Patients with Moyamoya disease showed diminished CBF and lengthened BAT that could not be detected with regular ASL. Magnetic resonance fingerprinting ASL is a promising technique for noncontrast, multiparametric perfusion assessment. Magn Reson Med 78:1812-1823, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Gao, Ying; Chen, Yong; Ma, Dan; Jiang, Yun; Herrmann, Kelsey A.; Vincent, Jason A.; Dell, Katherine M.; Drumm, Mitchell L.; Brady-Kalnay, Susann M.; Griswold, Mark A.; Flask, Chris A.; Lu, Lan
2015-01-01
High field, preclinical magnetic resonance imaging (MRI) scanners are now commonly used to quantitatively assess disease status and efficacy of novel therapies in a wide variety of rodent models. Unfortunately, conventional MRI methods are highly susceptible to respiratory and cardiac motion artifacts resulting in potentially inaccurate and misleading data. We have developed an initial preclinical, 7.0 T MRI implementation of the highly novel Magnetic Resonance Fingerprinting (MRF) methodology that has been previously described for clinical imaging applications. The MRF technology combines a priori variation in the MRI acquisition parameters with dictionary-based matching of acquired signal evolution profiles to simultaneously generate quantitative maps of T1 and T2 relaxation times and proton density. This preclinical MRF acquisition was constructed from a Fast Imaging with Steady-state Free Precession (FISP) MRI pulse sequence to acquire 600 MRF images with both evolving T1 and T2 weighting in approximately 30 minutes. This initial high field preclinical MRF investigation demonstrated reproducible and differentiated estimates of in vitro phantoms with different relaxation times. In vivo preclinical MRF results in mouse kidneys and brain tumor models demonstrated an inherent resistance to respiratory motion artifacts as well as sensitivity to known pathology. These results suggest that MRF methodology may offer the opportunity for quantification of numerous MRI parameters for a wide variety of preclinical imaging applications. PMID:25639694
Dynamic graph cuts for efficient inference in Markov Random Fields.
Kohli, Pushmeet; Torr, Philip H S
2007-12-01
In this paper we present a fast, new, fully dynamic algorithm for the st-mincut/max-flow problem. We show how this algorithm can be used to efficiently compute MAP solutions for certain dynamically changing MRF models in computer vision such as image segmentation. Specifically, given the solution of the max-flow problem on a graph, the dynamic algorithm efficiently computes the maximum flow in a modified version of the graph. The time taken by it is roughly proportional to the total amount of change in the edge weights of the graph. Our experiments show that, when the number of changes in the graph is small, the dynamic algorithm is significantly faster than the best known static graph cut algorithm. We test the performance of our algorithm on one particular problem: the object-background segmentation problem for video. It should be noted that the application of our algorithm is not limited to the above problem; the algorithm is generic and can be used to yield similar improvements in many other cases that involve dynamic change.
Jones, Sarah E; Stanić, Davor; Dutschmann, Mathias
2016-12-01
The respiratory pattern generator of mammals is anatomically organized in lateral respiratory columns (LRCs) within the brainstem. LRC compartments serve specific functions in respiratory pattern and rhythm generation. While the caudal medullary reticular formation (cMRF) has respiratory functions reportedly related to the mediation of expulsive respiratory reflexes, it remains unclear whether neurons of the cMRF functionally belong to the LRC. In the present study we specifically investigated the respiratory functions of the cMRF. Tract tracing shows that the cMRF has substantial connectivity with key compartments of the LRC, particularly the parafacial respiratory group and the Kölliker-Fuse nuclei. These neurons have a loose topography and are located in the ventral and dorsal cMRF. Systematic mapping of the cMRF with glutamate stimulation revealed potent respiratory modulation of the respiratory motor pattern from both dorsal and ventral injection sites. Pharmacological inhibition of the cMRF with the GABA-receptor agonist isoguvacine produced significant and robust changes to the baseline respiratory motor pattern (decreased laryngeal post-inspiratory and abdominal expiratory motor activity, delayed inspiratory off-switch and increased respiratory frequency) after dorsal cMRF injection, while ventral injections had no effect. The present data indicate that the ventral cMRF is not an integral part of the respiratory pattern generator and merely serves as a relay for sensory and/or higher command-related modulation of respiration. On the contrary, the dorsal aspect of the cMRF clearly has a functional role in respiratory pattern formation. These findings revive the largely abandoned concept of a dorsal respiratory group that contributes to the generation of the respiratory motor pattern.
Magnetic Resonance Fingerprinting
Ma, Dan; Gulani, Vikas; Seiberlich, Nicole; Liu, Kecheng; Sunshine, Jeffrey L.; Duerk, Jeffrey L.; Griswold, Mark A.
2013-01-01
Magnetic Resonance (MR) is an exceptionally powerful and versatile measurement technique. The basic structure of an MR experiment has remained nearly constant for almost 50 years. Here we introduce a novel paradigm, Magnetic Resonance Fingerprinting (MRF) that permits the non-invasive quantification of multiple important properties of a material or tissue simultaneously through a new approach to data acquisition, post-processing and visualization. MRF provides a new mechanism to quantitatively detect and analyze complex changes that can represent physical alterations of a substance or early indicators of disease. MRF can also be used to specifically identify the presence of a target material or tissue, which will increase the sensitivity, specificity, and speed of an MR study, and potentially lead to new diagnostic testing methodologies. When paired with an appropriate pattern recognition algorithm, MRF inherently suppresses measurement errors and thus can improve accuracy compared to previous approaches. PMID:23486058
A Markov model for blind image separation by a mean-field EM algorithm.
Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele
2006-02-01
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even when the noise is space-variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated, as well.
Automatic Mrf-Based Registration of High Resolution Satellite Video Data
NASA Astrophysics Data System (ADS)
Platias, C.; Vakalopoulou, M.; Karantzalos, K.
2016-06-01
In this paper we propose a deformable registration framework for high-resolution satellite video data that can automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov random field (MRF) model, and employs efficient linear programming to reach the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the former converging in minutes and the latter in seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all experiments performed.
A Parallel and Incremental Approach for Data-Intensive Learning of Bayesian Networks.
Yue, Kun; Fang, Qiyu; Wang, Xiaoling; Li, Jin; Liu, Weiyi
2015-12-01
Bayesian network (BN) has been adopted as the underlying model for representing and inferring uncertain knowledge. As the basis of realistic applications centered on probabilistic inferences, learning a BN from data is a critical subject of machine learning, artificial intelligence, and big data paradigms. Currently, it is necessary to extend the classical methods for learning BNs with respect to data-intensive computing or in cloud environments. In this paper, we propose a parallel and incremental approach for data-intensive learning of BNs from massive, distributed, and dynamically changing data by extending the classical scoring and search algorithm and using MapReduce. First, we adopt the minimum description length as the scoring metric and give the two-pass MapReduce-based algorithms for computing the required marginal probabilities and scoring the candidate graphical model from sample data. Then, we give the corresponding strategy for extending the classical hill-climbing algorithm to obtain the optimal structure, as well as that for storing a BN by
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-01-01
Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes an MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by an iterative Gauss-Seidel algorithm. Another employs the Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively for each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third one models the spatial correlations among image pixels in the image domain, also by an MRF Gibbs functional, and minimizes the PWLS by an iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831
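The PWLS cost with a quadratic MRF penalty, minimized by Gauss-Seidel, can be sketched in 1-D: minimize (y - Hx)'W(y - Hx) + beta * sum_i (x_i - x_{i-1})^2, which reduces to sequentially sweeping the normal equations. H, the weights and beta below are illustrative; the paper's Gibbs functionals and CT geometry are not reproduced:

```python
import numpy as np

def pwls_gauss_seidel(y, H, w, beta, n_iter=300):
    """Penalized weighted least-squares with a quadratic (MRF-type)
    first-difference penalty, minimized by Gauss-Seidel sweeps on the
    normal equations (A + beta*R) x = H'Wy. A 1-D toy of the approach."""
    n = H.shape[1]
    W = np.diag(w)
    A = H.T @ W @ H
    b = H.T @ W @ y
    # penalty matrix R of sum (x_i - x_{i-1})^2: tridiagonal, diag 1,2,...,2,1
    R = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    R[0, 0] = R[-1, -1] = 1
    M = A + beta * R
    x = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):  # sequential coordinate-wise (Gauss-Seidel) update
            r = b[i] - M[i] @ x + M[i, i] * x[i]
            x[i] = r / M[i, i]
    return x
```

For a symmetric positive-definite system like this one, Gauss-Seidel converges monotonically, which is the property the sinogram-domain implementation relies on; successive over-relaxation is the same sweep with an extrapolation factor.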
MR fingerprinting using fast imaging with steady state precession (FISP) with spiral readout.
Jiang, Yun; Ma, Dan; Seiberlich, Nicole; Gulani, Vikas; Griswold, Mark A
2015-12-01
This study explores the possibility of using gradient echo-based sequences other than balanced steady-state free precession (bSSFP) in the magnetic resonance fingerprinting (MRF) framework to quantify the relaxation parameters. An MRF method based on a fast imaging with steady-state precession (FISP) sequence structure is presented. A dictionary containing possible signal evolutions over physiological ranges of T1 and T2 was created using the extended phase graph formalism according to the acquisition parameters. The proposed method was evaluated in a phantom and a human brain. T1, T2, and proton density were quantified directly from the undersampled data by the pattern recognition algorithm. T1 and T2 values from the phantom demonstrate that the results of MRF-FISP are in good agreement with the traditional gold-standard methods. T1 and T2 values in brain are within the range of previously reported values. MRF-FISP enables a fast and accurate quantification of the relaxation parameters. It is immune to the banding artifact of bSSFP due to B0 inhomogeneities, which could improve the ability to use MRF for applications beyond brain imaging. © 2014 Wiley Periodicals, Inc.
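The pattern-recognition step common to these MRF methods is a dictionary match: each measured signal evolution is compared against precomputed entries and the best-matching entry's (T1, T2) is reported. The sketch below uses a toy exponential signal model standing in for the Bloch/EPG simulation; the model and parameter values are illustrative assumptions, not the paper's sequence:

```python
import numpy as np

def build_dictionary(t1s, t2s, n_timepoints):
    # Toy signal model: a hypothetical stand-in for an EPG/Bloch simulation.
    t = np.arange(n_timepoints)
    entries, params = [], []
    for t1 in t1s:
        for t2 in t2s:
            sig = np.exp(-t / t2) * (1 - np.exp(-t / t1))
            entries.append(sig)
            params.append((t1, t2))
    D = np.array(entries)
    # Normalize entries so matching reduces to a maximum inner product.
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    return D, params

def match(signal, D, params):
    """Return the (T1, T2) of the dictionary entry best matching the signal."""
    sig = signal / np.linalg.norm(signal)
    return params[int(np.argmax(np.abs(D @ sig)))]
```

The discrete grid of (T1, T2) values is exactly the "finite nature of the dictionary" limitation noted in the Kalman-filter reconstruction work above.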
Table-driven image transformation engine algorithm
NASA Astrophysics Data System (ADS)
Shichman, Marc
1993-04-01
A high speed image transformation engine (ITE) was designed and a prototype built for use in a generic electronic light table and image perspective transformation application code. The ITE takes any linear transformation, breaks the transformation into two passes and resamples the image appropriately for each pass. The system performance is achieved by driving the engine with a set of look up tables computed at start up time for the calculation of pixel output contributions. Anti-aliasing is done automatically in the image resampling process. Operations such as multiplications and trigonometric functions are minimized. This algorithm can be used for texture mapping, image perspective transformation, electronic light table, and virtual reality.
Magnetic resonance fingerprinting.
Ma, Dan; Gulani, Vikas; Seiberlich, Nicole; Liu, Kecheng; Sunshine, Jeffrey L; Duerk, Jeffrey L; Griswold, Mark A
2013-03-14
Magnetic resonance is an exceptionally powerful and versatile measurement technique. The basic structure of a magnetic resonance experiment has remained largely unchanged for almost 50 years, being mainly restricted to the qualitative probing of only a limited set of the properties that can in principle be accessed by this technique. Here we introduce an approach to data acquisition, post-processing and visualization--which we term 'magnetic resonance fingerprinting' (MRF)--that permits the simultaneous non-invasive quantification of multiple important properties of a material or tissue. MRF thus provides an alternative way to quantitatively detect and analyse complex changes that can represent physical alterations of a substance or early indicators of disease. MRF can also be used to identify the presence of a specific target material or tissue, which will increase the sensitivity, specificity and speed of a magnetic resonance study, and potentially lead to new diagnostic testing methodologies. When paired with an appropriate pattern-recognition algorithm, MRF inherently suppresses measurement errors and can thus improve measurement accuracy.
Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall
2011-01-01
One bottleneck in NMR structure determination lies in the laborious and time-consuming process of side-chain resonance and NOE assignments. Compared to the well-studied backbone resonance assignment problem, automated side-chain resonance and NOE assignments are relatively less explored. Most NOE assignment algorithms require nearly complete side-chain resonance assignments from a series of through-bond experiments such as HCCH-TOCSY or HCCCONH. Unfortunately, these TOCSY experiments perform poorly on large proteins. To overcome this deficiency, we present a novel algorithm, called NASCA (NOE Assignment and Side-Chain Assignment), to automate both side-chain resonance and NOE assignments and to perform high-resolution protein structure determination in the absence of any explicit through-bond experiment to facilitate side-chain resonance assignment, such as HCCH-TOCSY. After casting the assignment problem into a Markov Random Field (MRF), NASCA extends and applies combinatorial protein design algorithms to compute optimal assignments that best interpret the NMR data. The MRF captures the contact map information of the protein derived from NOESY spectra, exploits the backbone structural information determined by RDCs, and considers all possible side-chain rotamers. The complexity of the combinatorial search is reduced by using a dead-end elimination (DEE) algorithm, which prunes side-chain resonance assignments that are provably not part of the optimal solution. Then an A* search algorithm is employed to find a set of optimal side-chain resonance assignments that best fit the NMR data. These side-chain resonance assignments are then used to resolve the NOE assignment ambiguity and compute high-resolution protein structures. Tests on five proteins show that NASCA assigns resonances for more than 90% of side-chain protons, and achieves about 80% correct assignments. 
The final structures computed using the NOE distance restraints assigned by NASCA have backbone RMSDs of 0.8–1.5 Å from the reference structures determined by traditional NMR approaches. PMID:21706248
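The dead-end elimination step used by NASCA can be illustrated with the classic DEE criterion: a label r at position i is provably not part of the global optimum if some competitor t beats it even when r gets its best-case and t its worst-case pairwise interactions. The toy MRF below (random unary and pairwise energies; all values illustrative) is checked against brute force:

```python
import itertools
import numpy as np

def dee_prune(unary, pairwise):
    """Classic DEE: prune label r at position i if some competitor t satisfies
    E_i(r) + sum_j min_s E_ij(r,s) > E_i(t) + sum_j max_s E_ij(t,s).
    unary: list of 1-D arrays; pairwise: dict (i,j) -> 2-D array with i < j."""
    n = len(unary)

    def pair(i, j, r, s):
        return pairwise[(i, j)][r, s] if (i, j) in pairwise else pairwise[(j, i)][s, r]

    pruned = [set() for _ in range(n)]
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(unary[i])):
            lo_r = unary[i][r] + sum(
                min(pair(i, j, r, s) for s in range(len(unary[j]))) for j in others)
            for t in range(len(unary[i])):
                hi_t = unary[i][t] + sum(
                    max(pair(i, j, t, s) for s in range(len(unary[j]))) for j in others)
                if lo_r > hi_t:
                    pruned[i].add(r)  # r can never appear in the optimum
                    break
    return pruned
```

Because the criterion is sound, the exact minimum-energy assignment never uses a pruned label, which is what lets NASCA shrink the combinatorial search before running A*.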
Anderson, Christian E; Wang, Charlie Y; Gu, Yuning; Darrah, Rebecca; Griswold, Mark A; Yu, Xin; Flask, Chris A
2018-04-01
The regularly incremented phase encoding-magnetic resonance fingerprinting (RIPE-MRF) method is introduced to limit the sensitivity of preclinical MRF assessments to pulsatile and respiratory motion artifacts. As compared to previously reported standard Cartesian-MRF methods (SC-MRF), the proposed RIPE-MRF method uses a modified Cartesian trajectory that varies the acquired phase-encoding line within each dynamic MRF dataset. Phantoms and mice were scanned without gating or triggering on a 7T preclinical MRI scanner using the RIPE-MRF and SC-MRF methods. In vitro phantom longitudinal relaxation time (T1) and transverse relaxation time (T2) measurements, as well as in vivo liver assessments of artifact-to-noise ratio (ANR) and MRF-based T1 and T2 mean and standard deviation, were compared between the two methods (n = 5). RIPE-MRF showed significant ANR reductions in regions of pulsatility (P < 0.005) and respiratory motion (P < 0.0005). RIPE-MRF also exhibited improved precision in T1 and T2 measurements in comparison to the SC-MRF method (P < 0.05). The RIPE-MRF and SC-MRF methods displayed similar mean T1 and T2 estimates (difference in mean values < 10%). These results show that the RIPE-MRF method can provide effective motion artifact suppression with minimal impact on T1 and T2 accuracy for in vivo small animal MRI studies. Magn Reson Med 79:2176-2182, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Automated Segmentation of Nuclei in Breast Cancer Histopathology Images.
Paramanandam, Maqlin; O'Byrne, Michael; Ghosh, Bidisha; Mammen, Joy John; Manipadam, Marie Therese; Thamburaj, Robinson; Pakrashi, Vikram
2016-01-01
Nuclei detection in high-grade breast cancer images is quite challenging for image processing techniques due to certain heterogeneous characteristics of cancer nuclei, such as enlarged and irregularly shaped nuclei, highly coarse chromatin marginalized to the nuclei periphery, and visible nucleoli. Recent reviews state that existing techniques show appreciable segmentation accuracy on breast histopathology images whose nuclei are dispersed and regular in texture and shape; however, typical cancer nuclei are often clustered and have irregular texture and shape properties. This paper proposes a novel segmentation algorithm for detecting individual nuclei from Hematoxylin and Eosin (H&E) stained breast histopathology images. This detection framework estimates a nuclei saliency map using tensor voting, followed by boundary extraction of the nuclei on the saliency map using a Loopy Belief Propagation (LBP) algorithm on a Markov Random Field (MRF). The method was tested on both whole-slide images and frames of breast cancer histopathology images. Experimental results demonstrate high segmentation performance, with strong precision, recall and Dice-coefficient rates, upon testing high-grade breast cancer images containing several thousand nuclei. In addition to the optimal performance on the highly complex images presented in this paper, this method also gave appreciable results in comparison with two recently published methods, Wienert et al. (2012) and Veta et al. (2013), which were tested using their own datasets.
Dwell time algorithm based on the optimization theory for magnetorheological finishing
NASA Astrophysics Data System (ADS)
Zhang, Yunfei; Wang, Yang; Wang, Yajun; He, Jianguo; Ji, Fang; Huang, Wen
2010-10-01
Magnetorheological finishing (MRF) is an advanced polishing technique capable of rapidly converging to the required surface figure. This process can deterministically control the amount of material removed by varying the time to dwell at each particular position on the workpiece surface. The dwell time algorithm is one of the key techniques of MRF. A dwell time algorithm based on the matrix equation and optimization theory is presented in this paper. The conventional mathematical model of the dwell time was transformed into a matrix equation containing the initial surface error, the removal function and the dwell time function. The dwell time to be calculated is simply the solution to this large, sparse matrix equation. A new mathematical model of the dwell time based on optimization theory was established, which aims to minimize the 2-norm or ∞-norm of the residual surface error. The solution meets almost all the requirements of precise computer numerical control (CNC) without any need for extra data processing, because this optimization model takes several polishing conditions as constraints. Practical approaches to finding a minimal least-squares solution and a minimal maximum solution are also discussed in this paper. Simulations have shown that the proposed algorithm is numerically robust and reliable. With this algorithm an experiment has been performed on the MRF machine developed by ourselves. After 4.7 minutes of polishing, the figure error of a flat workpiece 50 mm in diameter improved from 0.191λ to 0.087λ PV (λ = 632.8 nm) and from 0.041λ to 0.010λ RMS. This algorithm can be used to polish workpieces of all shapes, including flats, spheres, aspheres, and prisms, and it is capable of improving surface figure dramatically.
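The matrix formulation above reduces dwell-time computation to solving A d ≈ e for a nonnegative dwell vector d, where column k of A is the removal per unit dwell at position k and e is the initial surface error. A minimal sketch using projected gradient descent on the least-squares residual (the removal matrix here is a made-up toy, and this is a stand-in for the constrained solvers the paper discusses):

```python
import numpy as np

def dwell_time_lstsq(A, e, n_iter=2000, lr=None):
    """Minimize ||A d - e||^2 subject to d >= 0 by projected gradient descent."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step from the spectral norm
    d = np.zeros(A.shape[1])
    for _ in range(n_iter):
        d -= lr * (A.T @ (A @ d - e))   # gradient step on the residual
        np.maximum(d, 0.0, out=d)       # dwell times cannot be negative
    return d
```

The nonnegativity projection is what makes the result physically realizable on a CNC machine: the tool can dwell for zero time, but never for negative time.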
Range data description based on multiple characteristics
NASA Technical Reports Server (NTRS)
Al-Hujazi, Ezzet; Sood, Arun
1988-01-01
An algorithm for describing range images based on mean curvature (H) and Gaussian curvature (K) is presented. Range images are unique in that they directly approximate the physical surfaces of a real-world 3-D scene. The curvature parameters are derived from the fundamental theorems of differential geometry and provide viewpoint-invariant pixel labels that can be used to characterize the scene. The signs of H and K can be used to classify each pixel into one of eight possible surface types. Due to the sensitivity of these parameters to noise, the resulting HK-sign map does not directly identify surfaces in the range images and must be further processed. A region growing algorithm based on modeling the scene points with a Markov Random Field (MRF) of variable neighborhood size and edge models is suggested. This approach allows the integration of information from multiple characteristics in an efficient way. The performance of the proposed algorithm on a number of synthetic and real range images is discussed.
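The eight-way surface classification from the signs of H and K can be written as a small lookup table. The labels and sign convention below follow the commonly used Besl-Jain scheme, which is an assumption about the paper's convention rather than something stated in the abstract:

```python
def surface_type(H, K, eps=1e-6):
    """Classify a pixel by the signs of mean curvature H and Gaussian
    curvature K (Besl-Jain style labels; H=0, K>0 cannot occur)."""
    h = 0 if abs(H) < eps else (1 if H > 0 else -1)
    k = 0 if abs(K) < eps else (1 if K > 0 else -1)
    table = {
        (-1,  1): "peak",  (-1, 0): "ridge",  (-1, -1): "saddle ridge",
        ( 0,  1): "none",  ( 0, 0): "flat",   ( 0, -1): "minimal surface",
        ( 1,  1): "pit",   ( 1, 0): "valley", ( 1, -1): "saddle valley",
    }
    return table[(h, k)]
```

The `eps` tolerance is exactly where the noise sensitivity mentioned above enters: curvature estimates near zero flip sign easily, which is why the raw HK-sign map needs the MRF-based region growing that follows.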
Optimum Image Formation for Spaceborne Microwave Radiometer Products.
Long, David G; Brodzik, Mary J
2016-05-01
This paper considers some of the issues of radiometer brightness image formation and reconstruction for use in the NASA-sponsored Calibrated Passive Microwave Daily Equal-Area Scalable Earth Grid 2.0 Brightness Temperature Earth System Data Record project, which generates a multisensor multidecadal time series of high-resolution radiometer products designed to support climate studies. Two primary reconstruction algorithms are considered: the Backus-Gilbert (BG) approach and the radiometer form of the scatterometer image reconstruction (SIR) algorithm. These are compared with the conventional drop-in-the-bucket (DIB) gridded image formation approach. Tradeoff study results for the various algorithm options are presented to select optimum values for the grid resolution, the number of SIR iterations, and the BG gamma parameter. We find that although both approaches are effective in improving the spatial resolution of the surface brightness temperature estimates compared to DIB, SIR requires significantly less computation. The sensitivity of the reconstruction to the accuracy of the measurement spatial response function (MRF) is explored. The partial reconstruction of the methods can tolerate errors in the description of the sensor measurement response function, which simplifies the processing of historic sensor data, for which the MRF is not known as accurately as it is for modern sensors. Simulation tradeoff results are confirmed using actual data.
NASA Astrophysics Data System (ADS)
Harris, B. J.; Sun, S. S.; Li, W. H.
2017-03-01
With the growing need for effective intercity transport, the demand for more advanced rail vehicle technology has never been greater. The conflicting primary longitudinal suspension requirements of high-speed stability and curving performance limit the development of rail vehicle technology. This paper presents a novel magnetorheological fluid-based joint with variable stiffness characteristics for the purpose of overcoming this parameter conflict. First, the joint design and working principle are developed. Following this, a prototype is tested on an MTS machine to characterize its variable stiffness properties under a range of conditions. Lastly, the performance of the proposed MRF rubber joint with regard to improving train stability and curving performance is numerically evaluated.
Event-Based Stereo Depth Estimation Using Belief Propagation.
Xie, Zhen; Chen, Shengyong; Orchard, Garrick
2017-01-01
Compared to standard frame-based cameras, biologically-inspired event-based sensors capture visual information with low latency and minimal redundancy. These event-based sensors are also far less prone to motion blur than traditional cameras, and still operate effectively in high dynamic range scenes. However, classical frame-based algorithms are typically not suitable for event-based data, and new processing algorithms are required. This paper focuses on the problem of depth estimation from a stereo pair of event-based sensors. A fully event-based stereo depth estimation algorithm which relies on message passing is proposed. The algorithm not only considers the properties of a single event but also uses a Markov Random Field (MRF) to capture the constraints between nearby events, such as disparity uniqueness and depth continuity. The method is tested on five different scenes and compared to other state-of-the-art event-based stereo matching methods. The results show that the method detects more stereo matches than other methods, with each match having a higher accuracy. The method can operate in an event-driven manner, where depths are reported for individual events as they are received, or the network can be queried at any time to generate a sparse depth frame which represents the current state of the network.
Frictional forces in material removal for glasses and ceramics using magnetorheological finishing
NASA Astrophysics Data System (ADS)
Miao, Chunlin
Magnetorheological finishing (MRF) spotting experiments on stationary parts are conducted in this work to understand the material removal mechanism in MRF. Drag force and normal force are measured in situ, simultaneously for the first time, for a variety of optical materials in MRF. We study the material removal process in MRF as a function of material mechanical properties. We experimentally demonstrate that material removal in MRF is strongly related to shear stress, which is predominantly determined by material mechanical properties. A modified Preston's equation is proposed to estimate the material removal in MRF by combining shear stress and material mechanical properties. We investigate extensively the effect of various MRF process parameters, including abrasive concentration, magnetic field strength, penetration depth and wheel speed, on material removal efficiency. The material removal rate model is expanded to include these parameters. We develop a nonaqueous magnetorheological (MR) fluid for examining the mechanical contribution to MRF material removal. This fluid is based on a combination of two carbonyl iron (CI) particles and a combination of two organic liquids. Material removal with this nonaqueous MR fluid is discussed. We also formulate a new corrosion-resistant MR fluid based on metal oxide-coated CI particles. Its rheological behavior, stability and corrosion resistance are examined.
Distributed memory parallel Markov random fields using graph partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinemann, C.; Perciano, T.; Ushizima, D.
Markov random fields (MRF) based algorithms have attracted a large amount of interest in image analysis due to their ability to exploit contextual information about data. Image data generated by experimental facilities, though, continues to grow larger and more complex, making it more difficult to analyze in a reasonable amount of time. Applying image processing algorithms to large datasets requires alternative approaches to circumvent performance problems. Aiming to provide scientists with a new tool to recover valuable information from such datasets, we developed a general purpose distributed memory parallel MRF-based image analysis framework (MPI-PMRF). MPI-PMRF overcomes performance and memory limitations by distributing data and computations across processors. The proposed approach was successfully tested with synthetic and experimental datasets. Additionally, the performance of the MPI-PMRF framework is analyzed through a detailed scalability study. We show that a performance increase is obtained while maintaining an accuracy of the segmentation results higher than 98%. The contributions of this paper are: (a) development of a distributed memory MRF framework; (b) measurement of the performance increase of the proposed approach; (c) verification of segmentation accuracy in both synthetic and experimental, real-world datasets.
Real-time stereo matching using orthogonal reliability-based dynamic programming.
Gong, Minglun; Yang, Yee-Hong
2007-03-01
A novel algorithm is presented in this paper for estimating reliable stereo matches in real time. Based on the dynamic programming-based technique we previously proposed, the new algorithm can generate semi-dense disparity maps using as few as two dynamic programming passes. The iterative best path tracing process used in traditional dynamic programming is replaced by a local minimum searching process, making the algorithm suitable for parallel execution. Most computations are implemented on programmable graphics hardware, which improves the processing speed and makes real-time estimation possible. The experiments on the four new Middlebury stereo datasets show that, on an ATI Radeon X800 card, the presented algorithm can produce reliable matches for approximately 60%-80% of pixels at a rate of approximately 10-20 frames per second. If needed, the algorithm can be configured to generate full-density disparity maps.
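The core of dynamic-programming stereo can be shown on a single scanline: per-pixel matching cost plus a smoothness penalty between neighboring disparities, minimized by a Viterbi-style forward pass and backtrack. This is a toy serial version of the general DP formulation, not the paper's GPU algorithm or its reliability measure:

```python
import numpy as np

def scanline_dp(left, right, max_disp, smooth=1.0):
    """Per-scanline stereo: choose disparities minimizing matching cost
    plus smooth * |d_x - d_{x-1}| between neighbors (one DP pass)."""
    n = len(left)
    d_range = max_disp + 1
    cost = np.full((n, d_range), np.inf)
    for x in range(n):
        for d in range(d_range):
            if x - d >= 0:
                cost[x, d] = abs(float(left[x]) - float(right[x - d]))
    acc = cost.copy()                       # accumulated best path cost
    back = np.zeros((n, d_range), dtype=int)
    for x in range(1, n):
        for d in range(d_range):
            prev = acc[x - 1] + smooth * np.abs(np.arange(d_range) - d)
            back[x, d] = int(np.argmin(prev))
            acc[x, d] = cost[x, d] + prev[back[x, d]]
    disp = np.zeros(n, dtype=int)           # backtrack the best path
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(n - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp
```

The backtracking loop here is exactly the serial step the paper replaces with a local-minimum search so the whole computation can run in parallel on graphics hardware.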
Music-Based Magnetic Resonance Fingerprinting to Improve Patient Comfort During MRI Exams
Ma, Dan; Pierre, Eric Y.; Jiang, Yun; Schluchter, Mark D.; Setsompop, Kawin; Gulani, Vikas; Griswold, Mark A.
2015-01-01
Purpose The unpleasant acoustic noise is an important drawback of almost every magnetic resonance imaging scan. Instead of reducing the acoustic noise to improve patient comfort, a method is proposed to mitigate the noise problem by producing musical sounds directly from the switching magnetic fields while simultaneously quantifying multiple important tissue properties. Theory and Methods MP3 music files were converted to arbitrary encoding gradients, which were then used with varying flip angles and TRs in both 2D and 3D MRF exams. This new acquisition method, named MRF-Music, was used to quantify T1, T2 and proton density maps simultaneously while providing pleasing sounds to the patients. Results The MRF-Music scans were shown to significantly improve patient comfort during the MRI scans. The T1 and T2 values measured from phantom are in good agreement with those from the standard spin echo measurements. T1 and T2 values from the brain scan are also close to previously reported values. Conclusions The MRF-Music sequence provides a significant improvement in patient comfort as compared to the original MRF scan and other fast imaging techniques such as EPI and TSE scans. It is also a fast and accurate quantitative method that quantifies multiple relaxation parameters simultaneously. PMID:26178439
Transformation of general binary MRF minimization to the first-order case.
Ishikawa, Hiroshi
2011-06-01
We introduce a transformation of a general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction, which limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, and we compare the new method experimentally with other such techniques.
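The flavor of such reductions can be seen in the well-known identity for a cubic binary term with a negative coefficient: one auxiliary binary variable w replaces the third-order interaction with unary and pairwise terms while preserving the minimum. This particular identity predates the paper (Freedman-Drineas style); it is shown here only as a minimal concrete instance of higher-order-to-first-order reduction, and the exhaustive check confirms exactness:

```python
import itertools

def reduce_neg_cubic(a):
    """For a <= 0, a*x*y*z = min over w in {0,1} of a*w*(x + y + z - 2),
    replacing one third-order term by terms of order at most two in (x,y,z,w)."""
    assert a <= 0

    def reduced(x, y, z):
        return min(a * w * (x + y + z - 2) for w in (0, 1))

    return reduced

# Exhaustive check that the reduction is exact for all binary inputs.
reduced = reduce_neg_cubic(-3.0)
for x, y, z in itertools.product((0, 1), repeat=3):
    assert reduced(x, y, z) == -3.0 * x * y * z
```

Positive-coefficient terms need a different construction, which is part of what the general transformation in the paper addresses.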
Semi-Automatic Terminology Generation for Information Extraction from German Chest X-Ray Reports.
Krebs, Jonathan; Corovic, Hamo; Dietrich, Georg; Ertl, Max; Fette, Georg; Kaspar, Mathias; Krug, Markus; Stoerk, Stefan; Puppe, Frank
2017-01-01
Extraction of structured data from textual reports is an important subtask for building medical data warehouses for research and care. Many medical and most radiology reports are written in a telegraphic style with a concatenation of noun phrases describing the presence or absence of findings. Therefore a lexico-syntactical approach is promising, where key terms and their relations are recognized and mapped on a predefined standard terminology (ontology). We propose a two-phase algorithm for terminology matching: In the first pass, a local terminology for recognition is derived as close as possible to the terms used in the radiology reports. In the second pass, the local terminology is mapped to a standard terminology. In this paper, we report on an algorithm for the first step of semi-automatic generation of the local terminology and evaluate the algorithm with radiology reports of chest X-ray examinations from Würzburg university hospital. With an effort of about 20 hours work of a radiologist as domain expert and 10 hours for meetings, a local terminology with about 250 attributes and various value patterns was built. In an evaluation with 100 randomly chosen reports it achieved an F1-Score of about 95% for information extraction.
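The two-pass idea above, matching local terms as they appear in reports and then mapping them onto a standard terminology, can be sketched with a presence/absence decision driven by a simple negation cue, which suits the telegraphic report style described. The terms, mappings, and negation cues below are illustrative assumptions, not the paper's actual terminology:

```python
import re

def extract_findings(report, local_terms, to_standard):
    """Pass 1: match local terms (as used in the reports) in the text.
    Pass 2: map each matched attribute onto a standard terminology,
    recording presence (True) or negated absence (False)."""
    found = {}
    for term, attribute in local_terms.items():
        for m in re.finditer(re.escape(term), report, re.IGNORECASE):
            # Telegraphic negation: a cue like "kein"/"no" right before the term.
            prefix = report[:m.start()].rstrip().lower()
            negated = prefix.endswith("kein") or prefix.endswith("no")
            found[to_standard[attribute]] = not negated
    return found
```

A real system would need a richer negation scope and the value patterns mentioned in the abstract; this shows only the structure of the local-to-standard mapping.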
Badve, Chaitra; Yu, Alice; Rogers, Matthew; Ma, Dan; Liu, Yiying; Schluchter, Mark; Sunshine, Jeffrey; Griswold, Mark; Gulani, Vikas
2015-12-01
Magnetic resonance fingerprinting (MRF) is a method of image acquisition that produces multiple MR parametric maps from a single scan. Here, we describe the normal range and progression of MRF-derived relaxometry values with age in healthy individuals. 56 normal volunteers (ages 11-71 years, M:F 24:32) were scanned. Regions of interest were drawn on T1 and T2 maps in 38 areas, including lobar and deep white matter, deep gray nuclei, thalami and posterior fossa structures. Relaxometry differences were assessed using a forward stepwise selection of a baseline model including either gender, age, or both, where variables were included if they contributed significantly (p<0.05). Additionally, differences in regional anatomy, including comparisons between hemispheres and between anatomical subcomponents, were assessed by paired t-tests. Using this protocol, MRF-derived T1 and T2 in frontal WM regions were found to increase with age, while occipital and temporal regions remained relatively stable. Deep gray nuclei, including substantia nigra, were found to have age-related decreases in relaxometry. Gender differences were observed in T1 and T2 of temporal regions, cerebellum and pons. Males were also found to have more rapid age-related changes in frontal and parietal WM. Regional differences were identified between hemispheres, between genu and splenium of corpus callosum, and between posteromedial and anterolateral thalami. In conclusion, MRF quantification can measure relaxometry trends in healthy individuals that are in agreement with current understanding of neuroanatomy and neurobiology, and has the ability to uncover additional patterns that have not yet been explored.
SU-E-T-458: Determining Threshold-Of-Failure for Dead Pixel Rows in EPID-Based Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gersh, J; Wiant, D
Purpose: A pixel correction map is applied to all EPID-based applications on the TrueBeam (Varian Medical Systems, Palo Alto, CA). When dead pixels are detected, an interpolative smoothing algorithm is applied using neighboring-pixel information to supplement missing-pixel information. The vendor suggests that when the number of dead pixels exceeds 70,000, the panel should be replaced. It is common for entire detector rows to be dead, as well as their neighboring rows; approximately 70 rows can be dead before the panel reaches this threshold. This study determines the number of neighboring dead-pixel rows that would create a large enough deviation in measured fluence to cause failures in portal dosimetry (PD). Methods: Four clinical two-arc VMAT plans were generated using Eclipse's AXB algorithm and PD plans were created using the PDIP algorithm. These plans were chosen to represent those commonly encountered in the clinic: prostate, lung, abdomen, and neck treatments. During each iteration of this study, an increasing number of dead-pixel rows are artificially applied to the correction map and a fluence QA is performed using the EPID (corrected with this map). To provide a worst-case scenario, the dead-pixel rows are chosen so that they present artifacts in the high-fluence region of the field. Results: For all eight arc-fields deemed acceptable via a 3%/3mm gamma analysis (pass rate greater than 99%), VMAT QA yielded identical results with a 5-pixel-width dead zone. When 10 dead rows were present, half of the fields had pass rates below 99%. With increasing dead rows, the pass rates were reduced substantially. Conclusion: While the vendor suggests requesting service at the point where 70,000 dead pixels are measured, the authors suggest that service should be requested when there are more than 5 consecutive dead rows.
NASA Astrophysics Data System (ADS)
Laifa, Oumeima; Le Guillou-Buffello, Delphine; Racoceanu, Daniel
2017-11-01
The fundamental role of vascular supply in tumor growth makes the evaluation of angiogenesis crucial in assessing the effect of anti-angiogenic therapies. For many years, such therapies have been designed to inhibit the vascular endothelial growth factor (VEGF). To contribute to the assessment of the effect of an anti-angiogenic agent (Pazopanib) on vascular and cellular structures, we acquired data from tumors extracted from a murine tumor model using Multi-Fluorescence Scanning. In this paper, we implemented an unsupervised algorithm combining Watershed segmentation and a Markov Random Field (MRF) model. This algorithm allowed us to quantify the proportion of apoptotic endothelial cells and to generate maps according to cell density. A stronger association between apoptosis and endothelial cells was revealed in the tumors receiving anti-angiogenic therapy (n = 4) as compared to those receiving placebo (n = 4). A high percentage of apoptotic cells in the tumor area were endothelial. Lower-density cells were detected in tumor slices presenting higher apoptotic endothelial areas.
Landsat TM image maps of the Shirase and Siple Coast ice streams, West Antarctica
Ferrigno, Jane G.; Mullins, Jerry L.; Stapleton, Jo Anne; Bindschadler, Robert; Scambos, Ted A.; Bellisime, Lynda B.; Bowell, Jo-Ann; Acosta, Alex V.
1994-01-01
Fifteen 1:250,000-scale and one 1:1,000,000-scale Landsat Thematic Mapper (TM) image mosaic maps are currently being produced of the West Antarctic ice streams on the Shirase and Siple Coasts. Landsat TM images were acquired between 1984 and 1990 in an area bounded approximately by 78°-82.5°S and 120°-160°W. Landsat TM bands 2, 3, and 4 were combined to produce a single band, thereby maximizing data content and improving the signal-to-noise ratio. The summed single band was processed with a combination of high- and low-pass filters to remove longitudinal striping and normalize solar elevation-angle effects. The images were mosaicked and transformed to a Lambert conformal conic projection using a cubic-convolution algorithm. The projection transformation was controlled with ten weighted geodetic ground-control points and internal image-to-image pass points, with annotation of major glaciological features. The image maps are being published in two formats: conventional printed map sheets and on a CD-ROM.
Greenberg, D; Istrail, S
1994-09-01
The Human Genome Project requires better software for the creation of physical maps of chromosomes. Current mapping techniques involve breaking large segments of DNA into smaller, more manageable pieces, gathering information on all the small pieces, and then constructing a map of the original large piece from the information about the small pieces. Unfortunately, in the process of breaking up the DNA some information is lost and noise of various types is introduced; in particular, the order of the pieces is not preserved. Thus, the map maker must solve a combinatorial problem in order to reconstruct the map. Good software is indispensable for quick, accurate reconstruction. The reconstruction is complicated by various experimental errors. A major source of difficulty, which seems to be inherent to the recombination technology, is the presence of chimeric DNA clones. It is fairly common for two disjoint DNA pieces to form a chimera, i.e., a fusion of two pieces which appears as a single piece. Attempts to order chimeras will fail unless they are algorithmically divided into their constituent pieces. Despite consensus within the genomic mapping community on the critical importance of correcting chimerism, algorithms for solving the chimeric clone problem have received only passing attention in the literature. Based on a model proposed by Lander (1992a, b), this paper presents the first algorithms for analyzing chimerism. We construct physical maps in the presence of chimerism by creating optimization functions whose minima correlate with map quality. Despite the fact that these optimization functions are invariably NP-complete, our algorithms are guaranteed to produce solutions which are close to the optimum. The practical import of using these algorithms depends on the strength of the correlation of the function to the map quality as well as on the accuracy of the approximations.
We employ two fundamentally different optimization functions as a means of avoiding biases likely to decorrelate the solutions from the desired map. Experiments on simulated data show that both our algorithm which minimizes the number of chimeric fragments in a solution and our algorithm which minimizes the maximum number of fragments per clone in a solution do, in fact, correlate with high-quality solutions. Furthermore, tests on simulated data using parameters set to mimic real experiments show that the algorithms have the potential to find high-quality solutions with real data. We plan to test our software against real data from the Whitehead Institute and from Los Alamos Genomic Research Center in the near future.
Rieger, Benedikt; Zimmer, Fabian; Zapp, Jascha; Weingärtner, Sebastian; Schad, Lothar R
2017-11-01
To develop an implementation of the magnetic resonance fingerprinting (MRF) paradigm for quantitative imaging using echo-planar imaging (EPI) for simultaneous assessment of T1 and T2*. The proposed MRF method (MRF-EPI) is based on the acquisition of 160 gradient-spoiled EPI images with rapid, parallel-imaging accelerated, Cartesian readout and a measurement time of 10 s per slice. Contrast variation is induced using an initial inversion pulse, and varying the flip angles, echo times, and repetition times throughout the sequence. Joint quantification of T1 and T2* is performed using dictionary matching with integrated B1+ correction. The quantification accuracy of the method was validated in phantom scans and in vivo in 6 healthy subjects. Joint T1 and T2* parameter maps acquired with MRF-EPI in phantoms are in good agreement with reference measurements, showing deviations under 5% and 4% for T1 and T2*, respectively. In vivo baseline images were visually free of artifacts. In vivo relaxation times are in good agreement with gold-standard techniques (deviation T1: 4 ± 2%, T2*: 4 ± 5%). The visual quality was comparable to the in vivo gold standard, despite substantially shortened scan times. The proposed MRF-EPI method provides fast and accurate T1 and T2* quantification. This approach offers a rapid supplement to the non-Cartesian MRF portfolio, with potentially increased usability and robustness. Magn Reson Med 78:1724-1733, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
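The dictionary matching step described above selects, for each voxel, the simulated fingerprint with the highest normalized inner product against the measured signal evolution. A toy sketch (the inversion-recovery curves and T1 grid below are illustrative, not the MRF-EPI dictionary, and B1+ correction is omitted):

```python
import numpy as np

def dictionary_match(signals, dictionary, params):
    """Match each voxel fingerprint to the dictionary atom with the highest
    normalized inner product; return the matched parameter values.

    signals: (n_voxels, n_timepoints); dictionary: (n_atoms, n_timepoints);
    params: (n_atoms,) parameter value of each atom."""
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s_norm = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    best = np.argmax(np.abs(s_norm @ d_norm.T), axis=1)
    return params[best]

# toy dictionary: inversion-recovery curves for hypothetical T1 values (ms)
t = np.linspace(0.0, 2000.0, 160)
t1_grid = np.array([400.0, 800.0, 1200.0, 1600.0])
dico = 1.0 - 2.0 * np.exp(-t[None, :] / t1_grid[:, None])

voxel = (1.0 - 2.0 * np.exp(-t / 810.0))[None, :]   # true T1 ≈ 810 ms
t1_est = dictionary_match(voxel, dico, t1_grid)[0]  # snaps to nearest grid entry
```

The estimate snaps to the nearest dictionary entry (800 ms here), which is exactly the discretization limitation that motivates finer grids or dictionary-free methods such as the Kalman-filter reconstruction above.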
González-Domínguez, Jorge; Remeseiro, Beatriz; Martín, María J
2017-02-01
The analysis of the interference patterns on the tear film lipid layer is a useful clinical test to diagnose dry eye syndrome. This task can be automated with a high degree of accuracy by means of the use of tear film maps. However, the time required by the existing applications to generate them prevents a wider acceptance of this method by medical experts. Multithreading has been previously successfully employed by the authors to accelerate the tear film map definition on multicore single-node machines. In this work, we propose a hybrid message-passing and multithreading parallel approach that further accelerates the generation of tear film maps by exploiting the computational capabilities of distributed-memory systems such as multicore clusters and supercomputers. The algorithm for drawing tear film maps is parallelized using Message Passing Interface (MPI) for inter-node communications and the multithreading support available in the C++11 standard for intra-node parallelization. The original algorithm is modified to reduce the communications and increase the scalability. The hybrid method has been tested on 32 nodes of an Intel cluster (with two 12-core Haswell 2680v3 processors per node) using 50 representative images. Results show that maximum runtime is reduced from almost two minutes using the previous only-multithreaded approach to less than ten seconds using the hybrid method. The hybrid MPI/multithreaded implementation can be used by medical experts to obtain tear film maps in only a few seconds, which will significantly accelerate and facilitate the diagnosis of the dry eye syndrome. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Jiang, Xunpeng; Yang, Zengling; Han, Lujia
2014-07-01
Contaminated meat and bone meal (MBM) in animal feedstuff has been the source of bovine spongiform encephalopathy (BSE) disease in cattle, leading to a ban on its use, so methods for its detection are essential. In this study, five pure feed and five pure MBM samples were used to prepare two sets of sample arrangements: set A for investigating the discrimination of individual feed/MBM particles and set B for larger numbers of overlapping particles. The two sets were used to test a Markov random field (MRF)-based approach. A Fourier transform infrared (FT-IR) imaging system was used for data acquisition. The spatial resolution of the near-infrared (NIR) spectroscopic image was 25 μm × 25 μm. Each spectrum was the average of 16 scans across the wavenumber range 7,000-4,000 cm⁻¹, at intervals of 8 cm⁻¹. This study introduces an innovative approach to analyzing NIR spectroscopic images: an MRF-based approach has been developed using the iterated conditional mode (ICM) algorithm, integrating initial labels derived from support vector machine discriminant analysis (SVMDA) with observation data derived from principal component analysis (PCA). The results showed that MBM covered by feed could be successfully recognized, with an overall accuracy of 86.59% and a Kappa coefficient of 0.68. Compared with conventional methods, the MRF-based approach is capable of extracting spectral information combined with spatial information from NIR spectroscopic images. This new approach enhances the identification of MBM using NIR spectroscopic imaging.
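The iterated conditional mode (ICM) algorithm referred to here greedily updates each pixel's label to minimize its local energy: a per-pixel data cost plus a smoothness penalty over neighbors. A minimal sketch with a Potts prior on a synthetic two-class image (toy unary costs, not the paper's SVMDA/PCA terms):

```python
import numpy as np

def icm(unary, beta=1.0, n_iters=5):
    """Iterated conditional modes for a Potts MRF.

    unary: (H, W, K) per-pixel cost of each of K labels;
    beta: weight of the Potts penalty for disagreeing 4-neighbors."""
    labels = unary.argmin(axis=2)                  # initial labeling from data term
    H, W, K = unary.shape
    for _ in range(n_iters):
        for i in range(H):
            for j in range(W):
                costs = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts penalty: +beta per disagreeing neighbor
                        costs += beta * (np.arange(K) != labels[ni, nj])
                labels[i, j] = costs.argmin()      # greedy local minimum
    return labels

# noisy two-class unary costs: left half favors label 0, right half label 1
rng = np.random.default_rng(0)
H, W = 8, 8
truth = np.zeros((H, W), dtype=int)
truth[:, W // 2:] = 1
unary = rng.normal(0.0, 0.3, (H, W, 2))
unary[np.arange(H)[:, None], np.arange(W)[None, :], truth] -= 1.0
smoothed = icm(unary, beta=0.8)
```

ICM converges to a local minimum of the MRF energy; its quality depends strongly on the initial labeling, which is why the paper seeds it with SVMDA results.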
NASA Technical Reports Server (NTRS)
Scheid, J. A.
1985-01-01
When both S-band and X-band data are recorded for a signal which has passed through the ionosphere, it is possible to calculate the ionospheric contribution to signal delay. In Very Long Baseline Interferometry (VLBI) this method is used to calibrate the ionosphere. In the absence of dual frequency data, the ionospheric content measured by Faraday rotation, using a signal from a geostationary satellite, is mapped to the VLBI observing direction. The purpose here is to compare the ionospheric delay obtained by these two methods. The principal conclusions are: (1) the correlation between delays obtained by these two methods is weak; (2) in mapping Faraday rotation measurements to the VLBI observing direction, a simple mapping algorithm which accounts only for changes in hour angle and elevation angle is better than a more elaborate algorithm which includes solar and geomagnetic effects; (3) fluctuations in the difference in total electron content as seen by two antennas defining a baseline limit the application of Faraday rotation data to VLBI.
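The dual-frequency calibration works because ionospheric group delay scales as 1/f²: combining the S-band and X-band delays isolates the total electron content (TEC). A sketch of the standard relation (the band frequencies and TEC value below are nominal illustrative numbers, not values from the study):

```python
# Dual-frequency estimate of ionospheric delay. The relation
# tau(f) = K * TEC / (c * f^2) is the standard first-order model.
C = 299_792_458.0          # speed of light, m/s
K = 40.3                   # ionospheric constant, m^3/s^2
F_S, F_X = 2.3e9, 8.4e9    # nominal S- and X-band frequencies, Hz

def iono_delay(tec, f):
    """First-order ionospheric group delay (seconds) at frequency f."""
    return K * tec / (C * f**2)

def tec_from_dual_band(tau_s, tau_x):
    """Total electron content (electrons/m^2) from the differential delay,
    inverting tau_s - tau_x = K*TEC/C * (1/F_S^2 - 1/F_X^2)."""
    return C * (tau_s - tau_x) * F_S**2 * F_X**2 / (K * (F_X**2 - F_S**2))

tec_true = 5e17            # a typical daytime zenith TEC, electrons/m^2
tau_s = iono_delay(tec_true, F_S)
tau_x = iono_delay(tec_true, F_X)
tec_est = tec_from_dual_band(tau_s, tau_x)   # recovers tec_true
```

Because the S-band delay is roughly 13 times the X-band delay, the difference carries nearly all of the ionospheric information; the Faraday-rotation approach must instead map a line-of-sight TEC measured toward a geostationary satellite onto the VLBI observing direction.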
Page layout analysis and classification for complex scanned documents
NASA Astrophysics Data System (ADS)
Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan
2011-09-01
A framework for region/zone classification in color and gray-scale scanned documents is proposed in this paper. The algorithm includes modules for extracting text, photo, and strong edge/line regions. First, a text detection module based on wavelet analysis and the Run Length Encoding (RLE) technique is employed. Local and global energy maps in the high-frequency bands of the wavelet domain are generated and used as initial text maps. Further analysis using RLE yields a final text map. The second module is developed to detect image/photo and pictorial regions in the input document. A block-based classifier using basis vector projections is employed to identify photo candidate regions. Then, a final photo map is obtained by applying a Markov random field (MRF)-based maximum a posteriori (MAP) optimization with iterated conditional modes (ICM). The final module detects lines and strong edges using the Hough transform and edge-linkage analysis, respectively. The text, photo, and strong edge/line maps are combined to generate a page layout classification of the scanned target document. Experimental results and objective evaluation show that the proposed technique performs effectively on a variety of simple and complex scanned document types obtained from the MediaTeam Oulu document database. The proposed page layout classifier can be used in systems for efficient document storage, content-based document retrieval, optical character recognition, mobile phone imagery, and augmented reality.
The Edge-Disjoint Path Problem on Random Graphs by Message-Passing.
Altarelli, Fabrizio; Braunstein, Alfredo; Dall'Asta, Luca; De Bacco, Caterina; Franz, Silvio
2015-01-01
We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating under a unique framework both traffic optimization and total path length minimization. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms, with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separated regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length.
Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling
Zhao, Bo; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A.; Wald, Lawrence L.; Setsompop, Kawin
2018-01-01
Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3x speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice. PMID:29060594
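The subspace model used in this line of work represents every voxel's temporal signal evolution in the span of a few singular vectors of a simulated dictionary, turning reconstruction into a small linear problem per voxel. A toy single-voxel sketch (illustrative exponential dictionary; the SMS encoding and the spatial low-rank model are ignored):

```python
import numpy as np

rng = np.random.default_rng(1)

# Dictionary of simulated signal evolutions (atoms x timepoints)
t = np.linspace(0.0, 3.0, 200)
t1s = np.linspace(0.3, 2.0, 50)
D = 1.0 - 2.0 * np.exp(-t[None, :] / t1s[:, None])

# Temporal subspace: right singular vectors of the dictionary
_, _, Vt = np.linalg.svd(D, full_matrices=False)
r = 5
U_r = Vt[:r].T                       # (n_timepoints, r) subspace basis

# A clean fingerprint plus noise, then projection onto the subspace
signal = 1.0 - 2.0 * np.exp(-t / 1.1)
noisy = signal + rng.normal(0.0, 0.05, t.size)
coeffs, *_ = np.linalg.lstsq(U_r, noisy, rcond=None)
denoised = U_r @ coeffs              # subspace-constrained estimate

err_noisy = np.linalg.norm(noisy - signal)
err_proj = np.linalg.norm(denoised - signal)
```

Projecting onto an r-dimensional subspace retains nearly all of the fingerprint (which the dictionary spans almost exactly) while discarding most of the noise energy, which is the intuition behind the roughly 3x speedup reported above.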
Molecular surface mesh generation by filtering electron density map.
Giard, Joachim; Macq, Benoît
2010-01-01
Bioinformatics methods applied to macromolecules are now widespread and in continuous expansion. In this context, representing external molecular surfaces such as the Van der Waals surface or the solvent-excluded surface can be useful for several applications. We propose a fast and parameterizable algorithm giving good visual-quality meshes representing molecular surfaces. It is obtained by isosurfacing a filtered electron density map. The density map is computed as the maximum of Gaussian functions placed around atom centers. This map is filtered by an ideal low-pass filter applied to the Fourier transform of the density map. Applying the marching cubes algorithm to the inverse transform provides a mesh representation of the molecular surface.
The Confluence of GIS, Cloud and Open Source, Enabling Big Raster Data Applications
NASA Astrophysics Data System (ADS)
Plesea, L.; Emmart, C. B.; Boller, R. A.; Becker, P.; Baynes, K.
2016-12-01
The rapid evolution of available cloud services is profoundly changing the way applications are being developed and used. Massive object stores, service scalability, and continuous integration are some of the most important cloud technology advances that directly influence science applications and GIS. At the same time, more and more scientists are using GIS platforms in their day-to-day research. Yet with new opportunities there are always some challenges. Given the large amount of data commonly required in science applications, usually large raster datasets, connectivity is one of the biggest problems. Connectivity has two aspects: one is the limited bandwidth and latency of the communication link due to the geographical location of the resources; the other is the interoperability and intrinsic efficiency of the interface protocol used to connect. NASA and Esri are actively helping each other and collaborating on a few open source projects, aiming to provide some of the core technology components to directly address the GIS-enabled data connectivity problems. Last year Esri contributed LERC, a very fast and efficient compression algorithm, to the GDAL/MRF format, which is itself a NASA/Esri collaboration project. The MRF raster format has some cloud-aware features that make it possible to build high-performance web services on cloud platforms, as some of the Esri projects demonstrate. Currently, another NASA open source project, the high-performance OnEarth WMTS server, is being refactored and enhanced to better integrate with MRF, GDAL, and Esri software. Taken together, GDAL, MRF, and OnEarth form the core of an open source CloudGIS toolkit that is already showing results. Since it is well integrated with GDAL, which is the most common interoperability component of GIS applications, this approach should improve the connectivity and performance of many science and GIS applications in the cloud.
Deterministic magnetorheological finishing of optical aspheric mirrors
NASA Astrophysics Data System (ADS)
Song, Ci; Dai, Yifan; Peng, Xiaoqiang; Li, Shengyi; Shi, Feng
2009-05-01
A new method, magnetorheological finishing (MRF), is applied to the deterministic finishing of optical aspheric mirrors to overcome disadvantages of conventional polishing, including low finishing efficiency, long iterative time, and unstable convergence. Following an introduction to the basic principle of MRF, the key techniques needed to implement deterministic MRF are discussed. To demonstrate it, a 200 mm diameter K9 glass concave asphere with a vertex radius of 640 mm was figured on an MRF polishing tool developed by ourselves. Through one process of about two hours, the surface accuracy peak-to-valley (PV) was improved from an initial 0.216λ to a final 0.179λ, and the root-mean-square (RMS) was improved from 0.027λ to 0.017λ (λ = 0.6328 μm). The high-precision and high-efficiency convergence of the aspheric surface error shows that MRF is an advanced optical manufacturing method with a high convergence ratio of surface figure, high precision of optical surfacing, and a stable, controllable finishing process. Therefore, using MRF to deterministically finish optical aspheric mirrors is credible and stable; its advantages also extend to finishing optical elements of various types, such as plane mirrors and spherical mirrors.
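The PV and RMS figures quoted above are the standard surface-error statistics: peak-to-valley is the full range of the error map, and RMS is its standard deviation about the mean. A sketch with a synthetic error profile (the sinusoid is illustrative, not the measured K9 surface; for a sinusoid of amplitude a over a full period, PV = 2a and RMS = a/√2):

```python
import numpy as np

def pv_and_rms(error_map):
    """Peak-to-valley and RMS of a surface-error map (same units as input)."""
    pv = error_map.max() - error_map.min()
    rms = np.sqrt(np.mean((error_map - error_map.mean()) ** 2))
    return pv, rms

# synthetic sinusoidal error, amplitude 0.05 waves, one full period
x = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
error = 0.05 * np.sin(x)
pv, rms = pv_and_rms(error)    # PV = 0.1, RMS = 0.05 / sqrt(2)
```

Note that PV is sensitive to single outlier points while RMS averages over the whole aperture, which is why figuring results are conventionally reported with both.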
A two-level generative model for cloth representation and shape from shading.
Han, Feng; Zhu, Song-Chun
2007-07-01
In this paper, we present a two-level generative model for representing the images and surface depth maps of drapery and clothes. The upper level consists of a number of folds which generate the high-contrast (ridge) areas with a dictionary of shading primitives (for 2D images) and fold primitives (for 3D depth maps). These primitives are represented in parametric forms and are learned in a supervised learning phase using 3D surfaces of clothes acquired through photometric stereo. The lower level consists of the remaining flat areas which fill between the folds with a smoothness prior (Markov random field). We show that the classical ill-posed problem of shape from shading (SFS) can be much improved by this two-level model because of its reduced dimensionality and incorporation of middle-level visual knowledge, i.e., the dictionary of primitives. Given an input image, we first infer the folds and compute a sketch graph using a sketch pursuit algorithm as in the primal sketch [10], [11]. The 3D folds are estimated by parameter fitting using the fold dictionary, and they form the "skeleton" of the drapery/cloth surfaces. Then, the lower level is computed by a conventional SFS method using the fold areas as boundary conditions. The two levels interact at the final stage by optimizing a joint Bayesian posterior probability on the depth map. We show a number of experiments which demonstrate more robust results in comparison with state-of-the-art work. In a broader scope, our representation can be viewed as a two-level inhomogeneous MRF model which is applicable to general shape-from-X problems. Our study is an attempt to revisit Marr's idea [23] of computing the 2½D sketch from the primal sketch. In a companion paper [2], we study shape from stereo based on a similar two-level generative sketch representation.
NASA's Earth Imagery Service as Open Source Software
NASA Astrophysics Data System (ADS)
De Cesare, C.; Alarcon, C.; Huang, T.; Roberts, J. T.; Rodriguez, J.; Cechini, M. F.; Boller, R. A.; Baynes, K.
2016-12-01
The NASA Global Imagery Browse Service (GIBS) is a software system that provides access to an archive of historical and near-real-time Earth imagery from NASA-supported satellite instruments. The imagery itself is open data, and is accessible via standards such as the Open Geospatial Consortium (OGC)'s Web Map Tile Service (WMTS) protocol. GIBS includes three core software projects: The Imagery Exchange (TIE), OnEarth, and the Meta Raster Format (MRF) project. These projects are developed using a variety of open source software, including: Apache HTTPD, GDAL, Mapserver, Grails, Zookeeper, Eclipse, Maven, git, and Apache Commons. TIE has recently been released for open source, and is now available on GitHub. OnEarth, MRF, and their sub-projects have been on GitHub since 2014, and the MRF project in particular receives many external contributions from the community. Our software has been successful beyond the scope of GIBS: the PO.DAAC State of the Ocean and COVERAGE visualization projects reuse components from OnEarth. The MRF source code has recently been incorporated into GDAL, which is a core library in many widely-used GIS software such as QGIS and GeoServer. This presentation will describe the challenges faced in incorporating open software and open data into GIBS, and also showcase GIBS as a platform on which scientists and the general public can build their own applications.
Chen, Shaoshan; He, Deyu; Wu, Yi; Chen, Huangfei; Zhang, Zaijing; Chen, Yunlei
2016-10-01
A new non-aqueous and abrasive-free magnetorheological finishing (MRF) method is adopted for processing potassium dihydrogen phosphate (KDP) crystal because of its low hardness, high brittleness, temperature sensitivity, and water solubility. This paper investigates the convergence of the surface error of an initial single-point diamond turning (SPDT)-finished KDP crystal after MRF polishing. Currently, the SPDT process comprises spiral cutting and fly cutting; the main difference between these two processes lies in the morphology of the intermediate-frequency turning marks on the surface, which affects the convergence behavior. The turning marks after spiral cutting are a series of concentric circles, while the turning marks after fly cutting are a series of parallel large arcs. Polishing results indicate that MRF polishing can only improve the low-frequency errors (L > 10 mm) of a spiral-cut KDP crystal, whereas it can improve the full-range surface errors (L > 0.01 mm) of a fly-cut KDP crystal provided the polishing process is applied no more than twice to a single surface. We conclude that a fly-cut KDP crystal will achieve better optical performance after MRF figuring than a spiral-cut KDP crystal with similar initial surface quality.
Lee, Du-Hwa; Park, Seung Jun; Ahn, Chang Sook
2017-01-01
Dynamic control of protein translation in response to the environment is essential for the survival of plant cells. Target of rapamycin (TOR) coordinates protein synthesis with cellular energy/nutrient availability through transcriptional modulation and phosphorylation of the translation machinery. However, mechanisms of TOR-mediated translation control are poorly understood in plants. Here, we report that Arabidopsis thaliana MRF (MA3 DOMAIN-CONTAINING TRANSLATION REGULATORY FACTOR) family genes encode translation regulatory factors under TOR control, and their functions are particularly important in energy-deficient conditions. Four MRF family genes (MRF1-MRF4) are transcriptionally induced by dark and starvation (DS). Silencing of multiple MRFs increases susceptibility to DS and treatment with a TOR inhibitor, while MRF1 overexpression decreases susceptibility. MRF proteins interact with eIF4A and cofractionate with ribosomes. MRF silencing decreases translation activity, while MRF1 overexpression increases it, accompanied by altered ribosome patterns, particularly in DS. Furthermore, MRF deficiency in DS causes altered distribution of mRNAs in sucrose gradient fractions and accelerates rRNA degradation. MRF1 is phosphorylated in vivo and phosphorylated by S6 kinases in vitro. MRF expression and MRF1 ribosome association and phosphorylation are modulated by cellular energy status and TOR activity. We discuss possible mechanisms of the function of MRF family proteins under normal and energy-deficient conditions and their functional link with the TOR pathway. PMID:29084871
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. 
The software also provides for a choice between three different initial rough imputation methods.
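The DTW similarity measure at the heart of the DTWimpute algorithms can be sketched in a few lines. This is the textbook dynamic-programming recurrence, not the authors' Perl or C++ implementation; the function name is illustrative.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW distance between two 1-D profiles."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Profiles that trace the same shape at different speeds, e.g. a step occurring one time point later, get a distance of zero, which is exactly what makes DTW attractive for selecting candidate profiles in time series imputation.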
A Markov Random Field Framework for Protein Side-Chain Resonance Assignment
NASA Astrophysics Data System (ADS)
Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall
Nuclear magnetic resonance (NMR) spectroscopy plays a critical role in structural genomics, and serves as a primary tool for determining protein structures, dynamics and interactions in physiologically-relevant solution conditions. The current speed of protein structure determination via NMR is limited by the lengthy time required in resonance assignment, which maps spectral peaks to specific atoms and residues in the primary sequence. Although numerous algorithms have been developed to address the backbone resonance assignment problem [68,2,10,37,14,64,1,31,60], little work has been done to automate side-chain resonance assignment [43, 48, 5]. Most previous attempts in assigning side-chain resonances depend on a set of NMR experiments that record through-bond interactions with side-chain protons for each residue. Unfortunately, these NMR experiments have low sensitivity and limited performance on large proteins, which makes it difficult to obtain enough side-chain resonance assignments. On the other hand, it is essential to obtain almost all of the side-chain resonance assignments as a prerequisite for high-resolution structure determination. To overcome this deficiency, we present a novel side-chain resonance assignment algorithm based on alternative NMR experiments measuring through-space interactions between protons in the protein, which also provide crucial distance restraints and are normally required in high-resolution structure determination. We cast the side-chain resonance assignment problem into a Markov Random Field (MRF) framework, and extend and apply combinatorial protein design algorithms to compute the optimal solution that best interprets the NMR data. Our MRF framework captures the contact map information of the protein derived from NMR spectra, and exploits the structural information available from the backbone conformations determined by orientational restraints and a set of discretized side-chain conformations (i.e., rotamers). 
A Hausdorff-based computation is employed in the scoring function to evaluate the probability of side-chain resonance assignments to generate the observed NMR spectra. The complexity of the assignment problem is first reduced by using a dead-end elimination (DEE) algorithm, which prunes side-chain resonance assignments that are provably not part of the optimal solution. Then an A* search algorithm is used to find a set of optimal side-chain resonance assignments that best fit the NMR data. We have tested our algorithm on NMR data for five proteins, including the FF Domain 2 of human transcription elongation factor CA150 (FF2), the B1 domain of Protein G (GB1), human ubiquitin, the ubiquitin-binding zinc finger domain of the human Y-family DNA polymerase Eta (pol η UBZ), and the human Set2-Rpb1 interacting domain (hSRI). Our algorithm assigns resonances for more than 90% of the protons in the proteins, and achieves about 80% correct side-chain resonance assignments. The final structures computed using distance restraints resulting from the set of assigned side-chain resonances have backbone RMSD 0.5 - 1.4 Å and all-heavy-atom RMSD 1.0 - 2.2 Å from the reference structures that were determined by X-ray crystallography or traditional NMR approaches. These results demonstrate that our algorithm can be successfully applied to automate side-chain resonance assignment and high-quality protein structure determination. Since our algorithm does not require any specific NMR experiments for measuring the through-bond interactions with side-chain protons, it can save a significant amount of both experimental cost and spectrometer time, and hence accelerate the NMR structure determination process.
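The dead-end elimination step described above can be illustrated on a generic discrete assignment problem. This is the standard Goldstein DEE criterion, not the authors' NMR-specific implementation; all names and energies are illustrative.

```python
import numpy as np

def dee_survivors(self_E, pair_E):
    """Options at each site surviving Goldstein dead-end elimination.
    self_E[i][r]: self energy of option r at site i.
    pair_E[(i, j)][r, s]: pairwise energy of option r at i with s at j."""
    n = len(self_E)
    survivors = []
    for i in range(n):
        alive = []
        for r in range(len(self_E[i])):
            pruned = False
            for t in range(len(self_E[i])):
                if t == r:
                    continue
                # Goldstein criterion: prune r if it can never beat t
                # in any context of the other sites' choices.
                gap = self_E[i][r] - self_E[i][t]
                for j in range(n):
                    if j != i:
                        gap += (pair_E[(i, j)][r] - pair_E[(i, j)][t]).min()
                if gap > 0:
                    pruned = True
                    break
            if not pruned:
                alive.append(r)
        survivors.append(alive)
    return survivors
```

Because pruned options are provably absent from the global optimum, the subsequent A* search in the paper only has to enumerate the surviving combinations.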
Remote-sensing image encryption in hybrid domains
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong
2012-04-01
Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing images are the main means of acquiring information from satellites and often contain confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm operating in hybrid domains. This algorithm makes full use of the advantages of image encryption in both the spatial domain and the transform domain. First, the low-pass subband coefficients of the image's DWT (discrete wavelet transform) decomposition are sorted by a PWLCM (piecewise linear chaotic map) system in the transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with a 2D (two-dimensional) Logistic map and an XOR operation in the spatial domain. The experimental results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical, and differential attacks. Meanwhile, the proposed algorithm achieves encryption efficiency sufficient for practical requirements.
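The spatial-domain diffusion step can be illustrated with a simplified sketch. The paper uses a 2D Logistic map; for brevity this example uses the 1D Logistic map, so the parameters and function names are assumptions for illustration rather than the authors' exact scheme.

```python
import numpy as np

def logistic_keystream(x0, n, r=3.99):
    """Byte keystream from iterating the logistic map x -> r*x*(1-x)."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_diffuse(pixels, x0):
    """XOR-diffuse a flat uint8 pixel array with a chaotic keystream.
    Decryption is the identical operation with the same key x0."""
    return pixels ^ logistic_keystream(x0, pixels.size)
```

The XOR structure makes the cipher an involution: applying `xor_diffuse` twice with the same initial condition recovers the plaintext, while a tiny change in `x0` yields a completely different keystream.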
The serial message-passing schedule for LDPC decoding algorithms
NASA Astrophysics Data System (ADS)
Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue
2015-12-01
The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. Its disadvantage is that updated messages cannot be used until the next iteration, which reduces the convergence speed. To address this, the Layered Decoding algorithm (LBP), based on a serial message-passing schedule, has been proposed. In this paper, the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. Both improve the decoding speed of the LBP algorithm while maintaining good decoding performance.
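The difference between flooding and serial scheduling can be made concrete with a toy layered min-sum decoder, in which each check row is a layer whose updated messages are used immediately within the same iteration. This is a generic sketch, not the paper's Grouped LBP or semi-serial variants.

```python
import numpy as np

def layered_min_sum(H, llr_in, n_iter=10):
    """Layered (serial-schedule) min-sum LDPC decoding sketch.
    H: binary parity-check matrix; llr_in: channel LLRs (>0 means bit 0)."""
    m, n = H.shape
    L = llr_in.astype(float).copy()   # running posterior LLRs
    R = np.zeros((m, n))              # check-to-variable messages
    for _ in range(n_iter):
        for i in range(m):            # one layer per check node
            idx = np.flatnonzero(H[i])
            q = L[idx] - R[i, idx]    # variable-to-check messages
            new_R = np.empty(len(idx))
            for a in range(len(idx)):
                others = np.delete(q, a)
                # Extrinsic min-sum update: sign product and min magnitude.
                new_R[a] = np.prod(np.sign(others)) * np.abs(others).min()
            L[idx] = q + new_R        # posteriors updated immediately
            R[i, idx] = new_R
        hard = (L < 0).astype(int)
        if not np.any(H @ hard % 2):  # all parity checks satisfied
            break
    return hard
```

Because `L` is refreshed after every layer, later checks in the same iteration already see the corrected beliefs, which is the source of the convergence speedup over flooding.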
Single-image super-resolution based on Markov random field and contourlet transform
NASA Astrophysics Data System (ADS)
Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai
2011-04-01
Learning-based methods are widely adopted in image super-resolution. In this paper, we propose a new learning-based approach using the contourlet transform and a Markov random field. The proposed algorithm employs the contourlet transform rather than the conventional wavelet to represent image features, and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with experiments on facial, vehicle plate, and real-scene images. Better visual quality is achieved in terms of peak signal-to-noise ratio and the structural similarity measurement.
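The belief propagation inference step can be illustrated on a chain-structured MRF, where min-sum message passing is exact. The paper's MRF is grid-structured over contourlet coefficients, so this is only a simplified sketch with illustrative names.

```python
import numpy as np

def chain_map_bp(unary, pairwise):
    """Min-sum belief propagation (Viterbi-style) on a chain MRF.
    unary[i, l]: cost of label l at node i.
    pairwise[l, lp]: cost of adjacent labels (l, lp)."""
    n, k = unary.shape
    msg = np.zeros((n, k))            # forward messages into each node
    for i in range(1, n):
        cand = unary[i - 1] + msg[i - 1]
        # Minimize over the predecessor's label for each current label.
        msg[i] = (cand[:, None] + pairwise).min(axis=0)
    labels = np.empty(n, dtype=int)
    labels[-1] = int(np.argmin(unary[-1] + msg[-1]))
    for i in range(n - 2, -1, -1):    # backtrack the optimal assignment
        labels[i] = int(np.argmin(unary[i] + msg[i] + pairwise[:, labels[i + 1]]))
    return labels
```

With no smoothness cost each node follows its own unary evidence; a strong pairwise cost makes BP override a noisy middle node in favor of a globally smoother labeling, which is precisely the role the MRF prior plays in the super-resolution inference.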
Superpixel-based graph cuts for accurate stereo matching
NASA Astrophysics Data System (ADS)
Feng, Liting; Qin, Kaihuai
2017-06-01
Estimating the surface normal vector and disparity of a pixel simultaneously, also known as the three-dimensional label method, has been widely used in recent continuous stereo matching to achieve sub-pixel accuracy. However, due to the infinite label space, it is extremely hard to assign each pixel an appropriate label. In this paper, we present an accurate and efficient algorithm, integrating PatchMatch with graph cuts, to approach this critical computational problem. In addition, to obtain a robust and precise matching cost, we use a convolutional neural network to learn a similarity measure on small image patches. Compared with other MRF-related methods, our method has several advantages: its submodular property ensures subproblem optimality and is easy to parallelize; graph cuts can simultaneously update multiple pixels, avoiding local minima caused by sequential optimizers like belief propagation; it uses segmentation results for better local expansion moves; and local propagation and randomization can easily generate the initial solution without external methods. Middlebury experiments show that our method achieves higher accuracy than other MRF-based algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gräfe, James; Khan, Rao; Meyer, Tyler
2014-08-15
In this study we investigate the deliverability of dosimetric plans generated by the irregular surface compensator (ISCOMP) algorithm for 6 MV photon beams in Eclipse (Varian Medical Systems, CA). In contrast to physical tissue compensation, the electronic ISCOMP uses MLCs to dynamically modulate the fluence of a photon beam in order to deliver a uniform dose at a user-defined plane in tissue. This method can be used to shield critical organs located within the treatment portal or to improve dose uniformity by tissue compensation in inhomogeneous regions. Three site-specific plans and a set of test fields were evaluated using the γ-metric of 3%/3 mm on Varian EPID, MapCHECK, and Gafchromic EBT3 film with a clinical tolerance of >95% passing rates. Point dose measurements with an NRCC-calibrated ionization chamber were also performed to verify the absolute dose delivered. In all cases the MapCHECK-measured plans met the gamma criteria. The mean passing rate for the six EBT3 film field measurements was 96.2%, with only two fields at 93.4% and 94.0% passing rates. The EPID plans passed for fields encompassing the central ∼10 × 10 cm^2 region of the detector; however, for larger fields and greater off-axis distances, discrepancies were observed and attributed to the profile corrections and modeling of backscatter in the portal dose calculation. The magnitude of the average percentage difference for 21 ion chamber point dose measurements and 17 different fields was 1.4 ± 0.9%, and the maximum percentage difference was −3.3%. These measurements qualify the algorithm for routine clinical use subject to the same pre-treatment patient-specific QA as IMRT.
Multiratio fusion change detection with adaptive thresholding
NASA Astrophysics Data System (ADS)
Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.
2017-04-01
A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.
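A minimal sketch of the dual-ratio idea follows. The adaptive mean-plus-k-sigma thresholds and parameter values here are assumptions for illustration, not the exact thresholding scheme of the paper.

```python
import numpy as np

def dual_ratio_detect(a, b, k=2.0):
    """Dual-ratio change detection between co-registered images a and b.
    Using both a/b and b/a catches changes in either direction even when
    the image-pair means differ; thresholds adapt to each ratio image."""
    eps = 1e-9                       # guard against division by zero
    r1 = a / (b + eps)
    r2 = b / (a + eps)
    t1 = r1.mean() + k * r1.std()    # adaptive threshold for ratio 1
    t2 = r2.mean() + k * r2.std()    # adaptive threshold for ratio 2
    return (r1 > t1) | (r2 > t2)
```

The MR and MRF extensions in the paper build on this by adding negative-image ratios and by fusing the per-ratio detections, requiring agreement between two or more ratios before declaring a changed pixel.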
NASA Astrophysics Data System (ADS)
Oware, E. K.
2017-12-01
Geophysical quantification of hydrogeological parameters typically involves limited noisy measurements coupled with inadequate understanding of the target phenomenon. Hence, a deterministic solution is unrealistic in light of the largely uncertain inputs. Stochastic imaging (SI), in contrast, provides multiple equiprobable realizations that enable probabilistic assessment of aquifer properties in a realistic manner. Generation of geologically realistic prior models is central to SI frameworks. Higher-order statistics for representing prior geological features in SI are, however, usually borrowed from training images (TIs), which may produce undesirable outcomes if the TIs are unrepresentative of the target structures. The Markov random field (MRF)-based SI strategy provides a data-driven alternative to TI-based SI algorithms. In the MRF-based method, the simulation of spatial features is guided by Gibbs energy (GE) minimization. Local configurations with smaller GEs have a higher likelihood of occurrence, and vice versa. The parameters of the Gibbs distribution for computing the GE are estimated from the hydrogeophysical data, thereby enabling the generation of site-specific structures in the absence of reliable TIs. In Metropolis-like SI methods, the variance of the transition probability controls the jump size. The procedure is a standard Markov chain Monte Carlo (McMC) method when a constant variance is assumed, and becomes simulated annealing (SA) when the variance (cooling temperature) is allowed to decrease gradually with time. We observe that in certain problems, the large variance typically employed at the beginning to hasten burn-in may not be ideal for sampling at the equilibrium state. The power of SA stems from its flexibility to adaptively scale the variance at different stages of the sampling. Degeneration of results was reported in a previous implementation of the MRF-based SI strategy based on a constant variance.
Here, we present an updated version of the algorithm based on SA that appears to resolve the degeneration problem with seemingly improved results. We illustrate the performance of the SA version with a joint inversion of time-lapse concentration and electrical resistivity measurements in a hypothetical trinary hydrofacies aquifer characterization problem.
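The contrast between constant-variance McMC and simulated annealing can be sketched with a generic Metropolis loop whose temperature is cooled geometrically. The energy function, proposal, and cooling schedule below are illustrative assumptions, not the authors' Gibbs-energy formulation.

```python
import math
import random

def metropolis_sa(energy, propose, x0, t0=1.0, alpha=0.95, n_steps=2000, seed=0):
    """Metropolis sampling with a geometrically cooled temperature.
    alpha=1 recovers a fixed-temperature McMC chain; alpha<1 gives
    simulated annealing, progressively suppressing uphill jumps."""
    rng = random.Random(seed)
    x, t = x0, t0
    e = energy(x)
    for _ in range(n_steps):
        y = propose(x, rng)
        de = energy(y) - e
        # Always accept downhill moves; accept uphill with prob exp(-de/t).
        if de <= 0 or rng.random() < math.exp(-de / t):
            x, e = y, e + de
        t *= alpha   # gradual cooling: broad exploration early, fine sampling late
    return x
```

The single knob `alpha` captures the paper's point: a constant large variance that speeds burn-in also degrades equilibrium sampling, whereas annealing adapts the effective jump acceptance over the run.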
NASA Astrophysics Data System (ADS)
Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul
2016-07-01
Many astronomical sensing applications operate in low-light conditions; for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes, higher contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low-order errors as well as quilting/print-through errors left over in light-weighted optics from conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever-expanding assortment of optical geometries, such as planos, spheres, on- and off-axis aspheres, primary mirrors, and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5 to 2,000 mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or freeform, but more commonly for figure corrections, achieving figure errors as low as 1 nm RMS with careful metrology setups. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in the low, mid, and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce 'perfectly-bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1 mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size.
Furthermore, a novel MRF fluid, called C30, has been developed to finish surfaces to ultra-low roughness (ULR) and has been used as the low removal rate fluid required for fine figure correction of mid-spatial frequency errors. This novel MRF fluid is able to achieve <4Å RMS on Nickel-plated Aluminum and even <1.5Å RMS roughness on Silicon, Fused Silica and other materials. C30 fluid is best utilized within a fine figure correction process to target mid-spatial frequency errors as well as smooth surface roughness 'for free' all in one step. In this paper we will discuss recent advancements in MRF technology and the ability to meet requirements for precision optics in low, mid and high spatial frequency regimes and how improved MRF performance addresses the need for achieving tight specifications required for astronomical optics.
NASA Astrophysics Data System (ADS)
Xu, Ming; Huang, Li
2014-08-01
This paper presents a new analytic algorithm for global coverage of revisiting orbits and its application to missions that revisit the Earth over long periods of time, such as the Chinese-French Oceanic Satellite (CFOSAT). First, it is noted that the traditional design methodology for revisiting orbits considers imaging on only a single (ascending or descending) pass, and that repeating orbits are employed to achieve global coverage within short periods of time. However, selecting a repeating orbit essentially yields a suboptimal solution drawn from the measure-zero set of rational numbers of passes per day, discarding many available revisiting orbits. Thus, an innovative design scheme is proposed that checks both rational and irrational numbers of passes per day to obtain the relationship between coverage percentage and altitude. To improve on traditional single-pass imaging, the proposed algorithm maps every pass onto its ascending and descending nodes on a specified latitude circle, and then accumulates the width projected onto the circle by the satellite's field of view. The resulting ergodic geometry of coverage percentage informs the final scheme selection (such as the optimal scheme with the largest coverage percentage, or a balanced scheme with the smallest gradient in its vicinity) and guides heuristic design of station-keeping control strategies. The application to CFOSAT validates the feasibility of the algorithm.
A comparative analysis of passive twin tube and skyhook MRF dampers for motorcycle front suspensions
NASA Astrophysics Data System (ADS)
Ahmadian, Mehdi; Gravatt, John
2004-07-01
A comparative analysis between conventional passive twin tube dampers and skyhook-controlled magnetorheological fluid (MRF) dampers for motorcycle front suspensions is provided, based on single-axis testing in a damper test rig and suspension performance testing in road trials. Performance motorcycles, while boasting extremely light suspension components and competition-ready performance, have an inherent weakness in comfort, as the suspension systems are designed primarily for racing purposes. Front suspension acceleration and shock loading transmit directly through the front suspension triple clamp into the rider's arms and shoulders, causing rapid fatigue in the shoulder muscles. Magnetorheological fluid dampers and skyhook control systems offer an alternative to conventional sport motorcycle suspensions: both performance and comfort can be combined in the same package. Prototype MRF dampers designed and manufactured specifically for this application require no more space than conventional twin tube designs while adding only 1.7 pounds of total weight to the system. The MRF dampers were designed for high controllability and low power consumption, two vital considerations for a motorcycle application. The tests conducted include force-velocity curve testing in a damper test rig and suspension performance evaluation based on damper position, velocity, and acceleration measurements. Damper test rig results show the MRF dampers have a far greater range of adjustability than the test vehicle's OEM dampers. Combined with a modified skyhook control system, the MRF dampers can greatly decrease the acceleration and shock loading transmitted to the rider through the handlebars while improving performance in areas such as anti-dive under braking.
Triple clamp acceleration measurements from a variety of staged road conditions, such as sinusoidal wave inputs, will be compared to subjective test-rider field reports to establish a correlation between rider fatigue and the front suspension performance. This testing will be conducted on the OEM vehicle suspension, the passive MRF dampers, and the skyhook-controlled MRF damper front suspension. The results of this test will determine the viability of skyhook-controlled MRF damper systems on motorcycles for performance gain and fatigue reduction.
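The skyhook policy referenced throughout can be sketched as the classic on-off semi-active control law. The gains below are illustrative assumptions, not values from the prototype dampers.

```python
def skyhook_force(v_body, v_rel, c_sky=1500.0, c_min=100.0):
    """Classic on-off skyhook law for a semi-active (e.g., MRF) damper.
    v_body: absolute velocity of the sprung mass (m/s).
    v_rel:  relative velocity across the damper (m/s).
    Gains c_sky and c_min (N*s/m) are illustrative placeholders."""
    if v_body * v_rel > 0:
        # The damper can dissipate energy as if hung from the "sky":
        # command a force proportional to absolute body velocity.
        return -c_sky * v_body
    # Otherwise only the minimum achievable passive damping is realizable.
    return -c_min * v_rel
```

The appeal for MRF hardware is that the commanded force only ever opposes the relative motion, so it stays within what a controllable fluid damper can physically deliver.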
NASA Astrophysics Data System (ADS)
Pan, J.; Durand, M. T.; Jiang, L.; Liu, D.
2017-12-01
The newly processed NASA MEaSUREs Calibrated Enhanced-Resolution Brightness Temperature (CETB) dataset, reconstructed using the antenna measurement response function (MRF), is considered to provide significantly improved fine-resolution measurements, with better georegistration for time-series observations and an equivalent field of view (FOV) for frequencies with the same nominal spatial resolution. We aim to explore its potential for global snow observation, and therefore to test its performance for characterizing snow properties, especially snow water equivalent (SWE), over large areas. In this research, two candidate SWE algorithms are tested in China for the years 2005 to 2010 using the reprocessed TB from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E), with the results evaluated using daily snow depth measurements at over 700 national synoptic stations. One of the algorithms is the SWE retrieval algorithm used for the FengYun (FY)-3 Microwave Radiation Imager. This algorithm uses the multi-channel TB to calculate SWE for three major snow regions in China, with coefficients adapted for different land cover types. The second algorithm is the newly established Bayesian Algorithm for SWE Estimation with Passive Microwave measurements (BASE-PM). This algorithm uses a physically based snow radiative transfer model to find the histogram of the most likely snow properties that match the multi-frequency TB from 10.65 to 90 GHz. It provides a rough estimate of snow depth and grain size at the same time, and showed a 30 mm SWE RMS error using ground radiometer measurements at Sodankylä. This study is the first attempt to test it spatially with satellite data. The use of this algorithm benefits from the high resolution and the spatial consistency between frequencies embedded in the new dataset. This research will answer three questions. First, to what extent can CETB increase the heterogeneity in the mapped SWE?
Second, will the SWE estimation error statistics be improved using this high-resolution dataset? Third, how will the SWE retrieval accuracy be improved using CETB and the new SWE retrieval techniques?
Phase unwrapping using region-based markov random field model.
Dong, Ying; Ji, Jim
2010-01-01
Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides phase unwrapping similar to or better than the Phase Unwrapping MAx-flow/min-cut (PUMA) and ZπM methods.
A novel image encryption algorithm based on chaos maps with Markov properties
NASA Astrophysics Data System (ADS)
Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang
2015-02-01
In order to construct a high-complexity, secure, and low-cost image encryption algorithm, a class of chaotic maps with Markov properties is studied and such an algorithm is proposed. This kind of chaos has higher complexity than the Logistic and Tent maps while retaining uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space, yielding better performance than the original. A novel image encryption algorithm is constructed on the new coupled map lattice, which serves as a key stream generator. A true random number is used to disturb the key, which dynamically changes the permutation matrix and the key stream. Experiments show that the key stream passes the SP800-22 test. The novel image encryption scheme resists CPA, CCA, and differential attacks. The algorithm is sensitive to the initial key and changes the distribution of the pixel values of the image; the correlation between adjacent pixels is also eliminated. Compared with an algorithm based on the Logistic map, it has higher complexity and better uniformity, closer to a true random number. It is also efficient to implement, demonstrating its value for practical use.
Research on reducing the edge effect in magnetorheological finishing.
Hu, Hao; Dai, Yifan; Peng, Xiaoqiang; Wang, Jianmin
2011-03-20
The edge effect cannot be avoided in most optical manufacturing methods based on the theory of computer-controlled optical surfacing. The difference between the removal function at the workpiece edge and that inside it is the primary cause of the edge effect in magnetorheological finishing (MRF). The changes in the physical dimensions and removal rate of the removal function are investigated through experiments. The results demonstrate that the situation differs when the MRF "spot" is at the leading edge versus the trailing edge. Two methods for reducing the edge effect are put into practice after analysis of the processing results: one adopts a small removal function for dealing with the workpiece edge, and the other utilizes removal function compensation. The actual processing results show that both approaches are effective in reducing the edge effect in MRF.
NASA Astrophysics Data System (ADS)
Park, Byeolteo; Myung, Hyun
2014-12-01
With the development of unconventional gas, directional drilling technology has become more advanced. Underground localization is the key technique in directional drilling for real-time path following and system control. However, there are problems such as vibration, disconnection from external infrastructure, and magnetic field distortion. Conventional methods cannot solve these problems in real time or in various environments. In this paper, a novel underground localization algorithm is introduced that uses re-measurement of the magnetic field sequence and pose-graph SLAM (simultaneous localization and mapping). The proposed algorithm exploits the property of the drilling system that the body passes along the previously drilled path. By comparing the recorded measurement from one magnetic sensor with the current re-measurement from another magnetic sensor, the proposed algorithm predicts the pose of the drilling system. The performance of the algorithm is validated through simulations and experiments.
Michael L. Hoppus; Rachel I. Riemann; Andrew J. Lister; Mark V. Finco
2002-01-01
The panchromatic bands of Landsat 7, SPOT, and IRS satellite imagery provide an opportunity to evaluate the effectiveness of texture analysis of satellite imagery for mapping of land use/cover, especially forest cover. A variety of texture algorithms, including standard deviation, Ryherd-Woodcock minimum variance adaptive window, low pass etc., were applied to moving...
Belt-MRF for large aperture mirrors.
Ren, Kai; Luo, Xiao; Zheng, Ligong; Bai, Yang; Li, Longxiang; Hu, Haixiang; Zhang, Xuejun
2014-08-11
With its high determinism and absence of subsurface damage, Magnetorheological Finishing (MRF) has become an important tool for fabricating high-precision optics. For large mirrors, however, the application of MRF is restricted by its small removal function and low material removal rate. To improve the material removal rate and shorten the processing cycle, we propose a new MRF concept, named Belt-MRF, to extend the application of MRF to large mirrors, and we built a prototype with a large removal function, using a belt instead of a very large polishing wheel to extend the polishing length. A series of experimental results on silicon carbide (SiC) and BK7 specimens, together with fabrication simulation, verified that Belt-MRF offers high material removal rates, a stable removal function, and high convergence efficiency, making it a promising technology for processing large-aperture optical elements.
Moment rate scaling for earthquakes 3.3 ≤ M ≤ 5.3 with implications for stress drop
NASA Astrophysics Data System (ADS)
Archuleta, Ralph J.; Ji, Chen
2016-12-01
We have determined a scalable apparent moment rate function (aMRF) that correctly predicts the peak ground acceleration (PGA), peak ground velocity (PGV), local magnitude, and the ratio PGA/PGV for earthquakes 3.3 ≤ M ≤ 5.3. Using the NGA-West2 database for 3.0 ≤ M ≤ 7.7, we find a break in the scaling of log PGA and log PGV versus M around M 5.3, with nearly linear scaling of log PGA and log PGV for 3.3 ≤ M ≤ 5.3. Temporal parameters tp and td, related to rise time and total duration, control the aMRF; both scale with seismic moment. The Fourier amplitude spectrum of the aMRF has two corners, between which the spectrum decays as f^-1. Significant attenuation along the raypath results in a Brune-like spectrum with a single corner fc. Assuming that fc ≅ 1/td, the aMRF predicts non-self-similar scaling M0 ∝ fc^-3.3 and weak stress drop scaling Δσ ∝ M0^0.091. This aMRF can explain why the stress drop differs from the stress parameter used to predict high-frequency ground motion.
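The two scaling relations quoted in the abstract are mutually consistent under the standard circular-crack stress drop relation; this is a consistency check using textbook relations, not a derivation from the paper itself:

```latex
% Corner-frequency scaling implied by M_0 \propto f_c^{-3.3}:
f_c \propto M_0^{-1/3.3}
% The standard stress-drop relation \Delta\sigma \propto M_0 f_c^{3} then gives
\Delta\sigma \propto M_0 \, f_c^{3}
           \propto M_0 \, M_0^{-3/3.3}
           = M_0^{\,1 - 0.909}
           \approx M_0^{\,0.091}
% Self-similar scaling would instead require M_0 \propto f_c^{-3},
% i.e. \Delta\sigma = \text{const}, which the data do not support here.
```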
Quantitative analysis of multiple sclerosis: a feasibility study
NASA Astrophysics Data System (ADS)
Li, Lihong; Li, Xiang; Wei, Xinzhou; Sturm, Deborah; Lu, Hongbing; Liang, Zhengrong
2006-03-01
Multiple Sclerosis (MS) is an inflammatory and demyelinating disorder of the central nervous system with a presumed immune-mediated etiology. For the treatment of MS, measurements of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) are often used in conjunction with clinical evaluation to provide a more objective measure of MS burden. In this paper, we apply a new unifying automatic mixture-based algorithm for segmentation of brain tissues to quantitatively analyze MS. The method takes into account the following effects that commonly appear in MR imaging: 1) the MR data is modeled as a stochastic process with an inherent inhomogeneity effect of smoothly varying intensity; 2) a new partial volume (PV) model is built into the maximum a posteriori (MAP) segmentation scheme; 3) noise artifacts are minimized by an a priori Markov random field (MRF) penalty reflecting neighborhood correlation of the tissue mixture. The volumes of brain tissues (WM, GM) and CSF are extracted from the mixture-based segmentation. Experimental results of feasibility studies on quantitative analysis of MS are presented.
Aravamuthan, Bhooma R.; Angelaki, Dora E.
2012-01-01
The pedunculopontine nucleus (PPN) and central mesencephalic reticular formation (cMRF) both send projections and receive input from areas with known vestibular responses. Noting their connections with the basal ganglia, the locomotor disturbances that occur following lesions of the PPN or cMRF, and the encouraging results of PPN deep brain stimulation in Parkinson’s disease patients, both the PPN and cMRF have been linked to motor control. In order to determine the existence of and characterize vestibular responses in the PPN and cMRF, we recorded single neurons from both structures during vertical and horizontal rotation, translation, and visual pursuit stimuli. The majority of PPN cells (72.5%) were vestibular-only cells that responded exclusively to rotation and translation stimuli but not visual pursuit. Visual pursuit responses were much more prevalent in the cMRF (57.1%) though close to half of cMRF cells were vestibular-only cells (41.1%). Directional preferences also differed between the PPN, which was preferentially modulated during nose-down pitch, and cMRF, which was preferentially modulated during ipsilateral yaw rotation. Finally, amplitude responses were similar between the PPN and cMRF during rotation and pursuit stimuli, but PPN responses to translation were of higher amplitude than cMRF responses. Taken together with their connections to the vestibular circuit, these results implicate the PPN and cMRF in the processing of vestibular stimuli and suggest important roles for both in responding to motion perturbations like falls and turns. PMID:22864184
A novel method for measurement of MR fluid sedimentation and its experimental verification
NASA Astrophysics Data System (ADS)
Roupec, J.; Berka, P.; Mazůrek, I.; Strecker, Z.; Kubík, M.; Macháček, O.; Taheri Andani, M.
2017-10-01
This article presents a novel sedimentation measurement technique based on quantifying the changes in magnetic flux density when the magnetorheological fluid (MRF) passes through the air gap of the magnetic circuit. The sedimented MRF exhibits a higher flux density as a result of its increased iron content; accordingly, the sedimented portion of the sample displays a higher magnetic conductivity than the unsedimented region, which contains fewer iron particles. The data analysis and evaluation methodology is elaborated along with an example set of measurements, which are compared against visual observations and available data in the literature. Experiments indicate that, unlike existing methods, the new technique is able to accurately generate complete sedimentation-profile curves during long-term sedimentation. The proposed method successfully detects the area with the tightest particle configuration near the bottom (the 'cake' layer). It also addresses the development of an unclear boundary between the carrier fluid and the sediment (mudline) during accelerated sedimentation, improves the sensitivity of sedimentation detection, and accurately measures changes in particle concentration with high resolution.
Six-month Longitudinal Comparison of a Portable Tablet Perimeter With the Humphrey Field Analyzer.
Prea, Selwyn Marc; Kong, Yu Xiang George; Mehta, Aditi; He, Mingguang; Crowston, Jonathan G; Gupta, Vinay; Martin, Keith R; Vingrys, Algis J
2018-06-01
To establish the medium-term repeatability of the iPad perimetry app Melbourne Rapid Fields (MRF) compared to Humphrey Field Analyzer (HFA) 24-2 SITA-standard and SITA-fast programs. Multicenter longitudinal observational clinical study. Sixty patients (stable glaucoma/ocular hypertension/glaucoma suspects) were recruited into a 6-month longitudinal clinical study with visits planned at baseline and at 2, 4, and 6 months. At each visit patients undertook visual field assessment using the MRF perimetry application and either HFA SITA-fast (n = 21) or SITA-standard (n = 39). The primary outcome measure was the association and repeatability of mean deviation (MD) for the MRF and HFA tests. Secondary measures were the point-wise threshold and repeatability for each test, as well as test time. MRF was similar to SITA-fast in speed and significantly faster than SITA-standard (MRF 4.6 ± 0.1 minutes vs SITA-fast 4.3 ± 0.2 minutes vs SITA-standard 6.2 ± 0.1 minutes, P < .001). Intraclass correlation coefficients (ICC) between MRF and SITA-fast for MD at the 4 visits ranged from 0.71 to 0.88. ICC values between MRF and SITA-standard for MD ranged from 0.81 to 0.90. Repeatability of MRF MD outcomes was excellent, with ICC for baseline and the 6-month visit being 0.98 (95% confidence interval: 0.96-0.99). In comparison, ICC at 6-month retest for SITA-fast was 0.95 and SITA-standard 0.93. Fewer points changed with the MRF, although for those that did, the MRF gave greater point-wise variability than did the SITA tests. MRF correlated strongly with HFA across 4 visits over a 6-month period, and has good test-retest reliability. MRF is suitable for monitoring visual fields in settings where conventional perimetry is not readily accessible. Copyright © 2018 Elsevier Inc. All rights reserved.
Vermeiren, Angelique P A; Bosma, Hans; Gielen, Marij; Lindsey, Patrick J; Derom, Catherine; Vlietinck, Robert; Loos, Ruth J F; Zeegers, Maurice P
2013-12-01
Lower educated people have a higher prevalence of metabolic risk factors (MRF), that is, high waist circumference (WC), high systolic blood pressure, low high-density lipoprotein cholesterol level, high triglycerides and high fasting glucose levels. Behavioural and psychosocial factors cannot fully explain this educational gradient. We aim to examine the possible role of genetic factors by estimating the extent to which education and MRF share a genetic basis and the extent to which the heritability of MRF varies across educational levels. We examined 388 twin pairs, aged 18-34 years, from the Belgian East Flanders Prospective Twin Survey. Using structural equation modelling, a Cholesky bivariate model was applied to assess the shared genetic basis between education and MRF. The heritability of MRF across education levels was estimated using a non-linear multivariate Gaussian regression. Fifteen percent (P < 0.01) of the negative relation between education and WC was because of genes shared between these two traits. Furthermore, the heritability of WC was lower in the lowest educated group (65%) compared with the highest educated group (78%, P = 0.04). The lower heritabilities among the lower educated twins for the other MRF were not significant. The heritability of glucose was higher in the lowest education group (80%) compared with the highest education group (67%, P = 0.01). Our findings suggest that genetic factors partly explain educational differences in WC. Furthermore, the lower heritability estimates for WC in the lower educated young adults suggest opportunities for environmental interventions to prevent the development of full-blown metabolic syndrome in middle and older age.
Research on the magnetorheological finishing (MRF) technology with dual polishing heads
NASA Astrophysics Data System (ADS)
Huang, Wen; Zhang, Yunfei; He, Jianguo; Zheng, Yongcheng; Luo, Qing; Hou, Jing; Yuan, Zhigang
2014-08-01
Magnetorheological finishing (MRF) is a key polishing technique capable of rapidly converging to the required surface figure. To address the limitations of conventional single-polishing-head MRF technology, a dual-polishing-head MRF technique was studied and an 8-axis dual-polishing-head MRF machine was developed. The machine is able to manufacture large-aperture optics with high figure accuracy. The large polishing head is suitable for polishing large-aperture optics, controlling long-spatial-wavelength structures, and correcting low-to-medium frequency errors with high removal rates, while the small polishing head has advantages in manufacturing small-aperture optics, controlling short-spatial-wavelength structures, correcting mid-to-high frequency errors, and removing material at the nanoscale. The material removal characteristics and figure correction ability of both the large and small polishing heads were studied; each head achieved a stable, valid removal function and produced an ultra-precision flat sample. After a single polishing iteration with the small head, the figure error over the central 45 mm of a 50 mm diameter plano optic improved from 0.21λ to 0.08λ PV (RMS 0.053λ to 0.015λ). After three iterations with the large head, the figure error over 410 mm × 410 mm of a 430 mm × 430 mm plano optic improved from 0.40λ to 0.10λ PV (RMS 0.068λ to 0.013λ). These results show that the dual-polishing-head MRF machine has both good material removal stability and excellent figure correction capability.
Magnetorheological finishing for removing surface and subsurface defects of fused silica optics
NASA Astrophysics Data System (ADS)
Catrin, Rodolphe; Neauport, Jerome; Taroux, Daniel; Cormont, Philippe; Maunier, Cedric; Lambert, Sebastien
2014-09-01
We investigate the capacity of the magnetorheological finishing (MRF) process to remove surface and subsurface defects from fused silica optics. Polished samples with engineered surface and subsurface defects were manufactured and characterized. Uniform material removals were performed with a QED Q22-XE machine using different MRF process parameters in order to remove these defects. We provide evidence that, whatever the MRF process parameters, MRF is able to remove surface and subsurface defects. Moreover, we show that MRF introduces contamination of the glass interface similar to that produced by conventional polishing processes.
Report to Congress on the Activities of the DoD Office of Technology Transition
2001-02-01
known as Magnetorheological Finishing (MRF), that provides significant cost savings in the manufacture of precision optical surfaces. Compared to...The programs included: - The Army’s Advanced Optics Manufacturing program developed a multi- axis, computer-controlled optical finishing technology...percent. The MRF finishing machine is commercially available, and has received industry-wide acclaim, winning two of the optical industry’s most
Wang, Niping; Perkins, Eddie; Zhou, Lan; Warren, Susan; May, Paul J
2013-10-09
Omnipause neurons (OPNs) within the nucleus raphe interpositus (RIP) help gate the transition between fixation and saccadic eye movements by monosynaptically suppressing activity in premotor burst neurons during fixation, and releasing them during saccades. Premotor neuron activity is initiated by excitatory input from the superior colliculus (SC), but how the tectum's saccade-related activity turns off OPNs is not known. Since the central mesencephalic reticular formation (cMRF) is a major SC target, we explored whether this nucleus has the appropriate connections to support tectal gating of OPN activity. In dual-tracer experiments undertaken in macaque monkeys (Macaca fascicularis), cMRF neurons labeled retrogradely from injections into RIP had numerous anterogradely labeled terminals closely associated with them following SC injections. This suggested the presence of an SC-cMRF-RIP pathway. Furthermore, anterograde tracers injected into the cMRF of other macaques labeled axonal terminals in RIP, confirming this cMRF projection. To determine whether the cMRF projections gate OPN activity, postembedding electron microscopic immunochemistry was performed on anterogradely labeled cMRF terminals with antibody to GABA or glycine. Of the terminals analyzed, 51.4% were GABA positive, 35.5% were GABA negative, and most contacted glycinergic cells. In summary, a trans-cMRF pathway connecting the SC to the RIP is present. This pathway contains inhibitory elements that could help gate omnipause activity and allow other tectal drives to induce the bursts of firing in premotor neurons that are necessary for saccades. The non-GABAergic cMRF terminals may derive from fixation units in the cMRF.
Izzi, Stephanie A; Colantuono, Bonnie J; Sullivan, Kelly; Khare, Parul; Meedel, Thomas H
2013-04-15
Ci-MRF is the sole myogenic regulatory factor (MRF) of the ascidian Ciona intestinalis, an invertebrate chordate. In order to investigate its properties we developed a simple in vivo assay based on misexpressing Ci-MRF in the notochord of Ciona embryos. We used this assay to examine the roles of three structural motifs that are conserved among MRFs: an alanine-threonine (Ala-Thr) dipeptide of the basic domain that is known in vertebrates as the myogenic code, a cysteine/histidine-rich (C/H) domain found just N-terminal to the basic domain, and a carboxy-terminal amphipathic α-helix referred to as Helix III. We show that the Ala-Thr dipeptide is necessary for normal Ci-MRF function, and that while eliminating the C/H domain or Helix III individually has no demonstrable effect on Ci-MRF, simultaneous loss of both motifs significantly reduces its activity. Our studies also indicate that direct interaction between Ci-MRF and an essential E-box of Ciona Troponin I is required for the expression of this muscle-specific gene and that multiple classes of MRF-regulated genes exist in Ciona. These findings are consistent with substantial conservation of MRF-directed myogenesis in chordates and demonstrate for the first time that the Ala-Thr dipeptide of the basic domain of an invertebrate MRF behaves as a myogenic code. Copyright © 2013 Elsevier Inc. All rights reserved.
MR Fingerprinting Using The Quick Echo Splitting NMR Imaging Technique
Jiang, Yun; Ma, Dan; Jerecic, Renate; Duerk, Jeffrey; Seiberlich, Nicole; Gulani, Vikas; Griswold, Mark A.
2016-01-01
Purpose: To develop a quantitative method for measuring relaxation properties with reduced radio-frequency (RF) power deposition by combining the magnetic resonance fingerprinting (MRF) technique with the quick echo splitting NMR imaging technique (QUEST). Methods: A QUEST-based MRF sequence was implemented to acquire high-order echoes by increasing the gaps between RF pulses. Bloch simulations were used to calculate a dictionary containing the range of physically plausible signal evolutions over a range of T1 and T2 values based on the pulse sequence. MRF-QUEST was evaluated by comparison with spin-echo methods, and its specific absorption rate (SAR) was compared to clinically available methods. Results: MRF-QUEST quantifies relaxation properties with good accuracy at an estimated head SAR of 0.03 W/kg. T1 and T2 values estimated by MRF-QUEST are in good agreement with traditional methods. Conclusion: The combination of MRF and QUEST provides accurate simultaneous quantification of T1 and T2 with reduced RF power deposition. The resulting lower SAR may provide a new acquisition strategy for MRF when RF energy deposition is problematic. PMID:26924639
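The QUEST acquisition itself is not modeled here; as a minimal illustration of the dictionary-matching step common to MRF methods, the sketch below matches an acquired signal to the dictionary entry with the largest normalized inner product. The dictionary uses a hypothetical biexponential signal model in place of a true Bloch simulation:

```python
import numpy as np

def match_fingerprint(signal, dictionary, params):
    """Generic MRF dictionary matching: return the parameters of the
    dictionary entry with the largest normalized inner product."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    return params[int(np.argmax(np.abs(d @ s)))]

# Hypothetical stand-in for a Bloch-simulated dictionary: biexponential
# signal evolutions over times t for a small grid of (T1, T2) values.
t = np.linspace(0.01, 2.0, 50)
grid = [(t1, t2) for t1 in (0.5, 1.0, 1.5) for t2 in (0.05, 0.1, 0.2)]
D = np.array([np.exp(-t / t2) - np.exp(-t / t1) for t1, t2 in grid])

sig = np.exp(-t / 0.1) - np.exp(-t / 1.0)  # noiseless "acquired" signal
best = match_fingerprint(sig, D, grid)     # → (1.0, 0.1)
```

The discrete (T1, T2) grid is exactly the limitation the Kalman-filter and neural-network reconstructions described elsewhere in these records aim to avoid.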
RSEQtools: a modular framework to analyze RNA-Seq data using compact, anonymized data summaries.
Habegger, Lukas; Sboner, Andrea; Gianoulis, Tara A; Rozowsky, Joel; Agarwal, Ashish; Snyder, Michael; Gerstein, Mark
2011-01-15
The advent of next-generation sequencing for functional genomics has given rise to quantities of sequence information that are often so large that they are difficult to handle. Moreover, sequence reads from a specific individual can contain sufficient information to potentially identify and genetically characterize that person, raising privacy concerns. In order to address these issues, we have developed the Mapped Read Format (MRF), a compact data summary format for both short and long read alignments that enables the anonymization of confidential sequence information, while allowing one to still carry out many functional genomics studies. We have developed a suite of tools (RSEQtools) that use this format for the analysis of RNA-Seq experiments. These tools consist of a set of modules that perform common tasks such as calculating gene expression values, generating signal tracks of mapped reads and segmenting that signal into actively transcribed regions. Moreover, the tools can readily be used to build customizable RNA-Seq workflows. In addition to the anonymization afforded by MRF, this format also facilitates the decoupling of the alignment of reads from downstream analyses. RSEQtools is implemented in C and the source code is available at http://rseqtools.gersteinlab.org/.
Method and system for processing optical elements using magnetorheological finishing
Menapace, Joseph Arthur; Schaffers, Kathleen Irene; Bayramian, Andrew James; Molander, William A
2012-09-18
A method of finishing an optical element includes mounting the optical element in an optical mount having a plurality of fiducials overlapping with the optical element and obtaining a first metrology map for the optical element and the plurality of fiducials. The method also includes obtaining a second metrology map for the optical element without the plurality of fiducials, forming a difference map between the first metrology map and the second metrology map, and aligning the first metrology map and the second metrology map. The method further includes placing mathematical fiducials onto the second metrology map using the difference map to form a third metrology map and associating the third metrology map to the optical element. Moreover, the method includes mounting the optical element in the fixture in an MRF tool, positioning the optical element in the fixture; removing the plurality of fiducials, and finishing the optical element.
Wang, Charlie Y; Liu, Yuchi; Huang, Shuying; Griswold, Mark A; Seiberlich, Nicole; Yu, Xin
2017-12-01
The purpose of this work was to develop a 31P spectroscopic magnetic resonance fingerprinting (MRF) method for fast quantification of the chemical exchange rate between phosphocreatine (PCr) and adenosine triphosphate (ATP) via creatine kinase (CK). A 31P MRF sequence (CK-MRF) was developed to quantify the forward rate constant of ATP synthesis via CK (k_f^CK), the T1 relaxation time of PCr (T1^PCr), and the PCr-to-ATP concentration ratio (M_R^PCr). The CK-MRF sequence used a balanced steady-state free precession (bSSFP)-type excitation with ramped flip angles and a unique saturation scheme sensitive to the exchange between PCr and γATP. Parameter estimation was accomplished by matching the acquired signals to a dictionary generated using the Bloch-McConnell equation. Simulation studies were performed to examine the susceptibility of the CK-MRF method to several potential error sources. The accuracy of nonlocalized CK-MRF measurements before and after an ischemia-reperfusion (IR) protocol was compared with the magnetization transfer (MT-MRS) method in rat hindlimb at 9.4 T (n = 14). The reproducibility of CK-MRF was also assessed by comparing CK-MRF measurements with both MT-MRS (n = 17) and four-angle saturation transfer (FAST) (n = 7). Simulation results showed that CK-MRF quantification of k_f^CK was robust, with less than 5% error in the presence of model inaccuracies including dictionary resolution, metabolite T2 values, inorganic phosphate metabolism, and B1 miscalibration. Estimation of k_f^CK by CK-MRF (0.38 ± 0.02 s^-1 at baseline and 0.42 ± 0.03 s^-1 post-IR) showed strong agreement with MT-MRS (0.39 ± 0.03 s^-1 at baseline and 0.44 ± 0.04 s^-1 post-IR). k_f^CK estimation was also similar between CK-MRF and FAST (0.38 ± 0.02 s^-1 for CK-MRF and 0.38 ± 0.11 s^-1 for FAST). The coefficient of variation of a 20 s CK-MRF quantification of k_f^CK was 42% of that of a 150 s MT-MRS acquisition and 12% of that of a 20 s FAST acquisition.
This study demonstrates the potential of a 31P spectroscopic MRF framework for rapid, accurate and reproducible quantification of the chemical exchange rate of CK in vivo. Copyright © 2017 John Wiley & Sons, Ltd.
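The full CK-MRF sequence and Bloch-McConnell dictionary are beyond a short sketch; as a minimal illustration of the underlying two-pool exchange physics, the longitudinal PCr signal during γATP saturation decays with a single apparent rate 1/T1 + k_f toward a reduced steady state. Parameter values below are illustrative, not the paper's fits:

```python
import numpy as np

def pcr_during_gatp_saturation(t, m0=1.0, t1=3.0, kf=0.38):
    """Longitudinal PCr magnetization while the gamma-ATP pool is saturated.

    With ATP magnetization held at zero, the Bloch-McConnell equation for
    PCr reduces to dM/dt = (m0 - M)/t1 - kf*M, which decays with apparent
    rate (1/t1 + kf) to the steady state (m0/t1)/(1/t1 + kf).
    Parameter values here are illustrative, not the paper's fitted ones.
    """
    r_app = 1.0 / t1 + kf
    m_ss = (m0 / t1) / r_app
    return m_ss + (m0 - m_ss) * np.exp(-r_app * np.asarray(t))

t = np.linspace(0.0, 20.0, 201)
m = pcr_during_gatp_saturation(t)
print(round(float(m[-1]), 3))  # → 0.467, i.e. m0 / (1 + kf*T1)
```

This steady-state suppression ratio 1/(1 + k_f·T1) is what makes a saturation scheme sensitive to the PCr-γATP exchange rate.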
Comparison of Genetic Algorithm and Hill Climbing for Shortest Path Optimization Mapping
NASA Astrophysics Data System (ADS)
Fronita, Mona; Gernowo, Rahmat; Gunawan, Vincencius
2018-02-01
The Traveling Salesman Problem (TSP) is an optimization problem that seeks the shortest route visiting several destinations exactly once before returning to the starting city; it is commonly applied to delivery systems. This comparison uses two methods: a genetic algorithm and hill climbing. Hill climbing works by repeatedly exchanging a path with one of its neighbours and keeping the new path only if its total distance is smaller than the previous one. The genetic algorithm depends on its input parameters: population size, crossover probability, mutation probability, and number of generations. To simplify determination of the shortest path, supporting software was developed using the Google Maps API. Tests were carried out 20 times each with 8, 16, 24, and 32 cities to see which method is more optimal in terms of distance and computation time. Experiments with 3, 4, 5, and 6 cities produced the same optimal distance for both the genetic algorithm and hill climbing; the distances begin to differ at 7 cities. Overall, the results show that hill climbing is more optimal for small numbers of cities, while instances with more than 30 cities are better optimized with the genetic algorithm.
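The hill-climbing procedure described above can be sketched as follows; the abstract does not specify the neighbourhood move, so a 2-opt segment reversal is assumed here:

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def hill_climb_tsp(pts, n_iter=2000, seed=0):
    """Hill climbing for the TSP: repeatedly move to a neighbouring tour
    (here, a 2-opt segment reversal) and keep it only if it is shorter."""
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    best = tour_length(tour, pts)
    for _ in range(n_iter):
        i, j = sorted(rng.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        d = tour_length(cand, pts)
        if d < best:  # accept only strictly improving neighbours
            tour, best = cand, d
    return tour, best

# Toy usage on 10 random "cities" (plain coordinates, not the Google Maps API).
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(10)]
tour, length = hill_climb_tsp(pts)
```

Because only improving moves are accepted, the search can stall in a local optimum — the weakness that crossover and mutation in a genetic algorithm are designed to escape, consistent with the reported advantage of the genetic algorithm on larger instances.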
Wang, Niping; Perkins, Eddie; Zhou, Lan; Warren, Susan; May, Paul J
2017-01-01
The central mesencephalic reticular formation (cMRF) occupies much of the core of the midbrain tegmentum. Physiological studies indicate that it is involved in controlling gaze changes, particularly horizontal saccades. Anatomically, it receives input from the ipsilateral superior colliculus (SC) and it has downstream projections to the brainstem, including the horizontal gaze center located in the paramedian pontine reticular formation (PPRF). Consequently, it has been hypothesized that the cMRF plays a role in the spatiotemporal transformation needed to convert spatially coded collicular saccade signals into the temporally coded signals utilized by the premotor neurons of the horizontal gaze center. In this study, we used neuroanatomical tracers to examine the patterns of connectivity of the cMRF in macaque monkeys in order to determine whether the circuit organization supports this hypothesis. Since stimulation of the cMRF produces contraversive horizontal saccades and stimulation of the horizontal gaze center produces ipsiversive saccades, this would require an excitatory cMRF projection to the contralateral PPRF. Injections of anterograde tracers into the cMRF did produce labeled terminals within the PPRF. However, the terminations were denser ipsilaterally. Since the PPRF located contralateral to the movement direction is generally considered to be silent during a horizontal saccade, we then tested the hypothesis that this ipsilateral reticuloreticular pathway might be inhibitory. The ultrastructure of ipsilateral terminals was heterogeneous, with some displaying more extensive postsynaptic densities than others. Postembedding immunohistochemistry for gamma-aminobutyric acid (GABA) indicated that only a portion (35%) of these cMRF terminals are GABAergic. Dual tracer experiments were undertaken to determine whether the SC provides input to cMRF reticuloreticular neurons projecting to the ipsilateral pons. 
Retrogradely labeled reticuloreticular neurons were predominantly distributed in the ipsilateral cMRF. Anterogradely labeled tectal terminals were observed in close association with a portion of these retrogradely labeled reticuloreticular neurons. Taken together, these results suggest that the SC does have connections with reticuloreticular neurons in the cMRF. However, the predominantly excitatory nature of the ipsilateral reticuloreticular projection argues against the hypothesis that this cMRF pathway is solely responsible for producing a spatiotemporal transformation of the collicular saccade signal.
MR fingerprinting Deep RecOnstruction NEtwork (DRONE).
Cohen, Ouri; Zhu, Bo; Rosen, Matthew S
2018-09-01
Demonstrate a novel fast method for reconstruction of multi-dimensional MR fingerprinting (MRF) data using deep learning methods. A neural network (NN) is defined using the TensorFlow framework and trained on simulated MRF data computed with the extended phase graph formalism. The NN reconstruction accuracy for noiseless and noisy data is compared to conventional MRF template matching as a function of training data size, and is quantified in simulated numerical brain phantom data and International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology (ISMRM/NIST) phantom data measured on 1.5T and 3T scanners with optimized MRF EPI and MRF fast imaging with steady state precession (FISP) sequences with spiral readout. The utility of the method is demonstrated in a healthy subject in vivo at 1.5T. Network training required 10 to 74 minutes; once trained, data reconstruction required approximately 10 ms for the MRF EPI and 76 ms for the MRF FISP sequence. Reconstruction of simulated, noiseless brain data using the NN resulted in an RMS error (RMSE) of 2.6 ms for T1 and 1.9 ms for T2. The reconstruction error in the presence of noise was less than 10% for both T1 and T2 for SNR greater than 25 dB. Phantom measurements yielded good agreement (R^2 = 0.99/0.99 for MRF EPI T1/T2 and 0.94/0.98 for MRF FISP T1/T2) between the T1 and T2 estimated by the NN and reference values from the ISMRM/NIST phantom. Reconstruction of MRF data with a NN is accurate, 300- to 5000-fold faster, and more robust to noise and dictionary undersampling than conventional MRF dictionary matching. © 2018 International Society for Magnetic Resonance in Medicine.
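The DRONE network itself (TensorFlow, extended-phase-graph training data) is not reproduced here; as a minimal NumPy sketch of the core idea — replacing dictionary matching with a regressor trained on simulated fingerprints — a one-hidden-layer network is fitted to toy mono-exponential signals, with the generating T1 as the target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: mono-exponential "fingerprints" exp(-t/T1),
# a toy stand-in for EPG-simulated MRF signal evolutions; the target is
# the T1 value (in seconds) that generated each signal.
t = np.linspace(0.05, 3.0, 20)
T1 = rng.uniform(0.3, 2.0, 500)
X = np.exp(-t[None, :] / T1[:, None])
y = T1[:, None]

# One-hidden-layer network, trained by full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (20, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1));  b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(X)
mse_init = float(((pred - y) ** 2).mean())

lr = 0.05
for _ in range(2000):
    h, pred = forward(X)
    err = (pred - y) / len(X)            # gradient of 0.5*MSE w.r.t. pred
    gW2 = h.T @ err;  gb2 = err.sum(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh;   gb1 = dh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
mse_final = float(((pred - y) ** 2).mean())
```

Once trained, a forward pass prices in the entire "dictionary": inference is a few matrix products rather than a search over stored entries, which is the source of the reported speedup and continuous parameter output.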
The macaque midbrain reticular formation sends side-specific feedback to the superior colliculus.
Wang, Niping; Warren, Susan; May, Paul J
2010-04-01
The central mesencephalic reticular formation (cMRF) likely plays a role in gaze control, as cMRF neurons receive tectal input and provide a bilateral projection back to the superior colliculus (SC). We examined the important question of whether this feedback is excitatory or inhibitory. Biotinylated dextran amine (BDA) was injected into the cMRF of M. fascicularis monkeys to anterogradely label reticulotectal terminals and retrogradely label tectoreticular neurons. BDA labeled profiles in the ipsi- and contralateral intermediate gray layer (SGI) were examined electron microscopically. Postembedding GABA immunochemistry was used to identify putative inhibitory profiles. Nearly all (94.7%) of the ipsilateral BDA labeled terminals were GABA positive, but profiles postsynaptic to these labeled terminals were exclusively GABA negative. In addition, BDA labeled terminals were observed to contact BDA labeled dendrites, indicating the presence of a monosynaptic feedback loop connecting the cMRF and ipsilateral SC. In contrast, within the contralateral SGI, half of the BDA labeled terminals were GABA positive, while more than a third were GABA negative. All the postsynaptic profiles were GABA negative. These results indicate the cMRF provides inhibitory feedback to the ipsilateral side of the SC, but it has more complex effects on the contralateral side. The ipsilateral projection may help tune the "winner-take-all" mechanism that produces a unified saccade signal, while the contralateral projections may contribute to the coordination of activity between the two colliculi.
Leiras, Roberto; Martín-Cora, Francisco; Velo, Patricia; Liste, Tania
2015-01-01
Animals and human beings sense and react to real/potential dangerous stimuli. However, the supraspinal mechanisms relating noxious sensing and nocifensive behavior are mostly unknown. The collateralization and spatial organization of interrelated neurons are important determinants of coordinated network function. Here we electrophysiologically studied medial medullary reticulospinal neurons (mMRF-RSNs) antidromically identified from the cervical cord of anesthetized cats and found that 1) more than 40% (79/183) of the sampled mMRF-RSNs emitted bifurcating axons running within the dorsolateral (DLF) and ventromedial (VMF) ipsilateral fascicles; 2) more than 50% (78/151) of the tested mMRF-RSNs with axons running in the VMF collateralized to the subnucleus reticularis dorsalis (SRD) that also sent ipsilateral descending fibers bifurcating within the DLF and the VMF. This percentage of mMRF collateralization to the SRD increased to more than 81% (53/65) when considering the subpopulation of mMRF-RSNs responsive to noxiously heating the skin; 3) reciprocal monosynaptic excitatory relationships were electrophysiologically demonstrated between noxious sensitive mMRF-RSNs and SRD cells; and 4) injection of the anterograde tracer Phaseolus vulgaris leucoagglutinin evidenced mMRF to SRD and SRD to mMRF projections contacting the soma and proximal dendrites. The data demonstrated a SRD-mMRF network interconnected mainly through collaterals of descending axons running within the VMF, with the subset of noxious sensitive cells forming a reverberating circuit probably amplifying mutual outputs simultaneously regulating motor activity and spinal noxious afferent input. The results provide evidence that noxious stimulation positively engages a reticular SRD-mMRF-SRD network involved in pain-sensory-to-motor transformation and modulation. PMID:26581870
Cosmic string detection with tree-based machine learning
NASA Astrophysics Data System (ADS)
Vafaei Sadr, A.; Farhang, M.; Movahed, S. M. S.; Bassett, B.; Kunz, M.
2018-07-01
We explore the use of random forest and gradient boosting, two powerful tree-based machine learning algorithms, for the detection of cosmic strings in maps of the cosmic microwave background (CMB), through their unique Gott-Kaiser-Stebbins effect on the temperature anisotropies. The information in the maps is compressed into feature vectors before being passed to the learning units. The feature vectors contain various statistical measures of the processed CMB maps that boost cosmic string detectability. Our proposed classifiers, after training, give results similar to or better than claimed detectability levels from other methods for string tension, Gμ. They can make 3σ detection of strings with Gμ ≳ 2.1 × 10-10 for noise-free, 0.9'-resolution CMB observations. The minimum detectable tension increases to Gμ ≳ 3.0 × 10-8 for a more realistic, CMB S4-like (II) strategy, improving over previous results.
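The paper's feature extraction and trained classifiers are not reproduced here; as a minimal sketch of feeding summary-statistic feature vectors to a tree-based ensemble, the code below bootstrap-aggregates decision stumps, a bare-bones stand-in for a full random forest:

```python
import numpy as np

def fit_stump(X, y):
    """Best single-feature threshold split by training misclassification."""
    best = (0, 0.0, 1, np.inf)            # (feature, threshold, sign, error)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for sign in (1, -1):
                pred = (sign * (X[:, f] - thr) > 0).astype(int)
                err = (pred != y).mean()
                if err < best[3]:
                    best = (f, thr, sign, err)
    return best[:3]

def fit_forest(X, y, n_trees=25, seed=0):
    """Bootstrap-aggregated stumps: each tree sees a resampled training set."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))   # bootstrap resample
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def predict(stumps, X):
    votes = sum((s * (X[:, f] - t) > 0).astype(int) for f, t, s in stumps)
    return (votes * 2 > len(stumps)).astype(int)  # majority vote

# Toy "feature vectors" (e.g. map summary statistics): class 1 is shifted
# by two standard deviations in every feature.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 3)), rng.normal(2.0, 1.0, (100, 3))])
y = np.array([0] * 100 + [1] * 100)
forest = fit_forest(X, y)
acc = (predict(forest, X) == y).mean()
```

Real random forests split recursively on random feature subsets, and gradient boosting fits each tree to the residuals of the previous ones; the bagging-plus-voting structure shown here is the shared core.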
Feed-forward and feedback projections of midbrain reticular formation neurons in the cat
Perkins, Eddie; May, Paul J.; Warren, Susan
2014-01-01
Gaze changes involving the eyes and head are orchestrated by brainstem gaze centers found within the superior colliculus (SC), paramedian pontine reticular formation (PPRF), and medullary reticular formation (MdRF). The mesencephalic reticular formation (MRF) also plays a role in gaze. It receives a major input from the ipsilateral SC and contains cells that fire in relation to gaze changes. Moreover, it provides a feedback projection to the SC and feed-forward projections to the PPRF and MdRF. We sought to determine whether these MRF feedback and feed-forward projections originate from the same or different neuronal populations by utilizing paired fluorescent retrograde tracers in cats. Specifically, we tested: 1. whether MRF neurons that control eye movements form a single population by injecting the SC and PPRF with different tracers, and 2. whether MRF neurons that control head movements form a single population by injecting the SC and MdRF with different tracers. In neither case were double labeled neurons observed, indicating that feedback and feed-forward projections originate from separate MRF populations. In both cases, the labeled reticulotectal and reticuloreticular neurons were distributed bilaterally in the MRF. However, neurons projecting to the MdRF were generally constrained to the medial half of the MRF, while those projecting to the PPRF, like MRF reticulotectal neurons, were spread throughout the mediolateral axis. Thus, the medial MRF may be specialized for control of head movements, with control of eye movements being more widespread in this structure. PMID:24454280
Fine figure correction and other applications using novel MRF fluid designed for ultra-low roughness
NASA Astrophysics Data System (ADS)
Maloney, Chris; Oswald, Eric S.; Dumas, Paul
2015-10-01
An increasing number of technologies require ultra-low roughness (ULR) surfaces. Magnetorheological Finishing (MRF) is one of the options for meeting the roughness specifications for high-energy laser, EUV and X-ray applications. A novel MRF fluid, called C30, has been developed to finish surfaces to ULR. This novel MRF fluid is able to achieve <1.5 Å RMS roughness on fused silica and other materials, but has a lower material removal rate with respect to other MRF fluids. As a result of these properties, C30 can also be used for applications in addition to finishing ULR surfaces. These applications include fine figure correction, figure correcting extremely soft materials, and removing cosmetic defects. The effectiveness of these new applications is explored through experimental data. The low removal rate of C30 gives MRF the capability to fine figure correct low amplitude errors that are usually difficult to correct with higher removal rate fluids. The ability to figure correct extremely soft materials opens up MRF to a new realm of materials that are difficult to polish. C30 also offers the ability to remove cosmetic defects that often lead to failure during visual quality inspections. These new applications for C30 expand the niche in which MRF is typically used.
MR fingerprinting using the quick echo splitting NMR imaging technique.
Jiang, Yun; Ma, Dan; Jerecic, Renate; Duerk, Jeffrey; Seiberlich, Nicole; Gulani, Vikas; Griswold, Mark A
2017-03-01
The purpose of the study is to develop a quantitative method for measuring relaxation properties with reduced radio frequency (RF) power deposition by combining the magnetic resonance fingerprinting (MRF) technique with the quick echo splitting NMR imaging technique (QUEST). A QUEST-based MRF sequence was implemented to acquire high-order echoes by increasing the gaps between RF pulses. Bloch simulations were used to calculate a dictionary containing the range of physically plausible signal evolutions using a range of T1 and T2 values based on the pulse sequence. MRF-QUEST was evaluated by comparing to the results of spin-echo methods. The specific absorption rate (SAR) of MRF-QUEST was compared with the clinically available methods. MRF-QUEST quantifies the relaxation properties with good accuracy at the estimated head SAR of 0.03 W/kg. T1 and T2 values estimated by MRF-QUEST are in good agreement with the traditional methods. The combination of MRF and QUEST provides an accurate quantification of T1 and T2 simultaneously with reduced RF power deposition. The resulting lower SAR may provide a new acquisition strategy for MRF when RF energy deposition is problematic. Magn Reson Med 77:979-988, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
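The MRF records above rely on dictionary matching: signal evolutions are precomputed over a (T1, T2) grid via Bloch simulation, and a measured evolution is matched by maximum normalized inner product. A minimal sketch follows, with the Bloch simulation replaced by a toy two-segment relaxation model; the grids, timings, and noise level are illustrative assumptions.

```python
# Sketch: the dictionary-matching step used in MRF reconstruction,
# with a toy signal model standing in for a Bloch simulation.
import numpy as np

tA = np.linspace(0.05, 3.0, 100)    # inversion-recovery sample times (s)
tB = np.linspace(0.005, 0.5, 100)   # echo-decay sample times (s)

def model(t1, t2):
    """Toy signal evolution: T1 recovery followed by T2 decay."""
    return np.r_[1 - np.exp(-tA / t1), np.exp(-tB / t2)]

# Precompute the dictionary over a discrete (T1, T2) grid.
pairs = [(t1, t2) for t1 in np.linspace(0.2, 2.0, 50)
                  for t2 in np.linspace(0.02, 0.3, 40)]
D = np.array([model(t1, t2) for t1, t2 in pairs])
D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm entries

def match(signal):
    """Dictionary matching: entry with maximum normalized inner product."""
    return pairs[int(np.argmax(D @ (signal / np.linalg.norm(signal))))]

rng = np.random.default_rng(0)
sig = model(1.0, 0.1) + 0.005 * rng.normal(size=200)  # noisy "acquisition"
t1_hat, t2_hat = match(sig)
print(round(t1_hat, 3), round(t2_hat, 3))
```

Note how the estimates are confined to the grid values, which is exactly the discreteness limitation the Kalman-filter reconstruction in the header record tries to avoid.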
Vehicle Detection for RCTA/ANS (Autonomous Navigation System)
NASA Technical Reports Server (NTRS)
Brennan, Shane; Bajracharya, Max; Matthies, Larry H.; Howard, Andrew B.
2012-01-01
Using a stereo camera pair, imagery is acquired and processed through the JPLV stereo processing pipeline. From this stereo data, large 3D blobs are found. These blobs are then described and classified by their shape to determine which are vehicles and which are not. Prior vehicle detection algorithms are either targeted to specific domains, such as following lead cars, or are intensity-based methods that involve learning typical vehicle appearances from a large corpus of training data. In order to detect vehicles, the JPL Vehicle Detection (JVD) algorithm goes through the following steps: 1. Take as input a left disparity image and left rectified image from JPLV stereo. 2. Project the disparity data onto a two-dimensional Cartesian map. 3. Perform some post-processing of the map built in the previous step in order to clean it up. 4. Take the processed map and find peaks. For each peak, grow it out into a map blob. These map blobs represent large, roughly vehicle-sized objects in the scene. 5. Take these map blobs and reject those that do not meet certain criteria. Build descriptors for the ones that remain. Pass these descriptors onto a classifier, which determines if the blob is a vehicle or not. The probability of detection is the probability that if a vehicle is present in the image, is visible, and unoccluded, then it will be detected by the JVD algorithm. In order to estimate this probability, eight sequences were ground-truthed from the RCTA (Robotics Collaborative Technology Alliances) program, totaling over 4,000 frames with 15 unique vehicles. Since these vehicles were observed at varying ranges, one is able to find the probability of detection as a function of range. At the time of this reporting, the JVD algorithm was tuned to perform best at cars seen from the front, rear, or either side, and perform poorly on vehicles seen from oblique angles.
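Steps 2, 4, and 5 above (map projection, peak/blob growing, size-based rejection) can be sketched as follows. The stereo disparity input is replaced by a synthetic point cloud, blob growing is implemented as plain connected components over occupied cells, and the grid size, occupancy threshold, and blob-size criterion are illustrative assumptions rather than JVD's actual parameters.

```python
# Sketch of the map-and-blob stages: project points to a 2D grid,
# grow blobs over occupied cells, reject blobs too small for vehicles.
from collections import deque
import numpy as np

rng = np.random.default_rng(1)
vehicle = rng.normal([10.0, 2.0], 0.8, (400, 2))  # vehicle-sized cluster
clutter = rng.uniform(0, 20, (60, 2))             # sparse clutter points
points = np.vstack([vehicle, clutter])

# Step 2: project points onto a 2D Cartesian occupancy map (1 m cells).
H = W = 20
grid = np.zeros((H, W), int)
for x, y in points:
    if 0 <= x < W and 0 <= y < H:
        grid[int(y), int(x)] += 1

occupied = grid >= 3   # cells dense enough to be part of an object

def grow(seed, seen):
    """Grow a blob from a seed cell over 4-connected occupied cells."""
    blob, q = [], deque([seed])
    seen.add(seed)
    while q:
        r, c = q.popleft()
        blob.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if 0 <= nb[0] < H and 0 <= nb[1] < W and occupied[nb] and nb not in seen:
                seen.add(nb)
                q.append(nb)
    return blob

seen, blobs = set(), []
for r in range(H):
    for c in range(W):
        if occupied[r, c] and (r, c) not in seen:
            blobs.append(grow((r, c), seen))

# Step 5: keep only roughly vehicle-sized blobs (descriptor/classifier omitted).
vehicles = [b for b in blobs if len(b) >= 4]
print(len(vehicles))
```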
Service-Oriented Node Scheduling Scheme for Wireless Sensor Networks Using Markov Random Field Model
Cheng, Hongju; Su, Zhihuang; Lloret, Jaime; Chen, Guolong
2014-01-01
Future wireless sensor networks are expected to provide various sensing services, and energy efficiency is one of the most important criteria. The node scheduling strategy aims to increase network lifetime by selecting a set of sensor nodes to provide the required sensing services in a periodic manner. In this paper, we are concerned with the service-oriented node scheduling problem to provide multiple sensing services while maximizing the network lifetime. We firstly introduce how to model the data correlation for different services by using the Markov Random Field (MRF) model. Secondly, we formulate the service-oriented node scheduling issue into three different problems, namely, the multi-service data denoising problem, which aims at minimizing the noise level of sensed data; the representative node selection problem, concerned with selecting a number of active nodes while determining the services they provide; and the multi-service node scheduling problem, which aims at maximizing the network lifetime. Thirdly, we propose a Multi-service Data Denoising (MDD) algorithm, a novel multi-service Representative node Selection and service Determination (RSD) algorithm, and a novel MRF-based Multi-service Node Scheduling (MMNS) scheme to solve the above three problems, respectively. Finally, extensive experiments demonstrate that the proposed scheme efficiently extends the network lifetime. PMID:25384005
Wang, Niping; Perkins, Eddie; Zhou, Lan; Warren, Susan
2013-01-01
Omnipause neurons (OPNs) within the nucleus raphe interpositus (RIP) help gate the transition between fixation and saccadic eye movements by monosynaptically suppressing activity in premotor burst neurons during fixation, and releasing them during saccades. Premotor neuron activity is initiated by excitatory input from the superior colliculus (SC), but how the tectum's saccade-related activity turns off OPNs is not known. Since the central mesencephalic reticular formation (cMRF) is a major SC target, we explored whether this nucleus has the appropriate connections to support tectal gating of OPN activity. In dual-tracer experiments undertaken in macaque monkeys (Macaca fascicularis), cMRF neurons labeled retrogradely from injections into RIP had numerous anterogradely labeled terminals closely associated with them following SC injections. This suggested the presence of an SC–cMRF–RIP pathway. Furthermore, anterograde tracers injected into the cMRF of other macaques labeled axonal terminals in RIP, confirming this cMRF projection. To determine whether the cMRF projections gate OPN activity, postembedding electron microscopic immunochemistry was performed on anterogradely labeled cMRF terminals with antibody to GABA or glycine. Of the terminals analyzed, 51.4% were GABA positive, 35.5% were GABA negative, and most contacted glycinergic cells. In summary, a trans-cMRF pathway connecting the SC to the RIP is present. This pathway contains inhibitory elements that could help gate omnipause activity and allow other tectal drives to induce the bursts of firing in premotor neurons that are necessary for saccades. The non-GABAergic cMRF terminals may derive from fixation units in the cMRF. PMID:24107960
Afferent and efferent connections of the mesencephalic reticular formation in goldfish.
Luque, M A; Pérez-Pérez, M P; Herrero, L; Torres, B
2008-03-18
The physiology of the mesencephalic reticular formation (MRF) in goldfish suggests its contribution to eye and body movements, but the afferent and efferent connections underlying such movements have not been determined. Therefore, we injected the bidirectional tracer biotinylated dextran amine into functionally identified MRF sites. We found retrogradely labelled neurons and anterogradely labelled boutons within nuclei of the following brain regions: (1) the telencephalon: a weak and reciprocal connectivity was confined to the central zone of area dorsalis and ventral nucleus of area ventralis; (2) the diencephalon: reciprocal connections were abundant in the ventral and dorsal thalamic nuclei; the central pretectal nucleus was also reciprocally wired with the MRF, but only boutons were present in the superficial pretectal nucleus; the preoptic and suprachiasmatic nuclei showed abundant neurons and boutons; the MRF was reciprocally connected with the preglomerular complex and the anterior tuberal nucleus; (3) the mesencephalon: neurons and boutons were abundant within deep tectal layers; reciprocal connections were also present within the torus semicircularis and the contralateral MRF; neurons were abundant within the nucleus isthmi; and (4) the rhombencephalon: the superior and middle parts of the reticular formation received strong projections from the MRF, while the projection to the inferior area was weaker; sparse neurons were present throughout the reticular formation; a reciprocal connectivity was observed with the sensory trigeminal nucleus; the medial and magnocellular nuclei of the octaval column projected to the MRF. These results support the participation of the MRF in the orienting response. The MRF could also be involved in other motor tasks triggered by visual, auditory, vestibular, or somatosensory signals.
Courneyea, Lorraine; Beltran, Chris; Tseung, Hok Seum Wan Chan; Yu, Juan; Herman, Michael G
2014-06-01
Study the contributors to treatment time as a function of Mini-Ridge Filter (MRF) thickness to determine the optimal choice for breath-hold treatment of lung tumors in a synchrotron-based spot-scanning proton machine. Five different spot-scanning nozzles were simulated in TOPAS: four with MRFs of varying maximal thicknesses (6.15-24.6 mm) and one with no MRF. The MRFs were designed with ridges aligned along orthogonal directions transverse to the beam, with the number of ridges (4-16) increasing with MRF thickness. The material thickness given by these ridges approximately followed a Gaussian distribution. Using these simulations, Monte Carlo data were generated for treatment planning commissioning. For each nozzle, standard and stereotactic (SR) lung phantom treatment plans were created and assessed for delivery time and plan quality. Use of a MRF resulted in a reduction of the number of energy layers needed in treatment plans, decreasing the number of synchrotron spills needed and hence the treatment time. For standard plans, the treatment time per field without a MRF was 67.0 ± 0.1 s, whereas three of the four MRF plans had treatment times of less than 20 s per field; considered sufficiently low for a single breath-hold. For SR plans, the shortest treatment time achieved was 57.7 ± 1.9 s per field, compared to 95.5 ± 0.5 s without a MRF. There were diminishing gains in time reduction as the MRF thickness increased. Dose uniformity of the PTV was comparable across all plans; however, when the plans were normalized to have the same coverage, dose conformality decreased with MRF thickness, as measured by the lung V20%. Single breath-hold treatment times for plans with standard fractionation can be achieved through the use of a MRF, making this a viable option for motion mitigation in lung tumors. For stereotactic plans, while a MRF can reduce treatment times, multiple breath-holds would still be necessary due to the limit imposed by the proton extraction time. 
To balance treatment time and normal tissue dose, the ideal MRF choice was shown to be the thinnest option that is able to achieve the desired breath-hold timing.
Wang, Niping; Perkins, Eddie; Zhou, Lan; Warren, Susan; May, Paul J.
2017-01-01
The central mesencephalic reticular formation (cMRF) occupies much of the core of the midbrain tegmentum. Physiological studies indicate that it is involved in controlling gaze changes, particularly horizontal saccades. Anatomically, it receives input from the ipsilateral superior colliculus (SC) and it has downstream projections to the brainstem, including the horizontal gaze center located in the paramedian pontine reticular formation (PPRF). Consequently, it has been hypothesized that the cMRF plays a role in the spatiotemporal transformation needed to convert spatially coded collicular saccade signals into the temporally coded signals utilized by the premotor neurons of the horizontal gaze center. In this study, we used neuroanatomical tracers to examine the patterns of connectivity of the cMRF in macaque monkeys in order to determine whether the circuit organization supports this hypothesis. Since stimulation of the cMRF produces contraversive horizontal saccades and stimulation of the horizontal gaze center produces ipsiversive saccades, this would require an excitatory cMRF projection to the contralateral PPRF. Injections of anterograde tracers into the cMRF did produce labeled terminals within the PPRF. However, the terminations were denser ipsilaterally. Since the PPRF located contralateral to the movement direction is generally considered to be silent during a horizontal saccade, we then tested the hypothesis that this ipsilateral reticuloreticular pathway might be inhibitory. The ultrastructure of ipsilateral terminals was heterogeneous, with some displaying more extensive postsynaptic densities than others. Postembedding immunohistochemistry for gamma-aminobutyric acid (GABA) indicated that only a portion (35%) of these cMRF terminals are GABAergic. Dual tracer experiments were undertaken to determine whether the SC provides input to cMRF reticuloreticular neurons projecting to the ipsilateral pons. 
Retrogradely labeled reticuloreticular neurons were predominantly distributed in the ipsilateral cMRF. Anterogradely labeled tectal terminals were observed in close association with a portion of these retrogradely labeled reticuloreticular neurons. Taken together, these results suggest that the SC does have connections with reticuloreticular neurons in the cMRF. However, the predominantly excitatory nature of the ipsilateral reticuloreticular projection argues against the hypothesis that this cMRF pathway is solely responsible for producing a spatiotemporal transformation of the collicular saccade signal. PMID:28487639
Mapping a battlefield simulation onto message-passing parallel architectures
NASA Technical Reports Server (NTRS)
Nicol, David M.
1987-01-01
Perhaps the most critical problem in distributed simulation is that of mapping: without an effective mapping of workload to processors the speedup potential of parallel processing cannot be realized. Mapping a simulation onto a message-passing architecture is especially difficult when the computational workload dynamically changes as a function of time and space; this is exactly the situation faced by battlefield simulations. This paper studies an approach where the simulated battlefield domain is first partitioned into many regions of equal size; typically there are more regions than processors. The regions are then assigned to processors; a processor is responsible for performing all simulation activity associated with the regions. The assignment algorithm is quite simple and attempts to balance load by exploiting locality of workload intensity. The performance of this technique is studied on a simple battlefield simulation implemented on the Flex/32 multiprocessor. Measurements show that the proposed method achieves reasonable processor efficiencies. Furthermore, the method shows promise for use in dynamic remapping of the simulation.
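The partition-then-assign idea above can be sketched on a one-dimensional strip of regions: keeping each processor's regions contiguous preserves locality, while the split points aim at the per-processor share of total workload. The greedy split below illustrates the idea only; it is not the paper's assignment algorithm.

```python
# Sketch: assign a strip of equal-sized regions to processors so that
# contiguous regions stay together while total load is roughly balanced.
def assign_regions(loads, n_procs):
    """Split `loads` (workload per region, left to right) into n_procs
    contiguous groups of roughly equal total load; returns a processor
    index per region."""
    target = sum(loads) / n_procs
    assignment, acc, proc = [], 0.0, 0
    for i, w in enumerate(loads):
        # advance to the next processor once this one has its share,
        # but keep at least one region for every remaining processor
        if (acc >= target and proc < n_procs - 1
                and len(loads) - i >= n_procs - 1 - proc):
            proc += 1
            acc = 0.0
        assignment.append(proc)
        acc += w
    return assignment

loads = [5, 1, 1, 9, 2, 2, 6, 4]   # dynamic workload per region
print(assign_regions(loads, 3))
```

With dynamically changing workloads, re-running such a split gives a cheap basis for the remapping the paper mentions.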
A constraint optimization based virtual network mapping method
NASA Astrophysics Data System (ADS)
Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen
2013-03-01
The virtual network mapping problem, which maps different virtual networks onto a shared substrate network, is extremely challenging. This paper proposes a constraint-optimization-based mapping method for solving the virtual network mapping problem. The method divides the problem into two phases, the node mapping phase and the link mapping phase, both of which are NP-hard problems. A node mapping algorithm and a link mapping algorithm are proposed for solving the two phases, respectively. The node mapping algorithm follows a greedy strategy and mainly considers two factors: the available resources supplied by the nodes and the distance between the nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization method, which guarantees an optimal mapping with the minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that it performs very well.
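A minimal sketch of the greedy node mapping phase described above, ranking substrate nodes by available resources (the distance factor and the link mapping phase are omitted for brevity; all node names and capacities are illustrative):

```python
# Sketch: greedy node mapping by available resources.
def greedy_node_mapping(virtual_demand, substrate_capacity):
    """Map each virtual node to a distinct substrate node. Largest demands
    are placed first, each on the unused substrate node with the most
    spare capacity. Returns {vnode: snode}, or None if infeasible."""
    free = dict(substrate_capacity)   # substrate node -> available resources
    mapping = {}
    for v in sorted(virtual_demand, key=virtual_demand.get, reverse=True):
        s = max(free, key=free.get)   # richest unused substrate node
        if free[s] < virtual_demand[v]:
            return None               # even the best node cannot host it
        mapping[v] = s
        del free[s]                   # one virtual node per substrate node
    return mapping

vdemand = {"a": 30, "b": 10, "c": 20}
scap = {"s1": 50, "s2": 25, "s3": 15, "s4": 40}
print(greedy_node_mapping(vdemand, scap))
```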
A shape prior-based MRF model for 3D masseter muscle segmentation
NASA Astrophysics Data System (ADS)
Majeed, Tahir; Fundana, Ketut; Lüthi, Marcel; Beinemann, Jörg; Cattin, Philippe
2012-02-01
Medical image segmentation is generally an ill-posed problem that can only be solved by incorporating prior knowledge. The ambiguities arise due to the presence of noise, weak edges, imaging artifacts, inhomogeneous interiors, and adjacent anatomical structures with intensity profiles similar to the target structure. In this paper we propose a novel approach to segment the masseter muscle in CT datasets using graph cuts incorporating additional 3D shape priors, which is robust to noise, artifacts, and shape deformations. The main contribution of this paper is in translating the 3D shape knowledge into both unary and pairwise potentials of a Markov Random Field (MRF). The segmentation task is cast as Maximum-A-Posteriori (MAP) estimation of the MRF. Graph cut is then used to obtain the global minimum, which results in the segmentation of the masseter muscle. The method is tested on 21 CT datasets of the masseter muscle, which are noisy, with almost all possessing mild to severe imaging artifacts such as the high-density artifacts caused by, e.g., the very common dental fillings and dental implants. We show that the proposed technique produces clinically acceptable results for the challenging problem of muscle segmentation, and further provide a quantitative and qualitative comparison with other methods. We statistically show that adding additional shape priors into both unary and pairwise potentials can increase the robustness of the proposed method on noisy datasets.
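The MAP-MRF formulation above minimizes an energy of the form E(x) = Σᵢ Uᵢ(xᵢ) + λ Σ₍ᵢ,ⱼ₎ [xᵢ ≠ xⱼ]. Graph cut finds the global minimum of such submodular binary energies; the sketch below minimizes the same energy with simple ICM sweeps instead of a graph-cut solver, on synthetic data, purely to make the unary/pairwise structure concrete. Shape priors are omitted.

```python
# Sketch: binary MAP-MRF segmentation energy, minimized here with ICM
# (a stand-in for the graph-cut solver used in the paper).
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros((20, 20), int)
truth[5:15, 5:15] = 1                     # square "target" region
obs = truth + 0.6 * rng.normal(size=truth.shape)

lam = 1.5                                 # pairwise (smoothness) weight
unary = np.stack([(obs - 0) ** 2, (obs - 1) ** 2])  # U_i(0), U_i(1)

def icm(labels, sweeps=10):
    """Iterated conditional modes: greedily relabel each pixel."""
    h, w = labels.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                cost = unary[:, i, j].copy()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        for lab in (0, 1):
                            cost[lab] += lam * (lab != labels[ni, nj])
                labels[i, j] = int(np.argmin(cost))
    return labels

seg = icm(np.argmin(unary, axis=0))       # init from the unary term alone
print((seg == truth).mean())              # fraction of correct pixels
```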
Magnetorheological Finishing for Imprinting Continuous Phase Plate Structure onto Optical Surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menapace, J A; Dixit, S N; Genin, F Y
2004-01-05
Magnetorheological finishing (MRF) techniques have been developed to manufacture continuous phase plates (CPPs) and custom phase corrective structures on polished fused silica surfaces. These phase structures are important for laser applications requiring precise manipulation and control of beam shape, energy distribution, and wavefront profile. MRF's unique deterministic sub-aperture polishing characteristics make it possible to imprint complex topographical information onto optical surfaces at spatial scale-lengths approaching 1 mm. In this study, we present the results of experiments and model calculations that explore imprinting two-dimensional sinusoidal structures. Results show how the MRF removal function impacts and limits imprint fidelity and what must be done to arrive at a high quality surface. We also present several examples of this imprinting technology for fabrication of phase correction plates and CPPs for use at high fluences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuura, T; Fujii, Y; Takao, S
Purpose: To develop a method for treating shallow and moving tumors (e.g., lung tumors) with respiratory-gated spot-scanning proton therapy using real-time image guidance (RTPT). Methods: An applicator was developed which can be installed by hand on the treatment nozzle. The mechanical design ensures that the Bragg peaks are placed at the patient surface while a sufficient field of view (FOV) of fluoroscopic X-rays is maintained during the proton beam delivery. To reduce the treatment time while maintaining the robustness of the dose distribution with respect to motion, a mini-ridge filter (MRF) was sandwiched between two energy absorbers. Measurements were performed to obtain data for beam modeling and to verify the spot-position invariance of the pencil beam dose distribution. For three lung cancer patients, treatment plans were made with and without the MRF and the effects of the MRF were evaluated. Next, the effect of respiratory motion on the dose distribution was investigated. Results: To scan the proton beam over a 14 × 14 cm area while maintaining the φ16 cm fluoroscopic FOV, the lower face of the applicator was set 22 cm upstream of the isocenter. With an additional range variance of 2.2 mm and peak-to-peak distance of 4 mm of the MRF, the pencil beam dose distribution was unchanged with the displacement of the spot position. The quality of the treatment plans was not worsened by the MRF. With the MRF, the number of energy layers was reduced to less than half and the treatment time by 26–37%. The simulation study showed that the interplay effect was successfully suppressed by respiratory gating both with and without the MRF. Conclusions: The spot-scanning proton beam was successfully delivered to shallow and moving tumors within a sufficiently short time by installing the developed applicator at the RTPT nozzle.
The application of mean field theory to image motion estimation.
Zhang, J; Hanauer, G G
1995-01-01
Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterated conditional modes (ICM). Although SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors applied mean field theory to image segmentation and image restoration problems; it provides results nearly as good as SA but with much faster convergence. The present paper shows how mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
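A minimal sketch of the mean field idea on a binary MRF: each site keeps a soft label qᵢ ≈ P(xᵢ = 1), and updates replace each neighbor's hard label by its current mean until the qᵢ settle, avoiding ICM's "hard decisions". The denoising problem, noise model, and coupling strength below are illustrative, not the paper's motion-estimation setup.

```python
# Sketch: mean-field updates for a binary MRF with 4-neighborhood coupling.
import numpy as np

rng = np.random.default_rng(3)
truth = np.zeros((16, 16))
truth[:, 8:] = 1                          # two-region "motion label" field
obs = truth + 0.7 * rng.normal(size=truth.shape)

beta = 2.0                                # coupling (smoothness) strength
# data term: log-likelihood ratio of label 1 vs 0 under Gaussian noise
llr = ((obs - 0) ** 2 - (obs - 1) ** 2) / (2 * 0.7 ** 2)

q = 1 / (1 + np.exp(-llr))                # init from the data term alone
for _ in range(30):
    # mean field of the 4-neighborhood: sum of neighboring q's,
    # and the number of neighbors each site actually has
    m = np.zeros_like(q)
    m[1:, :] += q[:-1, :]; m[:-1, :] += q[1:, :]
    m[:, 1:] += q[:, :-1]; m[:, :-1] += q[:, 1:]
    n = np.zeros_like(q)
    n[1:, :] += 1; n[:-1, :] += 1; n[:, 1:] += 1; n[:, :-1] += 1
    # soft update: neighbors contribute their expected spin 2q - 1
    q = 1 / (1 + np.exp(-(llr + beta * (2 * m - n))))

labels = (q > 0.5).astype(int)
print((labels == truth).mean())
```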
Research of the multimodal brain-tumor segmentation algorithm
NASA Astrophysics Data System (ADS)
Lu, Yisu; Chen, Wufan
2015-12-01
It is well known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modal brain tumor images, we extended the algorithm to segment multimodal brain tumor images using multimodal magnetic resonance (MR) features, obtaining the active tumor and edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.
Exploring Evidence Aggregation Methods and External Expansion Sources for Medical Record Search
2012-11-01
Equation 3 using Indri in the same way as our previous work [12]. We denoted this model as MRM. A Combined Model: We linearly combine MRF and MRM to get... (Figure 1: Merging results from two different...) ...retrieval model MRM with one expansion collection at a time to explore the expansion effectiveness of each collection, as shown in Table 5. As we can
A magnetorheological fluid locking device
NASA Astrophysics Data System (ADS)
Kavlicoglu, Barkan; Liu, Yanming
2011-04-01
A magnetorheological fluid (MRF) device is designed to provide a static locking force caused by the operation of a controllable MRF valve. The intent is to introduce an MRF device which provides the locking force of a fifth wheel coupler while maintaining the "powerless" locking capability when required. A passive magnetic field supplied by a permanent magnet provides a powerless locking resistance force. The passively closed MRF valve provides sufficient reaction force to eliminate axial displacement to a pre-defined force value. Unlocking of the device is provided by means of an electromagnet which re-routes the magnetic field distribution along the MR valve, and minimizes the resistance. Three dimensional electromagnetic finite element analyses are performed to optimize the MRF lock valve performance. The MRF locking valve is fabricated and tested for installation on a truck fifth wheel application. An experimental setup, resembling actual working conditions, is designed and tests are conducted on vehicle interface schemes. The powerless-locking capacity and the unlocking process with minimal resistance are experimentally demonstrated.
Low rank magnetic resonance fingerprinting.
Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C
2016-08-01
Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of physical quantitative tissue values is performed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to a conventional implementation of compressed sensing for MRF at a 15% sampling ratio.
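The low rank projection step described above (a gradient step followed by SVD truncation) can be sketched generically: by the Eckart-Young theorem, truncating the SVD gives the best rank-r approximation in the Frobenius norm. The data-consistency gradient step and acquisition model are omitted; the pixel-by-time matrix below is synthetic.

```python
# Sketch: the low-rank projection used inside Low Rank MRF's iterations.
import numpy as np

def low_rank_project(X, r):
    """Best rank-r approximation of X (Eckart-Young, via the SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(0)
# synthetic MRF-like matrix: rank-3 temporal structure plus noise,
# mimicking pixels (rows) by time points (columns)
A = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 50))
X = A + 0.05 * rng.normal(size=A.shape)

Xr = low_rank_project(X, 3)
print(np.linalg.matrix_rank(Xr),
      round(np.linalg.norm(Xr - A) / np.linalg.norm(A), 3))
```

In the full algorithm this projection alternates with a gradient step enforcing consistency with the under-sampled k-space data.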
SU-G-BRB-16: Vulnerabilities in the Gamma Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neal, B; Siebers, J
Purpose: To explore vulnerabilities in the gamma index metric that undermine its wide use as a radiation therapy quality assurance tool. Methods: 2D test field pairs (images) are created specifically to achieve high gamma passing rates, but to also include gross errors by exploiting the distance-to-agreement and percent-passing components of the metric. The first set has no requirement of clinical practicality, but is intended to expose vulnerabilities. The second set exposes clinically realistic vulnerabilities. To circumvent limitations inherent to user-specific tuning of prediction algorithms to match measurements, digital test cases are manually constructed, thereby mimicking high-quality image prediction. Results: With a 3 mm distance-to-agreement metric, changing field size by ±6 mm results in a gamma passing rate over 99%. For a uniform field, a lattice of passing points spaced 5 mm apart results in a passing rate of 100%. Exploiting the percent-passing component, a 10×10 cm² field can have a 95% passing rate when a highly out-of-tolerance (e.g., zero-dose) square of 8 cm² (2.8×2.8 cm²) is missing from the comparison image. For clinically realistic vulnerabilities, an arc plan for which a 2D image is created can have a >95% passing rate solely due to agreement in the lateral spillage, with the failing 5% in the critical target region. A field with an integrated boost (e.g., whole brain plus small metastases) could neglect the metastases entirely, yet still pass with a 95% threshold. All the failure modes described would be visually apparent on a gamma-map image. Conclusion: The %gamma<1 metric has significant vulnerabilities. High passing rates can obscure critical faults in hypothetical and delivered radiation doses. Great caution should be used with gamma as a QA metric; users should inspect the gamma-map. Visual analysis of gamma-maps may be impractical for cine acquisition.
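The gamma metric probed here combines a dose-difference term and a distance-to-agreement (DTA) term; a point passes when the minimum combined value is at most 1. A minimal 1-D sketch (the field values, pixel pitch, and tolerances are illustrative choices, not the abstract's data) shows how a field-edge shift equal to the DTA criterion still passes everywhere:

```python
import numpy as np

def gamma_pass_rate(ref, meas, dx, dose_tol=0.03, dta_tol=3.0):
    """1-D global gamma: for each reference point, search all measured
    points for the minimum combined dose-difference / distance metric;
    a point passes when gamma <= 1."""
    ref = np.asarray(ref, float)
    meas = np.asarray(meas, float)
    x = np.arange(len(ref)) * dx
    norm = dose_tol * ref.max()          # global dose normalization
    gammas = np.empty(len(ref))
    for i in range(len(ref)):
        dd = (meas - ref[i]) / norm      # dose-difference term
        dta = (x - x[i]) / dta_tol       # distance-to-agreement term
        gammas[i] = np.sqrt(dd**2 + dta**2).min()
    return (gammas <= 1.0).mean()

# a one-pixel (3 mm) field-edge shift passes 100% under a 3 mm DTA criterion
ref = np.array([0, 0, 100, 100, 100, 100, 0, 0], float)
shifted = np.roll(ref, 1)
print(gamma_pass_rate(ref, shifted, dx=3.0))  # 1.0
```

Tightening the DTA tolerance exposes the shift, which is exactly the kind of blind spot the abstract describes.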
Plasmid mapping computer program.
Nolan, G P; Maina, C V; Szalay, A A
1984-01-01
Three new computer algorithms are described which rapidly order the restriction fragments of a plasmid DNA which has been cleaved with two restriction endonucleases in single and double digestions. Two of the algorithms are contained within a single computer program (called MPCIRC). The Rule-Oriented algorithm constructs all logical circular map solutions within sixty seconds (14 double-digestion fragments) when used in conjunction with the Permutation method. The program is written in Apple Pascal and runs on an Apple II Plus Microcomputer with 64K of memory. A third algorithm is described which rapidly maps double digests and uses the above two algorithms as adjuncts. Modifications of the algorithms for linear mapping are also presented. PMID:6320105
A post-processing algorithm for time domain pitch trackers
NASA Astrophysics Data System (ADS)
Specker, P.
1983-01-01
This paper describes a powerful post-processing algorithm for time-domain pitch trackers. On two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
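The second-pass outlier rejection over a sliding window can be illustrated with a simple median-based rule; the window length, tolerance, and toy pitch track below are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def remove_pitch_outliers(f0, win=9, tol=0.2):
    """Flag pitch values that deviate from the local median over a sliding
    window by more than a fractional tolerance, mimicking the second-pass
    outlier rejection; flagged frames are set to 0 (treated as unvoiced)."""
    f0 = np.asarray(f0, float)
    out = f0.copy()
    half = win // 2
    for i in range(len(f0)):
        lo, hi = max(0, i - half), min(len(f0), i + half + 1)
        med = np.median(f0[lo:hi])       # local distribution statistic
        if med > 0 and abs(f0[i] - med) > tol * med:
            out[i] = 0.0                 # octave/tracking error detected
    return out

# toy track in Hz with one octave-doubling and one octave-halving error
track = np.array([120, 121, 240, 122, 123, 119, 60, 121, 120], float)
cleaned = remove_pitch_outliers(track)
print(cleaned)
```

In the paper's pipeline a third pass would then reconstruct the rejected frames from the waveform using the surviving pulses as anchor points; that step is not sketched here.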
Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.
Werner, Tomás
2015-07-01
Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message-passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point when the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During the iterations, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
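The basic iteration (change two factors so that their product is unchanged while their overlapping marginals become equal) can be written out concretely for the sum-product semiring. A sketch with two random positive factors sharing one variable, with arbitrary sizes; with only two factors a single update already reaches marginal consistency:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(0.5, 2.0, (3, 4))   # factor f(a, b)
g = rng.uniform(0.5, 2.0, (4, 5))   # factor g(b, c)
Z = np.einsum('ab,bc->', f, g)      # partition function: sum over a, b, c

# one marginal-consistency iteration in the sum-product semiring:
# rescale both factors so their marginals over the shared variable b
# coincide, while every product f(a,b) * g(b,c) stays unchanged
mf = f.sum(axis=0)                  # marginal of f over b
mg = g.sum(axis=1)                  # marginal of g over b
f2 = f * np.sqrt(mg / mf)           # scale columns of f
g2 = g * np.sqrt(mf / mg)[:, None]  # scale rows of g (the factors cancel)

print(np.allclose(f2.sum(axis=0), g2.sum(axis=1)))  # marginals now agree
print(np.isclose(Z, np.einsum('ab,bc->', f2, g2)))  # Z is invariant
```

Both checks print True: the new marginals are both the geometric mean of the old ones, and every pointwise product is multiplied by exactly 1.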
NASA Astrophysics Data System (ADS)
Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.
2016-03-01
Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We applied seven widely used fusion algorithms (Ehlers, Gram-Schmidt, hyperspherical color space, high-pass, principal component analysis (PCA), University of New Brunswick, and wavelet-PCA fusion) to resolution-enhance a series of single-date QuickBird-2 and WorldView-2 image scenes comprising penguin guano, seals, and vegetation. Fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation selected the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of spectral and spatial fidelities. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performances. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.
MR fingerprinting with simultaneous B1 estimation.
Buonincontri, Guido; Sawiak, Stephen J
2016-10-01
MR fingerprinting (MRF) can be used for quantitative estimation of physical parameters in MRI. Here, we extend the method to incorporate B1 estimation. The acquisition is based on steady state free precession MR fingerprinting with a Cartesian trajectory. To increase the sensitivity to the B1 profile, abrupt changes in flip angle were introduced in the sequence. Slice profile and B1 effects were included in the dictionary and the results from two- and three-dimensional (3D) acquisitions were compared. Acceleration was demonstrated using retrospective undersampling in the phase encode directions of 3D data exploiting redundancy between MRF frames at the edges of k-space. Without B1 estimation, T2 and B1 were inaccurate by more than 20%. Abrupt changes in flip angle improved B1 maps. T1 and T2 values obtained with the new MRF methods agree with classical spin echo measurements and are independent of the B1 field profile. When using view sharing reconstruction, results remained accurate (error <10%) when sampling under 10% of k-space from the 3D data. The methods demonstrated here can successfully measure T1, T2, and B1. Errors due to slice profile can be substantially reduced by including its effect in the dictionary or acquiring data in 3D. Magn Reson Med 76:1127-1135, 2016. © 2015 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
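The dictionary matching step underlying this and other MRF methods is a maximum normalized inner product search over simulated signal evolutions. A toy sketch in which decaying exponentials stand in for Bloch-simulated dictionary atoms (the parameter values and signal model are illustrative assumptions, not the paper's sequence):

```python
import numpy as np

def mrf_match(signal, dictionary, params):
    """Match one measured fingerprint against a dictionary of simulated
    signal evolutions by maximum normalized inner product (the standard
    MRF matching rule); returns the parameter tuple of the best atom."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    best = np.argmax(np.abs(d @ s))
    return params[best]

# toy dictionary of decaying exponentials indexed by a (T1, T2)-like pair
t = np.linspace(0, 1, 200)
params = [(800, 60), (1000, 80), (1200, 100)]
dictionary = np.array([np.exp(-t * 1000 / t2) for (_, t2) in params])

# a scaled, slightly perturbed copy of the middle atom
signal = 0.7 * np.exp(-t * 1000 / 80) + 0.001 * np.sin(40 * t)
print(mrf_match(signal, dictionary, params))  # (1000, 80)
```

Because both sides are normalized, the match is insensitive to overall signal scale, which is why proton density can be recovered separately as the scale factor.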
Improvement of magnetorheological finishing surface quality by nanoparticle jet polishing
NASA Astrophysics Data System (ADS)
Peng, Wenqiang; Li, Shengyi; Guan, Chaoliang; Shen, Xinmin; Dai, Yifan; Wang, Zhuo
2013-04-01
Nanoparticle jet polishing (NJP) is presented as a posttreatment to remove magnetorheological finishing (MRF) marks. In the NJP process the material is removed by chemical impact reaction, and the material removal rate of the convex part is larger than that of the concave part. Smoothing thus can progress automatically in the NJP process. In the experiment, a silica glass sample polished by MRF was polished by NJP. Experimental results showed that the MRF marks were clearly removed. The uniform polishing process shows that NJP can remove the MRF marks without destroying the original surface figure. The surface root-mean-square roughness is improved from 0.72 to 0.41 nm. Power spectral density analysis indicates the surface quality is improved, and the experimental result validates the effective removal of MRF marks by NJP.
Calibration and prediction of removal function in magnetorheological finishing.
Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng
2010-01-20
A calibrated and predictive model of the removal function has been established based on the analysis of a magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in an MRF figuring process and to accurately predict the removal function of a workpiece to be polished whose material is different from that of the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, applying this model to the MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately to make the MRF figuring process deterministic and controllable. Therefore, all the results indicate that the calibrated and predictive model of the removal function can improve the finishing determinacy and increase the model applicability in an MRF process.
Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.
Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter
2017-09-01
An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
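The key idea, projecting k-space time-series onto a low-dimensional temporal subspace by a single matrix multiplication rather than singular value thresholding, can be sketched as follows. For illustration the subspace is estimated from synthetic fully sampled "calibration" columns rather than from real MRF data; the sizes and rank are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
T, r = 100, 3                         # timepoints, subspace rank
basis = rng.standard_normal((T, r))   # stand-in for the temporal modes
coeffs = rng.standard_normal((r, 500))
kspace = basis @ coeffs               # k-space series living in the subspace

# estimate the temporal subspace from a few fully sampled k-space locations
calib = kspace[:, :50]
U, _, _ = np.linalg.svd(calib, full_matrices=False)
Ur = U[:, :r]
P = Ur @ Ur.conj().T                  # projection = one matrix multiplication

# projecting leaves any time-series already in the subspace unchanged
err = np.linalg.norm(P @ kspace - kspace) / np.linalg.norm(kspace)
print(err < 1e-10)                    # True
```

In the full method this projection is applied inside the iterations that fill in missing k-space samples; the cheap matrix multiply is what replaces the singular value thresholding of generic low-rank matrix completion.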
Monocular Depth Perception and Robotic Grasping of Novel Objects
2009-06-01
resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning ... learning still make sense in these settings? Since many of the cues that are useful for estimating depth can be re-created in synthetic images, we...supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features. We show
NASA Technical Reports Server (NTRS)
Bateman, M. G.; Mach, D. M.; McCaul, M. G.; Bailey, J. C.; Christian, H. J.
2008-01-01
The Lightning Imaging Sensor (LIS) aboard the TRMM satellite has been collecting optical lightning data since November 1997. A Lightning Mapping Array (LMA) that senses VHF impulses from lightning was installed in North Alabama in the Fall of 2001. A dataset has been compiled to compare data from both instruments for all times when the LIS was passing over the domain of our LMA. We have algorithms for both instruments to group pixels or point sources into lightning flashes. This study presents the comparison statistics of the flash data output (flash duration, size, and amplitude) from both algorithms. We will present the results of this comparison study and show "point-level" data to explain the differences. As we move closer to realizing a Geostationary Lightning Mapper (GLM) on GOES-R, better understanding and ground truth of each of these instruments and their respective flash algorithms are needed.
A Comparison of PETSC Library and HPF Implementations of an Archetypal PDE Computation
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham; Keyes, David E.; Mehrotra, Piyush
1997-01-01
Two paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation: a nonlinear, structured-grid partial differential equation boundary value problem, using the same algorithm on the same hardware. Both paradigms, parallel libraries represented by Argonne's PETSC, and parallel languages represented by the Portland Group's HPF, are found to be easy to use for this problem class, and both are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required by the application programmer under either paradigm includes specification of the data partitioning (corresponding to a geometrically simple decomposition of the domain of the PDE). Programming in SPMD style for the PETSC library requires writing the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm, introducing concurrency through subdomain blocking (an effort similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Correctness and scalability are cross-validated on up to 32 nodes of an IBM SP2.
Estimation of perfusion properties with MR Fingerprinting Arterial Spin Labeling.
Wright, Katherine L; Jiang, Yun; Ma, Dan; Noll, Douglas C; Griswold, Mark A; Gulani, Vikas; Hernandez-Garcia, Luis
2018-03-12
In this study, the acquisition of ASL data and quantification of multiple hemodynamic parameters were explored using a Magnetic Resonance Fingerprinting (MRF) approach. A pseudo-continuous ASL labeling scheme was used with pseudo-randomized timings to acquire the MRF ASL data in a 2.5 min acquisition. A large dictionary of MRF ASL signals was generated by combining a wide range of physical and hemodynamic properties with the pseudo-random MRF ASL sequence and a two-compartment model. The acquired signals were matched to the dictionary to provide simultaneous quantification of cerebral blood flow, tissue time-to-peak, cerebral blood volume, arterial time-to-peak, B1, and T1. A study in seven healthy volunteers resulted in the following values across the population in grey matter (mean ± standard deviation): cerebral blood flow of 69.1 ± 6.1 ml/min/100 g, arterial time-to-peak of 1.5 ± 0.1 s, tissue time-to-peak of 1.5 ± 0.1 s, T1 of 1634 ms, cerebral blood volume of 0.0048 ± 0.0005. The CBF measurements were compared to standard pCASL CBF estimates using a one-compartment model, and a Bland-Altman analysis showed good agreement with a minor bias. Repeatability was tested in five volunteers in the same exam session, and no statistical difference was seen. In addition to this validation, the MRF ASL acquisition's sensitivity to the physical and physiological parameters of interest was studied numerically. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2018-05-01
The methods for increasing gas-turbine aircraft engines' (GTE) adaptive properties to interference, based on the empowerment of automatic control systems (ACS), are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes, adapted to the stability boundary, is proposed. The aim of the study is to develop band-pass filtering algorithms that provide the detection functions for compressor pre-stall modes in the GTE ACS. The characteristic feature of the pre-stall effect is an increase of the pressure pulsation amplitude over the impeller at multiples of the rotor frequencies. The method used is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. Comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme providing the best quality of filtering. The BPF reaction to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the pre-stall mode of the compressor, detecting the pressure fluctuation peaks that characterize the compressor's approach to the stability boundary.
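The construction described, a high-pass kernel obtained from a low-pass one by spectral inversion and combined into a band-pass filter, can be sketched with windowed-sinc FIR kernels. The cutoffs, tap count, and window choice below are arbitrary illustrative values, not the paper's design:

```python
import numpy as np

def lowpass_kernel(fc, ntaps=101):
    """Windowed-sinc FIR low-pass kernel; fc is the cutoff as a fraction
    of the sampling rate. Normalized to unity gain at DC."""
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = np.sinc(2 * fc * n) * np.hamming(ntaps)
    return h / h.sum()

def spectral_invert(h):
    """Turn a low-pass kernel into a high-pass one: negate all taps and
    add 1 at the center tap (impulse minus low-pass)."""
    g = -h.copy()
    g[len(g) // 2] += 1.0
    return g

# band-pass = low-pass (upper cutoff) convolved with high-pass (lower cutoff)
lp = lowpass_kernel(0.20)
hp = spectral_invert(lowpass_kernel(0.05))
bp = np.convolve(lp, hp)

def gain(h, f):
    """Magnitude of the kernel's frequency response at frequency f."""
    n = np.arange(len(h))
    return abs(np.sum(h * np.exp(-2j * np.pi * f * n)))

print(gain(bp, 0.12))                               # close to 1 in-band
print(gain(bp, 0.0) < 0.01, gain(bp, 0.45) < 0.01)  # near 0 out-of-band
```

Convolving the two kernels multiplies their frequency responses, so the cascade passes only the band between the two cutoffs, which is how such a filter isolates pulsation peaks near the rotor harmonics.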
Descriptive and Computer Aided Drawing Perspective on an Unfolded Polyhedral Projection Surface
NASA Astrophysics Data System (ADS)
Dzwierzynska, Jolanta
2017-10-01
The aim of the present study is to develop a method of direct and practical mapping of perspective onto an unfolded prism polyhedral projection surface. The considered perspective representation is a rectilinear central projection onto a surface composed of several flat elements. In the paper two descriptive methods of drawing perspective are presented: direct and indirect. The graphical mapping of the effects of the representation is realized directly on the unfolded flat projection surface. That is due to the projective and graphical connection between points displayed on the polyhedral background and their counterparts received on the unfolded flat surface. To significantly improve line construction, analytical algorithms are formulated that draw the perspective image of a line segment passing through two points determined by their coordinates in a spatial coordinate system with axes x, y, z. Compared to other perspective construction methods used in computer vision and computer-aided design, which rely on information about points, our algorithms utilize data about lines, which occur very often in architectural forms. The possibility of drawing lines in the considered perspective enables drawing an edge perspective image of an architectural object. The application of changeable base elements of perspective, such as the horizon height and the station point location, enables drawing perspective images from different viewing positions. The analytical algorithms for drawing perspective images are formulated in Mathcad software; however, they can be implemented in the majority of computer graphics packages, which can make drawing perspective more efficient and easier. The representation presented in the paper, and the way of its direct mapping onto the flat unfolded projection surface, can find application in the presentation of architectural space in advertisement and art.
Menapace, Joseph A; Ehrmann, Paul E; Bayramian, Andrew J; Bullington, Amber; Di Nicola, Jean-Michel G; Haefner, Constantin; Jarboe, Jeffrey; Marshall, Christopher; Schaffers, Kathleen I; Smith, Cal
2016-07-01
Corrective optical elements form an important part of high-precision optical systems. We have developed a method to manufacture high-gradient corrective optical elements for high-power laser systems using deterministic magnetorheological finishing (MRF) imprinting technology. Several process factors need to be considered for polishing ultraprecise topographical structures onto optical surfaces using MRF. They include proper selection of MRF removal function and wheel sizes, detailed MRF tool and interferometry alignment, and optimized MRF polishing schedules. Dependable interferometry also is a key factor in high-gradient component manufacture. A wavefront attenuating cell, which enables reliable measurement of gradients beyond what is attainable using conventional interferometry, is discussed. The results of MRF imprinting a 23 μm deep structure containing gradients over 1.6 μm/mm onto a fused-silica window are presented as an example of the technique's capabilities. This high-gradient element serves as a thermal correction plate in the high-repetition-rate advanced petawatt laser system currently being built at Lawrence Livermore National Laboratory.
Efficient algorithms for dilated mappings of binary trees
NASA Technical Reports Server (NTRS)
Iqbal, M. Ashraf
1990-01-01
The problem is addressed to find a 1-1 mapping of the vertices of a binary tree onto those of a target binary tree such that the son of a node on the first binary tree is mapped onto a descendent of the image of that node in the second binary tree. There are two natural measures of the cost of this mapping, namely the dilation cost, i.e., the maximum distance in the target binary tree between the images of vertices that are adjacent in the original tree. The other measure, expansion cost, is defined as the number of extra nodes/edges to be added to the target binary tree in order to ensure a 1-1 mapping. An efficient algorithm to find a mapping of one binary tree onto another is described. It is shown that it is possible to minimize one cost of mapping at the expense of the other. This problem arises when designing pipelined arithmetic logic units (ALU) for special purpose computers. The pipeline is composed of ALU chips connected in the form of a binary tree. The operands to the pipeline can be supplied to the leaf nodes of the binary tree which then process and pass the results up to their parents. The final result is available at the root. As each new application may require a distinct nesting of operations, it is useful to be able to find a good mapping of a new binary tree over existing ALU tree. Another problem arises if every distinct required binary tree is known beforehand. Here it is useful to hardwire the pipeline in the form of a minimal supertree that contains all required binary trees.
Spatial-spectral blood cell classification with microscopic hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ran, Qiong; Chang, Lan; Li, Wei; Xu, Xiaofeng
2017-10-01
Microscopic hyperspectral images provide a new way for blood cell examination. The hyperspectral imagery can greatly facilitate the classification of different blood cells. In this paper, the microscopic hyperspectral images are acquired by connecting the microscope and the hyperspectral imager, and then tested for blood cell classification. For combined use of the spectral and spatial information provided by hyperspectral images, a spatial-spectral classification method is improved from the classical extreme learning machine (ELM) by integrating spatial context into the image classification task with a Markov random field (MRF) model. Comparisons are done among ELM, ELM-MRF, support vector machine (SVM), and SVM-MRF methods. Results show the spatial-spectral classification methods (ELM-MRF, SVM-MRF) perform better than pixel-based methods (ELM, SVM), and the proposed ELM-MRF has higher precision and shows more accurate localization of cells.
A Novel Color Image Encryption Algorithm Based on Quantum Chaos Sequence
NASA Astrophysics Data System (ADS)
Liu, Hui; Jin, Cong
2017-03-01
In this paper, a novel algorithm for image encryption based on quantum chaos is proposed. The keystreams are generated by the two-dimensional logistic map from the initial conditions and parameters. A general Arnold scrambling algorithm with keys is then exploited to permute the pixels of the color components. In the diffusion process, a novel encryption algorithm, the folding algorithm, is proposed to modify the values of the diffused pixels. In order to obtain high randomness and complexity, the two-dimensional logistic map and the quantum chaotic map are coupled with nearest-neighboring coupled-map lattices. Theoretical analyses and computer simulations confirm that the proposed algorithm has a high level of security.
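The permutation-plus-diffusion structure (Arnold scrambling of pixel positions, then keystream mixing of pixel values) can be sketched as below. The logistic-map keystream and XOR diffusion stand in for the paper's quantum-chaos coupling and folding algorithm, which are not reproduced here; seeds and parameters are arbitrary:

```python
import numpy as np

def logistic_keystream(x0, r, n, burn=100):
    """Keystream bytes from the logistic map x <- r*x*(1-x); the first
    `burn` iterates are discarded to decorrelate from the seed."""
    x = x0
    ks = np.empty(n, dtype=np.uint8)
    for i in range(burn + n):
        x = r * x * (1 - x)
        if i >= burn:
            ks[i - burn] = int(x * 256) % 256
    return ks

def arnold_scramble(img, rounds=1):
    """Arnold cat map pixel permutation on a square image:
    (x, y) -> (x + y, x + 2y) mod N, a bijection since det = 1."""
    n = img.shape[0]
    out = img
    for _ in range(rounds):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
scrambled = arnold_scramble(img, rounds=3)
ks = logistic_keystream(0.3571, 3.99, img.size).reshape(8, 8)
cipher = scrambled ^ ks               # diffusion by XOR with the keystream

# XOR is an involution, so applying the keystream again undoes diffusion
print(np.array_equal(cipher ^ ks, scrambled))  # True
```

Decryption reverses both steps: XOR with the same keystream, then apply the inverse cat map the same number of rounds.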
Dingledine, Raymond; Kelly, J. S.
1977-01-01
1. In cats anaesthetized with halothane and nitrous oxide, the responses to iontophoretically applied acetylcholine (ACh) and to high-frequency stimulation of the mid-brain reticular formation (MRF) were tested on spontaneously active neurones in the nucleus reticularis thalami and underlying ventrobasal complex. 2. The initial response to MRF stimulation of 90% of the ACh-inhibited neurones found in the region of the dorsolateral nucleus reticularis was an inhibition. Conversely, the initial response of 82% of the ACh-excited neurones in the ventrobasal complex was an excitation. Neurones in the rostral pole of the nucleus reticularis were inhibited by both ACh and MRF stimulation. 3. The mean latency (and s.e. of mean) for the MRF-evoked inhibition was 13.7 ± 3.2 ms (n = 42) and that for the MRF-evoked excitation, 44.1 ± 4.2 ms (n = 35). 4. The ACh-evoked inhibitions were blocked by iontophoretic atropine, in doses that did not block amino acid-evoked inhibition. In twenty-four ACh-inhibited neurones the effect of iontophoretic atropine was tested on MRF-evoked inhibition. In all twenty-four neurones atropine had no effect on the early phase of MRF-evoked inhibition but weakly antagonized the late phase of inhibition in nine of fourteen neurones. 5. Interspike-interval histograms showed that the firing pattern of neurones in the nucleus reticularis was characterized by periods of prolonged, high-frequency bursting. Both the ACh-evoked inhibitions and the late phase of MRF-evoked inhibitions were accompanied by an increased burst activity. In contrast, iontophoretic atropine tended to suppress burst activity. 6. The possibility is discussed that electrical stimulation of the MRF activates an inhibitory cholinergic projection to the nucleus reticularis.
Since neurones of the nucleus reticularis have been shown to inhibit thalamic relay cells, activation of this inhibitory pathway may play a role in MRF-evoked facilitation of thalamo-cortical relay transmission and the associated electrocortical desynchronization. PMID:915830
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a proposed multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined by weighting into a detail-enhanced layer. As the directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift-invariance, directional selectivity, and a detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: the detailed implementation of the proposed medical image fusion algorithm.
Determination of MLC model parameters for Monaco using commercial diode arrays.
Kinsella, Paul; Shields, Laura; McCavana, Patrick; McClean, Brendan; Langan, Brian
2016-07-08
Multileaf collimators (MLCs) need to be characterized accurately in treatment planning systems to facilitate accurate intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). The aim of this study was to examine the use of MapCHECK 2 and ArcCHECK diode arrays for optimizing MLC parameters in Monaco X-ray voxel Monte Carlo (XVMC) dose calculation algorithm. A series of radiation test beams designed to evaluate MLC model parameters were delivered to MapCHECK 2, ArcCHECK, and EBT3 Gafchromic film for comparison. Initial comparison of the calculated and ArcCHECK-measured dose distributions revealed it was unclear how to change the MLC parameters to gain agreement. This ambiguity arose due to an insufficient sampling of the test field dose distributions and unexpected discrepancies in the open parts of some test fields. Consequently, the XVMC MLC parameters were optimized based on MapCHECK 2 measurements. Gafchromic EBT3 film was used to verify the accuracy of MapCHECK 2 measured dose distributions. It was found that adjustment of the MLC parameters from their default values resulted in improved global gamma analysis pass rates for MapCHECK 2 measurements versus calculated dose. The lowest pass rate of any MLC-modulated test beam improved from 68.5% to 93.5% with 3% and 2 mm gamma criteria. Given the close agreement of the optimized model to both MapCHECK 2 and film, the optimized model was used as a benchmark to highlight the relatively large discrepancies in some of the test field dose distributions found with ArcCHECK. Comparison between the optimized model-calculated dose and ArcCHECK-measured dose resulted in global gamma pass rates which ranged from 70.0%-97.9% for gamma criteria of 3% and 2 mm. The simple square fields yielded high pass rates. The lower gamma pass rates were attributed to the ArcCHECK overestimating the dose in-field for the rectangular test fields whose long axis was parallel to the long axis of the ArcCHECK. 
Considering ArcCHECK measurement issues and the lower gamma pass rates for the MLC-modulated test beams, it was concluded that MapCHECK 2 was a more suitable detector than ArcCHECK for the optimization process. © 2016 The Authors
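The gamma analysis used throughout the comparison above combines a dose-difference tolerance (e.g., 3%) with a distance-to-agreement tolerance (e.g., 2 mm). A minimal 1-D sketch of a globally normalized gamma index (function and variable names are illustrative, not from any planning system):

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dose_tol=0.03, dist_tol=2.0):
    """Global 1-D gamma index: for each reference point, the minimum
    combined dose/distance discrepancy over all evaluated points."""
    dmax = dose_ref.max()                           # global normalization
    gam = np.empty_like(dose_ref, dtype=float)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / (dose_tol * dmax)   # dose-difference term
        dx = (x - xi) / dist_tol                    # distance term (mm)
        gam[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gam  # a point "passes" where gamma <= 1

x = np.linspace(0, 50, 101)           # positions in mm (toy example)
ref = np.exp(-((x - 25) / 10) ** 2)   # toy dose profile
evald = 1.02 * ref                    # 2% global scaling error
pass_rate = float(np.mean(gamma_1d(ref, evald, x) <= 1.0))
```

With a 2% global error and a 3% dose tolerance, every point passes; real implementations add interpolation between dose points and 2-D/3-D search.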
Analyzing and improving surface texture by dual-rotation magnetorheological finishing
NASA Astrophysics Data System (ADS)
Wang, Yuyue; Zhang, Yun; Feng, Zhijing
2016-01-01
The main advantages of magnetorheological finishing (MRF) are its high convergence rate of surface error, its ability to polish aspheric surfaces, and near absence of subsurface damage. However, common MRF produces directional surface texture due to the constant flow direction of the magnetorheological (MR) polishing fluid. This paper studies the mechanism of surface texture formation by texture modeling. Dual-rotation magnetorheological finishing (DRMRF) is presented to suppress directional surface texture after analyzing the results of the texture model for common MRF. The results of the surface texture model for DRMRF and the proposed quantitative method based on mathematical statistics indicate the effective suppression of directional surface texture. An experimental setup is developed, and experiments show directional surface texture in common MRF and none in DRMRF. As a result, the surface roughness achieved by DRMRF is 0.578 nm (root-mean-square value), lower than the 1.109 nm of common MRF.
Magneto-rheological fluid shock absorbers for HMMWV
NASA Astrophysics Data System (ADS)
Gordaninejad, Faramarz; Kelso, Shawn P.
2000-04-01
This paper presents the development and evaluation of a controllable, semi-active magneto-rheological fluid (MRF) shock absorber for a High Mobility Multi-purpose Wheeled Vehicle (HMMWV). The University of Nevada, Reno (UNR) MRF damper is tailored for structures and ground vehicles that undergo a wide range of dynamic loading. It also has the capability for unique rebound and compression characteristics. The new MRF shock absorber emulates the original equipment manufacturer (OEM) shock absorber behavior in passive mode, and provides a wide controllable damping force range. A theoretical study is performed to evaluate the UNR MRF shock absorber. The Bingham plastic theory is employed to model the nonlinear behavior of the MR fluid. A fluid-mechanics-based theoretical model along with a three-dimensional finite element electromagnetic analysis is utilized to predict the MRF damper performance. The theoretical results are compared with experimental data and are demonstrated to be in excellent agreement.
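The Bingham plastic model cited above has a simple closed form: the fluid supports a yield stress, beyond which stress grows linearly with shear rate. A sketch (parameter values are illustrative, not from the paper):

```python
import numpy as np

def bingham_stress(shear_rate, tau_yield, eta_plastic):
    """Bingham plastic: tau = sign(rate) * tau_y + eta * rate (post-yield).

    MR fluids are commonly modeled this way, with tau_y controlled by the
    applied magnetic field; a damper model integrates this stress over the
    flow geometry to obtain force.
    """
    return np.sign(shear_rate) * tau_yield + eta_plastic * shear_rate

# e.g. tau_y = 5 (field on), eta = 0.2, rate = 10 -- units illustrative
```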
Health-related quality of life measurement in patients with chronic respiratory failure.
Oga, Toru; Windisch, Wolfram; Handa, Tomohiro; Hirai, Toyohiro; Chin, Kazuo
2018-05-01
The improvement of health-related quality of life (HRQL) is an important goal in managing patients with chronic respiratory failure (CRF) receiving long-term oxygen therapy (LTOT) and/or domiciliary noninvasive ventilation (NIV). Two condition-specific HRQL questionnaires have been developed to specifically assess these patients: the Maugeri Respiratory Failure Questionnaire (MRF) and the Severe Respiratory Insufficiency Questionnaire (SRI). The MRF is more advantageous in its ease of completion; conversely, the SRI measures diversified health impairments more multi-dimensionally and discriminatively with greater balance, especially in patients receiving NIV. The SRI is available in many different languages as a result of back-translation and validation processes, and is widely validated for various disorders such as chronic obstructive pulmonary disease, restrictive thoracic disorders, neuromuscular disorders, and obesity hypoventilation syndrome, among others. Dyspnea and psychological status were the main determinants for both questionnaires, while the MRF tended to place more emphasis on activity limitations than SRI. In comparison to existing generic questionnaires such as the Medical Outcomes Study 36-item short form (SF-36) and disease-specific questionnaires such as the St. George's Respiratory Questionnaire (SGRQ) and the Chronic Respiratory Disease Questionnaire (CRQ), both the MRF and the SRI have been shown to be valid and reliable, and have better discriminatory, evaluative, and predictive features than other questionnaires. Thus, in assessing the HRQL of patients with CRF using LTOT and/or NIV, we might consider avoiding the use of the SF-36 or even the SGRQ or CRQ alone and consider using the CRF-specific SRI and MRF in addition to existing generic and/or disease-specific questionnaires. Copyright © 2018 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Xie, Lei; Choi, Young-Tai; Liao, Chang-Rong; Wereley, Norman M.
2015-05-01
A key requirement for the commercialization of various magnetorheological fluid (MRF)-based applications is sedimentation stability. In this study, a high viscosity linear polysiloxane (HVLP), which has been used for shock absorbers in heavy equipment, is proposed as a new carrier fluid in highly stable MRFs. The HVLP is known to be a thixotropic (i.e., shear thinning) fluid that shows very high viscosity at very low shear rate and low viscosity at higher shear rate. In this study, using a shear rheometer, the significant thixotropic behavior of the HVLP was experimentally confirmed. In addition, an HVLP carrier fluid-based MRF (HVLP MRF) with 26 vol.% particle loading was synthesized and its sedimentation characteristics were experimentally investigated. However, because of the opacity of the HVLP MRF, no mudline could be visually observed. Hence, a vertical axis inductance monitoring system (VAIMS) applied to a circular column of fluid was used to evaluate sedimentation behavior by correlating measured inductance with the volume fraction of dispersed particles (i.e., Fe). Using the VAIMS, Fe concentration (i.e., volume fraction) was monitored for 28 days with a measurement taken every four days, as well as one measurement after 96 days to characterize long-term sedimentation stability. Finally, the concentration of the HVLP MRF as a function of depth in the column and time, as well as the concentration change versus depth in the column, are presented and compared with those of a commercially available MRF (i.e., Lord MRF-126CD).
Wang, Shan; Cai, Xin; Xue, Kai; Chen, Hong
2011-02-01
PCR-RFLP was applied to analyse polymorphisms within the MRF4 and heart fatty acid-binding protein (H-FABP) genes for correlation studies with growth traits in three-month-old Qinchuan (QQ), Qinchuan × Limousin (LQ) and Qinchuan × Red Angus (AQ) cattle. The results showed that 874 bp PCR products of MRF4 digested with XbaI and 2,075 bp PCR products of H-FABP digested with HaeIII were polymorphic in the three populations. Moreover, the frequencies of allele A at the MRF4 locus and allele B at the H-FABP locus in the QQ, AQ, and LQ populations were 0.8358/0.8888/0.8273 and 0.8358/0.7500/0.8195, respectively. Allele A at the MRF4 locus and allele B at the H-FABP locus were dominant in the three populations. No statistically significant differences in growth traits were observed among the genotypes in any of the three populations at the H-FABP locus. However, MRF4 polymorphism was associated with growth traits in all three populations: the body weight, withers height, heart girth and height at hip cross of individuals with genotype AA were higher than those with genotype AB or BB (P < 0.05). Therefore, we suggest that the MRF4 gene may function in the control or expression of growth traits, particularly body weight, withers height, heart girth and height at hip cross.
NASA Astrophysics Data System (ADS)
Qin, Y.; Lu, P.; Li, Z.
2018-04-01
Landslide inventory mapping is essential for hazard assessment and mitigation. In most previous studies, landslide mapping was achieved by visual interpretation of aerial photos and remote sensing images. However, such methods are labor-intensive and time-consuming, especially over large areas. Although a number of semi-automatic landslide mapping methods have been proposed over the past few years, limitations remain in terms of their applicability across different study areas and data, and there is considerable room for improvement in accuracy and degree of automation. For these reasons, we developed a change detection-based Markov Random Field (CDMRF) method for landslide inventory mapping. The proposed method mainly includes two steps: 1) change detection-based multi-thresholding for training sample generation and 2) MRF for landslide inventory mapping. Compared with previous methods, the proposed method has three advantages: 1) it combines multiple image difference techniques with a multi-threshold method to generate reliable training samples; 2) it takes the spectral characteristics of landslides into account; and 3) it is highly automatic, with little parameter tuning. The proposed method was applied to regional landslide mapping from 10 m Sentinel-2 images in Western China. Results corroborated the effectiveness and applicability of the proposed method, especially its capability for rapid landslide mapping. Some directions for future research are offered. This study is, to our knowledge, the first attempt to map landslides from free and medium resolution satellite (i.e., Sentinel-2) images in China.
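The two-step pipeline described (thresholded change detection to seed labels, then MRF smoothing) can be illustrated with a deliberately simplified toy: an absolute image difference, fixed thresholds, and Iterated Conditional Modes standing in for the paper's full MRF inference. All thresholds and the Potts weight below are hypothetical:

```python
import numpy as np

def cdmrf_sketch(pre, post, t_low=0.2, t_high=0.6, beta=1.5, iters=5):
    """Toy change-detection + MRF labeling (0 = stable, 1 = landslide).

    Pixels with large |post - pre| favor 'landslide'; an ICM sweep then
    smooths the label map with a Potts pairwise prior of weight beta.
    """
    diff = np.abs(post - pre)
    labels = (diff > (t_low + t_high) / 2).astype(int)     # initial labeling
    # unary cost: how strongly diff contradicts each class's threshold
    unary = np.stack([diff - t_low, t_high - diff], axis=0)
    for _ in range(iters):
        for i in range(diff.shape[0]):
            for j in range(diff.shape[1]):
                costs = []
                for lab in (0, 1):
                    pair = 0.0
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < diff.shape[0] and 0 <= nj < diff.shape[1]:
                            pair += beta * (labels[ni, nj] != lab)
                    costs.append(unary[lab, i, j] + pair)
                labels[i, j] = int(np.argmin(costs))
    return labels
```

The smoothing prior removes isolated false detections while preserving a coherent changed region, which is the qualitative behavior the paper relies on.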
Acidic magnetorheological finishing of infrared polycrystalline materials.
Salzman, S; Romanofsky, H J; West, G; Marshall, K L; Jacobs, S D; Lambropoulos, J C
2016-10-20
Chemical-vapor-deposited (CVD) ZnS is an example of a polycrystalline material that is difficult to polish smoothly via the magnetorheological finishing (MRF) technique. When MRF-polished, the internal infrastructure of the material tends to manifest on the surface as millimeter-sized "pebbles," and the surface roughness observed is considerably high. The fluid's parameters important to developing a magnetorheological (MR) fluid that is capable of polishing CVD ZnS smoothly were previously discussed and presented. These parameters were acidic pH (∼4.5) and low viscosity (∼47 cP). MRF with such a unique MR fluid was shown to reduce surface artifacts in the form of pebbles; however, surface microroughness was still relatively high because of the absence of a polishing abrasive in the formulation. In this study, we examine the effect of two polishing abrasives-alumina and nanodiamond-on the surface finish of several CVD ZnS substrates, and on other important IR polycrystalline materials that were finished with acidic MR fluids containing these two polishing abrasives. Surface microroughness results obtained were as low as ∼28 nm peak-to-valley and ∼6-nm root mean square.
TH-CD-209-01: A Greedy Reassignment Algorithm for the PBS Minimum Monitor Unit Constraint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Y; Kooy, H; Craft, D
2016-06-15
Purpose: To investigate a Greedy Reassignment algorithm in order to mitigate the effects of low weight spots in proton pencil beam scanning (PBS) treatment plans. Methods: To convert a plan from the treatment planning system (TPS) into a deliverable plan, post-processing methods can be used to adjust the spot maps to meet the minimum MU constraint. Existing methods include: deleting low weight spots (Cut method), or rounding spots with weight above/below half the limit up/down to the limit/zero (Round method). An alternative method called Greedy Reassignment was developed in this work, in which the lowest weight spot in the field was removed and its weight reassigned equally among its nearest neighbors. The process was repeated with the next lowest weight spot until all spots in the field were above the MU constraint. The algorithm performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The evaluation criterion was the γ-index pass rate comparing the pre-processed and post-processed dose distributions. A planning metric was further developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. Results: For fields with a gamma pass rate of 90±1%, the metric has a standard deviation equal to 18% of the centroid value. This showed that the metric and γ-index pass rate are correlated for the Greedy Reassignment algorithm. Using a 3rd order polynomial fit to the data, the Greedy Reassignment method had a 1.8 times better metric at 90% pass rate compared to other post-processing methods. Conclusion: We showed that the Greedy Reassignment method yields deliverable plans that are closest to the optimized-without-MU-constraint plan from the TPS. The metric developed in this work could help design the minimum MU threshold with the goal of keeping the γ-index pass rate above an acceptable value.
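The Greedy Reassignment step described in the Methods (remove the lowest-weight spot, share its weight equally among its nearest neighbors, repeat until all spots meet the limit) can be sketched directly. The neighbor count `k` and the 2-D spot positions are illustrative assumptions:

```python
import numpy as np

def greedy_reassignment(weights, positions, min_mu, k=4):
    """Enforce a minimum-MU constraint on a spot map (illustrative sketch).

    Repeatedly removes the lowest-weight violating spot and reassigns its
    weight equally among its k nearest surviving neighbors, until every
    remaining spot meets the minimum-MU limit. Total weight is conserved.
    """
    w = np.asarray(weights, dtype=float).copy()
    pos = np.asarray(positions, dtype=float)
    alive = np.ones(len(w), dtype=bool)
    while True:
        idx = np.where(alive & (w < min_mu))[0]
        if idx.size == 0:
            break
        i = idx[np.argmin(w[idx])]            # lowest-weight violating spot
        alive[i] = False
        others = np.where(alive)[0]
        if others.size == 0:
            break
        d = np.linalg.norm(pos[others] - pos[i], axis=1)
        nn = others[np.argsort(d)[:k]]        # k nearest surviving neighbors
        w[nn] += w[i] / len(nn)               # redistribute the removed weight
        w[i] = 0.0
    return w, alive
```

Unlike the Cut method, the total delivered weight is preserved, which is why the resulting plan stays close to the unconstrained TPS optimum.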
Reformulating Constraints for Compilability and Efficiency
NASA Technical Reports Server (NTRS)
Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin
1992-01-01
KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hillclimbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas, and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues that arise out of the choice of a schema-based approach are: (1) compilability-- Can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? and (2) Efficiency-- If the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints which compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing for partially addressing these issues.
NASA Astrophysics Data System (ADS)
Wessel, Paul; Luis, Joaquim F.
2017-02-01
The GMT/MATLAB toolbox is a basic interface between MATLAB® (or Octave) and GMT, the Generic Mapping Tools, which allows MATLAB users full access to all GMT modules. Data may be passed between the two programs using intermediate MATLAB structures that organize the metadata needed; these are produced when GMT modules are run. In addition, standard MATLAB matrix data can be used directly as input to GMT modules. The toolbox improves interoperability between two widely used tools in the geosciences and extends the capability of both tools: GMT gains access to the powerful computational capabilities of MATLAB while the latter gains the ability to access specialized gridding algorithms and can produce publication-quality PostScript-based illustrations. The toolbox is available on all platforms and may be downloaded from the GMT website.
New vision system and navigation algorithm for an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.
2013-12-01
Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then pass through eight GPS waypoints, and pass through a final obstacle field. Modifications to Q included a new vision system with a more effective image processing algorithm for white line extraction. The path-planning algorithm adopted the vision system, creating smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of over 50 teams.
Fixing convergence of Gaussian belief propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jason K; Bickson, Danny; Dolev, Danny
Gaussian belief propagation (GaBP) is an iterative message-passing algorithm for inference in Gaussian graphical models. It is known that when GaBP converges it converges to the correct MAP estimate of the Gaussian random vector, and simple sufficient conditions for its convergence have been established. In this paper we develop a double-loop algorithm for forcing convergence of GaBP. Our method computes the correct MAP estimate even in cases where standard GaBP would not have converged. We further extend this construction to compute least-squares solutions of over-constrained linear systems. We believe that our construction has numerous applications, since the GaBP algorithm is linked to solution of linear systems of equations, which is a fundamental problem in computer science and engineering. As a case study, we discuss the linear detection problem. We show that using our new construction, we are able to force convergence of Montanari's linear detection algorithm in cases where it would originally fail. As a consequence, we are able to increase significantly the number of users that can transmit concurrently.
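The plain GaBP iteration that the paper's double-loop construction wraps can be sketched for solving A x = b (equivalently, finding the MAP estimate of a Gaussian with precision matrix A). This is the standard single-loop algorithm, shown here on a matrix where it converges on its own; the paper's contribution is the outer loop that handles matrices where it does not:

```python
import numpy as np

def gabp_solve(A, b, iters=100):
    """Plain Gaussian belief propagation for A x = b (sketch).

    alpha[i, j] / beta[i, j] are the precision and (mean * precision)
    components of the message from node i to node j. Converges for
    walk-summable models, e.g. diagonally dominant A.
    """
    n = len(b)
    nbrs = [[j for j in range(n) if j != i and A[i, j] != 0] for i in range(n)]
    alpha = {(i, j): 0.0 for i in range(n) for j in nbrs[i]}
    beta = {(i, j): 0.0 for i in range(n) for j in nbrs[i]}
    for _ in range(iters):
        for i in range(n):
            for j in nbrs[i]:
                p = A[i, i] + sum(alpha[k, i] for k in nbrs[i] if k != j)
                m = b[i] + sum(beta[k, i] for k in nbrs[i] if k != j)
                alpha[i, j] = -A[i, j] ** 2 / p
                beta[i, j] = -A[i, j] * m / p
    x = np.empty(n)
    for i in range(n):
        p = A[i, i] + sum(alpha[k, i] for k in nbrs[i])
        m = b[i] + sum(beta[k, i] for k in nbrs[i])
        x[i] = m / p
    return x
```

On a tree-structured (here, tridiagonal) diagonally dominant system this recovers the exact solution.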
Perkins, Eddie; Warren, Susan; May, Paul J
2009-08-01
The superior colliculus (SC), which directs orienting movements of both the eyes and head, is reciprocally connected to the mesencephalic reticular formation (MRF), suggesting the latter is involved in gaze control. The MRF has been provisionally subdivided to include a rostral portion, which subserves vertical gaze, and a caudal portion, which subserves horizontal gaze. Both regions contain cells projecting downstream that may provide a conduit for tectal signals targeting the gaze control centers which direct head movements. We determined the distribution of cells targeting the cervical spinal cord and rostral medullary reticular formation (MdRF), and investigated whether these MRF neurons receive input from the SC by the use of dual tracer techniques in Macaca fascicularis monkeys. Either biotinylated dextran amine or Phaseolus vulgaris leucoagglutinin was injected into the SC. Wheat germ agglutinin conjugated horseradish peroxidase was placed into the ipsilateral cervical spinal cord or medial MdRF to retrogradely label MRF neurons. A small number of medially located cells in the rostral and caudal MRF were labeled following spinal cord injections, and greater numbers were labeled in the same region following MdRF injections. In both cases, anterogradely labeled tectoreticular terminals were observed in close association with retrogradely labeled neurons. These close associations between tectoreticular terminals and neurons with descending projections suggest the presence of a trans-MRF pathway that provides a conduit for tectal control over head orienting movements. The medial location of these reticulospinal and reticuloreticular neurons suggests this MRF region may be specialized for head movement control. (c) 2009 Wiley-Liss, Inc.
Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; ...
2015-08-10
Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection has outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches, and requires less stringent data passing constrained problem decompositions. Rather, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited for the MapReduce paradigm and provides a novel, alternative new perspective to addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.
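The partition/compute-independently/recombine pattern the abstract describes is the essence of MapReduce; a minimal skeleton (not the paper's SVM Game, just the decomposition pattern):

```python
from functools import reduce

def map_reduce(data, n_parts, mapper, reducer):
    """Minimal MapReduce skeleton: partition the input, map each part
    independently (these calls could run on separate workers), then
    recombine the partial results with an associative reducer."""
    parts = [data[i::n_parts] for i in range(n_parts)]
    mapped = [mapper(p) for p in parts]
    return reduce(reducer, mapped)
```

Any problem whose per-partition results combine associatively fits this mold; algorithms needing global coordination at every step do not, which is the limitation the paper's game-theoretic formulation sidesteps.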
Reeve, Gordon R; Stout, Allen W; Hands, David; Curry, Emmanuel
2003-11-01
This study was undertaken to determine the impact of exposure to metal removal fluids (MRFs) on the respiratory health of exposed workers. The outcome measure selected was the rate of hospital admissions for nonmalignant respiratory disease episodes as determined from healthcare insurance claims data. A cohort of MRF-exposed employees was assembled from 11 manufacturing facilities where MRFs were extensively used in the manufacture of automotive engines, transmissions, and other machined parts. The MRF-exposed cohort included 20,434 employees of such facilities who worked at any time from 1993 through 1997. A non-MRF-exposed cohort was assembled from other employees of the same company during the same time period, but working in warehouse operations and other manufacturing facilities that did not use MRFs or any known respiratory sensitizing agents. The non-exposed cohort included 8681 employees. The crude hospital admission rate for the MRF-exposed cohort was 44 percent higher than that of the non-exposed cohort over the 5-year study period (6.67 vs. 4.62 per 1000 person years at risk, p < 0.05). With age adjustment, the MRF population's rate was still 35 percent higher, and still statistically significant. A nested case-control study was also conducted to determine whether the risk of hospital admission increased with the level of MRF exposure in the population working in MRF plants. The industrial hygiene reconstruction found the levels of exposures of both cases and controls to be very low, with the vast majority of study subjects (more than 90%) having exposures of less than 0.5 mg/m(3). The case-control study did not find any association between increased levels of MRF exposure and risk of hospitalization. The study did document an elevated risk of hospitalization among a sizable population employed in manufacturing operations where MRFs are used.
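The reported 44 percent excess follows directly from the crude rates quoted in the abstract; a quick check:

```python
# Crude hospital admission rates reported in the study
rate_exposed = 6.67     # per 1000 person-years, MRF-exposed cohort
rate_unexposed = 4.62   # per 1000 person-years, non-exposed cohort

rate_ratio = rate_exposed / rate_unexposed
excess_pct = (rate_ratio - 1.0) * 100.0   # crude excess, ~44% higher
```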
NASA Astrophysics Data System (ADS)
Ha, Jeongmok; Jeong, Hong
2016-07-01
This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods, at competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.
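The max-product family the DAS algorithm is compared against is easiest to see on a chain, where message passing (equivalently, Viterbi dynamic programming) gives the exact minimum-energy labeling. A sketch of that baseline:

```python
import numpy as np

def chain_map(unary, pairwise):
    """Exact MAP labeling of a chain MRF by max-product (min-sum) message
    passing. unary: (n, L) node costs; pairwise: (L, L) cost between
    neighboring labels. Minimizes total energy."""
    n, L = unary.shape
    msg = np.zeros((n, L))
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        # c[l_prev, l_cur]: best cost of the prefix ending with that edge
        c = msg[i - 1][:, None] + unary[i - 1][:, None] + pairwise
        back[i] = np.argmin(c, axis=0)
        msg[i] = np.min(c, axis=0)
    labels = np.empty(n, dtype=int)
    labels[-1] = np.argmin(msg[-1] + unary[-1])
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i, labels[i]]
    return labels
```

On loopy graphs these messages become approximate, which is where the accuracy/speed trade-offs the study measures come from.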
Evaluate error correction ability of magnetorheological finishing by smoothing spectral function
NASA Astrophysics Data System (ADS)
Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin
2014-08-01
Power Spectral Density (PSD) has been entrenched in optics design and manufacturing as a characterization of mid-high spatial frequency (MHSF) errors. The Smoothing Spectral Function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably introduces MHSF errors. SSF is employed to study the ability of the MRF process to correct errors at different spatial frequencies. The surface figures and PSD curves of a work-piece machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF for errors at different spatial frequencies is expressed as a normalized numerical value.
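A per-frequency comparison of PSDs before and after finishing captures the idea behind such a function. The sketch below assumes a simple definition, SSF(f) = PSD_after(f) / PSD_before(f), which may differ in normalization from the paper's exact formula; values below 1 mark spatial frequencies the process corrected:

```python
import numpy as np

def psd_1d(profile, dx):
    """One-dimensional power spectral density of a surface profile."""
    n = len(profile)
    spec = np.fft.rfft(profile - np.mean(profile))
    freqs = np.fft.rfftfreq(n, d=dx)
    return freqs, (np.abs(spec) ** 2) * dx / n

def smoothing_spectral_function(before, after, dx):
    """Assumed SSF: per-frequency ratio of residual to initial PSD."""
    f, p0 = psd_1d(before, dx)
    _, p1 = psd_1d(after, dx)
    return f, p1 / np.maximum(p0, 1e-30)
```

For example, a process that attenuates an 8-cycle component to 10% amplitude while leaving a 64-cycle component untouched yields SSF ≈ 0.01 at the low frequency and ≈ 1 at the high one.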
The Edge Detectors Suitable for Retinal OCT Image Segmentation
Yang, Jing; Gao, Qian; Zhou, Sheng
2017-01-01
Retinal layer thickness measurement offers important information for reliable diagnosis of retinal diseases and for the evaluation of disease development and medical treatment responses. This task critically depends on the accurate edge detection of the retinal layers in OCT images. Here, we intended to search for the most suitable edge detectors for the retinal OCT image segmentation task. The three most promising edge detection algorithms were identified in the related literature: Canny edge detector, the two-pass method, and the EdgeFlow technique. The quantitative evaluation results show that the two-pass method outperforms consistently the Canny detector and the EdgeFlow technique in delineating the retinal layer boundaries in the OCT images. In addition, the mean localization deviation metrics show that the two-pass method caused the smallest edge shifting problem. These findings suggest that the two-pass method is the best among the three algorithms for detecting retinal layer boundaries. The overall better performance of Canny and two-pass methods over EdgeFlow technique implies that the OCT images contain more intensity gradient information than texture changes along the retinal layer boundaries. The results will guide our future efforts in the quantitative analysis of retinal OCT images for the effective use of OCT technologies in the field of ophthalmology. PMID:29065594
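The gradient-based edge response that underlies the Canny-style detectors in this comparison can be sketched compactly. The code below is a simplified stand-in (Sobel gradient magnitude plus hysteresis thresholding, omitting Gaussian smoothing and non-maximum suppression), not any of the three algorithms evaluated:

```python
import numpy as np

def sobel_edges(img, t_low, t_high):
    """Gradient-magnitude edge map with hysteresis thresholding (sketch)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.hypot(gx, gy)
    strong = mag >= t_high
    weak = mag >= t_low
    # hysteresis: keep weak responses 4-connected to a strong response
    edges = strong.copy()
    changed = True
    while changed:
        changed = False
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                if weak[i, j] and not edges[i, j]:
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                                and edges[ni, nj]):
                            edges[i, j] = True
                            changed = True
                            break
    return edges
```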
Theory and praxis of map analysis in CHEF, part 1: Linear normal form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelotti, Leo; /Fermilab
2008-10-01
This memo begins a series which, put together, could comprise the 'CHEF Documentation Project' if there were such a thing. The first--and perhaps only--three will telegraphically describe theory, algorithms, implementation and usage of the normal form map analysis procedures encoded in CHEF's collection of libraries. [1] This one will begin the sequence by explaining the linear manipulations that connect the Jacobian matrix of a symplectic mapping to its normal form. It is a 'Reader's Digest' version of material I wrote in Intermediate Classical Dynamics (ICD) [2] and randomly scattered across technical memos, seminar viewgraphs, and lecture notes for the past quarter century. Much of its content is old, well known, and in some places borders on the trivial. Nevertheless, completeness requires their inclusion. The primary objective is the 'fundamental theorem' on normalization written on page 8. I plan to describe the nonlinear procedures in a subsequent memo and devote a third to laying out algorithms and lines of code, connecting them with equations written in the first two. Originally this was to be done in one short paper, but I jettisoned that approach after its first section exceeded a dozen pages. The organization of this document is as follows. A brief description of notation is followed by a section containing a general treatment of the linear problem. After the 'fundamental theorem' is proved, two further subsections discuss the generation of equilibrium distributions and the issue of 'phase'. The final major section reviews parameterizations--that is, lattice functions--in two and four dimensions with a passing glance at the six-dimensional version. Appearances to the contrary, for the most part I have tried to restrict consideration to matters needed to understand the code in CHEF's libraries.
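The starting point of the linear analysis is that the Jacobian of a symplectic map satisfies M^T J M = J. A quick numeric check of that defining condition (illustrative only, not CHEF's API):

```python
import numpy as np

def is_symplectic(M, tol=1e-9):
    """Test M^T J M = J for a (2n x 2n) matrix, the precondition for the
    linear normal-form construction described in the memo."""
    n = M.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    return bool(np.allclose(M.T @ J @ M, J, atol=tol))

# a 2x2 rotation (one-degree-of-freedom phase advance) is symplectic
mu = 0.3
R = np.array([[np.cos(mu), np.sin(mu)],
              [-np.sin(mu), np.cos(mu)]])
```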
Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm
2015-01-01
This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced into the velocity updating formula in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO with the modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of digital recursive all-pass filters. PMID:26366168
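The structure of a PSO with an extra velocity term can be sketched as follows. The third term here (a pull toward the mean of personal bests, weighted by a hypothetical coefficient `c3`) only illustrates the idea of adding an adjusting factor; the paper's actual formula may differ:

```python
import numpy as np

def mpso(obj, dim, n_particles=30, iters=200,
         w=0.7, c1=1.5, c2=1.5, c3=0.5, seed=0):
    """Particle swarm minimizer with an extra velocity term (sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pval = np.array([obj(p) for p in x])
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2, r3 = rng.random((3, n_particles, dim))
        mean_best = pbest.mean(axis=0)
        v = (w * v
             + c1 * r1 * (pbest - x)        # cognitive term
             + c2 * r2 * (g - x)            # social term
             + c3 * r3 * (mean_best - x))   # hypothetical adjusting term
        x = x + v
        fx = np.array([obj(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pval)]
    return g, pval.min()
```

For a filter design, `obj` would measure the deviation of the all-pass filter's phase response (computed from the coefficient vector) from the desired phase.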
Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions
Collins, Maxwell D.; Xu, Jia; Grady, Leo; Singh, Vikas
2012-01-01
We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages. PMID:25278742
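The core Random Walker computation the formulation builds on is a sparse linear solve of the combinatorial Dirichlet problem; a minimal single-image sketch (the cosegmentation constraints themselves are omitted, and the graph encoding is illustrative):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def random_walker_prob(n, weights, fg_seeds, bg_seeds):
    """Random-walker probabilities on an n-node weighted graph: solve the
    combinatorial Dirichlet problem L_uu x_u = -L_us x_s with foreground
    seeds pinned to 1 and background seeds pinned to 0."""
    L = lil_matrix((n, n))
    for (i, j), w in weights.items():
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    L = L.tocsr()
    seeded = sorted(fg_seeds | bg_seeds)
    free = [i for i in range(n) if i not in set(seeded)]
    xs = np.array([1.0 if i in fg_seeds else 0.0 for i in seeded])
    x = np.zeros(n)
    x[seeded] = xs
    # Each unseeded potential is the probability a random walker starting
    # there reaches a foreground seed before a background seed.
    x[free] = spsolve(L[free][:, free].tocsc(), -L[free][:, seeded] @ xs)
    return x
```

This is exactly the kind of sparse linear-algebra kernel the abstract maps onto the GPU.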
Scaling A Moment-Rate Function For Small To Large Magnitude Events
NASA Astrophysics Data System (ADS)
Archuleta, Ralph; Ji, Chen
2017-04-01
Since the 1980s, seismologists have recognized that peak ground acceleration (PGA) and peak ground velocity (PGV) scale differently with magnitude for large and moderate earthquakes. In a recent paper (Archuleta and Ji, GRL 2016) we introduced an apparent moment-rate function (aMRF) that accurately predicts the scaling with magnitude of PGA, PGV, PWA (Wood-Anderson Displacement) and the ratio PGA/2πPGV (dominant frequency) for earthquakes 3.3 ≤ M ≤ 5.3. This apparent moment-rate function is controlled by two temporal parameters, tp and td, which are related to the time for the moment-rate function to reach its peak amplitude and the total duration of the earthquake, respectively. These two temporal parameters lead to a Fourier amplitude spectrum (FAS) of displacement that has two corners, in between which the spectral amplitudes decay as 1/f, where f denotes frequency. At higher or lower frequencies, the FAS of the aMRF looks like a single-corner Aki-Brune omega-squared spectrum. However, in the presence of attenuation the higher corner is almost certainly masked. Attempting to correct the spectrum to an Aki-Brune omega-squared spectrum will produce an "apparent" corner frequency that falls between the two corner frequencies of the aMRF. We argue that these two corners are the reason seismologists deduce a stress drop (e.g., Allmann and Shearer, JGR 2009) that is generally much smaller than the stress parameter used to produce ground motions from stochastic simulations (e.g., Boore, 2003 Pageoph.). The presence of two corners for the smaller magnitude earthquakes leads to several questions. Can deconvolution be successfully used to determine scaling from small to large earthquakes? Equivalently, will large earthquakes have a double corner? If large earthquakes are the sum of many smaller magnitude earthquakes, what should the displacement FAS look like for a large magnitude earthquake?
Can a combination of such a double-corner spectrum and random vibration theory explain the PGA and PGV scaling relationships for larger magnitudes?
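The flat / 1/f / omega-squared regimes described above can be illustrated with a Brune-like double-corner product spectrum; this is an illustrative parameterization, not the authors' exact aMRF spectrum:

```python
import math

def double_corner_fas(f, omega0, f1, f2):
    """Illustrative displacement FAS with two corner frequencies f1 < f2:
    flat below f1, decaying roughly as 1/f between f1 and f2, and as
    1/f^2 (omega-squared) above f2."""
    return omega0 / (math.sqrt(1.0 + (f / f1) ** 2)
                     * math.sqrt(1.0 + (f / f2) ** 2))
```

Fitting a single-corner omega-squared model to this shape would place the apparent corner somewhere between f1 and f2, which is the bias in inferred stress drop the abstract discusses.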
Malet-Larrea, Amaia; Goyenechea, Estíbaliz; Gastelurrutia, Miguel A; Calvo, Begoña; García-Cárdenas, Victoria; Cabases, Juan M; Noain, Aránzazu; Martínez-Martínez, Fernando; Sabater-Hernández, Daniel; Benrimoj, Shalom I
2017-12-01
Drug-related problems impose a significant clinical and economic burden on patients and the healthcare system. Medication review with follow-up (MRF) is a professional pharmacy service aimed at improving patients' health outcomes through optimization of their medication. The aim was to ascertain the economic impact of the MRF service provided in community pharmacies to aged polypharmacy patients, comparing MRF with usual care by undertaking a cost analysis and a cost-benefit analysis. The economic evaluation was based on a cluster randomized controlled trial. Patients in the intervention group (IG) received the MRF service and the comparison group (CG) received usual care. The analysis was conducted from the national health system (NHS) perspective over 6 months. Direct medical costs were included and expressed in euros at 2014 prices. Health benefits were estimated by assigning a monetary value to the quality-adjusted life years. One-way deterministic sensitivity analysis was undertaken in order to analyse the uncertainty. The analysis included 1403 patients (IG: n = 688 vs CG: n = 715). The cost analysis showed that the MRF saved 97 € per patient in 6 months. Extrapolating data to 1 year and assuming a fee for service of 22 € per patient-month, the estimated savings were 273 € per patient-year. The cost-benefit ratio revealed that for every 1 € invested in MRF, a benefit of 3.3 € to 6.2 € was obtained. The MRF provided health benefits to patients and substantial cost savings to the NHS. Investment in this service would represent an efficient use of healthcare resources.
NASA Astrophysics Data System (ADS)
Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng
2016-10-01
The magnetorheological finishing (MRF) process, based on the dwell-time method with a constant normal spacing for flexible polishing, introduces normal contour errors when fine-polishing complex surfaces such as aspheric surfaces. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics of MRF. A novel method is put forward to measure the normal contour errors on the machining track while polishing a complex surface, by continuously scanning the normal spacing between the workpiece and the laser range finder. The normal contour errors were measured dynamically, from which the workpiece's clamping precision, the multi-axis machining NC program and the dynamic performance of the MRF machine were verified and checked for the MRF process. A unit for measuring the normal contour errors of a complex surface on-machine was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method is presented to compensate the normal contour errors. An experiment polishing a 180 mm × 180 mm aspherical workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was kept below 10 µm, and the PV value of the polished surface accuracy was improved from 0.95λ to 0.09λ under the same process parameters. The technology in this paper has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics since 2014, where it is used in national large-scale optical engineering projects for processing ultra-precision optical parts.
Zhang, R; Li, R; Zhi, L; Xu, Y; Lin, Y; Chen, L
2018-02-01
1. Muscle regulatory factors (MRFs), including Myf5, Myf6 (MRF4/herculin), MyoD and MyoG (myogenin), play pivotal roles in muscle growth and development. Therefore, they are considered as candidate genes for meat production traits in livestock and poultry. 2. The objective of this study was to investigate the expression profiles of these genes in skeletal muscles (breast muscle and thigh muscle) at 5 developmental stages (0, 81, 119, 154 and 210 d old) of Tibetan chickens. Relationships between expressions of these genes and growth and carcass traits in these chickens were also estimated. 3. The expression profiles showed that in the breast muscle of both genders the mRNA levels of MRF genes were highest on the day of hatching, then declined significantly from d 0 to d 81, and fluctuated in a certain range from d 81 to d 210. However, the expression of Myf5, Myf6 and MyoG reached peaks in the thigh muscle in 119-d-old females and for MyoD in 154-d-old females, whereas the mRNA amounts of MRF genes in the male thigh muscle were in a narrow range from d 0 to d 210. 4. Correlation analysis suggested that gender had an influence on the relationships of MRF gene expression with growth traits. The mRNA levels of the MyoD and Myf5 genes in male breast muscle were positively correlated with several growth traits of Tibetan chickens (P < 0.05). No correlation was found between expressions of MRF genes and carcass traits of the chickens. 5. These results will provide a basis for functional studies of MRF genes on growth and development of Tibetan chickens, as well as selective breeding and resource exploration.
NASA Astrophysics Data System (ADS)
Galantowicz, J. F.; Picton, J.; Root, B.
2017-12-01
Passive microwave remote sensing can provide a distinct perspective on flood events by virtue of wide sensor fields of view, frequent observations from multiple satellites, and sensitivity through clouds and vegetation. During Hurricanes Harvey and Irma, we used AMSR2 (Advanced Microwave Scanning Radiometer 2, JAXA) data to map flood extents starting from the first post-storm rain-free sensor passes. Our standard flood mapping algorithm (FloodScan) derives flooded fraction from 22-km microwave data (AMSR2 or NASA's GMI) in near real time and downscales it to 90-m resolution using a database built from topography, hydrology, and Global Surface Water Explorer data and normalized to microwave data footprint shapes. During Harvey and Irma we tested experimental versions of the algorithm designed to map the maximum post-storm flood extent rapidly and made a variety of map products available immediately for use in storm monitoring and response. The maps have several unique features, including spanning the entire storm-affected area and providing multiple post-storm updates as flood water shifted and receded. From the daily maps we derived secondary products such as flood duration, maximum flood extent (Figure 1), and flood depth. In this presentation, we describe flood extent evolution, maximum extent, and local details as detected by the FloodScan algorithm in the wake of Harvey and Irma. We compare FloodScan results to other available flood mapping resources, note observed shortcomings, and describe improvements made in response. We also discuss how best-estimate maps could be updated in near real time by merging FloodScan products and data from other remote sensing systems and hydrological models.
Military Retirement Fund Audited Financial Report. Fiscal Year 2013
2013-12-09
accumulates funds to finance, on an actuarial basis, the liabilities of DoD under military retirement and survivor benefit programs. Within DoD, the...for the accounting, investing, payment of benefits, and reporting of the MRF. The DoD Office of the Actuary (OACT) within OUSD(P&R) calculates the... actuarial liability of the MRF. The Office of Military Personnel Policy within OUSD(P&R) issues policy related to MRS benefits. While the MRF does
Novel high-NA MRF toolpath supports production of concave hemispheres
NASA Astrophysics Data System (ADS)
Maloney, Chris; Supranowitz, Chris; Dumas, Paul
2017-10-01
Many optical system designs rely on high numerical aperture (NA) optics, including lithography and defense systems. Lithography systems require high-NA optics to image the fine patterns from a photomask, and many defense systems require the use of domes. The methods for manufacturing such optics with large half angles have often been treated as proprietary by most manufacturers due to the challenges involved. In the past, many high-NA concave surfaces could not be polished by magnetorheological finishing (MRF) due to collisions with the hardware underneath the polishing head. By leveraging concepts that were developed to enable freeform raster MRF capabilities, QED Technologies has implemented a novel toolpath to facilitate a new high-NA rotational MRF mode. This concept involves the use of the B-axis (rotational axis) in combination with a "virtual-axis" that utilizes the geometry of the polishing head. Hardware collisions that previously restricted the concave half angle limit can now be avoided and the new functionality has been seamlessly integrated into the software. This new MRF mode overcomes past limitations for polishing concave surfaces to now accommodate full concave hemispheres as well as extend the capabilities for full convex hemispheres. We discuss some of the previous limitations, and demonstrate the extended capabilities using this novel toolpath. Polishing results are used to qualify the new toolpath to ensure similar results to the "standard" rotational MRF mode.
Jeong, Chan-Seok; Kim, Dongsup
2016-02-24
Elucidating the cooperative mechanism of interconnected residues is an important component toward understanding the biological function of a protein. Coevolution analysis has been developed to model the coevolutionary information reflecting structural and functional constraints. Recently, several methods have been developed based on a probabilistic graphical model called the Markov random field (MRF), which have led to significant improvements for coevolution analysis; however, thus far, the performance of these models has mainly been assessed by focusing on the aspect of protein structure. In this study, we built an MRF model whose graphical topology is determined by the residue proximity in the protein structure, and derived a novel positional coevolution estimate utilizing the node weight of the MRF model. This structure-based MRF method was evaluated for three data sets, each of which annotates catalytic site, allosteric site, and comprehensively determined functional site information. We demonstrate that the structure-based MRF architecture can encode the evolutionary information associated with biological function. Furthermore, we show that the node weight can more accurately represent positional coevolution information compared to the edge weight. Lastly, we demonstrate that the structure-based MRF model can be reliably built with only a few aligned sequences in linear time. The results show that adoption of a structure-based architecture could be an acceptable approximation for coevolution modeling with efficient computational complexity.
Automating the process for locating no-passing zones using georeferencing data.
DOT National Transportation Integrated Search
2012-08-01
This research created a method of using global positioning system (GPS) coordinates to identify the location of no-passing zones in two-lane highways. Analytical algorithms were developed for analyzing the availability of sight distance along the ali...
Precision production: enabling deterministic throughput for precision aspheres with MRF
NASA Astrophysics Data System (ADS)
Maloney, Chris; Entezarian, Navid; Dumas, Paul
2017-10-01
Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres, but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user-friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menapace, J A; Davis, P J; Dixit, S
2007-03-07
Over the past four years we have advanced Magnetorheological Finishing (MRF) techniques and tools to imprint complex continuously varying topographical structures onto large-aperture (430 x 430 mm) optical surfaces. These optics, known as continuous phase plates (CPPs), are important for high-power laser applications requiring precise manipulation and control of beam-shape, energy distribution, and wavefront profile. MRF's unique deterministic sub-aperture polishing characteristics make it possible to imprint complex topographical information onto optical surfaces at spatial scale-lengths approaching 1 mm and surface peak-to-valleys as high as 22 µm. During this discussion, we will present the evolution of the MRF imprinting technology and the MRF tools designed to manufacture large-aperture 430 x 430 mm CPPs. Our results will show how the MRF removal function impacts and limits imprint fidelity and what must be done to arrive at a high-quality surface. We also present several examples of this imprinting technology for fabrication of phase correction plates and CPPs for use in high-power laser applications.
Shear Stress in Magnetorheological FInishing for Glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, C.; Shafrir, S.N.; Lambropoulos, J.C.
2009-04-28
We report in situ, simultaneous measurements of both drag and normal forces in magnetorheological finishing (MRF) for what is believed to be the first time, using a spot taking machine (STM) as a test bed to take MRF spots on stationary parts. The measurements are carried out over the entire area where material is being removed, i.e., the projected area of the MRF removal function/spot on the part surface, using a dual force sensor. This approach experimentally addresses the mechanisms governing material removal in MRF for optical glasses in terms of the hydrodynamic pressure and shear stress, applied by the hydrodynamic flow of magnetorheological fluid at the gap between the part surface and the STM wheel. This work demonstrates that the volumetric removal rate shows a positive linear dependence on shear stress. Shear stress exhibits a positive linear dependence on a material figure of merit that depends upon Young's modulus, fracture toughness, and hardness. A modified Preston's equation is proposed that better estimates MRF material removal rate for optical glasses by incorporating mechanical properties, shear stress, and velocity.
Shear stress in magnetorheological finishing for glasses.
Miao, Chunlin; Shafrir, Shai N; Lambropoulos, John C; Mici, Joni; Jacobs, Stephen D
2009-05-01
We report in situ, simultaneous measurements of both drag and normal forces in magnetorheological finishing (MRF) for what is believed to be the first time, using a spot taking machine (STM) as a test bed to take MRF spots on stationary parts. The measurements are carried out over the entire area where material is being removed, i.e., the projected area of the MRF removal function/spot on the part surface, using a dual force sensor. This approach experimentally addresses the mechanisms governing material removal in MRF for optical glasses in terms of the hydrodynamic pressure and shear stress, applied by the hydrodynamic flow of magnetorheological fluid at the gap between the part surface and the STM wheel. This work demonstrates that the volumetric removal rate shows a positive linear dependence on shear stress. Shear stress exhibits a positive linear dependence on a material figure of merit that depends upon Young's modulus, fracture toughness, and hardness. A modified Preston's equation is proposed that better estimates MRF material removal rate for optical glasses by incorporating mechanical properties, shear stress, and velocity.
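The modified Preston relation described in the last sentence can be sketched as a linear model in shear stress and velocity; the coefficient names below are invented, and the exact role of the material figure of merit in the paper's formula is simplified away:

```python
def preston_removal_rate(cp, pressure, velocity):
    """Classical Preston's equation: dz/dt = Cp * P * v
    (removal rate linear in normal pressure and relative velocity)."""
    return cp * pressure * velocity

def modified_preston_removal_rate(c_shear, shear_stress, velocity):
    """Modified form suggested by the abstract: removal rate scales linearly
    with shear stress rather than normal pressure. In the paper the
    coefficient additionally absorbs a material figure of merit built from
    Young's modulus, fracture toughness, and hardness."""
    return c_shear * shear_stress * velocity
```

The point of the modification is that, for MRF, the drag (shear) component of the force, not the normal pressure, is the better linear predictor of volumetric removal.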
Landcover classification in MRF context using Dempster-Shafer fusion for multisensor imagery.
Sarkar, Anjan; Banerjee, Anjan; Banerjee, Nilanjan; Brahma, Siddhartha; Kartikeyan, B; Chakraborty, Manab; Majumder, K L
2005-05-01
This work deals with multisensor data fusion to obtain landcover classification. The role of feature-level fusion using the Dempster-Shafer rule and that of data-level fusion in the MRF context is studied in this paper to obtain an optimally segmented image. Subsequently, segments are validated and classification accuracy for the test data is evaluated. Two examples of data fusion of optical images and a synthetic aperture radar image are presented, each set having been acquired on different dates. Classification accuracies of the technique proposed are compared with those of some recent techniques in literature for the same image data.
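The feature-level fusion step relies on Dempster's rule of combination; a minimal sketch for two sources, with invented landcover classes and mass values:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of class labels. Mass assigned to conflicting
    (empty-intersection) pairs is discarded and the rest renormalized."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}
```

In the multisensor setting each sensor contributes one mass function per pixel or segment, and the combined masses feed the landcover decision.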
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, T; Du, K; Bayouth, J
Purpose: Four-dimensional computed tomography (4DCT) can be used to evaluate longitudinal changes in pulmonary function. The sensitivity of such measurements to identify function change may be improved with reproducible breathing patterns. The purpose of this study was to determine if inhale was more consistent than exhale, i.e., lung expansion during inhalation compared to lung contraction during exhalation. Methods: Repeat 4DCT image data were acquired within a short time interval from 8 patients. Using a tissue-volume-preserving deformable image registration algorithm, Jacobian ventilation maps from the two scanning sessions were computed and compared in the same coordinate system for reproducibility analysis. Equivalent lung volumes (ELV) were used for 5 subjects and equivalent tidal volumes (ETV) for the 3 subjects who experienced a baseline shift between scans. In addition, the gamma pass rate was calculated from a modified gamma index evaluation between the two ventilation maps, using acceptance criteria of 2 mm distance-to-agreement and 5% ventilation difference. The gamma pass rates were then compared using a paired t-test to determine if there was a significant difference. Results: Inhalation was more reproducible than exhalation. In the 5 ELV subjects, 78.5% of the lung voxels met the gamma criteria for expansion during inhalation when comparing the two scans, while significantly fewer (70.9% of the lung voxels) met the gamma criteria for contraction during exhalation (p = .027). In the 8 total subjects analyzed, the average gamma pass rate for expansion during inhalation was 75.2%, while for contraction during exhalation it was 70.3%, which trended towards significance (p = .064). Conclusion: This work implies inhalation is more reproducible than exhalation, when equivalent respiratory volumes are considered. The reason for this difference is unknown.
Longitudinal investigation of pulmonary function change based on inhalation images appears appropriate for Jacobian-based measures of lung tissue expansion. NIH Grant: R01 CA166703.
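The 2 mm / 5% acceptance criterion can be sketched as a standard gamma evaluation (shown in 1-D for clarity; the paper's "modified" index may differ in detail, and the parameter names are illustrative):

```python
import numpy as np

def gamma_pass_rate(ref, test, spacing_mm=1.0, dta_mm=2.0, tol=0.05, search_mm=4.0):
    """Gamma evaluation on 1-D profiles: a reference point passes if some
    nearby test point satisfies
        (dist/DTA)^2 + (value_diff/tol)^2 <= 1.
    Defaults mirror the abstract's 2 mm distance-to-agreement and
    5% ventilation-difference criteria."""
    n = len(ref)
    reach = int(search_mm / spacing_mm)
    passed = 0
    for i in range(n):
        best = np.inf
        for j in range(max(0, i - reach), min(n, i + reach + 1)):
            dist = abs(i - j) * spacing_mm
            diff = test[j] - ref[i]
            best = min(best, (dist / dta_mm) ** 2 + (diff / tol) ** 2)
        if best <= 1.0:
            passed += 1
    return passed / n
```

Extending to 3-D ventilation maps replaces the 1-D index search with a search over a voxel neighbourhood.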
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao Daliang; Earl, Matthew A.; Luan, Shuang
2006-04-15
A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
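The simulated-annealing core of such a sequencer can be sketched as follows. This toy version tunes only the weights of fixed, invented binary apertures against a target intensity map, whereas CIMO also mutates the aperture shapes under collimator constraints:

```python
import math, random

def anneal_weights(apertures, target, iters=5000, t0=1.0, cooling=0.999):
    """Toy simulated-annealing weight optimization: minimize the squared
    difference between the weighted sum of binary apertures and the target
    intensity map. Returns (best weights, best cost)."""
    n = len(apertures)
    w = [0.0] * n

    def cost(ws):
        return sum((sum(ws[k] * apertures[k][i] for k in range(n)) - t) ** 2
                   for i, t in enumerate(target))

    cur = cost(w)
    best_w, best_cost = w[:], cur
    temp = t0
    for _ in range(iters):
        k = random.randrange(n)
        old = w[k]
        w[k] = max(0.0, old + random.uniform(-0.2, 0.2))  # weights stay non-negative
        new = cost(w)
        if new <= cur or random.random() < math.exp((cur - new) / temp):
            cur = new  # accept: always if better, probabilistically if worse
            if cur < best_cost:
                best_w, best_cost = w[:], cur
        else:
            w[k] = old  # reject: undo the move
        temp *= cooling
    return best_w, best_cost
```

Because the objective is a continuous match to the optimized map, no pre-clustering of intensity levels is needed, which is one of the two distinguishing CIMO features named above.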
Hierarchical probabilistic Gabor and MRF segmentation of brain tumours in MRI volumes.
Subbanna, Nagesh K; Precup, Doina; Collins, D Louis; Arbel, Tal
2013-01-01
In this paper, we present a fully automated hierarchical probabilistic framework for segmenting brain tumours from multispectral human brain magnetic resonance images (MRIs) using multiwindow Gabor filters and an adapted Markov Random Field (MRF) framework. In the first stage, a customised Gabor decomposition is developed, based on the combined-space characteristics of the two classes (tumour and non-tumour) in multispectral brain MRIs in order to optimally separate tumour (including edema) from healthy brain tissues. A Bayesian framework then provides a coarse probabilistic texture-based segmentation of tumours (including edema) whose boundaries are then refined at the voxel level through a modified MRF framework that carefully separates the edema from the main tumour. This customised MRF is not only built on the voxel intensities and class labels as in traditional MRFs, but also models the intensity differences between neighbouring voxels in the likelihood model, along with employing a prior based on local tissue class transition probabilities. The second inference stage is shown to resolve local inhomogeneities and impose a smoothing constraint, while also maintaining the appropriate boundaries as supported by the local intensity difference observations. The method was trained and tested on the publicly available MICCAI 2012 Brain Tumour Segmentation Challenge (BRATS) Database [1] on both synthetic and clinical volumes (low grade and high grade tumours). Our method performs well compared to state-of-the-art techniques, outperforming the results of the top methods in cases of clinical high grade and low grade tumour core segmentation by 40% and 45% respectively.
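A customised Gabor decomposition starts from kernels like the one below (the paper tunes window sizes and orientations jointly to the tumour and non-tumour classes; the parameters here are purely illustrative):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, freq):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope of width sigma
    multiplied by a cosine carrier of spatial frequency freq, oriented at
    angle theta. Banks of such kernels at several (sigma, theta, freq)
    choices form a multiwindow Gabor decomposition."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * freq * xr)
    return envelope * carrier
```

Convolving each MRI channel with the bank yields the texture features the Bayesian stage classifies before the MRF refinement.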
40 CFR 51.357 - Test procedures and standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...
40 CFR 51.357 - Test procedures and standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...
40 CFR 51.357 - Test procedures and standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...
40 CFR 51.357 - Test procedures and standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...
A comprehensive numerical analysis of background phase correction with V-SHARP.
Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand
2017-04-01
Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (R_m) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for R_m between 6 and 10 mm and f between 0 and 0.01 mm⁻¹, and for maps calculated with a Fourier domain algorithm for R_m between 10 and 15 mm and f between 0 and 0.0091 mm⁻¹. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.
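The SHARP operation the two parameters control can be sketched with FFT-based convolution. The function below is an illustrative single-radius version: `radius_vox` and `reg_threshold` stand in for the kernel radius and regularization parameter discussed above, while brain masking and the variable-kernel (V-SHARP) refinement near the brain boundary are omitted:

```python
import numpy as np

def sharp_filter(phase, radius_vox=6, reg_threshold=0.05):
    """SHARP sketch on a 3-D phase volume: convolve with (delta - spherical
    mean) via FFT, then invert the same kernel with Fourier coefficients
    below reg_threshold zeroed out. Harmonic background fields, which the
    spherical-mean operator annihilates, are thereby removed."""
    shape = phase.shape
    zz, yy, xx = np.meshgrid(*[np.arange(s) - s // 2 for s in shape], indexing="ij")
    sphere = ((xx ** 2 + yy ** 2 + zz ** 2) <= radius_vox ** 2).astype(float)
    sphere /= sphere.sum()                      # unit-mass spherical mean kernel
    delta = np.zeros(shape)
    delta[tuple(s // 2 for s in shape)] = 1.0
    K = np.fft.fftn(np.fft.ifftshift(delta - sphere))
    filtered = np.fft.fftn(phase) * K
    # Truncated deconvolution: the regularization threshold suppresses the
    # near-zero kernel frequencies instead of amplifying noise there.
    K_inv = np.where(np.abs(K) > reg_threshold, 1.0 / np.where(K == 0, 1, K), 0.0)
    return np.real(np.fft.ifftn(filtered * K_inv))
```

A spatially constant phase offset is harmonic, so it should be removed almost entirely, which gives a quick sanity check.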
Flattening maps for the visualization of multibranched vessels.
Zhu, Lei; Haker, Steven; Tannenbaum, Allen
2005-02-01
In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided.
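The first algorithm minimizes Dirichlet functionals, whose discrete core is a Laplace solve with pinned boundary vertices. The sketch below uses uniform edge weights in place of the paper's finite-element (cotangent) weights, so it produces a harmonic rather than a truly conformal flattening; the function name and graph encoding are illustrative:

```python
import numpy as np

def harmonic_flatten(n_vertices, edges, boundary_pos):
    """Harmonic (Dirichlet-energy-minimizing) flattening sketch: pin the
    boundary vertices to given 2-D positions and solve the discrete Laplace
    equation for the interior vertex positions."""
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:                       # graph Laplacian, uniform weights
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    fixed = sorted(boundary_pos)
    free = [i for i in range(n_vertices) if i not in boundary_pos]
    pos = np.zeros((n_vertices, 2))
    for i in fixed:
        pos[i] = boundary_pos[i]
    A = L[np.ix_(free, free)]
    b = -L[np.ix_(free, fixed)] @ pos[fixed]
    pos[free] = np.linalg.solve(A, b)        # interior = harmonic interpolation
    return pos
```

On a real triangulated vessel surface the dense Laplacian would be replaced by a sparse cotangent-weighted one, and the area-preserving adjustment would follow as a second stage.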
Flattening Maps for the Visualization of Multibranched Vessels
Zhu, Lei; Haker, Steven; Tannenbaum, Allen
2013-01-01
In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided. PMID:15707245
Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar
2009-02-01
Low pass filters affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn yields better quantification, correct diagnosis, and accurate interpretation by the physician. This study aims at evaluating low pass filters on SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the SPECT reconstructed cardiac azimuth and elevation angles. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning, and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning, and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM for a cutoff < 0.4 it fails to generate cardiac orientation due to oversmoothing, and gives an unstable response with FBP and MLEM. This study of the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into optimal selection of filter parameters.
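The frequency responses of two of the studied filter families can be written down directly. This is a sketch with illustrative cutoff and order values, using the standard textbook forms rather than any vendor implementation:

```python
import numpy as np

def butterworth_lowpass(freqs, cutoff, order):
    """Butterworth response |H(f)| = 1 / sqrt(1 + (f/fc)^(2n)).

    Higher order -> sharper roll-off; lower cutoff -> more smoothing,
    the trade-off the study evaluates for SPECT reconstruction.
    """
    return 1.0 / np.sqrt(1.0 + (freqs / cutoff) ** (2 * order))

def hann_lowpass(freqs, cutoff):
    """Hanning window filter: 0.5 + 0.5*cos(pi*f/fc) for |f| <= fc, else 0."""
    h = 0.5 + 0.5 * np.cos(np.pi * freqs / cutoff)
    return np.where(np.abs(freqs) <= cutoff, h, 0.0)
```

At the cutoff frequency the Butterworth response is 1/sqrt(2) regardless of order, while the Hanning response has already fallen to zero, which is why the two families smooth reconstructions quite differently at the same nominal cutoff.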
Weighted community detection and data clustering using message passing
NASA Astrophysics Data System (ADS)
Shi, Cheng; Liu, Yanchen; Zhang, Pan
2018-03-01
Grouping objects into clusters based on the similarities or weights between them is one of the most important problems in science and engineering. In this work, by extending message-passing algorithms and spectral algorithms proposed for an unweighted community detection problem, we develop a non-parametric method based on statistical physics, by mapping the problem to the Potts model at the critical temperature of spin-glass transition and applying belief propagation to solve the marginals corresponding to the Boltzmann distribution. Our algorithm is robust to over-fitting and gives a principled way to determine whether there are significant clusters in the data and how many clusters there are. We apply our method to different clustering tasks. In the community detection problem in weighted and directed networks, we show that our algorithm significantly outperforms existing algorithms. In the clustering problem, where the data were generated by mixture models in the sparse regime, we show that our method works all the way down to the theoretical limit of detectability and gives accuracy very close to that of the optimal Bayesian inference. In the semi-supervised clustering problem, our method only needs several labels to work perfectly in classic datasets. Finally, we further develop Thouless-Anderson-Palmer equations which heavily reduce the computation complexity in dense networks but give almost the same performance as belief propagation.
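The core machinery described above can be sketched with a tiny loopy belief-propagation loop for a weighted two-state Potts model, with an optional pinned node to mimic the semi-supervised setting. This is our own minimal illustration, not the authors' implementation; the coupling strength beta, the edge weights, and the update schedule are all illustrative:

```python
import numpy as np

def potts_bp(n, edges, beta=2.0, pins=None, iters=200, tol=1e-10, q=2):
    """Belief propagation for a weighted q-state Potts model (sketch).

    edges: list of (i, j, w); pins: {node: state} fixes a node's label.
    Returns per-node marginals of the Boltzmann distribution
    (exact on trees, approximate on loopy graphs).
    """
    nbrs = {i: [] for i in range(n)}
    psi = {}
    for i, j, w in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
        # ferromagnetic Potts coupling: reward equal labels
        p = np.exp(beta * w * np.eye(q))
        psi[(i, j)] = psi[(j, i)] = p
    field = {i: np.ones(q) for i in range(n)}
    if pins:
        for i, s in pins.items():
            field[i] = np.full(q, 1e-6)
            field[i][s] = 1.0
    msg = {(i, j): np.ones(q) / q for i in nbrs for j in nbrs[i]}
    for _ in range(iters):
        delta = 0.0
        for (i, j), old in list(msg.items()):
            prod = field[i].copy()
            for k in nbrs[i]:
                if k != j:
                    prod *= msg[(k, i)]
            new = psi[(i, j)].T @ prod      # sum over sender's states
            new /= new.sum()
            delta = max(delta, np.abs(new - old).max())
            msg[(i, j)] = new
        if delta < tol:
            break
    beliefs = {}
    for i in range(n):
        b = field[i].copy()
        for k in nbrs[i]:
            b *= msg[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs
```

On a toy graph of two tightly knit triangles joined by one weak edge, pinning a single node biases its whole cluster toward that label, the behavior the abstract exploits when "several labels" suffice in the semi-supervised setting.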
Optimization analysis of a new vane MRF damper
NASA Astrophysics Data System (ADS)
Zhang, J. Q.; Feng, Z. Z.; Jing, Q.
2009-02-01
The primary purpose of this study was to provide an optimization analysis of certain characteristics and benefits of a vane MRF damper. Based on the structure of a conventional vane hydraulic damper for heavy vehicles, a narrow arc gap between the clapboard and the rotary vane axle, one of which rotates relative to the other, was designed as the MRF valve, and a mathematical model of the damping was derived. Subsequently, finite element analysis of the electromagnetic circuit was performed in ANSYS to carry out the optimization. Several approaches were presented to increase the adjustable damping ratio while keeping the initial damping forces constant and to increase fluid dwell time in the magnetic field. The results show that the method is useful in the design of MR dampers and that the adjustable damping range of the vane MRF damper can meet the requirements of a heavy vehicle semi-active suspension system.
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
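The flavor of the first heuristic, a level-based priority list with each ready module assigned to the processor that minimizes its finishing time under communication costs, can be sketched as follows. This is a simplified stand-in, not the paper's algorithm: it omits the weighted bipartite matching and simulated annealing, and the task DAG, durations, and communication cost below are hypothetical:

```python
from collections import defaultdict

def list_schedule(durations, deps, comm, p):
    """Greedy list scheduling of a task DAG onto p identical processors.

    durations: {task: time}; deps: {task: [predecessors]};
    comm: cost added when a dependence crosses processors.
    Priority = level (longest path to an exit task); each ready task
    goes to the processor minimizing its finishing time.
    """
    succ = defaultdict(list)
    for t, preds in deps.items():
        for d in preds:
            succ[d].append(t)
    level = {}
    def lvl(t):
        if t not in level:
            level[t] = durations[t] + max((lvl(s) for s in succ[t]), default=0)
        return level[t]
    order = sorted(durations, key=lambda t: -lvl(t))
    free = [0.0] * p                       # processor ready times
    finish, place = {}, {}
    while len(finish) < len(durations):
        # highest-priority task whose predecessors are all scheduled
        t = next(t for t in order
                 if t not in finish and all(d in finish for d in deps.get(t, [])))
        best = None
        for proc in range(p):
            start = free[proc]
            for d in deps.get(t, []):
                arrive = finish[d] + (0 if place[d] == proc else comm)
                start = max(start, arrive)
            f = start + durations[t]
            if best is None or f < best[0]:
                best = (f, proc)
        finish[t], place[t] = best
        free[best[1]] = best[0]
    return finish, place, max(finish.values())
```

On a toy DAG with tasks A(2) and B(3) feeding C(2), two processors, and unit communication cost, the heuristic runs A and B in parallel and co-locates C with B, giving a makespan of 5.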
Chen, Shaoshan; Li, Shengyi; Peng, Xiaoqiang; Hu, Hao; Tie, Guipeng
2015-02-20
A new nonaqueous and abrasive-free magnetorheological finishing (MRF) method is adopted for processing a KDP crystal. MRF polishing readily results in the embedding of carbonyl iron (CI) powders, and Fe contamination on the KDP crystal surface seriously degrades the laser-induced damage threshold. This paper puts forward an appropriate MRF polishing process to avoid such embedding. Polishing results show that the embedding of CI powders can be avoided by controlling the polishing parameters. Furthermore, magnetorheological fluid residue inevitably remains on the KDP crystal surface after polishing, and its Fe contamination cannot be removed completely by initial ultrasonic cleaning. To solve this problem, ion beam figuring (IBF) polishing is introduced to remove the impurity layer. The content of Fe contamination and the depth of impurity elements are then measured by time-of-flight secondary ion mass spectrometry. The measurements show that no CI powders are embedded in the MRF-polished surface and no Fe contamination remains after the IBF polishing process. This verifies the feasibility of MRF polishing followed by IBF polishing (cleaning) for processing a KDP crystal.
Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation
NASA Astrophysics Data System (ADS)
Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei
2016-11-01
Cooled infrared detector arrays often suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking, and is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison to several previously published methods. The algorithm not only effectively corrects common FPN such as stripes, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we describe our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm on the FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
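The spatial low-pass / temporal high-pass idea with a thresholded update can be sketched as follows. This is our own simplified reconstruction of the general technique, not the THP&GM implementation; the 3x3 box blur, learning rate, and threshold values are illustrative:

```python
import numpy as np

def thp_nuc(frames, alpha=0.05, thresh=2.0):
    """Scene-based NUC sketch: spatial low-pass + temporal high-pass.

    For each frame, the difference between a pixel and its spatially
    low-passed neighborhood estimates the fixed-pattern noise; a slow
    temporal average of that difference becomes the per-pixel offset
    correction. Updates are skipped where the difference exceeds
    `thresh` (the adaptive-threshold idea that suppresses ghosting).
    """
    offset = np.zeros_like(frames[0], dtype=float)
    out = []
    for f in frames:
        f = f.astype(float)
        # 3x3 box blur as the spatial low-pass (edge-padded)
        pad = np.pad(f, 1, mode='edge')
        blur = sum(pad[i:i + f.shape[0], j:j + f.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
        diff = f - blur                        # candidate FPN estimate
        update = np.abs(diff - offset) < thresh
        offset[update] += alpha * (diff[update] - offset[update])
        out.append(f - offset)
    return out, offset
```

On a static scene with a high-frequency fixed pattern superimposed, the offset estimate converges geometrically and the corrected frames flatten out, while the threshold prevents strong scene edges from being absorbed into the correction.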
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J; Pouliot, J
2015-06-15
Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires 4 inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). Then, AUTODIRECT uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images, with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student’s t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT are well matched to the actual error distributions, within 1–6% for 10 virtual phantoms and 9% for the physical phantom. For one of the cases, though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ-, and voxel-specific DIR uncertainty estimates.
This ability would be useful for patient-specific DIR quality assurance.
Toward Magnetorheological Finishing of Magnetic Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafrir, S.N.; Lambropoulos, J.C.; Jacobs, S.D.
2007-10-24
Magnetorheological finishing (MRF) is a precision finishing process traditionally limited to processing only nonmagnetic materials, e.g., optical glasses, ceramics, polymers, and metals. Here we demonstrate that MRF can be used for material removal from magnetic material surfaces. Our approach is to place an MRF spot on machined surfaces of magnetic WC-Co materials. The resulting surface roughness is comparable to that produced on nonmagnetic materials. This spotting technique may be used to evaluate the depth of subsurface damage, or deformed layer, induced by earlier manufacturing steps, such as grinding and lapping.
Neurones associated with saccade metrics in the monkey central mesencephalic reticular formation
Cromer, Jason A; Waitzman, David M
2006-01-01
Neurones in the central mesencephalic reticular formation (cMRF) begin to discharge prior to saccades. These long lead burst neurones interact with major oculomotor centres including the superior colliculus (SC) and the paramedian pontine reticular formation (PPRF). Three different functions have been proposed for neurones in the cMRF: (1) to carry eye velocity signals that provide efference copy information to the SC (feedback), (2) to provide duration signals from the omnipause neurones to the SC (feedback), or (3) to participate in the transformation from the spatial encoding of a target selection signal in the SC into the temporal pattern of discharge used to drive the excitatory burst neurones in the pons (feed-forward). According to each respective proposal, specific predictions about cMRF neuronal discharge have been formulated. Individual neurones should: (1) encode instantaneous eye velocity, (2) burst specifically in relation to saccade duration but not to other saccade metrics, or (3) have a spectrum of weak to strong correlations to saccade dynamics. To determine if cMRF neurones could subserve these multiple oculomotor roles, we examined neuronal activity in relation to a variety of saccade metrics including amplitude, velocity and duration. We found separate groups of cMRF neurones that have the characteristics predicted by each of the proposed models. We also identified a number of subgroups for which no specific model prediction had previously been established. We found that we could accurately predict the neuronal firing pattern during one type of saccade behaviour (visually guided) using the activity during an alternative behaviour with different saccade metrics (memory guided saccades). 
We suggest that this evidence of a close relationship of cMRF neuronal discharge to individual saccade metrics supports the hypothesis that the cMRF participates in multiple saccade control pathways carrying saccade amplitude, velocity and duration information within the brainstem. PMID:16308353
Streaming data analytics via message passing with application to graph algorithms
Plimpton, Steven J.; Shead, Tim
2014-05-06
The need to process streaming data, which arrives continuously at high volume in real time, arises in a variety of contexts including data produced by experiments, collections of environmental or network sensors, and running simulations. Streaming data can also be formulated as queries or transactions which operate on a large dynamic data store, e.g. a distributed database. We describe a lightweight, portable framework named PHISH which enables a set of independent processes to compute on a stream of data in a distributed-memory parallel manner. Datums are routed between processes in patterns defined by the application. PHISH can run on top of either message-passing via MPI or sockets via ZMQ. The former means streaming computations can be run on any parallel machine which supports MPI; the latter allows them to run on a heterogeneous, geographically dispersed network of machines. We illustrate how PHISH can support streaming MapReduce operations, and describe streaming versions of three algorithms for large, sparse graph analytics: triangle enumeration, subgraph isomorphism matching, and connected component finding. Lastly, we also provide benchmark timings for MPI versus socket performance of several kernel operations useful in streaming algorithms.
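Streaming triangle enumeration, one of the three graph kernels mentioned, can be sketched in a few lines. This is a single-process illustration of the idea, not the distributed PHISH implementation: edges arrive one at a time, only an adjacency map is retained, and each triangle is emitted when its closing edge arrives:

```python
from collections import defaultdict

def stream_triangles(edge_stream):
    """Streaming triangle enumeration sketch: process one edge at a
    time and emit every triangle the new edge closes."""
    adj = defaultdict(set)
    triangles = []
    for u, v in edge_stream:
        if v in adj[u]:
            continue                    # ignore duplicate edges
        # any common neighbor closes a triangle with the new edge
        for w in adj[u] & adj[v]:
            triangles.append(tuple(sorted((u, v, w))))
        adj[u].add(v)
        adj[v].add(u)
    return triangles
```

In a PHISH-style pipeline the adjacency state would be sharded across processes by vertex and the set intersection performed via message exchanges, but the per-edge logic is the same.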
Energy-efficient MRF clutch avoiding no-load losses
NASA Astrophysics Data System (ADS)
Güth, Dirk; Schamoni, Markus; Maas, Jürgen
2013-04-01
A challenge to the commercial use of actuators like brakes and clutches based on magnetorheological fluids (MRF) is their persistent no-load losses. A complete torque-free separation of these actuators has so far been inherently impossible because the fluid permanently engages the torque-transmitting parts. Especially for applications with high rotational speeds of up to several thousand RPM, this drawback of MRF actuators is not acceptable. In this paper, a novel approach is presented that allows a controlled movement of the MRF from a torque-transmitting volume of the shear gap into an inactive volume, enabling a complete separation of the fluid-engaging surfaces. This behavior is modeled for a novel clutch design using ferrohydrodynamics, and simulations are performed to investigate the transitions between engaged and idle modes. Measurements performed with a realized clutch show that the viscosity-induced drag torque can be reduced significantly.
Effects of the gap slope on the distribution of removal rate in Belt-MRF.
Wang, Dekang; Hu, Haixiang; Li, Longxiang; Bai, Yang; Luo, Xiao; Xue, Donglin; Zhang, Xuejun
2017-10-30
Belt magnetorheological finishing (Belt-MRF) is a promising tool for large-optics processing. However, before a spot is used, its shape should be designed and controlled via the polishing gap. Previous research revealed a remarkably nonlinear relationship between the removal function and the normal pressure distribution; the pressure is in turn nonlinearly related to the gap geometry, precluding direct prediction of the removal function from the polishing gap. Here, we use the concepts of gap slope and virtual ribbon to develop a model of removal profiles in Belt-MRF. Between the belt and the workpiece in the main polishing area, a gap that changes linearly along the flow direction was created using a flat-bottom magnet box. The pressure distribution and removal function were calculated, and simulations were consistent with experiments. Different removal functions, consistent with theoretical calculations, were obtained by adjusting the gap slope. This approach allows prediction of removal functions in Belt-MRF.
Improved MRF spot characterization with QIS metrology
NASA Astrophysics Data System (ADS)
Westover, Sandi; Hall, Christopher; DeMarco, Michael
2013-09-01
Careful characterization of the removal function of sub-aperture polishing tools is critical for optimum polishing results. Magnetorheological finishing (MRF®) creates a polishing tool, or "spot", that is unique both for its locally high removal rate and high slope content. For a variety of reasons, which will be discussed, longer duration spots are beneficial to improving MRF performance, but longer spots yield higher slopes rendering them difficult to measure with adequate fidelity. QED's Interferometer for Stitching (QIS™) was designed to measure the high slope content inherent to non-null sub-aperture stitching interferometry of aspheres. Based on this unique capability the QIS was recently used to measure various MRF spots in an attempt to see if there was a corresponding improvement in MRF performance as a result of improved knowledge of these longer duration spots. The results of these tests will be presented and compared with those of a standard general purpose interferometer.
Removal rate model for magnetorheological finishing of glass.
Degroote, Jessica E; Marino, Anne E; Wilson, John P; Bishop, Amy L; Lambropoulos, John C; Jacobs, Stephen D
2007-11-10
Magnetorheological finishing (MRF) is a deterministic subaperture polishing process. The process uses a magnetorheological (MR) fluid that consists of micrometer-sized, spherical, magnetic carbonyl iron (CI) particles, nonmagnetic polishing abrasives, water, and stabilizers. Material removal occurs when the CI and nonmagnetic polishing abrasives shear material off the surface being polished. We introduce a new MRF material removal rate model for glass. This model contains terms for the near surface mechanical properties of glass, drag force, polishing abrasive size and concentration, chemical durability of the glass, MR fluid pH, and the glass composition. We introduce quantitative chemical predictors for the first time, to the best of our knowledge, into an MRF removal rate model. We validate individual terms in our model separately and then combine all of the terms to show the whole MRF material removal model compared with experimental data. All of our experimental data were obtained using nanodiamond MR fluids and a set of six optical glasses.
Backup Attitude Control Algorithms for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
ODonnell, James R., Jr.; Andrews, Stephen F.; Ericsson-Jackson, Aprille J.; Flatley, Thomas W.; Ward, David K.; Bay, P. Michael
1999-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE) spacecraft. The MAP spacecraft will perform its mission, studying the early origins of the universe, in a Lissajous orbit around the Earth-Sun L(sub 2) Lagrange point. Due to limited mass, power, and financial resources, a traditional reliability concept involving fully redundant components was not feasible. This paper will discuss the redundancy philosophy used on MAP, describe the hardware redundancy selected (and why), and present backup modes and algorithms that were designed in lieu of additional attitude control hardware redundancy to improve the odds of mission success. Three of these modes have been implemented in the spacecraft flight software. The first onboard mode allows the MAP Kalman filter to be used with digital sun sensor (DSS) derived rates, in case of the failure of one of MAP's two two-axis inertial reference units. Similarly, the second onboard mode allows a star tracker only mode, using attitude and derived rate from one or both of MAP's star trackers for onboard attitude determination and control. The last backup mode onboard allows a sun-line angle offset to be commanded that will allow solar radiation pressure to be used for momentum management and orbit stationkeeping. In addition to the backup modes implemented on the spacecraft, two backup algorithms have been developed in the event of less likely contingencies. One of these is an algorithm for implementing an alternative scan pattern to MAP's nominal dual-spin science mode using only one or two reaction wheels and thrusters. Finally, an algorithm has been developed that uses thruster one shots while in science mode for momentum management. This algorithm has been developed in case system momentum builds up faster than anticipated, to allow adequate momentum management while minimizing interruptions to science. 
In this paper, each mode and algorithm will be discussed, and simulation results presented.
Velmurugan, Natarajan; Hwang, Grim; Sathishkumar, Muthuswamy; Choi, Tae Kie; Lee, Kui-Jae; Oh, Byung-Taek; Lee, Yang-Soo
2010-01-01
A heavy metal contaminated soil sample collected from a mine in Chonnam Province of South Korea was found to be a source of heavy-metal-adsorbing biosorbents. Chemical analyses showed high contents of lead (Pb) at 357 mg/kg and cyanide (CN) at 14.6 mg/kg in the soil. The experimental results showed that Penicillium sp. MRF-1 was the most lead-resistant fungus among the four metal-tolerant fungal species isolated from the soil. Molecular characterization of Penicillium sp. MRF-1 was performed using ITS region sequences. The effects of pH, temperature, and contact time on adsorption of Pb(II) by Penicillium sp. MRF-1 were studied. Favorable conditions for maximum biosorption were found at pH 4 with 3 h contact time, and biosorption of Pb(II) gradually increased with increasing temperature. The performance of the biosorbent was described using Langmuir and Freundlich isotherms, and adsorption kinetics were studied using pseudo-first-order and pseudo-second-order models. The biosorbent Penicillium sp. MRF-1 showed maximum desorption under alkaline conditions. Consistent adsorption/desorption performance over repeated cycles validated its efficacy at large scale. SEM studies revealed surface modification of the fungal biomass under metal stress, and FT-IR results showed the presence of amino groups in the surface structure of the biosorbent. In conclusion, the new biosorbent Penicillium sp. MRF-1 may potentially be used as an inexpensive, easily cultivable material for the removal of lead from aqueous solution.
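The Langmuir analysis mentioned above can be sketched via its standard linearization, C/q = C/q_max + 1/(K·q_max). The helper below and the data in the example are our own illustration, not the paper's measurements; the Freundlich form q = K_F·C^(1/n) can be linearized analogously with logarithms:

```python
import numpy as np

def fit_langmuir(C, q):
    """Fit the Langmuir isotherm q = q_max*K*C / (1 + K*C) by ordinary
    least squares on the linearized form C/q = C/q_max + 1/(K*q_max).
    Returns (q_max, K). C: equilibrium concentrations; q: uptakes.
    """
    slope, intercept = np.polyfit(C, C / q, 1)
    q_max = 1.0 / slope          # slope = 1 / q_max
    K = slope / intercept        # intercept = 1 / (K * q_max)
    return q_max, K
```

On synthetic data generated from a known isotherm, the linearized fit recovers the maximum capacity q_max and the affinity constant K exactly, which is the sanity check one would run before fitting measured biosorption data.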
Endoluminal surface registration for CT colonography using haustral fold matching
Hampshire, Thomas; Roth, Holger R.; Helbren, Emma; Plumb, Andrew; Boone, Darren; Slabaugh, Greg; Halligan, Steve; Hawkes, David J.
2013-01-01
Computed Tomographic (CT) colonography is a technique used for the detection of bowel cancer or potentially precancerous polyps. The procedure is performed routinely with the patient both prone and supine to differentiate fixed colonic pathology from mobile faecal residue. Matching corresponding locations is difficult and time-consuming for radiologists due to colonic deformations that occur during patient repositioning. We propose a novel method to establish correspondence between the two acquisitions automatically. The problem is first simplified by detecting haustral folds using a graph cut method applied to a curvature-based metric computed on a surface mesh generated from segmentation of the colonic lumen. A virtual camera is used to create a set of images that provide a metric for matching pairs of folds between the prone and supine acquisitions. Image patches are generated at the fold positions using depth map renderings of the endoluminal surface and optimised by performing a virtual camera registration over a restricted set of degrees of freedom. The intensity difference between image pairs, along with additional neighbourhood information to enforce geometric constraints over a 2D parameterisation of the 3D space, are used as unary and pair-wise costs respectively, and included in a Markov Random Field (MRF) model to estimate the maximum a posteriori fold labelling assignment. The method achieved fold matching accuracy of 96.0% and 96.1% in patient cases with and without local colonic collapse. Moreover, it improved upon an existing surface-based registration algorithm by providing an initialisation. The set of landmark correspondences is used to non-rigidly transform a 2D source image derived from a conformal mapping process on the 3D endoluminal surface mesh.
This achieves full surface correspondence between prone and supine views and can be further refined with an intensity based registration showing a statistically significant improvement (p < 0.001), and decreasing mean error from 11.9 mm to 6.0 mm measured at 1743 reference points from 17 CTC datasets. PMID:23845949
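MAP labelling of a pairwise MRF, the inference step at the heart of the fold-matching model above, can be illustrated with a tiny iterated-conditional-modes optimizer. This is a lightweight local stand-in for exact MAP inference, not the paper's solver, and the unary and pairwise costs in the example are hypothetical:

```python
import numpy as np

def icm_map(unary, edges, pairwise, iters=20):
    """Iterated conditional modes: greedy local minimization of a
    pairwise MRF energy (sum of unary costs plus edge costs).

    unary: (n_nodes, n_labels) cost array; edges: list of (i, j);
    pairwise: (n_labels, n_labels) cost table shared by all edges.
    """
    n, L = unary.shape
    labels = unary.argmin(axis=1)            # independent initialization
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(iters):
        changed = False
        for i in range(n):
            cost = unary[i].copy()
            for j in nbrs[i]:
                cost += pairwise[:, labels[j]]   # cost of each label vs neighbor
            new = int(cost.argmin())
            if new != labels[i]:
                labels[i] = new
                changed = True
        if not changed:
            break
    return labels
```

On a three-node chain with a Potts-style smoothness cost, a weakly ambiguous middle node is pulled into agreement with its two confident neighbours, the same smoothing effect the pairwise geometric constraints provide in the fold-matching MRF.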
Ippolito, Davide; Drago, Silvia Girolama; Franzesi, Cammillo Talei; Fior, Davide; Sironi, Sandro
2016-01-01
AIM: To assess the diagnostic accuracy of multidetector-row computed tomography (MDCT), as compared with conventional magnetic resonance imaging (MRI), in identifying mesorectal fascia (MRF) invasion in rectal cancer patients. METHODS: Ninety-one patients with biopsy-proven rectal adenocarcinoma referred for thoracic and abdominal CT staging were enrolled in this study. Contrast-enhanced MDCT scans were performed on a 256-row scanner (iCT, Philips) with the following acquisition parameters: tube voltage 120 kV, tube current 150-300 mAs. Imaging data were reviewed as axial and as multiplanar reconstruction (MPR) images along the rectal tumor axis. The MRI study, performed at 1.5 T with a dedicated phased-array multicoil, included multiplanar T2 and axial T1 sequences and diffusion-weighted images (DWI). Axial and MPR CT images were independently compared to MRI, and MRF involvement was determined. The diagnostic accuracy of both modalities was compared and statistically analyzed. RESULTS: According to MRI, the MRF was involved in 51 patients and not involved in 40 patients. DWI allowed recognition of the tumor as a focal mass with high signal intensity on high b-value images, compared with the signal of the normal adjacent rectal wall or with the lower-intensity tissue background. The number of patients correctly staged by the native axial CT images was 71 out of 91 (41 with involved MRF; 30 with uninvolved MRF), while using MPR 80 patients were correctly staged (45 with involved MRF; 35 with uninvolved MRF). Local tumor staging suggested by MDCT agreed with that of MRI, with axial CT images achieving a sensitivity and specificity of 80.4% and 75%, positive predictive value (PPV) 80.4%, negative predictive value (NPV) 75%, and accuracy 78%; with MPR, sensitivity and specificity increased to 88% and 87.5%, PPV was 90%, NPV 85.36%, and accuracy 88%.
MPR images showed higher diagnostic accuracy for MRF involvement than native axial images, with magnetic resonance images as the reference; the difference in accuracy was statistically significant (P = 0.02). CONCLUSION: The new-generation CT scanner, using high-resolution MPR images, represents a reliable diagnostic tool in the assessment of loco-regional and whole-body staging of advanced rectal cancer, especially in patients with MRI contraindications. PMID:27239115
Machine-checked proofs of the design and implementation of a fault-tolerant circuit
NASA Technical Reports Server (NTRS)
Bevier, William R.; Young, William D.
1990-01-01
A formally verified implementation of the 'oral messages' algorithm of Pease, Shostak, and Lamport is described. An abstract implementation of the algorithm is verified to achieve interactive consistency in the presence of faults. This abstract characterization is then mapped down to a hardware level implementation which inherits the fault-tolerant characteristics of the abstract version. All steps in the proof were checked with the Boyer-Moore theorem prover. A significant result is the demonstration of a fault-tolerant device that is formally specified and whose implementation is proved correct with respect to this specification. A significant simplifying assumption is that the redundant processors behave synchronously. A mechanically checked proof that the oral messages algorithm is 'optimal' in the sense that no algorithm which achieves agreement via similar message passing can tolerate a larger proportion of faulty processors is also described.
Stabilization of multiple rib fractures in a canine model.
Huang, Ke-Nan; Xu, Zhi-Fei; Sun, Ju-Xian; Ding, Xin-Yu; Wu, Bin; Li, Wei; Qin, Xiong; Tang, Hua
2014-12-01
Operative stabilization is frequently used in the clinical treatment of multiple rib fractures (MRF); however, no ideal material exists for use in this fixation. This study investigates a newly developed biodegradable plate system for the stabilization of MRF. Silk fiber-reinforced polycaprolactone (SF/PCL) plates were developed for rib fracture stabilization and studied using a canine flail chest model. Adult mongrel dogs were divided into three groups: one group received the SF/PCL plates, one group received standard clinical steel plates, and the final group did not undergo operative fracture stabilization (n = 6 for each group). Radiographic, mechanical, and histologic examination was performed to evaluate the effectiveness of the biodegradable material for the stabilization of the rib fractures. No nonunion and no infections were found when using SF-PCL plates. The fracture sites collapsed in the untreated control group, leading to obvious chest wall deformity not encountered in the two groups that underwent operative stabilization. Our experimental study shows that the SF/PCL plate has the biocompatibility and mechanical strength suitable for fixation of MRF and is potentially ideal for the treatment of these injuries. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sung, Yun Kyung; Ahn, Byung Wook; Kang, Tae Jin
2012-03-01
One-dimensional magnetic nanostructures have recently attracted much attention because of their intriguing properties, which are not realized in bulk or particle form. These nanostructures are potentially useful for applications in ultrahigh-density data storage, sensors, and bulletproof vests. The magnetic particles in blend-type magnetic nanofibers cannot fully align with an external magnetic field because they are arrested in the solid polymer matrix. To improve the mobility of the magnetic particles, we used a magnetorheological fluid (MRF), which has good mobility and dispersibility. Superparamagnetic core/sheath composite nanofibers were obtained from MRF and poly(ethylene terephthalate) (PET) solution via a coaxial electrospinning technique, which is well suited for fabricating core/sheath nanofibers encapsulating MRF materials within a polymer sheath. The magnetic nanoparticles in the MRF were dispersed within the core of the nanofibers. The core/sheath magnetic composite nanofibers exhibited superparamagnetic behavior at room temperature, and the magnetic nanoparticles in the MRF responded well to an applied magnetic field. In addition, the mechanical properties of the nanofibers were improved in a magnetic field. This study aimed to fabricate core/sheath magnetic composite nanofibers using coaxial electrospinning and to characterize the magnetic as well as mechanical properties of the composite nanofibers.
OnEarth: An Open Source Solution for Efficiently Serving High-Resolution Mapped Image Products
NASA Astrophysics Data System (ADS)
Thompson, C. K.; Plesea, L.; Hall, J. R.; Roberts, J. T.; Cechini, M. F.; Schmaltz, J. E.; Alarcon, C.; Huang, T.; McGann, J. M.; Chang, G.; Boller, R. A.; Ilavajhala, S.; Murphy, K. J.; Bingham, A. W.
2013-12-01
This presentation introduces OnEarth, a server-side software package originally developed at the Jet Propulsion Laboratory (JPL), that facilitates network-based, minimum-latency geolocated image access independent of image size or spatial resolution. The key component in this package is the Meta Raster Format (MRF), a specialized raster file extension to the Geospatial Data Abstraction Library (GDAL) consisting of an internal indexed pyramid of image tiles. Imagery to be served is converted to the MRF format and made accessible online via an expandable set of server modules handling requests in several common protocols, including the Open Geospatial Consortium (OGC) compliant Web Map Tile Service (WMTS) as well as Tiled WMS and Keyhole Markup Language (KML). OnEarth has recently transitioned to open source status and is maintained and actively developed as part of GIBS (Global Imagery Browse Services), a collaborative project between JPL and Goddard Space Flight Center (GSFC). The primary function of GIBS is to enhance and streamline the data discovery process and to support near real-time (NRT) applications via the expeditious ingestion and serving of full-resolution imagery representing science products from across the NASA Earth Science spectrum. Open source software solutions are leveraged where possible in order to utilize existing available technologies, reduce development time, and enlist wider community participation. We will discuss some of the factors and decision points in transitioning OnEarth to a suitable open source paradigm, including repository and licensing agreement decision points, institutional hurdles, and perceived benefits. We will also provide examples illustrating how OnEarth is integrated within GIBS and other applications.
Doble, Brett; Lorgelly, Paula
2016-04-01
To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour type-specific samples. Predictive accuracy was assessed using mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms estimated via response mapping and ordinary least-squares regression using dummy variables performed well on a number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across ten tumour type-specific samples highlighted that algorithms are relatively insensitive to grouping by tumour type and affected more by differences in disease severity. Two of the 'preferred' mapping algorithms suggest more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.
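The predictive-accuracy metrics used here, MAE and RMSE, are straightforward to compute; a minimal sketch (the utility values below are hypothetical, not data from the study) might look like:

```python
import math

def mae(observed, predicted):
    """Mean absolute error between observed and predicted utility values."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

def rmse(observed, predicted):
    """Root-mean-squared error between observed and predicted utility values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed))

# Illustrative EQ-5D-3L utilities (hypothetical values)
obs  = [0.80, 0.62, 1.00, 0.45, 0.71]
pred = [0.76, 0.66, 0.95, 0.50, 0.69]

print(round(mae(obs, pred), 4))   # -> 0.04
print(round(rmse(obs, pred), 4))  # -> 0.0415
```

RMSE penalizes large individual errors more heavily than MAE, which is why the two metrics are reported side by side when comparing mapping algorithms.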
Development of a Two-Wheel Contingency Mode for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
Starin, Scott R.; O'Donnell, James R., Jr.; Bauer, Frank (Technical Monitor)
2002-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on mission to the Cosmic Background Explorer (COBE), and is currently collecting data from its orbit near the second Sun-Earth libration point. Due to limited mass, power, and financial resources, a traditional reliability concept including fully redundant components was not feasible for MAP. Instead, the MAP design employs selective hardware redundancy in tandem with contingency software modes and algorithms to improve the odds of mission success. One direction for such improvement has been the development of a two-wheel backup control strategy. This strategy would allow MAP to position itself for maneuvers and collect science data should one of its three reaction wheels fail. Along with operational considerations, the strategy includes three new control algorithms. These algorithms would use the remaining attitude control actuators (thrusters and two reaction wheels) in ways that achieve control goals while minimizing adverse impacts on the functionality of other subsystems and software.
Bohlen, Martin O; Warren, Susan; May, Paul J
2017-06-01
We recently demonstrated a bilateral projection to the supraoculomotor area from the central mesencephalic reticular formation (cMRF), a region implicated in horizontal gaze changes. C-group motoneurons, which supply multiply innervated fibers in the medial rectus muscle, are located within the primate supraoculomotor area, but their inputs and function are poorly understood. Here, we tested whether C-group motoneurons in Macaca fascicularis monkeys receive a direct cMRF input by injecting this portion of the reticular formation with anterograde tracers in combination with injection of retrograde tracer into the medial rectus muscle. The results indicate that the cMRF provides a dense, bilateral projection to the region of the medial rectus C-group motoneurons. Numerous close associations between labeled terminals and each multiply innervated fiber motoneuron were present. Within the oculomotor nucleus, a much sparser ipsilateral projection onto some of the A- and B-group medial rectus motoneurons that supply singly innervated fibers was observed. Ultrastructural analysis demonstrated a direct synaptic linkage between anterogradely labeled reticular terminals and retrogradely labeled medial rectus motoneurons in all three groups. These findings reinforce the notion that the cMRF is a critical hub for oculomotility by proving that it contains premotor neurons supplying horizontal extraocular muscle motoneurons. The differences between the cMRF input patterns for C-group versus A- and B-group motoneurons suggest the C-group motoneurons serve a different oculomotor role than the others. The similar patterns of cMRF input to C-group motoneurons and preganglionic Edinger-Westphal motoneurons suggest that medial rectus C-group motoneurons may play a role in accommodation-related vergence. © 2017 Wiley Periodicals, Inc.
Balosso, Jacques
2017-01-01
Background: During the past decades, radiotherapy dose distributions were calculated using density-correction methods with pencil-beam kernels, classified as type 'a' algorithms. The objective of this study is to assess and evaluate the impact of the resulting dose distribution shift on the predicted secondary cancer risk (SCR) when using modern advanced dose calculation algorithms (point-kernel, type 'b'), which account for changes in lateral electron transport. Methods: Clinical examples of pediatric cranio-spinal irradiation patients were evaluated. For each case, two radiotherapy treatment plans were generated using the same prescribed dose to the target, resulting in different numbers of monitor units (MUs) per field. The dose distributions were calculated using both algorithm types. A gamma index (γ) analysis was used to compare the dose distributions in the lung. The organ equivalent dose (OED) was calculated with three different models: the linear, the linear-exponential, and the plateau dose-response curves. The excess absolute risk ratio (EAR) was also evaluated as EAR = OED type 'b' / OED type 'a'. Results: The γ analysis indicated an acceptable dose distribution agreement of 95% at 3%/3 mm. However, the γ-maps displayed dose displacements >1 mm around the healthy lungs. Compared to type 'a', the OED values from type 'b' dose distributions were about 8% to 16% higher, leading to an EAR ratio >1, ranging from 1.08 to 1.13 depending on the SCR model. Conclusions: The shift in calculated dose, which depends on the algorithm, can significantly influence SCR prediction and plan optimization, since OEDs are calculated from the DVH for a specific treatment. The agreement between dose distribution and SCR prediction depends on the dose-response models and epidemiological data. In addition, a γ passing rate at 3%/3 mm does not reflect the difference, of up to 15%, in SCR predictions resulting from alternative algorithms.
Considering that modern algorithms are more accurate and represent the dose distributions more precisely, but that the prediction of absolute SCR is still very imprecise, only the EAR ratio should be used to rank radiotherapy plans. PMID:28811995
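The three dose-response models named above follow the standard Schneider-type forms computed from a differential DVH. A hedged sketch of the OED and EAR-ratio calculation (the model parameters and the DVH values below are placeholders, not data from the study) could look like:

```python
import math

def oed_linear(dvh):
    """Linear model: OED equals the mean organ dose."""
    return sum(v * d for d, v in dvh)

def oed_linear_exponential(dvh, alpha=0.085):
    """Linear-exponential model: cell kill attenuates risk at high dose.
    alpha (Gy^-1) is a placeholder organ-specific parameter."""
    return sum(v * d * math.exp(-alpha * d) for d, v in dvh)

def oed_plateau(dvh, delta=0.139):
    """Plateau model: risk saturates with increasing dose.
    delta (Gy^-1) is a placeholder organ-specific parameter."""
    return sum(v * (1.0 - math.exp(-delta * d)) / delta for d, v in dvh)

# Hypothetical differential DVHs: (dose in Gy, fractional volume), volumes sum to 1
dvh_a = [(2.0, 0.5), (10.0, 0.3), (30.0, 0.2)]   # type 'a' plan
dvh_b = [(2.5, 0.5), (11.0, 0.3), (32.0, 0.2)]   # type 'b' plan

for name, oed in [("linear", oed_linear),
                  ("lin-exp", oed_linear_exponential),
                  ("plateau", oed_plateau)]:
    ear = oed(dvh_b) / oed(dvh_a)    # EAR ratio = OED type 'b' / OED type 'a'
    print(f"{name}: EAR ratio = {ear:.3f}")
```

Because the models weight the dose bins differently, the same pair of DVHs yields a different EAR ratio under each model, which is exactly why the abstract reports a range (1.08 to 1.13) rather than a single value.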
Improved Edge Performance in MRF
NASA Technical Reports Server (NTRS)
Shorey, Aric; Jones, Andrew; Durnas, Paul; Tricard, Marc
2004-01-01
The fabrication of large segmented optics requires a polishing process that can correct the figure of a surface to within a short distance of its edges (typically a few millimeters). The work here develops QED's Magnetorheological Finishing (MRF) precision polishing process to minimize residual edge effects.
Removal Rate Model for Magnetorheological Finishing of Glass
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeGroote, J.E.; Marino, A.E.; Wilson, J.P.
2007-11-14
Magnetorheological finishing (MRF) is a deterministic subaperture polishing process. The process uses a magnetorheological (MR) fluid that consists of micrometer-sized, spherical, magnetic carbonyl iron (CI) particles, nonmagnetic polishing abrasives, water, and stabilizers. Material removal occurs when the CI and nonmagnetic polishing abrasives shear material off the surface being polished. We introduce a new MRF material removal rate model for glass. This model contains terms for the near-surface mechanical properties of glass, drag force, polishing abrasive size and concentration, chemical durability of the glass, MR fluid pH, and the glass composition. We introduce quantitative chemical predictors, for the first time to the best of our knowledge, into an MRF removal rate model. We validate individual terms in our model separately and then combine all of the terms to show the whole MRF material removal model compared with experimental data. All of our experimental data were obtained using nanodiamond MR fluids and a set of six optical glasses.
Process parameter effects on material removal in magnetorheological finishing of borosilicate glass.
Miao, Chunlin; Lambropoulos, John C; Jacobs, Stephen D
2010-04-01
We investigate the effects of processing parameters on material removal for borosilicate glass. Data are collected on a magnetorheological finishing (MRF) spot taking machine (STM) with a standard aqueous magnetorheological (MR) fluid. Normal and shear forces are measured simultaneously, in situ, with a dynamic dual load cell. Shear stress is found to be independent of nanodiamond concentration, penetration depth, magnetic field strength, and the relative velocity between the part and the rotating MR fluid ribbon. Shear stress, determined primarily by the material mechanical properties, dominates removal in MRF. The addition of nanodiamond abrasives greatly enhances the material removal efficiency, with the removal rate saturating at a high abrasive concentration. The volumetric removal rate (VRR) increases with penetration depth but is insensitive to magnetic field strength. The VRR is strongly correlated with the relative velocity between the ribbon and the part, as expected by the Preston equation. A modified removal rate model for MRF offers a better estimation of MRF removal capability by including nanodiamond concentration and penetration depth.
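The Preston equation referenced above relates removal rate to contact pressure and relative velocity. A minimal illustrative sketch follows; the coefficient and operating values are placeholders, and the study's modified model adds abrasive-concentration and penetration-depth terms that are not reproduced here:

```python
def preston_removal_rate(c_p, pressure, velocity):
    """Preston's equation: material removal rate dz/dt = C_p * p * v,
    with C_p the Preston coefficient (m^2/N), p the contact pressure (Pa),
    and v the relative part/tool velocity (m/s)."""
    return c_p * pressure * velocity

# Hypothetical numbers for illustration only
c_p = 1.0e-12   # m^2/N, placeholder Preston coefficient
p   = 5.0e4     # Pa, contact pressure in the MR fluid ribbon
v   = 2.0       # m/s, relative ribbon/part velocity
rate = preston_removal_rate(c_p, p, v)   # depth removal rate in m/s
print(f"removal rate = {rate:.2e} m/s")  # -> 1.00e-07 m/s (100 nm/s)
```

The linear dependence on v is what the abstract's observation confirms: the volumetric removal rate is strongly correlated with the relative ribbon/part velocity, "as expected by the Preston equation".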
Development of magneto-rheological fluid (MRF) based clutch for output torque control of AC motors
NASA Astrophysics Data System (ADS)
Nguyen, Q. Hung; Do, H. M. Hieu; Nguyen, V. Quoc; Nguyen, N. Diep; Le, D. Thang
2018-03-01
In industry, AC motors are widely used because of their low price, ready availability, and low maintenance cost. Their main disadvantages compared to DC motors are the difficulty of speed and torque control, requiring expensive controllers with complex control algorithms. This is a basic limitation to the widespread adoption of AC motor systems for industrial automation. One feasible solution for AC motor control is to use MRF (magneto-rheological fluid) based clutches (MR clutches for short). Although there have been many studies on MR clutches, most of these clutches used a traditional configuration with coils wound on the middle cylindrical part and a commutator to supply power to the coils. This type of MR clutch therefore possesses many disadvantages, such as high friction and unstable applied current due to the commutator, and a complex structure that complicates manufacture, assembly, and maintenance. In addition, the magnetic-field bottleneck problem is also a challenging issue. In this research, we develop a new type of MR clutch that overcomes the abovementioned disadvantages of traditional MR clutches and is more suitable for AC motor control. In addition, a speed and torque control system for AC motors using the developed MR clutches is designed and experimentally validated.
Simon, H G; Nelson, C; Goff, D; Laufer, E; Morgan, B A; Tabin, C
1995-01-01
An amputated limb of an adult urodele amphibian is capable of undergoing regeneration. The new structures form from an undifferentiated mass of cells called the regenerative blastema. The cells of the blastema are believed to derive from differentiated tissues of the adult limb. However, the exact source of these cells and the process by which they undergo dedifferentiation are poorly understood. In order to elucidate the molecular and cellular basis for dedifferentiation we isolated a number of genes which are potential regulators of the process. These include Msx-1, which is believed to support the undifferentiated and proliferative state of cells in the embryonic limb bud; and two members of the myogenic regulatory gene family, MRF-4 and Myf-5, which are expressed in differentiated muscle and regulate muscle-specific gene activity. As anticipated, we find that Msx-1 is strongly up-regulated during the initiation of regeneration. It remains expressed throughout regeneration but is not found in the fully regenerated limb. The myogenic gene MRF-4 has the reverse expression pattern. It is expressed in adult limb muscle, is rapidly shut off in early regenerative blastemas, and is only reexpressed at the completion of regeneration. These kinetics are paralleled by those of a muscle-specific Myosin gene. In contrast Myf-5, a second member of the myogenic gene family, continues to be expressed throughout the regenerative process. Thus, MRF-4 and Myf-5 are likely to play distinct roles during regeneration. MRF-4 may directly regulate muscle phenotype and as such its repression may be a key event in dedifferentiation.(ABSTRACT TRUNCATED AT 250 WORDS)
Dual energy approach for cone beam artifacts correction
NASA Astrophysics Data System (ADS)
Han, Chulhee; Choi, Shinkook; Lee, Changwoo; Baek, Jongduk
2017-03-01
Cone beam computed tomography systems generate 3D volumetric images, which provide further morphological information compared to radiography and tomosynthesis systems. However, images reconstructed by the FDK algorithm contain cone beam artifacts when the cone angle is large. To reduce the cone beam artifacts, a two-pass algorithm has been proposed. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by high-density materials and provides an effective method to estimate the error images (i.e., cone beam artifact images) produced by those materials. While this approach is simple and effective at a small cone angle (i.e., 5-7 degrees), the correction performance degrades as the cone angle increases. In this work, we propose a new method to reduce the cone beam artifacts using a dual energy technique. The basic idea of the proposed method is to estimate the error images generated by the high-density materials more reliably. To do this, projection data of the high-density materials are extracted from dual energy CT projection data using a material decomposition technique and then reconstructed by iterative reconstruction with total-variation regularization. The reconstructed high-density materials are used to estimate the error images from the original FDK images. The performance of the proposed method is compared with the two-pass algorithm using root mean square errors. The results show that the proposed method reduces the cone beam artifacts more effectively, especially at a large cone angle.
Spot breeding method to evaluate the determinism of magnetorheological finishing
NASA Astrophysics Data System (ADS)
Yang, Hang; He, Jianguo; Huang, Wen; Zhang, Yunfei
2017-03-01
The influence of the immersion depth in magnetorheological finishing (MRF) on the shape and material removal rate (MRR) of the removal function is theoretically investigated to establish the spot transition mechanism. Based on this mechanism, for the first time, a spot breeding method to predict the shape and removal rate of the MRF spot is proposed. UBK7 optical parts were polished on our in-house experimental installation PKC-1000Q2 to verify the proposed method. The experimental results reveal that the predictions of shape and MRR with this method are accurate. The proposed method provides a basis for analyzing the determinism of MRF arising from the geometry of the process.
Conical Perspective Image of an Architectural Object Close to Human Perception
NASA Astrophysics Data System (ADS)
Dzwierzynska, Jolanta
2017-10-01
The aim of the study is to develop a method for the computer-aided construction of a conical perspective of an architectural object that is close to human perception. The conical perspective considered in the paper is a central projection onto a projection surface that is a conical rotary surface or a fragment of one, where the centre of projection is either a stationary point or a point moving on a circular path. The graphical result of the perspective representation is realized directly on the unrolled, flat projection surface. The projective relation between a range of points on a line and the perspective image of the same range of points received on a cylindrical projection surface made it possible to derive formulas for drawing the perspective. Next, analytical algorithms for drawing the perspective image of a straight line passing through any two points were formulated. This enabled drawing a wireframe perspective image of a given 3D object. The use of a moving viewpoint, as well as the use of changeable base elements of the perspective as variables in the algorithms, enables drawing the conical perspective from different viewing positions. Due to this, the perspective drawing method is universal. The algorithms were formulated and tested in Mathcad Professional software but can be implemented in AutoCAD and the majority of computer graphics packages, which makes drawing a perspective image more efficient and easier. The presented conical perspective representation, and the convenient method of mapping it directly onto the flat unrolled surface, can find application in numerous advertising and art presentations.
Numerical Conformal Mapping Using Cross-Ratios and Delaunay Triangulation
NASA Technical Reports Server (NTRS)
Driscoll, Tobin A.; Vavasis, Stephen A.
1996-01-01
We propose a new algorithm for computing the Riemann mapping of the unit disk to a polygon, also known as the Schwarz-Christoffel transformation. The new algorithm, CRDT, is based on cross-ratios of the prevertices, and also on cross-ratios of quadrilaterals in a Delaunay triangulation of the polygon. The CRDT algorithm produces an accurate representation of the Riemann mapping even in the presence of arbitrarily long, thin regions in the polygon, unlike any previous conformal mapping algorithm. We believe that CRDT can never fail to converge to the correct Riemann mapping, but the correctness and convergence proofs depend on conjectures that we have so far not been able to prove. We demonstrate convergence with computational experiments. The Riemann mapping has applications to problems in two-dimensional potential theory and to finite-difference mesh generation. We use CRDT to produce a mapping and solve a boundary value problem on long, thin regions that no other algorithm can handle.
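The cross-ratio at the heart of CRDT is invariant under Möbius transformations, which is what makes it a numerically robust parameterization of the prevertices. A small numerical check of this invariance (using one common convention for the cross-ratio; conventions differ by a permutation of the arguments) is:

```python
def cross_ratio(z1, z2, z3, z4):
    """Cross-ratio of four distinct complex points (one common convention)."""
    return ((z1 - z3) * (z2 - z4)) / ((z2 - z3) * (z1 - z4))

def mobius(z, a=2 + 1j, b=0.5, c=1j, d=3):
    """A Möbius transformation T(z) = (a z + b) / (c z + d) with ad - bc != 0."""
    return (a * z + b) / (c * z + d)

pts = [0.0 + 0.0j, 1.0 + 0.0j, 2.0 + 1.0j, -1.0 + 2.0j]
r_before = cross_ratio(*pts)
r_after = cross_ratio(*(mobius(z) for z in pts))
# The cross-ratio is unchanged by the Möbius map, up to rounding error.
print(abs(r_before - r_after))  # ~0
```

Because conformal maps of the disk compose with Möbius self-maps of the disk, quantities built from cross-ratios sidestep the "crowding" of prevertices that plagues parameterizations based on prevertex positions directly.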
Fiscal Year 2014: Military Retirement Fund Audited Financial Report
2014-11-07
... benefits for military members' retirement from active duty and the reserves, disability retirement benefits, and survivor benefits. The MRF accumulates ... premium/discount amortization and accrued inflation compensation. In comparison, in FY 2013 the MRF received approximately $20.5 billion in normal cost ...
Chen, Mingjun; Liu, Henan; Cheng, Jian; Yu, Bo; Fang, Zhen
2017-07-01
In order to achieve the deterministic finishing of optical components with concave surfaces of a curvature radius less than 10 mm, a novel magnetorheological finishing (MRF) process using a small ball-end permanent-magnet polishing head with a diameter of 4 mm is introduced. The characteristics of material removal in the proposed MRF process are studied. The model of the material removal function for the proposed MRF process is established based on three-dimensional hydrodynamic analysis and Preston's equation. The shear stress on the workpiece surface is calculated by solving the presented mathematical model with a numerical method. The analysis reveals that material removal in the proposed MRF process shows a positive dependence on shear stress. Experimental research is conducted to investigate the effect of processing parameters on the material removal rate and to improve the surface accuracy of a typical rotationally symmetric optical component. The experimental results show that the surface accuracy of the finished component of K9 glass material was improved from the initial 0.8 μm (PV) to 0.14 μm (PV), and the finished surface roughness Ra is 0.0024 μm. This indicates that the proposed MRF process can achieve deterministic removal of surface material and perform nanofinishing of small-curvature-radius concave surfaces.
Optimal Magnetorheological Fluid for Finishing of Chemical-Vapor-Deposited Zinc Sulfide
NASA Astrophysics Data System (ADS)
Salzman, Sivan
Magnetorheological finishing (MRF) of polycrystalline, chemical-vapor-deposited zinc sulfide (ZnS) optics leaves visible surface artifacts known as "pebbles". These artifacts are a direct result of the material's inner structure, which consists of cone-like features that grow larger (up to a few millimeters in size) as deposition takes place and manifest on the top deposited surface as "pebbles". Polishing the pebble features from a CVD ZnS substrate to a flat, smooth surface below 10 nm root-mean-square is challenging, especially for a non-destructive polishing process such as MRF. This work explores ways to improve the surface finish of CVD ZnS processed with MRF through modification of the magnetorheological (MR) fluid's properties. A materials science approach is presented to define the anisotropy of CVD ZnS through a combination of chemical and mechanical experiments and theoretical predictions. Magnetorheological finishing experiments with single crystal samples of ZnS, whose cuts and orientations represent most of the facets known to occur in polycrystalline CVD ZnS, were performed to explore the influence of material anisotropy on the material removal rate during MRF. By adjusting the fluid's viscosity, abrasive type and concentration, and pH to find the chemo-mechanical conditions that equalize removal rates among all single crystal facets during MRF, we established an optimized, novel MR formulation to polish CVD ZnS without degrading the surface finish of the optic.
NASA Astrophysics Data System (ADS)
Zhong, Xianyun; Hou, Xi; Yang, Jinshan
2016-09-01
Nickel is a unique material for X-ray telescopes. It has typical soft-material characteristics: low hardness, high susceptibility to surface damage, and poor thermal stability. Traditional fabrication techniques suffer from many problems, including severe surface scratches, high sub-surface damage, and poor surface roughness. The current fabrication technology for nickel aspherics mainly adopts single-point diamond turning (SPDT), which has many advantages, such as high efficiency, ultra-precision surface figure, and low sub-surface damage. However, the residual surface texture left by SPDT causes great scattering losses and falls far short of the requirements of X-ray applications. This paper mainly investigates magnetorheological finishing (MRF) techniques for super-smooth processing of nickel optics. Through the study of MRF polishing techniques, we obtained an ideal super-smooth polishing technique based on the self-controlled MRF fluid NS-1, and achieved a high-precision surface figure below RMS λ/80 (λ=632.8 nm) with super-smooth roughness below Ra 0.3 nm on a plane reflector and roughness below Ra 0.4 nm on a convex cone. This study of MRF techniques advances the state of the art in nickel material processing for X-ray optical system applications.
NASA Astrophysics Data System (ADS)
Liu, Zeyu; Xia, Tiecheng; Wang, Jinbo
2018-03-01
We propose a new fractional two-dimensional triangle function combination discrete chaotic map (2D-TFCDM) based on the discrete fractional difference. The chaotic behaviors of the proposed map are observed, and the bifurcation diagrams, the largest Lyapunov exponent plot, and the phase portraits are derived, respectively. Finally, with the secret keys generated by the Menezes–Vanstone elliptic curve cryptosystem, we apply the discrete fractional map to color image encryption. The image encryption algorithm is then analyzed in four aspects, and the results indicate that the proposed algorithm is superior to the other algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61072147 and 11271008).
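The largest Lyapunov exponent mentioned above is the standard diagnostic of chaos in such maps: a positive value certifies sensitive dependence on initial conditions. As an illustration (using the classical one-dimensional logistic map rather than the paper's fractional 2D-TFCDM), the exponent can be estimated by averaging log-derivatives along an orbit:

```python
import math

def logistic_lyapunov(r, x0=0.3, n=100_000, burn_in=1_000):
    """Estimate the largest Lyapunov exponent of x_{k+1} = r x_k (1 - x_k)
    by averaging ln|f'(x_k)| = ln|r (1 - 2 x_k)| along an orbit."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        # guard against log(0) in the unlikely event the orbit hits x = 0.5
        total += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
        x = r * x * (1.0 - x)
    return total / n

print(logistic_lyapunov(3.9))   # positive => chaotic regime
print(logistic_lyapunov(2.8))   # negative => stable fixed point
```

In image-encryption work, a positive exponent is what justifies using the map's orbit as an unpredictable keystream: nearby keys diverge exponentially fast.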
Single-pass incremental force updates for adaptively restrained molecular dynamics.
Singh, Krishna Kant; Redon, Stephane
2018-03-30
Adaptively restrained molecular dynamics (ARMD) allows users to perform more integration steps in wall-clock time by switching positional degrees of freedom on and off. This article presents new single-pass incremental force-update algorithms to efficiently simulate a system using ARMD. We assessed different algorithms through speedup measurements and implemented them in the LAMMPS MD package. We validated the single-pass incremental force-update algorithm on four different benchmarks using diverse pair potentials. The proposed algorithm allows us to perform simulation of a system faster than traditional MD in both NVE and NVT ensembles. Moreover, ARMD using the new single-pass algorithm speeds up the convergence of observables in wall-clock time. © 2017 Wiley Periodicals, Inc.
Optical image encryption scheme with multiple light paths based on compressive ghost imaging
NASA Astrophysics Data System (ADS)
Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan
2018-02-01
An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of a logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and the secret images vary and serve as the main keys in the decryption process, together with the original POM and the logistic map coefficient. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security and robustness of the method are further analysed through computer simulations.
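The first encryption step described above, generating M phase-only masks from a logistic map, can be sketched as follows; the seed, map coefficient, and mask dimensions are illustrative assumptions of this sketch, not values from the paper:

```python
import math

def logistic_sequence(x0, mu, n, burn_in=200):
    """Iterate x_{k+1} = mu * x_k * (1 - x_k). The seed x0 and coefficient mu
    act as secret keys (placeholder values here)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = mu * x * (1.0 - x)
    seq = []
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def random_phase_masks(m, height, width, x0=0.37, mu=3.99):
    """Generate m phase-only masks with entries exp(i * 2*pi*x), x chaotic,
    so every entry has unit modulus and modulates phase only."""
    vals = logistic_sequence(x0, mu, m * height * width)
    masks = []
    for k in range(m):
        flat = vals[k * height * width:(k + 1) * height * width]
        mask = [[complex(math.cos(2 * math.pi * v), math.sin(2 * math.pi * v))
                 for v in flat[r * width:(r + 1) * width]]
                for r in range(height)]
        masks.append(mask)
    return masks

masks = random_phase_masks(m=2, height=4, width=4)
print(abs(masks[0][0][0]))  # -> 1.0 (phase-only)
```

The chaotic recurrence makes the masks reproducible from a few scalar keys, which is why only the seed, the coefficient, and the SLM-to-image distances need to be shared for decryption.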
Clustering Methods; Part IV of Scientific Report No. ISR-18, Information Storage and Retrieval...
ERIC Educational Resources Information Center
Cornell Univ., Ithaca, NY. Dept. of Computer Science.
Two papers are included as Part Four of this report on Salton's Magical Automatic Retriever of Texts (SMART) project report. The first paper: "A Controlled Single Pass Classification Algorithm with Application to Multilevel Clustering" by D. B. Johnson and J. M. Laferente presents a single pass clustering method which compares favorably…
Fatyga, Mirek; Dogan, Nesrin; Weiss, Elizabeth; Sleeman, William C; Zhang, Baoshe; Lehman, William J; Williamson, Jeffrey F; Wijesooriya, Krishni; Christensen, Gary E
2015-01-01
Commonly used methods of assessing the accuracy of deformable image registration (DIR) rely on image segmentation or landmark selection. These methods are very labor intensive and thus limited to relatively small number of image pairs. The direct voxel-by-voxel comparison can be automated to examine fluctuations in DIR quality on a long series of image pairs. A voxel-by-voxel comparison of three DIR algorithms applied to lung patients is presented. Registrations are compared by comparing volume histograms formed both with individual DIR maps and with a voxel-by-voxel subtraction of the two maps. When two DIR maps agree one concludes that both maps are interchangeable in treatment planning applications, though one cannot conclude that either one agrees with the ground truth. If two DIR maps significantly disagree one concludes that at least one of the maps deviates from the ground truth. We use the method to compare 3 DIR algorithms applied to peak inhale-peak exhale registrations of 4DFBCT data obtained from 13 patients. All three algorithms appear to be nearly equivalent when compared using DICE similarity coefficients. A comparison based on Jacobian volume histograms shows that all three algorithms measure changes in total volume of the lungs with reasonable accuracy, but show large differences in the variance of Jacobian distribution on contoured structures. Analysis of voxel-by-voxel subtraction of DIR maps shows differences between algorithms that exceed a centimeter for some registrations. Deformation maps produced by DIR algorithms must be treated as mathematical approximations of physical tissue deformation that are not self-consistent and may thus be useful only in applications for which they have been specifically validated. The three algorithms tested in this work perform fairly robustly for the task of contour propagation, but produce potentially unreliable results for the task of DVH accumulation or measurement of local volume change. 
Performance of DIR algorithms varies significantly from one image pair to the next; hence validation efforts that are exhaustive but performed on a small number of image pairs may not reflect the performance of the same algorithm in practical clinical situations. Such efforts should be supplemented by validation based on a longer series of images of clinical quality.
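The Jacobian volume histograms used above to compare DIR maps can be sketched numerically. The following is a minimal illustration (not the authors' code) of computing a voxel-wise Jacobian determinant, and a histogram of it, for a synthetic 3D displacement field with NumPy; the field, grid size, and bin count are invented for the example:

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of a 3D displacement field.

    disp has shape (3, nx, ny, nz): the displacement (not the full map)
    in voxel units.  The deformation map is phi(x) = x + disp(x), so
    J(x) = det(I + grad disp(x)).
    """
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]
    # grads[i][j] = d disp_i / d x_j
    J = np.zeros(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j]
    J += np.eye(3)
    return np.linalg.det(J)

# identity map: determinant is 1 everywhere
disp = np.zeros((3, 8, 8, 8))
det = jacobian_determinant(disp)

# uniform 10% expansion along x: determinant ~1.1 everywhere
x = np.arange(8, dtype=float)
disp_exp = np.zeros((3, 8, 8, 8))
disp_exp[0] = 0.1 * x[:, None, None]
det_exp = jacobian_determinant(disp_exp)

# the "Jacobian volume histogram" of the expanded field
hist, edges = np.histogram(det_exp, bins=20)
```

A determinant of 1 means locally volume-preserving deformation; values above/below 1 indicate local expansion/contraction, which is what the histogram comparison above exploits.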
Pei, Yan
2015-01-01
We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, namely the logistic map, tent map, Gaussian map, and Hénon map, into a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm with the logistic map against two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. An investigation of the relationship between the optimization capability of the CE algorithm and the distribution characteristics of the chaotic system is conducted and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We also propose a new interactive EC (IEC) algorithm, interactive chaotic evolution (ICE), which replaces the fitness function with a real human in the CE algorithm framework. There is a paired-comparison-based mechanism behind the CE search scheme in nature. A simulation experiment with a pseudo-IEC user is conducted to evaluate the proposed ICE algorithm. The evaluation results indicate that the ICE algorithm can obtain significantly better performance than, or the same performance as, interactive DE. Some open topics on CE, ICE, fusion of these optimization techniques, algorithmic notation, and others are presented and discussed. PMID:25879067
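For concreteness, the four chaotic systems named in the abstract can be iterated as below. This is an illustrative sketch with common textbook parameter choices (r = 4 logistic, μ = 2 tent, α = 6.2 / β = -0.5 Gaussian, a = 1.4 / b = 0.3 Hénon), not the paper's implementation; in a CE algorithm, such sequences would drive the search perturbations in place of uniform random numbers:

```python
import math

def logistic(x, r=4.0):
    """Logistic map: x -> r x (1 - x), chaotic on [0, 1] for r = 4."""
    return r * x * (1.0 - x)

def tent(x, mu=2.0):
    """Tent map: piecewise-linear chaotic map on [0, 1]."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def gauss_map(x, alpha=6.2, beta=-0.5):
    """Gaussian (mouse) map: x -> exp(-alpha x^2) + beta."""
    return math.exp(-alpha * x * x) + beta

def henon(state, a=1.4, b=0.3):
    """Hénon map: a 2-D chaotic map, (x, y) -> (1 - a x^2 + y, b x)."""
    x, y = state
    return (1.0 - a * x * x + y, b * x)

def chaotic_sequence(step, x0, n):
    """Iterate a 1-D chaotic map n times starting from x0."""
    seq, x = [], x0
    for _ in range(n):
        x = step(x)
        seq.append(x)
    return seq

seq = chaotic_sequence(logistic, 0.3, 100)   # stays inside [0, 1]
```

The distribution of such a sequence (e.g., the U-shaped invariant density of the logistic map versus the uniform density of the tent map) is the "distribution characteristic" whose effect on CE performance the abstract discusses.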
Point Cloud Based Approach to Stem Width Extraction of Sorghum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Jihui; Zakhor, Avideh
2017-01-29
A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass through a standard field. The algorithm applies a two-step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud, and an orientation-based filter to segment out and refine individual stems for width estimation. Individually detected stems that are split due to occlusions are merged and then registered with stems found in previous camera frames in order to track them temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.
The MAP Spacecraft Angular State Estimation After Sensor Failure
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2003-01-01
This work describes two algorithms for computing the angular rate and attitude in the event of a gyroscope and star tracker failure on the Microwave Anisotropy Probe (MAP) satellite, which was placed at the L2 parking point, from where it collects data used to study the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out, and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude, whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis; an explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, the conclusions have far-reaching consequences.
The Effect of Sensor Failure on the Attitude and Rate Estimation of MAP Spacecraft
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2003-01-01
This work describes two algorithms for computing the angular rate and attitude in the event of a gyroscope and star tracker failure on the Microwave Anisotropy Probe (MAP) satellite, which was placed at the L2 parking point, from where it collects data used to study the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out, and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude, whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis; an explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, its conclusions are more general.
Hernandez, Penni; Podchiyska, Tanya; Weber, Susan; Ferris, Todd; Lowe, Henry
2009-11-14
The Stanford Translational Research Integrated Database Environment (STRIDE) clinical data warehouse integrates medication information from two Stanford hospitals that use different drug representation systems. To merge this pharmacy data into a single, standards-based model supporting research we developed an algorithm to map HL7 pharmacy orders to RxNorm concepts. A formal evaluation of this algorithm on 1.5 million pharmacy orders showed that the system could accurately assign pharmacy orders in over 96% of cases. This paper describes the algorithm and discusses some of the causes of failures in mapping to RxNorm.
Peano-like paths for subaperture polishing of optical aspherical surfaces.
Tam, Hon-Yuen; Cheng, Haobo; Dong, Zhichao
2013-05-20
Polishing can be more uniform if the polishing path provides uniform coverage of the surface. It is known that Peano paths can provide uniform coverage of planar surfaces. Peano paths also consist of short path segments and turns: (1) all path segments have the same length, (2) path segments are mutually orthogonal at the turns, and (3) path segments and turns are uniformly distributed over the domain surface. These properties make Peano paths an attractive candidate among polishing tool paths because they enhance multidirectional approaches of the tool to each surface location. A method for constructing Peano paths for uniform coverage of aspherical surfaces is proposed in this paper. When mapped to the aspherical surface, the path still consists of short path segments and turns, and the above attributes are approximately preserved. Attention is paid to keeping the path segments well distributed near the vertex of the surface. The proposed tool path was used in the polishing of a number of parabolic BK7 specimens using magnetorheological finishing (MRF) and pitch with cerium oxide. The results were rather good for optical lenses and confirm that a Peano-like path is useful for polishing, both for MRF and for pitch polishing. In the latter case, the surface roughness achieved was 0.91 nm according to WYKO measurement.
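A Peano-like space-filling path can be generated in several ways. As an illustrative sketch (not the authors' construction for aspherical surfaces), the well-known Hilbert-curve index-to-coordinate recursion below produces a planar path with exactly the attributes listed in (1)-(3): unit-length segments meeting at right-angle turns, uniformly covering a 2^k × 2^k grid:

```python
def hilbert_d2xy(order, d):
    """Map index d along a Hilbert curve covering a 2**order x 2**order
    grid to (x, y) cell coordinates (classic rotate-and-reflect recursion)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        # rotate/reflect the quadrant so sub-curves join end to end
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# an 8x8 path: 64 cells, visited once each, by unit orthogonal steps
path = [hilbert_d2xy(3, d) for d in range(64)]
```

Mapping such a planar path onto the aspheric surface (the step the paper addresses) would then approximately preserve the uniform segment lengths and orthogonal turns.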
Magnetorheological finishing: a perfect solution to nanofinishing requirements
NASA Astrophysics Data System (ADS)
Sidpara, Ajay
2014-09-01
Finishing of optics for different applications is the most important, as well as the most difficult, step in meeting optical specifications. Conventional grinding or other polishing processes are not able to reduce surface roughness beyond a certain limit due to high forces acting on the workpiece, embedded abrasive particles, limited control over the process, etc. The magnetorheological finishing (MRF) process provides a new, efficient, and innovative way to finish optical materials as well as many metals to their desired level of accuracy. This paper provides an overview of the MRF process for different applications, important process parameters, requirements of the magnetorheological fluid with respect to workpiece material, and some areas that need to be explored for extending the application of the MRF process.
NASA Astrophysics Data System (ADS)
Du, Hang; Song, Ci; Li, Shengyi
2018-01-01
In order to obtain high precision and high surface quality silicon carbide mirrors, the silicon carbide mirror substrate is subjected to surface modification treatment. In this paper, the problem of surface roughness deterioration of Silicon Carbide (SiC) mirrors polished by MRF is studied. The causes of "comet tail" surface flaws are analyzed. The influence of MRF polishing depth on the surface roughness of modified SiC mirrors is established by experiments. On this basis, a combined process for modified SiC mirrors is proposed that unites MRF with small-grinding-head CCOS. The combined process improves both the surface accuracy and the surface roughness of modified SiC mirrors.
MacDonnell, M F
1984-01-01
The midline ridge formation (MRF) of the trigeminal complex in 127 cartilaginous fish of 15 species was examined by scanning electron microscopy or light microscopy. Five distinct species variations of the MRF in sharks are described. The formation has not yet been observed to be present in skates and rays, but its presence in the subclass Holocephali, the sister group to the Elasmobranchii, indicates that this proposed circumventricular organ is an ancient brain characteristic of this line of vertebrates, perhaps predating the emergence of the class Chondrichthyes. The different types of MRF are compared to a current phyletic organization of the elasmobranchs and the possible functional significance of the formation is discussed briefly.
Medium-range fire weather forecasts
J.O. Roads; K. Ueyoshi; S.C. Chen; J. Alpert; F. Fujioka
1991-01-01
The forecast skill of the National Meteorological Center's medium range forecast (MRF) numerical forecasts of fire weather variables is assessed for the period June 1, 1988 to May 31, 1990. Near-surface virtual temperature, relative humidity, wind speed and a derived fire weather index (FWI) are forecast well by the MRF model. However, forecast relative humidity has...
Cross-Disciplinary Collaboration to Engage Diverse Researchers
ERIC Educational Resources Information Center
Loveless, Douglas J.; Sturm, Debbie C.; Guo, Chengqi; Tanaka, Kimiko; Zha, Shenghua; Berkeley, Elizabeth V.
2013-01-01
Grounded as a self-study using arts-based inquiry to explore the experiences of six university faculty members participating in a cross-disciplinary faculty development program, the purpose of this paper is to (1) describe the Madison Research Fellows (MRF) program, and (2) explore the impact of the MRF program. Participating members included…
3D and 4D magnetic susceptibility tomography based on complex MR images
Chen, Zikuan; Calhoun, Vince D
2014-11-11
Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
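The "standard dipole kernel" mentioned above has the closed form D(k) = 1/3 - kz²/|k|² in k-space (B0 along z). A minimal NumPy sketch of the forward susceptibility-to-field convolution, i.e., the operation that CIMRI inverts, follows; the grid size and the k = 0 convention are illustrative choices, not from the patent:

```python
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 in k-space (B0 along z).
    The field perturbation is b = F^{-1}[ D * F[chi] ]."""
    kx = np.fft.fftfreq(shape[0])[:, None, None]
    ky = np.fft.fftfreq(shape[1])[None, :, None]
    kz = np.fft.fftfreq(shape[2])[None, None, :]
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0          # one common convention for the undefined k=0 term
    return D

def forward_field(chi):
    """Field map produced by a susceptibility map via k-space convolution."""
    D = dipole_kernel(chi.shape)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))

# dipole field of a single susceptibility "point source"
chi = np.zeros((16, 16, 16))
chi[8, 8, 8] = 1.0
b = forward_field(chi)
```

D(k) vanishes on the magic-angle cone kz² = |k|²/3, which is why the inverse problem is ill-posed and needs regularization such as the TV scheme described above.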
Effect of different hardness nanoparticles on friction properties of magnetorheological fluids
NASA Astrophysics Data System (ADS)
Zhao, Mingmei; Zhang, Jinqiu; Yao, Jun
2017-10-01
Magnetorheological fluids (MRFs) exhibit different wear performance when nanoparticles with different hardness are added. In this study, three solid particles with different hardness are considered to study the variation in MRF performance. The friction and wear properties of the MRF are measured using a four-ball friction and wear tester, and the surface of the steel ball is observed using a three-dimensional white light interferometer. The rheological properties of the MRF are also tested using an Anton-Paar rheometer. The results show that the addition of graphite yields a stable friction process and does not degrade the rheological properties of the MRF. Nano-diamond increases the shear yield strength and reduces wall slip to a greater extent; however, the wear is more serious in this case. Copper particles are unstable, and their surface activity is too high for them to be adsorbed on the surface of the iron powder, which aggravates the settlement rate. All three MRFs with different kinds of nanoparticles present a more regular grinding spot, and the nanoparticles have a certain repair function on the surface.
NASA Astrophysics Data System (ADS)
Nguyen, Q. H.; Choi, S. B.; Lee, Y. S.; Han, M. S.
2013-11-01
This paper focuses on the optimal design of a compact and high damping force engine mount featuring magnetorheological fluid (MRF). In the mount, a MR valve structure with both annular and radial flows is employed to generate a high damping force. First, the configuration and working principle of the proposed MR mount is introduced. The MRF flows in the mount are then analyzed and the governing equations of the MR mount are derived based on the Bingham plastic behavior of the MRF. An optimal design of the MR mount is then performed to find the optimal structure of the MR valve to generate a maximum damping force with certain design constraints. In addition, the gap size of MRF ducts is empirically chosen considering the ‘lockup’ problem of the mount at high frequency. Performance of the optimized MR mount is then evaluated based on finite element analysis and discussions on performance results of the optimized MR mount are given. The effectiveness of the proposed MR engine mount is demonstrated via computer simulation by presenting damping force and power consumption.
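The Bingham-plastic analysis of such an MR valve can be illustrated with the standard approximate duct formula, in which the pressure drop splits into a viscous term and a field-dependent yield term. The rectangular-duct approximation, the shape coefficient c, and all numbers below are illustrative conventions from the general MR-valve literature, not values from this paper:

```python
def mr_valve_pressure_drop(Q, L, w, h, eta, tau_y, c=2.5):
    """Approximate Bingham-plastic pressure drop across an MR fluid duct
    (rectangular approximation of an annular gap):

        dP = 12 eta Q L / (w h^3)  +  c tau_y L / h

    Q     volumetric flow rate          [m^3/s]
    L     active duct length            [m]
    w     duct width (mean circumference for an annulus) [m]
    h     gap size                      [m]
    eta   plastic viscosity             [Pa s]
    tau_y field-induced yield stress    [Pa]
    c     shape coefficient, commonly taken between 2 and 3
    """
    dp_viscous = 12.0 * eta * Q * L / (w * h**3)
    dp_yield = c * tau_y * L / h
    return dp_viscous, dp_yield

# hypothetical operating point: the field-dependent yield term dominates,
# which is what makes the damping force controllable
visc, yld = mr_valve_pressure_drop(Q=1e-5, L=0.02, w=0.1, h=1e-3,
                                   eta=0.1, tau_y=30e3)
```

The 1/h dependence of the yield term versus the 1/h³ dependence of the viscous term is also why the gap size trades controllable force against the high-frequency "lockup" behavior mentioned above.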
Process Parameter Effects on Material Removal in Magnetorheological Finishing of Borosilicate Glass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, C.; Lambroopulos, J.C.; Jacobs, S.D.
2010-04-14
We investigate the effects of processing parameters on material removal for borosilicate glass. Data are collected on a magnetorheological finishing (MRF) spot taking machine (STM) with a standard aqueous magnetorheological (MR) fluid. Normal and shear forces are measured simultaneously, in situ, with a dynamic dual load cell. Shear stress is found to be independent of nanodiamond concentration, penetration depth, magnetic field strength, and the relative velocity between the part and the rotating MR fluid ribbon. Shear stress, determined primarily by the material mechanical properties, dominates removal in MRF. The addition of nanodiamond abrasives greatly enhances the material removal efficiency, with the removal rate saturating at a high abrasive concentration. The volumetric removal rate (VRR) increases with penetration depth but is insensitive to magnetic field strength. The VRR is strongly correlated with the relative velocity between the ribbon and the part, as expected by the Preston equation. A modified removal rate model for MRF offers a better estimation of MRF removal capability by including nanodiamond concentration and penetration depth.
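The Preston equation invoked above states that the local removal rate is proportional to pressure times the relative tool-part velocity. A trivial sketch, with an invented Preston coefficient and dwell time purely for illustration:

```python
def preston_removal_rate(Cp, pressure, velocity):
    """Preston's law: depth removal rate dz/dt = Cp * p * v."""
    return Cp * pressure * velocity

# hypothetical numbers, not from the paper
Cp = 2.0e-13          # Preston coefficient [m^2/N] (illustrative)
p = 5.0e4             # contact pressure [Pa]
v = 1.5               # relative ribbon-part velocity [m/s]

rate = preston_removal_rate(Cp, p, v)   # depth removal rate [m/s]
depth = rate * 10.0                      # depth removed over a 10 s dwell [m]
```

The paper's modified model keeps this linear velocity dependence but folds abrasive concentration and penetration depth into the effective coefficient.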
Towards predicting the encoding capability of MR fingerprinting sequences.
Sommer, K; Amthor, T; Doneva, M; Koken, P; Meineke, J; Börnert, P
2017-09-01
Sequence optimization and appropriate sequence selection is still an unmet need in magnetic resonance fingerprinting (MRF). The main challenge in MRF sequence design is the lack of an appropriate measure of the sequence's encoding capability. To find such a measure, three different candidates for judging the encoding capability have been investigated: local and global dot-product-based measures judging dictionary entry similarity as well as a Monte Carlo method that evaluates the noise propagation properties of an MRF sequence. Consistency of these measures for different sequence lengths as well as the capability to predict actual sequence performance in both phantom and in vivo measurements was analyzed. While the dot-product-based measures yielded inconsistent results for different sequence lengths, the Monte Carlo method was in a good agreement with phantom experiments. In particular, the Monte Carlo method could accurately predict the performance of different flip angle patterns in actual measurements. The proposed Monte Carlo method provides an appropriate measure of MRF sequence encoding capability and may be used for sequence optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
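The dot-product-based similarity measures discussed above reduce to the normalized inner product used in standard MRF dictionary matching. A small self-contained sketch with a synthetic dictionary (random vectors stand in for simulated signal evolutions; no actual Bloch simulation is performed):

```python
import numpy as np

def normalized_dot(a, b):
    """Magnitude of the normalized inner product between two signal
    evolutions -- the standard MRF dictionary-matching similarity."""
    return abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

def match(signal, dictionary):
    """Index of the dictionary entry most similar to the measured signal."""
    sims = [normalized_dot(signal, entry) for entry in dictionary]
    return int(np.argmax(sims))

# synthetic "dictionary": 50 entries, 200 time points each
rng = np.random.default_rng(0)
dictionary = rng.standard_normal((50, 200))

# a scaled, noisy copy of entry 17 should match back to entry 17
truth = 17
signal = 3.0 * dictionary[truth] + 0.1 * rng.standard_normal(200)
best = match(signal, dictionary)
```

The abstract's point is that pairwise entry similarity of this kind does not by itself predict noise propagation, which is why the Monte Carlo measure was preferred.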
Cohen, Ouri; Huang, Shuning; McMahon, Michael T; Rosen, Matthew S; Farrar, Christian T
2018-05-13
To develop a fast magnetic resonance fingerprinting (MRF) method for quantitative chemical exchange saturation transfer (CEST) imaging. We implemented a CEST-MRF method to quantify the chemical exchange rate and volume fraction of the Nα-amine protons of L-arginine (L-Arg) phantoms and the amide and semi-solid exchangeable protons of in vivo rat brain tissue. L-Arg phantoms were made with different concentrations (25-100 mM) and pH (pH 4-6). The MRF acquisition schedule varied the saturation power randomly for 30 iterations (phantom: 0-6 μT; in vivo: 0-4 μT) with a total acquisition time of ≤2 min. The signal trajectories were pattern-matched to a large dictionary of signal trajectories simulated using the Bloch-McConnell equations for different combinations of exchange rate, exchangeable proton volume fraction, and water T1 and T2 relaxation times. The chemical exchange rates of the Nα-amine protons of L-Arg were significantly (P < 0.0001) correlated with the rates measured with the quantitation of exchange using saturation power method. Similarly, the L-Arg concentrations determined using MRF were significantly (P < 0.0001) correlated with the known concentrations. The pH dependence of the exchange rate was well fit (R² = 0.9186) by a base-catalyzed exchange model. The amide proton exchange rate measured in rat brain cortex (34.8 ± 11.7 Hz) was in good agreement with that measured previously with the water exchange spectroscopy method (28.6 ± 7.4 Hz). The semi-solid proton volume fraction was elevated in white (12.2 ± 1.7%) compared to gray (8.1 ± 1.1%) matter brain regions, in agreement with previous magnetization transfer studies. CEST-MRF provides a method for fast, quantitative CEST imaging. © 2018 International Society for Magnetic Resonance in Medicine.
Computer-Based Algorithmic Determination of Muscle Movement Onset Using M-Mode Ultrasonography
2017-05-01
…contraction images were analyzed visually and with three different classes of algorithms: pixel standard deviation (SD), high-pass filter, and Teager Kaiser… Linear relationships and agreements between computed and visual muscle onset were calculated. The top algorithms were high-pass filtered with a 30 Hz… suggest that computer-automated determination using high-pass filtering is a potential objective alternative to visual determination in human…
Meng, Xiaosong; Rosenkrantz, Andrew B; Mendhiratta, Neil; Fenstermaker, Michael; Huang, Richard; Wysock, James S; Bjurlin, Marc A; Marshall, Susan; Deng, Fang-Ming; Zhou, Ming; Melamed, Jonathan; Huang, William C; Lepor, Herbert; Taneja, Samir S
2016-03-01
Increasing evidence supports the use of magnetic resonance imaging (MRI)-ultrasound fusion-targeted prostate biopsy (MRF-TB) to improve the detection of clinically significant prostate cancer (PCa) while limiting detection of indolent disease compared to systematic 12-core biopsy (SB). To compare MRF-TB and SB results and investigate the relationship between biopsy outcomes and prebiopsy MRI. Retrospective analysis of a prospectively acquired cohort of men presenting for prostate biopsy over a 26-mo period. A total of 601 of 803 consecutively eligible men were included. All men were offered prebiopsy MRI and assigned a maximum MRI suspicion score (mSS). Men with an MRI abnormality underwent combined MRF-TB and SB. Detection rates for all PCa and high-grade PCa (Gleason score [GS] ≥7) were compared using the McNemar test. MRF-TB detected fewer GS 6 PCas (75 vs 121; p<0.001) and more GS ≥7 PCas (158 vs 117; p<0.001) than SB. Higher mSS was associated with higher detection of GS ≥7 PCa (p<0.001) but was not correlated with detection of GS 6 PCa. Prediction of GS ≥7 disease by mSS varied according to biopsy history. Compared to SB, MRF-TB identified more GS ≥7 PCas in men with no prior biopsy (88 vs 72; p=0.012), in men with a prior negative biopsy (28 vs 16; p=0.010), and in men with a prior cancer diagnosis (42 vs 29; p=0.043). MRF-TB detected fewer GS 6 PCas in men with no prior biopsy (32 vs 60; p<0.001) and men with prior cancer (30 vs 46; p=0.034). Limitations include the retrospective design and the potential for selection bias given a referral population. MRF-TB detects more high-grade PCas than SB while limiting detection of GS 6 PCa in men presenting for prostate biopsy. These findings suggest that prebiopsy multiparametric MRI and MRF-TB should be considered for all men undergoing prostate biopsy. 
In addition, mSS in conjunction with biopsy indications may ultimately help in identifying men at low risk of high-grade cancer for whom prostate biopsy may not be warranted. We examined how magnetic resonance imaging (MRI)-targeted prostate biopsy compares to traditional systematic biopsy in detecting prostate cancer among men with suspicion of prostate cancer. We found that MRI-targeted biopsy detected more high-grade cancers than systematic biopsy, and that MRI performed before biopsy can predict the risk of high-grade cancer. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.
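The McNemar test used above to compare the paired MRF-TB and SB detection rates acts only on the discordant pairs (cancers detected by one method but not the other). A minimal exact (binomial) version, with hypothetical counts rather than the study's data:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact McNemar test on the two discordant counts:
    b = positive only by method A, c = positive only by method B.
    Under H0 the smaller count is Binomial(b + c, 0.5); returns the
    two-sided p-value."""
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2.0**n
    return min(1.0, 2.0 * tail)

# hypothetical discordant counts (illustration only, not the study's data)
p = mcnemar_exact(28, 16)
```

Only the discordant counts matter because concordant pairs carry no information about which method detects more.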
Manifold absolute pressure estimation using neural network with hybrid training algorithm
Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli
2017-01-01
In a modern small gasoline engine fuel injection system, the load of the engine is estimated based on the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, the Bayesian Regularization (BR) algorithm, and the Particle Swarm Optimization (PSO) algorithm. Based on the results of 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated under steady-state and transient conditions, showing a closer MAP estimation to the actual value. PMID:29190779
Classification of fMRI resting-state maps using machine learning techniques: A comparative study
NASA Astrophysics Data System (ADS)
Gallos, Ioannis; Siettos, Constantinos
2017-11-01
We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and Diffusion Maps) for classifying brain maps between groups of schizophrenia patients and healthy controls from fMRI scans acquired during a resting-state experiment. After a standard pre-processing pipeline, we applied spatial Independent Component Analysis (ICA) to reduce (a) noise and (b) the spatial-temporal dimensionality of the fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and Diffusion Maps to find an embedded low-dimensional space. Finally, support-vector-machine (SVM) and k-NN algorithms were used to evaluate the performance of the algorithms in classifying between the two groups.
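The PCA-plus-classifier stage of such a pipeline can be sketched compactly. The example below uses SVD-based PCA and a plain k-NN vote on synthetic two-group data; ISOMAP, Diffusion Maps, and SVM are omitted for brevity, and the data are invented, not fMRI features:

```python
import numpy as np

def pca(X, n_components):
    """Project rows of X onto the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def knn_predict(train, labels, test, k=3):
    """k-nearest-neighbour majority vote with Euclidean distance."""
    preds = []
    for t in test:
        d = np.linalg.norm(train - t, axis=1)
        nearest = labels[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# two well-separated synthetic "groups" in a 10-D feature space
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10)) + 4.0
B = rng.standard_normal((20, 10)) - 4.0
X = np.vstack([A, B])
y = np.array([0] * 20 + [1] * 20)

Z = pca(X, 2)                       # embed in 2-D
pred = knn_predict(Z, y, Z, k=3)    # classify in the embedded space
```

In a real comparison the embedding would be fit on training data only and the classifier evaluated on held-out subjects; this sketch classifies the training set itself purely to show the data flow.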
Surface registration technique for close-range mapping applications
NASA Astrophysics Data System (ADS)
Habib, Ayman F.; Cheng, Rita W. T.
2006-08-01
Close-range mapping applications such as cultural heritage restoration, virtual reality modeling for the entertainment industry, and anatomical feature recognition for medical activities require 3D data that is usually acquired by high resolution close-range laser scanners. Since these datasets are typically captured from different viewpoints and/or at different times, accurate registration is a crucial procedure for 3D modeling of mapped objects. Several registration techniques are available that work directly with the raw laser points or with features extracted from the point cloud. Some examples include the commonly known Iterative Closest Point (ICP) algorithm and a recently proposed technique based on matching spin-images. This research focuses on developing a surface matching algorithm based on the Modified Iterated Hough Transform (MIHT) and ICP to register 3D data. The proposed algorithm works directly with the raw 3D laser points and does not assume point-to-point correspondence between two laser scans. The algorithm can simultaneously establish correspondence between two surfaces and estimate the transformation parameters relating them. An experiment with two partially overlapping laser scans of a small object was performed with the proposed algorithm and shows successful registration. A high quality of fit between the two scans is achieved, and an improvement is found when compared to the results obtained using the spin-image technique. The results demonstrate the feasibility of the proposed algorithm for registering 3D laser scanning data in close-range mapping applications, helping with the generation of complete 3D models.
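The ICP baseline mentioned above alternates nearest-neighbour correspondence with a closed-form (Kabsch/SVD) rigid fit. A minimal point-to-point sketch on synthetic data follows; this is the generic algorithm, not the MIHT-based method of the paper, and the point counts and misalignment are invented:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping rows of P onto Q
    (Kabsch / orthogonal Procrustes via SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

def icp(src, dst, iters=20):
    """Plain point-to-point ICP: nearest neighbours, then Kabsch, repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondence
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# destination scan, and a source scan rotated 5 degrees about z + shifted
rng = np.random.default_rng(2)
dst = rng.standard_normal((30, 3))
a = np.deg2rad(5.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
src = dst @ Rz.T + np.array([0.05, -0.03, 0.02])
aligned = icp(src, dst)
```

Note that plain ICP needs a reasonable initial alignment to find correct neighbours; methods like the MIHT approach above address exactly that limitation.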
NASA Astrophysics Data System (ADS)
Sugawara, Jun; Maloney, Chris
2016-07-01
NEXCERA™ cordierite ceramics, which have ultra-low thermal expansion properties, are ideal candidate materials for lightweight satellite mirrors used for geostationary Earth observation and for mirrors used in ground-based astronomical metrology. To manufacture the high precision aspheric shapes required, the deterministic aspherization and figure correction capabilities of Magnetorheological Finishing (MRF) are tested. First, a material compatibility test is performed to determine the best method for achieving a surface roughness as low as 0.8 nm RMS on plano surfaces made of NEXCERA™ ceramics. Secondly, we will use MRF to perform high precision figure correction and to induce a hyperbolic shape into a conventionally polished 100 mm diameter sphere.
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-05-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
Recursive approach to the moment-based phase unwrapping method.
Langley, Jason A; Brice, Robert G; Zhao, Qun
2010-06-01
The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm.
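The core idea of representing a smooth phase map as a product of one-dimensional Legendre polynomials, with derivatives obtained from the Legendre derivative recursion, can be sketched with NumPy's Legendre utilities. The grid, the quadratic "phase" surface, and the chosen degrees below are hypothetical; the paper's singularity-removal details are not reproduced:

```python
import numpy as np
from numpy.polynomial import legendre as L

# A smooth "phase map" sampled on [-1, 1] x [-1, 1] (hypothetical data)
x = np.linspace(-1.0, 1.0, 41)
y = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(x, y, indexing="ij")
phase = 0.5 * X**2 + X * Y

# Least-squares fit of a low-order 2D Legendre expansion
deg = (4, 4)
V = L.legvander2d(X.ravel(), Y.ravel(), deg)          # design matrix
coef, *_ = np.linalg.lstsq(V, phase.ravel(), rcond=None)
C = coef.reshape(deg[0] + 1, deg[1] + 1)
fit = L.legval2d(X, Y, C)

# The x-derivative follows from the Legendre derivative recursion (legder)
Cx = L.legder(C, axis=0)
dfit_dx = L.legval2d(X, Y, Cx)
```

Because the test surface is a degree-2 polynomial, a degree-4 expansion represents it exactly, and the recovered derivative matches x + y.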
A secure semi-field system for the study of Aedes aegypti.
Ritchie, Scott A; Johnson, Petrina H; Freeman, Anthony J; Odell, Robin G; Graham, Neal; Dejong, Paul A; Standfield, Graeme W; Sale, Richard W; O'Neill, Scott L
2011-03-22
New contained semi-field cages are being developed and used to test novel vector control strategies against dengue and malaria vectors. We herein describe a new Quarantine Insectary Level-2 (QIC-2) laboratory and field cages (James Cook University Mosquito Research Facility Semi-Field System; MRF SFS) that are being used to measure the impact of the endosymbiont Wolbachia pipientis on populations of Aedes aegypti in Cairns, Australia. The MRF consists of a single QIC-2 laboratory/insectary that connects through a central corridor to two identical QIC-2 semi-field cages. The semi-field cages are constructed of two layers of 0.25 mm stainless steel wire mesh to prevent escape of mosquitoes and ingress of other insects. The cages are covered by an aluminum security mesh to prevent penetration of the cages by branches and other missiles in the event of a tropical cyclone. Parts of the cage are protected from UV light and rainfall by 90% shade cloth and a vinyl cover. A wooden structure simulating the understory of a Queenslander-style house is also situated at one end of each cage. The remainder of the cage interior is covered with mulch and potted plants to emulate a typical yard. An air conditioning system comprising two external AC units feeds cooled, moistened air into the cages. The air is released from the central ceiling beam through a long cloth tube that disperses the airflow and also prevents mosquitoes from escaping the cage via the AC system. Sensors located inside and outside the cage monitor ambient temperature and relative humidity, with the AC controlled to match ambient conditions. Data loggers set in the cages and outside recorded a <2 °C temperature difference.
Additional security features include air curtains over exit doors, sticky traps to monitor for escaping mosquitoes between layers of the mesh, a lockable vestibule leading from the connecting corridor to the cage and from inside to outside of the insectary, and screened (0.25 mm mesh) drains within the insectary and the cage. A set of standard operating procedures (SOP) has been developed to ensure that security is maintained and to enhance surveillance for escaping mosquitoes on the JCU campus where the MRF is located. A cohort of male and female Aedes aegypti mosquitoes was released in the cage and sampled every 3-4 days to determine daily survival within the cage; log-linear regression from BG-Sentinel trapping collections produced an estimated daily survival of 0.93 for females and 0.78 for males. The MRF SFS allows us to test novel control strategies within a secure, contained environment. The air-conditioning system maintains conditions within the MRF cages comparable to outside ambient conditions. This cage provides a realistic transitional platform between the laboratory and the field in which to test novel control measures on quarantine-level insects.
Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.
2000-01-01
Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is not generally possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking up the parallelization into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose loops carrying most of the work can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Still other toolkits aid the construction of message-passing programs from scratch. None, however, allows piecemeal translation of codes with complex data dependencies (i.e., non-data-parallel programs) into message-passing code. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two in any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data.
Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demands in realistic structured-grid scientific programs, including transposition, nearest-neighbor communication, pipelining, gather/scatter, and redistribution. At the end of the conversion process most intermediate Charon function calls will have been removed, the non-distributed arrays will have been deleted, and virtually the only remaining Charon function calls are the high-level, highly optimized communications. Distribution of the data is under complete control of the programmer, although a wide range of useful distributions is easily available through predefined functions. A crucial aspect of the library is that it does not allocate space for distributed arrays, but accepts programmer-specified memory. This has two major consequences. First, codes parallelized using Charon do not suffer from encapsulation; user data is always directly accessible. This provides high efficiency, and also retains the possibility of using message passing directly for highly irregular communications. Second, non-distributed arrays can be interpreted as (trivial) distributions in the Charon sense, which allows them to be mapped to truly distributed arrays, and vice versa. This is the mechanism that enables incremental parallelization. In this paper we provide a brief introduction to the library and then focus on the actual steps in the parallelization process, using some representative examples from, among others, the NAS Parallel Benchmarks. We show how a complicated two-dimensional pipeline, the prototypical non-data-parallel algorithm, can be constructed with ease. To demonstrate the flexibility of the library, we give examples of the stepwise, efficient parallel implementation of nonlocal boundary conditions common in aircraft simulations, as well as the construction of the sequence of grids required for multigrid.
Recycling of glass: accounting of greenhouse gases and global warming contributions.
Larsen, Anna W; Merrild, Hanna; Christensen, Thomas H
2009-11-01
Greenhouse gas (GHG) emissions related to recycling of glass waste were assessed from a waste management perspective. Focus was on the material recovery facility (MRF), where the initial sorting of glass waste takes place. The MRF delivers products like cullet and whole bottles to other industries. Two possible uses of reprocessed glass waste were considered: (i) remelting of cullet added to glass production; and (ii) re-use of whole bottles. The GHG emission accounting included indirect upstream emissions (provision of energy, fuels and auxiliaries), direct activities at the MRF and bottle-wash facility (combustion of fuels), as well as indirect downstream activities in terms of using the recovered glass waste in other industries and thereby avoiding emissions from conventional production. The GHG accounting was presented as aggregated global warming factors (GWFs) for the direct and the indirect upstream and downstream processes, respectively. The range of GWFs was estimated at 0-70 kg CO2-eq. per tonne of glass waste for the upstream activities and the direct emissions from the waste management system. The GWF for the downstream effect showed significant variation between the two cases: it was estimated at approximately -500 kg CO2-eq. per tonne of glass waste for the remelting technology and -1500 to -600 kg CO2-eq. per tonne of glass waste for bottle re-use. Including the downstream process, large savings of GHG emissions can be attributed to the waste management system. The results showed that, in GHG emission accounting, attention should be drawn to thorough analysis of energy sources, especially electricity, and of the downstream savings caused by material substitution.
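The sign convention of this accounting can be made explicit with a small sketch combining the reported ranges. The values are the abstract's aggregated estimates; the pairing of low/high bounds is an illustrative assumption, not a new calculation:

```python
# Net global warming factors (kg CO2-eq. per tonne of glass waste),
# combining the ranges reported in the abstract. Negative values are
# avoided (saved) emissions.
upstream_and_direct = (0.0, 70.0)        # upstream + direct emissions
downstream_remelt = -500.0               # avoided emissions, cullet remelting
downstream_reuse = (-1500.0, -600.0)     # avoided emissions, bottle re-use

net_remelt = (upstream_and_direct[0] + downstream_remelt,
              upstream_and_direct[1] + downstream_remelt)
net_reuse = (upstream_and_direct[0] + downstream_reuse[0],
             upstream_and_direct[1] + downstream_reuse[1])
# Both cases come out net-negative: recycling saves GHG emissions overall.
```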
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Kaifang; Reinhardt, Joseph M.; Christensen, Gary E.
2013-12-15
Purpose: Four-dimensional computed tomography (4DCT) can be used to make measurements of pulmonary function longitudinally. The sensitivity of such measurements to identify change depends on measurement uncertainty. Previously, intrasubject reproducibility of Jacobian-based measures of lung tissue expansion was studied in two repeat pre-RT 4DCT human acquisitions. Differences in respiratory effort, such as breathing amplitude and frequency, may affect longitudinal function assessment. In this study, the authors present normalization schemes that correct ventilation images for variations in respiratory effort and assess the reproducibility improvement after effort correction. Methods: Repeat 4DCT image data acquired within a short time interval from 24 patients prior to radiation therapy (RT) were used for this analysis. Using a tissue-volume-preserving deformable image registration algorithm, Jacobian ventilation maps from the two scanning sessions were computed and compared in the same coordinate system for reproducibility analysis. In addition to computing the ventilation maps from end expiration to end inspiration, the authors investigated effort normalization strategies using other intermediate inspiration phases based on the principles of equivalent tidal volume (ETV) and equivalent lung volume (ELV). Scatter plots and the mean square error of the repeat ventilation maps and the Jacobian ratio map were generated for four conditions: no effort correction, global normalization, ETV, and ELV. In addition, a gamma pass rate was calculated from a modified gamma index evaluation between two ventilation maps, using acceptance criteria of 2 mm distance-to-agreement and 5% ventilation difference. Results: The pattern of regional pulmonary ventilation changes as lung volume changes. All effort correction strategies improved reproducibility when changes in respiratory effort were greater than 150 cc (p < 0.005 with regard to the gamma pass rate).
Improvement of reproducibility was correlated with respiratory effort difference (R = 0.744 for ELV in the cohort with tidal volume difference greater than 100 cc). In general, for all subjects, global normalization, ETV and ELV significantly improved reproducibility compared to no effort correction (p = 0.009, 0.002, 0.005, respectively). When the tidal volume difference was small (less than 100 cc), none of the three effort correction strategies improved reproducibility significantly (p = 0.52, 0.46, 0.46, respectively). For the cohort (N = 13) with tidal volume difference greater than 100 cc, the average gamma pass rate improved from 57.3% before correction to 66.3% after global normalization, and 76.3% after ELV. ELV was found to be significantly better than global normalization (p = 0.04 for all subjects, and p = 0.003 for the cohort with tidal volume difference greater than 100 cc). Conclusions: All effort correction strategies improve the reproducibility of the authors' pulmonary ventilation measures, and the improvement of reproducibility is highly correlated with the changes in respiratory effort. ELV gives better results as the effort difference increases, followed by ETV, then global normalization. However, given the spatial and temporal heterogeneity in the lung expansion rate, a single scaling factor (e.g., global normalization) appears to be less accurate for correcting the ventilation map when changes in respiratory effort are large.
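As a rough illustration of the gamma criterion used above (2 mm distance-to-agreement, 5% ventilation difference), the following simplified one-dimensional sketch computes a pass rate between two profiles. The paper's modified gamma index and 3D geometry are not reproduced; this only shows the combined distance/value test:

```python
import numpy as np

def gamma_pass_rate(ref, test, spacing_mm, dta_mm=2.0, dv=0.05):
    """Simplified 1D gamma evaluation: a reference point passes if some
    test point is jointly close in position (vs. dta_mm) and in value
    (vs. dv), i.e. min_j sqrt((dx/dta)^2 + (dval/dv)^2) <= 1."""
    x = np.arange(len(ref)) * spacing_mm
    passed = 0
    for i in range(len(ref)):
        dist = (x - x[i]) / dta_mm
        diff = (test - ref[i]) / dv
        gamma = np.sqrt(dist**2 + diff**2).min()
        passed += gamma <= 1.0
    return passed / len(ref)
```

Identical maps pass everywhere, while a uniform 20% offset (far beyond the 5% tolerance) fails almost all points.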
Mapped Landmark Algorithm for Precision Landing
NASA Technical Reports Server (NTRS)
Johnson, Andrew; Ansar, Adnan; Matthies, Larry
2007-01-01
A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
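The normalized-correlation matching at the heart of both sub-algorithms can be sketched (in Python rather than the reported C++) as brute-force normalized cross-correlation of a template against an image; the FFT-domain stage and sub-pixel refinement are omitted:

```python
import numpy as np

def ncc_match(image, template):
    """Brute-force normalized cross-correlation: returns the (row, col)
    offset of the best match of `template` in `image` and its score."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tnorm
            if denom == 0.0:       # flat window: correlation undefined
                continue
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

A template cut directly from the image is recovered at its true offset with a score near 1, which is the property the landmark matcher relies on.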
Spherical Primary Optical Telescope (SPOT) Segment Fabrication
2010-06-07
The mirror segments are made of Pyrex. One segment was figured at GSFC and final-figured at QED using magnetorheological finishing (MRF); two other segments are in process. Segments have been cast, Segment 1 was figured at GSFC and completed at QED using MRF, and a new GSFC figuring facility has been brought online.
NASA Astrophysics Data System (ADS)
Sugawara, Jun; Kamiya, Tomohiro; Mikashima, Bumpei
2017-09-01
Ultra-low thermal expansion NEXCERA™ ceramics are regarded as a promising candidate material for the ultra-lightweight, thermally stable optical mirrors required by future space telescope missions with extremely demanding observation specifications. To realize high-precision NEXCERA mirrors for space telescopes, it is important to develop deterministic aspheric polishing and precise figure-correction methods for NEXCERA. Magnetorheological finishing (MRF) was tested for polishing a NEXCERA aspheric mirror from its best-fit sphere, because MRF's low normal polishing force makes it well suited to precise figure correction of ultra-lightweight mirrors with thin facesheets. Using the best combination of material and MR fluid, MRF performed high-precision figure correction and induced a hyperbolic shape in a conventionally polished 100 mm diameter sphere, achieving sufficiently high figure accuracy and high-quality surface roughness. As a next step toward applying NEXCERA to a large-scale space mirror, a mid-size solid mirror, a 250 mm diameter concave parabola, was machined. It was roughly ground to the parabolic shape, then lapped and polished by a computer-controlled polishing machine using sub-aperture polishing tools, resulting in a smooth surface of 0.6 nm RMS and a figure accuracy of λ/4, sufficient as a pre-MRF surface. Further study of NEXCERA space mirrors should proceed to figure correction by MRF of a lightweight mirror with a thin facesheet.
Jiang, Yun; Ma, Dan; Keenan, Kathryn E; Stupic, Karl F; Gulani, Vikas; Griswold, Mark A
2017-10-01
The purpose of this study was to evaluate the accuracy and repeatability of T1 and T2 estimates from an MR fingerprinting (MRF) method using the ISMRM/NIST MRI system phantom. The phantom contains multiple compartments with standardized T1, T2, and proton density values. Conventional inversion-recovery spin echo and spin echo methods were used to characterize the T1 and T2 values in the phantom. The phantom was scanned using the MRF-FISP method over 34 consecutive days. The mean T1 and T2 values were compared with the values from the spin echo methods. Repeatability was characterized as the coefficient of variation of the measurements over the 34 days. T1 and T2 values from MRF-FISP over 34 days showed a strong linear correlation with the measurements from the spin echo methods (R2 = 0.999 for T1; R2 = 0.996 for T2). The MRF estimates over wide ranges of T1 and T2 values varied by less than 5%, except at the shortest T2 relaxation times, where the method still maintained less than 8% variation. MRF measurements of T1 and T2 are thus highly repeatable over time and across wide ranges of T1 and T2 values. Magn Reson Med 78:1452-1457, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Randomized Dynamic Mode Decomposition
NASA Astrophysics Data System (ADS)
Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan
2017-11-01
The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with "big data". Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
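A minimal sketch of a randomized DMD of the kind described: a Gaussian test matrix and QR-based power iterations build a sketched subspace, the one-step linear map is projected onto it, and modes are lifted back to full dimension. The oversampling and power-iteration defaults below are illustrative assumptions, not the paper's single-pass variant:

```python
import numpy as np

def rdmd(data, rank, oversample=10, n_power=2, seed=0):
    """Randomized DMD sketch. `data` holds snapshots in its columns;
    returns approximate DMD eigenvalues and modes."""
    X, Y = data[:, :-1], data[:, 1:]
    rng = np.random.default_rng(seed)
    k = rank + oversample
    # Randomized range finder for X, with power iterations for accuracy
    Q, _ = np.linalg.qr(X @ rng.standard_normal((X.shape[1], k)))
    for _ in range(n_power):
        Q, _ = np.linalg.qr(X.T @ Q)
        Q, _ = np.linalg.qr(X @ Q)
    # Project the one-step linear map onto the sketched subspace
    Xs, Ys = Q.T @ X, Q.T @ Y
    A_tilde = Ys @ np.linalg.pinv(Xs)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Q @ W              # lift eigenvectors back to full space
    return eigvals, modes
```

On synthetic rank-2 linear dynamics the sketch recovers the true eigenvalues of the underlying map.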
Immunotherapy With Magnetorheologic Fluids
2011-08-01
Anti-tumor effects are weakened by removal of the tumor antigen pool (i.e., surgery) or by use of cytoreductive and immunosuppressive therapies. Iron particles were injected as a magnetorheological fluid (MRF) into an orthotopic primary breast cancer, followed by application of a magnetic field. Subject terms: MRF (magnetorheological fluid), iron particles, immunotherapy (IT), necrotic death, dendritic cells (DCs), cytokines, chemokines.
MRF actuators with reduced no-load losses
NASA Astrophysics Data System (ADS)
Güth, Dirk; Maas, Jürgen
2012-04-01
Magnetorheological fluids (MRF) are smart fluids with the particular characteristic of changing their apparent viscosity significantly under the influence of a magnetic field. This property allows the design of mechanical devices for torque transmission, such as brakes and clutches, with continuously adjustable and smooth torque generation. A challenge standing in the way of commercial use is persistent no-load losses, because completely torque-free separation is inherently impossible as long as the fluid remains in permanent contact with the engaging surfaces. In this paper, the necessity of reducing these no-load losses is first shown by measurements performed with an MRF brake at high rotational speeds of 6000 min^-1. The detrimentally high viscous torque motivates the introduction of a novel concept that allows a controlled movement of the MR fluid from an active shear gap into an inactive shear gap, and thus an almost complete separation of the fluid-engaging surfaces. Simulation and measurement results show that the viscous drag torque can be reduced significantly. Based on this new approach, it is possible to realize MRF actuators for energy-efficient use in drive technology or the power train, avoiding this inherent disadvantage and additionally extending the durability of the entire component.
NASA Astrophysics Data System (ADS)
Ji, Fang; Xu, Min; Wang, Chao; Li, Xiaoyuan; Gao, Wei; Zhang, Yunfei; Wang, Baorui; Tang, Guangping; Yue, Xiaobin
2016-02-01
Cubic Fe3O4 nanoparticles with sharp horns and a size distribution between 100 and 200 nm are used to substitute simultaneously for both the magnetic sensitive medium (carbonyl iron powders, CIPs) and the abrasives (CeO2/diamond) widely employed in conventional magnetorheological finishing fluid. The removal rate of this novel fluid is extremely low compared with that of the conventional one, even though the spot of the former is much bigger. According to the material removal rate model of magnetorheological finishing (MRF), this surprising phenomenon arises from the small size and low saturation magnetization (Ms) of Fe3O4 and the correspondingly weak shear stress under an external magnetic field. Unlike the conventional D-shaped finishing spot, the low Ms also results in a shuttle-like spot, because the magnetic controllability is weak and particles at the fringe of the spot are loose. The surface texture as well as the figure accuracy and PSD1 (power spectral density) of potassium dihydrogen phosphate (KDP) are greatly improved after MRF, clearly demonstrating the feasibility of substituting Fe3O4 for CIP and abrasive in our novel MRF design.
Shorey, A B; Jacobs, S D; Kordonski, W I; Gans, R F
2001-01-01
Recent advances in the study of magnetorheological finishing (MRF) have allowed characterization of the dynamic yield stress of the magnetorheological (MR) fluid, as well as the nanohardness (Hnano) of the carbonyl iron (CI) used in MRF. Knowledge of these properties has allowed a more complete study of the mechanisms of material removal in MRF. Material removal experiments show that the nanohardness of CI is important in MRF with nonaqueous MR fluids containing no nonmagnetic abrasives, but is relatively unimportant in aqueous MR fluids or when nonmagnetic abrasives are present. The hydrated layer created by the chemical effects of water is shown to change the way material is removed by hard CI as the MR fluid transitions from nonaqueous to aqueous. Drag force measurements and atomic force microscope scans demonstrate that, when added to an MR fluid, nonmagnetic abrasives (cerium oxide, aluminum oxide, and diamond) are driven toward the workpiece surface because of the gradient in the magnetic field and hence become responsible for material removal. Removal rates increase with the addition of these polishing abrasives; the relative increase depends on the amount and type of abrasive used.
Chen, Shaoshan; Li, Shengyi; Hu, Hao; Li, Qi; Tie, Guipeng
2014-11-01
A new nonaqueous, abrasive-free magnetorheological finishing (MRF) method is adopted for processing potassium dihydrogen phosphate (KDP) crystal because of its low hardness, high brittleness, temperature sensitivity, and water solubility. This paper investigates the influence of structural characteristics on the surface roughness of MRF-finished KDP crystal. Material removal by dissolution proceeds uniformly, layer by layer, when the polishing parameters are stable. The angle between the direction of the polishing wheel's linear velocity and the initial turning lines affects the surface roughness: if the direction is perpendicular to the initial turning lines, polishing can remove the lines, whereas if it is parallel to them, polishing achieves better surface roughness. The structural characteristics of KDP crystal are related to its internal chemical bonds owing to its anisotropy. During MRF finishing, surface roughness improves if the structural characteristics of the KDP crystal are the same on both sides of the wheel. Processing results on the (001) crystal plane show that the best surface roughness (0.809 nm RMS) is obtained when the cutting and MRF polishing directions are both along the (110) direction.
Software for Generating Strip Maps from SAR Data
NASA Technical Reports Server (NTRS)
Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto
2004-01-01
Jurassicprok is a computer program that generates strip-map digital elevation models and other data products from raw data acquired by an airborne synthetic-aperture radar (SAR) system. This software can process data from a variety of airborne SAR systems but is designed especially for the GeoSAR system, which is a dual-frequency (P- and X-band), single-pass interferometric SAR system for measuring elevation both at the bare ground surface and top of the vegetation canopy. Jurassicprok is a modified version of software developed previously for airborne interferometric SAR applications. The modifications were made to accommodate P-band interferometric processing, remove approximations that are not generally valid, and reduce processor-induced mapping errors to the centimeter level. Major additions and other improvements over the prior software include the following: a) A new, highly efficient multi-stage-modified wave-domain processing algorithm for accurately motion compensating ultra-wideband data; b) Adaptive regridding algorithms based on estimated noise and actual measured topography to reduce noise while maintaining spatial resolution; c) Exact expressions for height determination from interferogram data; d) Fully calibrated volumetric correlation data based on rigorous removal of geometric and signal-to-noise decorrelation terms; e) Strip range-Doppler image output in user-specified Doppler coordinates; f) An improved phase-unwrapping and absolute-phase-determination algorithm; g) A more flexible user interface with many additional processing options; h) Increased interferogram filtering options; and i) Ability to use disk space instead of random-access memory for some processing steps.
Markov random field based automatic image alignment for electron tomography.
Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark
2008-03-01
We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.
Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms
NASA Astrophysics Data System (ADS)
Samanta, A.; Todd, L. A.
A new technique is being developed that creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; the measurements are then processed with a tomographic algorithm to reconstruct the concentrations. This research focused on the process of evaluating and selecting appropriate reconstruction algorithms for use in the field, using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM), and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among them.
A comprehensive evaluation of algorithms for the environmental application of tomography requires, before field implementation, a battery of test concentration data that models reality and tests the limits of the algorithms.
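The MLEM iteration named above is the standard multiplicative update for path-integral data; a minimal sketch follows (a generic textbook formulation with a hypothetical two-pixel, two-path geometry, not the study's implementation):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """MLEM update: x <- x * A^T(y / Ax) / A^T(1), keeping x non-negative."""
    x = np.ones(A.shape[1])                 # strictly positive start
    norm = A.sum(axis=0)                    # per-pixel sensitivity A^T(1)
    for _ in range(n_iter):
        proj = A @ x                        # forward projection of estimate
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.where(norm > 0, norm, 1.0)
    return x

# Toy geometry (illustrative): rows are open-path beams, entries are the
# path length through each pixel; y holds the path-integrated concentrations.
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                              # noise-free measurements
x_hat = mlem(A, y)
```

With consistent, noise-free data the multiplicative updates converge to the true pixel concentrations while never producing negative values, one reason MLEM behaves well on sparse beam geometries.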
Chung, Heeteak; Li, Jonathan; Samant, Sanjiv
2011-04-08
Two-dimensional array dosimeters are commonly used for pretreatment quality assurance procedures, which makes them highly desirable for measuring transit fluences for in vivo dose reconstruction. The purpose of this study was to determine whether in vivo dose reconstruction via transit dosimetry using a 2D array dosimeter is possible. To test the accuracy of transit dose distribution measurements with a 2D array dosimeter, we evaluated them against ionization chamber and radiochromic film (RCF) profiles for various air gaps (distance from the exit side of the solid water slabs to the detector: 0 cm, 30 cm, 40 cm, 50 cm, and 60 cm) and solid water slab thicknesses (10 cm and 20 cm). The backprojection dose reconstruction algorithm was described and evaluated. The agreement between the ionization chamber and RCF profiles for the transit dose distribution measurements ranged from -0.2% to 4.0% (average 1.79%). Using the backprojection dose reconstruction algorithm, we found that, of the six conformal fields, four had a 100% gamma index passing rate (3%/3 mm gamma index criteria), and two had passing rates of 99.4% and 99.6%. Of the five IMRT fields, three had a 100% gamma index passing rate, and two had passing rates of 99.6% and 98.8%. It was found that a 2D array dosimeter can be used for backprojection dose reconstruction for in vivo dosimetry.
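The 3%/3 mm gamma index used above combines a dose-difference and a distance-to-agreement criterion; a brute-force 1D sketch with global normalization (an illustration of the metric, not the authors' code) is:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dose_tol=0.03, dist_tol_mm=3.0):
    """For each reference point, search all evaluated points for the
    minimum combined dose-difference / distance-to-agreement metric;
    a point passes when its gamma value is <= 1."""
    x = np.arange(len(dose_ref)) * spacing_mm
    dd = dose_tol * np.max(dose_ref)        # global dose normalization
    gammas = np.empty(len(dose_ref))
    for i, d_r in enumerate(dose_ref):
        dist2 = ((x - x[i]) / dist_tol_mm) ** 2
        dose2 = ((dose_eval - d_r) / dd) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return 100.0 * np.mean(gammas <= 1.0)

profile = np.array([0.1, 0.5, 1.0, 0.5, 0.1])   # hypothetical dose profile
rate = gamma_pass_rate(profile, profile.copy(), spacing_mm=1.0)
```

Note how the distance term lets a spatially shifted but otherwise correct distribution still pass, which is exactly why gamma is preferred over a pure dose-difference map for transit dosimetry.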
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi
2011-08-01
The mapping algorithm, which determines which core is linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes communication cost and improves fault tolerance, a heuristic mapping algorithm that produces a set of different mappings in reasonable time is first presented. This algorithm allows designers to identify the most promising solutions in a large design space: mappings with low communication cost that in some cases reach the optimum. Another evaluated parameter, the vulnerability index, is then considered as a way of estimating the fault-tolerance property of each produced mapping. Finally, to yield a mapping that trades off these two parameters, a linear function is defined and introduced. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
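The final ranking step can be sketched as a linear combination of the two normalized objectives; this is a minimal illustration assuming simple min-max normalization (the paper's exact linear function and fuzzy rules are not reproduced, and the candidate mappings are hypothetical):

```python
def rank_mappings(mappings, weight=0.5):
    """Rank candidate core-to-router mappings by a linear trade-off
    between normalized communication cost and vulnerability index.
    Lower score is better; `weight` balances the two objectives."""
    costs = [m["comm_cost"] for m in mappings]
    vulns = [m["vulnerability"] for m in mappings]

    def norm(v, lo, hi):
        return 0.0 if hi == lo else (v - lo) / (hi - lo)

    scored = []
    for m in mappings:
        score = (weight * norm(m["comm_cost"], min(costs), max(costs))
                 + (1 - weight) * norm(m["vulnerability"], min(vulns), max(vulns)))
        scored.append((score, m["name"]))
    return [name for _, name in sorted(scored)]

candidates = [
    {"name": "A", "comm_cost": 100, "vulnerability": 0.9},
    {"name": "B", "comm_cost": 120, "vulnerability": 0.1},
    {"name": "C", "comm_cost": 200, "vulnerability": 0.5},
]
order = rank_mappings(candidates)
```

Here mapping B wins: it pays slightly more communication cost than A but is far less vulnerable, which is the trade-off the linear function is designed to expose.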
3D Markov Process for Traffic Flow Prediction in Real-Time.
Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi
2016-01-25
Recently, the correct estimation of traffic flow has begun to be considered an essential component in intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially- and temporally-adjacent traffic states; and (2) the relationship between the adjacent roads on the spatiotemporal domain is represented by cliques in MRF and the clique parameters are obtained by example-based learning. In order to assess the validity of the proposed method, it is tested using data from expressway traffic that are provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further.
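A much-simplified stand-in for the spatiotemporal clique idea: predict a road's next state from its own temporal trend plus the latest states of adjacent roads. The weights and functional form below are illustrative placeholders, not the learned clique parameters:

```python
import numpy as np

def predict_next(own, neighbors, w_own=0.6, w_nbr=0.4):
    """own: recent states (e.g. speeds) of the target road, most recent
    last; neighbors: recent states of spatially adjacent roads.
    Temporal clique -> linear trend; spatial clique -> neighbor mean."""
    temporal = own[-1] + (own[-1] - own[-2])       # extrapolate own trend
    spatial = float(np.mean([n[-1] for n in neighbors]))
    return w_own * temporal + w_nbr * spatial

# Hypothetical speeds (km/h): the road accelerated 50 -> 60 while the
# two adjacent roads currently flow at 70 and 80.
pred = predict_next([50.0, 60.0], [[70.0], [80.0]])
```

In the paper these two influences are encoded as MRF cliques whose parameters come from example-based learning rather than fixed weights.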
Segmentation algorithm on smartphone dual camera: application to plant organs in the wild
NASA Astrophysics Data System (ADS)
Bertrand, Sarah; Cerutti, Guillaume; Tougne, Laure
2018-04-01
To identify the species of a tree, botanists inspect its different organs: the leaves, the bark, the flowers, and the fruits. To develop an algorithm that automatically identifies the species, we need to extract these objects of interest from their complex natural environment. In this article, we focus on the segmentation of flowers and fruits, and we present a new segmentation method based on an active contour algorithm driven by two probability maps. The first map is constructed using the dual camera found on the back of recent smartphones. The second map is produced by a multilayer perceptron (MLP). The combination of these two maps to drive the evolution of the object contour allows an efficient segmentation of the organ from a natural background.
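One simple way to combine two per-pixel probability maps is a weighted geometric mean; the sketch below is our own illustration of such a fusion (the paper's actual combination rule may differ):

```python
import numpy as np

def fuse_probability_maps(p_depth, p_mlp, alpha=0.5):
    """Weighted geometric mean of two per-pixel foreground probabilities,
    renormalized to [0, 1]; the fused map can then drive the external
    force of an active contour. alpha weights the depth-based cue."""
    eps = 1e-9                                 # avoid log(0)
    fused = np.exp(alpha * np.log(p_depth + eps)
                   + (1.0 - alpha) * np.log(p_mlp + eps))
    return fused / fused.max()

# Hypothetical 2x2 maps: one from the dual-camera depth cue, one from
# the MLP color classifier.
p_depth = np.array([[0.9, 0.2], [0.5, 0.1]])
p_mlp = np.array([[0.8, 0.1], [0.6, 0.2]])
fused = fuse_probability_maps(p_depth, p_mlp)
```

A geometric mean is conservative: a pixel only scores high when both cues agree, which suppresses background clutter that fools a single cue.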
Targeting accuracy of single-isocenter intensity-modulated radiosurgery for multiple lesions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calvo-Ortega, J.F., E-mail: jfcdrr@yahoo.es; Pozo, M.; Moragues, S.
To investigate the targeting accuracy of intensity-modulated SRS (IMRS) plans designed to simultaneously treat multiple brain metastases with a single isocenter. A home-made acrylic phantom able to support a film (EBT3) in its coronal plane was used. The phantom was CT scanned and three coplanar small targets (one central and two peripheral) were outlined in the Eclipse system. The peripheral targets were 6 cm from the central one. A reference IMRS plan was designed to simultaneously treat the three targets using a single isocenter located at the center of the central target. After positioning the phantom on the linac using the room lasers, a CBCT scan was acquired and the reference plan was mapped onto it by placing the planned isocenter at the intersection of the landmarks on the film indicating the linac isocenter. The mapped plan was then recalculated and delivered. The film dose distribution was derived using a cloud computing application (www.radiochromic.com) that uses a triple-channel dosimetry algorithm. Comparisons of dose distributions using the gamma index (5%/1 mm) were performed over a 5 × 5 cm² region centered on each target. The 2D shifts required to obtain the best gamma passing rates on the peripheral target regions were compared with those reported for the central target. The experiment was repeated ten times in different sessions. The average 2D shifts required to achieve optimal gamma passing rates (99%, 97%, 99%) were 0.7 mm (SD: 0.3 mm), 0.8 mm (SD: 0.4 mm), and 0.8 mm (SD: 0.3 mm) for the central and the two peripheral targets, respectively. No statistically significant differences (p > 0.05) were found in targeting accuracy between the central and the two peripheral targets. The study revealed a targeting accuracy within 1 mm for off-isocenter targets within 6 cm of the linac isocenter when a single-isocenter IMRS plan is used.
Hybrid Methods for Muon Accelerator Simulations with Ionization Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunz, Josiah; Snopok, Pavel; Berz, Martin
Muon ionization cooling involves passing particles through solid or liquid absorbers. Careful simulations are required to design muon cooling channels. New features have been developed for inclusion in the transfer map code COSY Infinity to follow the distribution of charged particles through matter. To study the passage of muons through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map-Monte-Carlo approach in which transfer map methods describe the deterministic behavior of the particles, and Monte-Carlo methods provide corrections accounting for the stochastic nature of scattering and straggling of particles. The advantage of the new approach is that the vast majority of the dynamics is represented by fast application of the high-order transfer map of an entire element and accumulated stochastic effects. The gains in speed are expected to simplify the optimization of cooling channels, which is usually computationally demanding. Progress on the development of the required algorithms and their application to modeling muon ionization cooling channels is reported.
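The hybrid idea, a deterministic map per element plus a stochastic correction per material pass, can be sketched in a toy 2D phase space. Everything here (a first-order drift map, a Gaussian scattering kick, the parameter values) is a simplified stand-in for COSY Infinity's high-order maps:

```python
import numpy as np

rng = np.random.default_rng(7)

def drift_map(state, length):
    """Deterministic transfer map of a field-free drift in the phase
    space [x, x']: position advances by length * slope."""
    x, xp = state
    return np.array([x + length * xp, xp])

def absorber_kick(state, theta_rms):
    """Monte-Carlo correction: a random multiple-scattering kick to the
    slope, the stochastic piece a transfer map alone cannot carry."""
    x, xp = state
    return np.array([x, xp + rng.normal(0.0, theta_rms)])

def track(state, n_cells, length=1.0, theta_rms=1e-3):
    for _ in range(n_cells):
        state = drift_map(state, length)         # fast deterministic optics
        state = absorber_kick(state, theta_rms)  # stochastic material pass
    return state
```

With the stochastic term switched off the tracking reduces to pure map composition, which is the speed advantage: the expensive physics is applied once per element, not per integration step.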
Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs
NASA Technical Reports Server (NTRS)
Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen
2015-01-01
An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
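Given the matched keypoints, the alignment that seeds ICP is the least-squares rigid transform between the two point sets; a minimal 2D sketch using the standard SVD (Kabsch) solution, with synthetic data (not the AUV sonar data), is:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t, from
    matched 2D point pairs, via the SVD solution commonly used to
    initialize ICP from detected correspondences."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # reject reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

theta = 0.5                                    # synthetic ground truth
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.0])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 1.0]])
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
```

A good closed-form initialization like this is what lets ICP converge even when inertial navigation drift has displaced the swaths by many meters.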
Morrow, T J; Casey, K L
1983-01-01
The responses of 302 neurons in the medial medullary reticular formation (MRF) to a variety of noxious and innocuous somatic stimuli were studied in anesthetized and awake rats. In addition, the effects of analgesic electrical stimulation in the mesencephalon (MES) on unit responses were examined. Tail shock was the most effective stimulus, exciting more than 80% of all units recorded. This stimulus was considered separately during data analysis, since it could not be classified as noxious or innocuous. Noxious somatic stimuli (including pinch, firm pressure, pin prick, and radiant heating of the tail above 45 degrees C) were especially effective in eliciting discharge in a significant fraction of all cells in both awake (123/205) and anesthetized (45/97) animals. Nociceptive neurons could be classified as nociceptive specific (NS) or wide dynamic range (WDR) depending on their responses to all somatic stimuli tested. Nociceptive neurons showed no preferential anatomical distribution. Most neurons, including those responsive to noxious inputs, exhibited large, often bilateral receptive fields which frequently covered the tail, one or more limbs, and extensive areas of the body or head. Electrical stimulation within or adjacent to the mesencephalic periaqueductal gray matter depressed the spontaneous and evoked discharge of MRF neurons in both acute and chronic preparations. This inhibition showed a significant preference (p < 0.001, chi-square statistic) for units that were excited by somatic and especially noxious stimuli. No units were facilitated by MES stimulation. In the awake rat, unit suppression closely followed the time course and level of MES-induced analgesia. Excitability data from the acute experiments suggest that this response inhibition may be the result of a direct action on MRF neurons. Anesthesia severely depressed the spontaneous discharge of MRF neurons as well as the activity evoked by innocuous somatic stimulation.
Our data suggest that analgesia produced by MES stimulation is at least in part due to the depression of MRF unit activity, and support the hypothesis that MRF neurons play a critical role in the mediation of behavioral responses to noxious stimuli.
Robust PRNG based on homogeneously distributed chaotic dynamics
NASA Astrophysics Data System (ADS)
Garasym, Oleg; Lozi, René; Taralova, Ina
2016-02-01
This paper is devoted to the design of a new chaotic Pseudo Random Number Generator (CPRNG). Exploring several topologies of networks of 1-D coupled chaotic maps, we focus first on two-dimensional networks. Two topologically coupled maps are studied: TTL rc non-alternate, and TTL SC alternate. The primary idea of the novel maps is an original coupling of the tent and logistic maps, chosen to achieve excellent random properties and a homogeneous (uniform) density in the phase plane, thus guaranteeing maximum security when used for chaos-based cryptography. To this aim, two new nonlinear CPRNGs, MTTL 2 sc and NTTL 2, are proposed. The maps successfully passed numerous statistical, graphical and numerical tests, owing to the proposed ring coupling and injection mechanisms.
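The tent-logistic coupling idea can be sketched as two cross-coupled 1-D maps with a fold-back "injection" that keeps iterates inside the phase interval. The coupling constant and injection rule below are illustrative choices of ours, not the paper's TTL definitions:

```python
import numpy as np

def tent(x):
    return 1.0 - 2.0 * abs(x)        # tent map on [-1, 1]

def logistic(x):
    return 1.0 - 2.0 * x * x         # logistic map rescaled to [-1, 1]

def coupled_prng(n, x0=0.1, y0=0.2, eps=0.05):
    """Each map receives a small contribution from the other; any
    overflow beyond [-1, 1] is folded back in (injection), which also
    keeps the tent map from collapsing under finite precision."""
    out = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        xn = tent(x) + eps * logistic(y)
        yn = logistic(y) + eps * tent(x)
        x = xn - 2.0 * np.sign(xn) if abs(xn) > 1.0 else xn
        y = yn - 2.0 * np.sign(yn) if abs(yn) > 1.0 else yn
        out[i] = x
    return out

samples = coupled_prng(10000)
```

Testing such a generator then means checking the empirical density for uniformity and running statistical batteries, as the paper does for its MTTL and NTTL variants.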
Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham
2015-01-01
Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and, in turn, allowing larger mapping datasets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and verifying it using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
Apriori Versions Based on MapReduce for Mining Frequent Patterns on Big Data.
Luna, Jose Maria; Padillo, Francisco; Pechenizkiy, Mykola; Ventura, Sebastian
2017-09-27
Pattern mining is one of the most important tasks for extracting meaningful and useful information from raw data. This task aims to extract item-sets that represent any type of homogeneity and regularity in data. Although many efficient algorithms have been developed in this regard, the growing volume of data has caused the performance of existing pattern mining techniques to drop. The goal of this paper is to propose new efficient pattern mining algorithms for big data. To this aim, a series of algorithms based on the MapReduce framework and the Hadoop open-source implementation have been proposed. The proposed algorithms can be divided into three main groups. First, two algorithms [Apriori MapReduce (AprioriMR) and iterative AprioriMR] with no pruning strategy are proposed, which extract any existing item-set in data. Second, two algorithms (space pruning AprioriMR and top AprioriMR) that prune the search space by means of the well-known anti-monotone property are proposed. Finally, a last algorithm (maximal AprioriMR) is proposed for mining condensed representations of frequent patterns. To test the performance of the proposed algorithms, a varied collection of big data datasets has been considered, comprising up to 3·10¹⁸ transactions and more than 5 million distinct single items. The experimental stage includes comparisons against highly efficient and well-known pattern mining algorithms. Results reveal the interest of applying the MapReduce versions when complex problems are considered, and also the unsuitability of this paradigm when dealing with small data.
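The map/reduce structure and the anti-monotone pruning can be illustrated on a single machine: a "map" pass emits candidate k-itemsets per transaction, a Counter plays the reducer, and candidates with an infrequent (k-1)-subset are pruned. This is a sketch of the general idea, not the Hadoop implementations of AprioriMR:

```python
from collections import Counter
from itertools import combinations

def apriori_mr(transactions, min_support, k_max=3):
    """Levelwise Apriori with map-style candidate emission, reduce-style
    counting, and anti-monotone pruning between levels."""
    frequent, prev = {}, None
    for k in range(1, k_max + 1):
        counts = Counter()
        for t in transactions:                        # "map" phase
            for iset in combinations(sorted(set(t)), k):
                if prev is not None and any(
                        sub not in prev
                        for sub in combinations(iset, k - 1)):
                    continue                          # pruned candidate
                counts[iset] += 1                     # "reduce" phase
        prev = {i: c for i, c in counts.items() if c >= min_support}
        if not prev:
            break
        frequent.update(prev)
    return frequent

tx = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
freq = apriori_mr(tx, min_support=3)
```

In the distributed versions the same two phases run across the cluster; the anti-monotone property is what makes the space-pruning variants scale, since every superset of an infrequent itemset is skipped without counting.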
NASA Astrophysics Data System (ADS)
Menapace, Joseph A.
2010-11-01
Over the last eight years we have been developing advanced MRF tools and techniques to manufacture meter-scale optics for use in Megajoule class laser systems. These systems call for optics having unique characteristics that can complicate their fabrication using conventional polishing methods. First, exposure to the high-power nanosecond and sub-nanosecond pulsed laser environment in the infrared (>27 J/cm2 at 1053 nm), visible (>18 J/cm2 at 527 nm), and ultraviolet (>10 J/cm2 at 351 nm) demands ultra-precise control of optical figure and finish to avoid intensity modulation and scatter that can result in damage to the optics chain or system hardware. Second, the optics must be super-polished and virtually free of surface and subsurface flaws that can limit optic lifetime through laser-induced damage initiation and growth at the flaw sites, particularly at 351 nm. Lastly, ultra-precise optics for beam conditioning are required to control laser beam quality. These optics contain customized surface topographical structures that cannot be made using traditional fabrication processes. In this review, we will present the development and implementation of large-aperture MRF tools and techniques specifically designed to meet the demanding optical performance challenges required in large-aperture high-power laser systems. In particular, we will discuss the advances made by using MRF technology to expose and remove surface and subsurface flaws in optics during final polishing to yield optics with improved laser damage resistance, the novel application of MRF deterministic polishing to imprint complex topographical information and wavefront correction patterns onto optical surfaces, and our efforts to advance the technology to manufacture large-aperture damage resistant optics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menapace, J A
2010-10-27
Over the last eight years we have been developing advanced MRF tools and techniques to manufacture meter-scale optics for use in Megajoule class laser systems. These systems call for optics having unique characteristics that can complicate their fabrication using conventional polishing methods. First, exposure to the high-power nanosecond and sub-nanosecond pulsed laser environment in the infrared (>27 J/cm² at 1053 nm), visible (>18 J/cm² at 527 nm), and ultraviolet (>10 J/cm² at 351 nm) demands ultra-precise control of optical figure and finish to avoid intensity modulation and scatter that can result in damage to the optics chain or system hardware. Second, the optics must be super-polished and virtually free of surface and subsurface flaws that can limit optic lifetime through laser-induced damage initiation and growth at the flaw sites, particularly at 351 nm. Lastly, ultra-precise optics for beam conditioning are required to control laser beam quality. These optics contain customized surface topographical structures that cannot be made using traditional fabrication processes. In this review, we will present the development and implementation of large-aperture MRF tools and techniques specifically designed to meet the demanding optical performance challenges required in large-aperture high-power laser systems. In particular, we will discuss the advances made by using MRF technology to expose and remove surface and subsurface flaws in optics during final polishing to yield optics with improved laser damage resistance, the novel application of MRF deterministic polishing to imprint complex topographical information and wavefront correction patterns onto optical surfaces, and our efforts to advance the technology to manufacture large-aperture damage resistant optics.
Magnetic Resonance Fingerprinting of Adult Brain Tumors: Initial Experience
Badve, Chaitra; Yu, Alice; Dastmalchian, Sara; Rogers, Matthew; Ma, Dan; Jiang, Yun; Margevicius, Seunghee; Pahwa, Shivani; Lu, Ziang; Schluchter, Mark; Sunshine, Jeffrey; Griswold, Mark; Sloan, Andrew; Gulani, Vikas
2016-01-01
Background: Magnetic resonance fingerprinting (MRF) allows rapid simultaneous quantification of T1 and T2 relaxation times. This study assesses the utility of MRF in differentiating between common types of adult intra-axial brain tumors. Methods: MRF acquisition was performed in 31 patients with untreated intra-axial brain tumors: 17 glioblastomas, 6 WHO grade II lower-grade gliomas and 8 metastases. T1, T2 of the solid tumor (ST), immediate peritumoral white matter (PW), and contralateral white matter (CW) were summarized within each region of interest. Statistical comparisons on mean, standard deviation, skewness and kurtosis were performed using the univariate Wilcoxon rank sum test across various tumor types. Bonferroni correction was used to correct for multiple comparisons testing. Multivariable logistic regression analysis was performed for discrimination between glioblastomas and metastases, and the area under the receiver operating characteristic curve (AUC) was calculated. Results: Mean T2 values could differentiate solid tumor regions of lower-grade gliomas from metastases (mean±sd: 172±53ms and 105±27ms respectively, p=0.004, significant after Bonferroni correction). Mean T1 of PW surrounding lower-grade gliomas differed from PW around glioblastomas (mean±sd: 1066±218ms and 1578±331ms respectively, p=0.004, significant after Bonferroni correction). Logistic regression analysis revealed that mean T2 of ST offered best separation between glioblastomas and metastases with AUC of 0.86 (95% CI 0.69–1.00, p<0.0001). Conclusion: MRF allows rapid simultaneous T1, T2 measurement in brain tumors and surrounding tissues. MRF based relaxometry can identify quantitative differences between solid-tumor regions of lower grade gliomas and metastases and between peritumoral regions of glioblastomas and lower grade gliomas. PMID:28034994
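The AUC reported above can be computed directly from the two groups of T2 values via the Mann-Whitney statistic, without fitting an ROC curve. A minimal sketch, using illustrative T2 values that loosely echo the reported group means (NOT the study's measurements):

```python
import numpy as np

def auc_mann_whitney(pos, neg):
    """AUC as the Mann-Whitney probability that a value drawn from `pos`
    exceeds one drawn from `neg` (ties count one half)."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    wins = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Illustrative solid-tumor T2 values (ms): gliomas high, metastases low.
t2_glioma = [150, 180, 160, 210, 170, 165]
t2_mets = [100, 95, 120, 110, 90, 130, 105, 98]
auc = auc_mann_whitney(t2_glioma, t2_mets)
```

Because these toy groups do not overlap at all, the sketch yields an AUC of 1.0; the study's real distributions overlap, which is why its reported AUC is 0.86.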
Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies
NASA Astrophysics Data System (ADS)
Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu
2015-09-01
Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves resolving inverse problems which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter enables to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database composed of remotely sensed observations collected from the Mars planet by the Mars Express orbiter demonstrate the effectiveness of the proposed model.
Remote Sensing Image Change Detection Based on NSCT-HMT Model and Its Application.
Chen, Pengyun; Zhang, Yichen; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola
2017-06-06
Traditional image change detection based on a non-subsampled contourlet transform always ignores the neighborhood information's relationship to the non-subsampled contourlet coefficients, and the detection results are susceptible to noise interference. To address these disadvantages, we propose a denoising method based on the non-subsampled contourlet transform domain that uses the Hidden Markov Tree model (NSCT-HMT) for change detection of remote sensing images. First, the ENVI software is used to calibrate the original remote sensing images. After that, the mean-ratio operation is adopted to obtain the difference image that will be denoised by the NSCT-HMT model. Then, using the Fuzzy Local Information C-means (FLICM) algorithm, the difference image is divided into the change area and unchanged area. The proposed algorithm is applied to a real remote sensing data set. The application results show that the proposed algorithm can effectively suppress clutter noise, and retain more detailed information from the original images. The proposed algorithm has higher detection accuracy than the Markov Random Field-Fuzzy C-means (MRF-FCM), the non-subsampled contourlet transform-Fuzzy C-means clustering (NSCT-FCM), the pointwise approach and graph theory (PA-GT), and the Principal Component Analysis-Nonlocal Means (PCA-NLM) denoising algorithm. Finally, the five algorithms are used to detect the southern boundary of the Gurbantunggut Desert in Xinjiang Uygur Autonomous Region of China, and the results show that the proposed algorithm has the best effect on real remote sensing image change detection.
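The mean-ratio difference image mentioned in the pipeline compares local window means of the two acquisitions; a minimal sketch (one common formulation of the operator, with a hypothetical 5x5 image pair) is:

```python
import numpy as np

def mean_ratio(img1, img2, win=3):
    """Mean-ratio operator: 1 - min(m1, m2) / max(m1, m2) of local window
    means, so unchanged regions score ~0 and changed regions approach 1."""
    pad = win // 2

    def local_mean(a):
        ap = np.pad(a.astype(float), pad, mode="edge")
        out = np.empty(a.shape, dtype=float)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = ap[i:i + win, j:j + win].mean()
        return out

    m1, m2 = local_mean(img1), local_mean(img2)
    lo, hi = np.minimum(m1, m2), np.maximum(m1, m2)
    return 1.0 - np.where(hi > 0, lo / hi, 1.0)

before = np.ones((5, 5))
after = before.copy()
after[2, 2] = 10.0                 # a single changed pixel
diff = mean_ratio(before, after)
```

The ratio form makes the operator robust to multiplicative speckle, which is why it is preferred over plain subtraction for SAR-like remote sensing data; in the paper the resulting difference image is then denoised by NSCT-HMT before FLICM clustering.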
Kang, Hai-Yong; Schoenung, Julie M
2006-03-01
The objectives of this study are to identify the various techniques used for treating electronic waste (e-waste) at material recovery facilities (MRFs) in the state of California and to investigate the costs and revenue drivers for these techniques. The economics of a representative e-waste MRF are evaluated by using technical cost modeling (TCM). MRFs are a critical element in the infrastructure being developed within the e-waste recycling industry. At an MRF, collected e-waste can become marketable output products including resalable systems/components and recyclable materials such as plastics, metals, and glass. TCM has two main constituents, inputs and outputs. Inputs are process-related and economic variables, which are directly specified in each model. Inputs can be divided into two parts: inputs for cost estimation and for revenue estimation. Outputs are the results of modeling and consist of costs and revenues, distributed by unit operation, cost element, and revenue source. The results of the present analysis indicate that the largest cost driver for the operation of the defined California e-waste MRF is the materials cost (37% of total cost), which includes the cost to outsource the recycling of the cathode ray tubes (CRTs) ($0.33/kg); the second largest cost driver is labor cost (28% of total cost without accounting for overhead). The other cost drivers are transportation, building, and equipment costs. The most costly unit operation is cathode ray tube glass recycling, and the next are sorting, collecting, and dismantling. The largest revenue source is the fee charged to the customer; metal recovery is the second largest revenue source.
NASA Technical Reports Server (NTRS)
Kweon, In So; Hebert, Martial; Kanade, Takeo
1989-01-01
A three-dimensional perception system for building a geometrical description of rugged terrain environments from range image data is presented with reference to the exploration of the rugged terrain of Mars. An intermediate representation consisting of an elevation map that includes an explicit representation of uncertainty and labeling of the occluded regions is proposed. The locus method used to convert range image to an elevation map is introduced, along with an uncertainty model based on this algorithm. Both the elevation map and the locus method are the basis of a terrain matching algorithm which does not assume any correspondences between range images. The two-stage algorithm consists of a feature-based matching algorithm to compute an initial transform and an iconic terrain matching algorithm to merge multiple range images into a uniform representation. Terrain modeling results on real range images of rugged terrain are presented. The algorithms considered are a fundamental part of the perception system for the Ambler, a legged locomotor.
1989-03-01
Automated Photointerpretation Testbed (fragmentary OCR of a scanned report): Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure, along with object detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafrir, S.N.; Lambropoulos, J.C.; Jacobs, S.D.
2007-03-23
Surface features of tungsten carbide composites processed by bound abrasive deterministic microgrinding and magnetorheological finishing (MRF) were studied for five WC-Ni composites, including one binderless material. All the materials studied were nonmagnetic with different microstructures and mechanical properties. White-light interferometry, scanning electron microscopy, and atomic force microscopy were used to characterize the surfaces after various grinding steps, surface etching, and MRF spot-taking.
Spherical primary optical telescope (SPOT) segments
NASA Astrophysics Data System (ADS)
Hall, Christopher; Hagopian, John; DeMarco, Michael
2012-09-01
The spherical primary optical telescope (SPOT) project is an internal research and development program at NASA Goddard Space Flight Center. The goals of the program are to develop a robust and cost-effective way to manufacture spherical mirror segments and demonstrate a new wavefront sensing approach for continuous phasing across the segmented primary. This paper focuses on the fabrication of the mirror segments. Significant cost savings were achieved through the design, since it allowed the mirror segments to be cast rather than machined from a glass blank. Casting was followed by conventional figuring at Goddard Space Flight Center. After polishing, the mirror segments were mounted to their composite assemblies. QED Technologies used magnetorheological finishing (MRF®) for the final figuring. The MRF process polished the mirrors while they were mounted to their composite assemblies. Each assembly included several magnetic invar plugs that extended to within an inch of the face of the mirror. As part of this project, the interaction between the MRF magnetic field and invar plugs was evaluated. By properly selecting the polishing conditions, MRF was able to significantly improve the figure of the mounted segments. The final MRF figuring demonstrates that mirrors, in the mounted configuration, can be polished and tested to specification. There are significant process capability advantages due to polishing and testing the optics in their final, end-use assembled state.
Normal force and drag force in magnetorheological finishing
NASA Astrophysics Data System (ADS)
Miao, Chunlin; Shafrir, Shai N.; Lambropoulos, John C.; Jacobs, Stephen D.
2009-08-01
The material removal in magnetorheological finishing (MRF) is known to be controlled by shear stress, τ, which equals drag force, Fd, divided by spot area, As. However, it is unclear how the normal force, Fn, affects the material removal in MRF and how the measured ratio of drag force to normal force Fd/Fn, equivalent to a coefficient of friction, is related to material removal. This work studies, for the first time for MRF, the normal force and the measured ratio Fd/Fn as a function of material mechanical properties. Experimental data were obtained by taking spots on a variety of materials including optical glasses and hard ceramics with a spot-taking machine (STM). Drag force and normal force were measured with a dual load cell. Drag force decreases linearly with increasing material hardness. In contrast, normal force increases with hardness for glasses, saturating at high hardness values for ceramics. Volumetric removal rate decreases with normal force across all materials. The measured ratio Fd/Fn shows a strong negative linear correlation with material hardness. Hard materials exhibit a low "coefficient of friction". The volumetric removal rate increases with the measured ratio Fd/Fn, which is also correlated with shear stress, indicating that the measured ratio Fd/Fn is a useful measure of material removal in MRF.
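The two quantities central to this abstract, the shear stress τ = Fd/As and the friction-like ratio Fd/Fn, can be sketched as a short calculation. The numeric values below are hypothetical illustrations, not measurements from the study.

```python
def shear_stress(drag_force_n, spot_area_mm2):
    """Shear stress tau = Fd / As, in N/mm^2 (i.e., MPa)."""
    return drag_force_n / spot_area_mm2

def friction_ratio(drag_force_n, normal_force_n):
    """Measured ratio Fd / Fn, analogous to a coefficient of friction."""
    return drag_force_n / normal_force_n

# Hypothetical spot measurements: Fd = 2.0 N over a 40 mm^2 spot, Fn = 10.0 N.
tau = shear_stress(2.0, 40.0)    # 0.05 MPa
mu = friction_ratio(2.0, 10.0)   # 0.2
```

Per the abstract, a harder material would show a smaller `mu` at comparable spot conditions.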
Novel MRF fluid for ultra-low roughness optical surfaces
NASA Astrophysics Data System (ADS)
Dumas, Paul; McFee, Charles
2014-08-01
Over the past few years there have been an increasing number of applications calling for ultra-low roughness (ULR) surfaces. A critical demand has been driven by EUV optics, EUV photomasks, X-Ray, and high energy laser applications. Achieving ULR results on complex shapes like aspheres and X-Ray mirrors is extremely challenging with conventional polishing techniques. To achieve both tight figure and roughness specifications, substrates typically undergo iterative global and local polishing processes. Typically the local polishing process corrects the figure or flatness but cannot achieve the required surface roughness, whereas the global polishing process produces the required roughness but degrades the figure. Magnetorheological Finishing (MRF) is a local polishing technique based on a magnetically-sensitive fluid that removes material through a shearing mechanism with minimal normal load, thus removing sub-surface damage. The lowest surface roughness produced by current MRF is close to 3 Å RMS. A new ULR MR fluid uses a nano-based cerium as the abrasive in a proprietary aqueous solution, the combination of which reliably produces under 1.5 Å RMS roughness on fused silica as measured by atomic force microscopy. In addition to the highly convergent figure correction achieved with MRF, we show results of our novel MR fluid achieving <1.5 Å RMS roughness on fused silica and other materials.
Ghosh, Adarsh; Singh, Tulika; Singla, Veenu; Bagga, Rashmi; Khandelwal, Niranjan
2017-12-01
Apparent diffusion coefficient (ADC) maps are usually generated by built-in software provided by the MRI scanner vendors; however, various open-source postprocessing software packages are available for image manipulation and parametric map generation. The purpose of this study is to establish the reproducibility of absolute ADC values obtained using different postprocessing software programs. DW images with three b values were obtained with a 1.5-T MRI scanner, and the trace images were obtained. ADC maps were automatically generated by the in-line software provided by the vendor during image generation and were also separately generated on postprocessing software. These ADC maps were compared on the basis of ROIs using paired t test, Bland-Altman plot, mountain plot, and Passing-Bablok regression plot. There was a statistically significant difference in the mean ADC values obtained from the different postprocessing software programs when the same baseline trace DW images were used for the ADC map generation. For using ADC values as a quantitative cutoff for histologic characterization of tissues, standardization of the postprocessing algorithm is essential across processing software packages, especially in view of the implementation of vendor-neutral archiving.
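As background to the ADC maps compared in this abstract, the standard mono-exponential diffusion model gives a closed-form two-point ADC estimate. This is a generic textbook sketch, not the specific algorithm of any vendor or package studied here; the signal values are synthetic.

```python
import math

def adc_two_point(s0, sb, b):
    """ADC from the mono-exponential model S_b = S_0 * exp(-b * ADC).

    s0: signal at b = 0; sb: signal at b-value b (s/mm^2).
    Returns ADC in mm^2/s.
    """
    return math.log(s0 / sb) / b

# Synthetic voxel: true ADC = 1.0e-3 mm^2/s, b = 1000 s/mm^2.
s0 = 1000.0
sb = s0 * math.exp(-1000 * 1.0e-3)
adc = adc_two_point(s0, sb, 1000)   # recovers ~1.0e-3 mm^2/s
```

Differences between packages arise not from this formula but from fitting strategy (two-point vs. multi-b least squares), noise handling, and rounding, which is what the study's comparison probes.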
NASA Technical Reports Server (NTRS)
Clark, Roger N.; Swayze, Gregg A.; Gallagher, Andrea
1992-01-01
The sedimentary sections exposed in the Canyonlands and Arches National Parks region of Utah (generally referred to as 'Canyonlands') consist of sandstones, shales, limestones, and conglomerates. Reflectance spectra of weathered surfaces of rocks from these areas show two components: (1) variations in spectrally detectable mineralogy, and (2) variations in the relative ratios of the absorption bands between minerals. Both types of information can be used together to map each major lithology; the Clark spectral-feature mapping algorithm is applied for this purpose.
Autonomous Navigation by a Mobile Robot
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Aghazarian, Hrand
2005-01-01
ROAMAN is a computer program for autonomous navigation of a mobile robot on a long (as much as hundreds of meters) traversal of terrain. Developed for use aboard a robotic vehicle (rover) exploring the surface of a remote planet, ROAMAN could also be adapted to similar use on terrestrial mobile robots. ROAMAN implements a combination of algorithms for (1) long-range path planning based on images acquired by mast-mounted, wide-baseline stereoscopic cameras, and (2) local path planning based on images acquired by body-mounted, narrow-baseline stereoscopic cameras. The long-range path-planning algorithm autonomously generates a series of waypoints that are passed to the local path-planning algorithm, which plans obstacle-avoiding legs between the waypoints. Both the long- and short-range algorithms use an occupancy-grid representation in computations to detect obstacles and plan paths. Maps that are maintained by the long- and short-range portions of the software are not shared because substantial localization errors can accumulate during any long traverse. ROAMAN is not guaranteed to generate an optimal shortest path, but does maintain the safety of the rover.
Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio
2016-02-04
Vehicle positioning is a key factor for numerous information and assistance applications that are included in vehicles and for which satellite positioning is mainly used. However, this positioning process can result in errors and lead to measurement uncertainties. These errors come mainly from two sources: errors and simplifications of digital maps and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle's location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need and attempt to amend these errors to offer the user a suitable positioning. In this research, an algorithm is developed that attempts to solve the errors in positioning when the Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings where errors and signal reception losses of the GPS receiver are frequent.
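The core map-matching task the abstract describes, assigning a noisy fix to a link on the digital map, can be sketched in its most naive geometric form: snap each position to the nearest road segment. This is only a baseline illustration with made-up coordinates; the paper's algorithm additionally handles topology and GNSS outages.

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to segment a-b, clamped to the endpoints."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length2 = dx * dx + dy * dy
    # Projection parameter along the segment, clamped to [0, 1].
    t = 0.0 if length2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length2))
    cx, cy = ax + t * dx, ay + t * dy    # closest point on the segment
    return math.hypot(px - cx, py - cy)

def match_to_link(position, links):
    """Assign a (possibly noisy) GNSS fix to the geometrically nearest map link."""
    return min(links, key=lambda link: point_segment_dist(position, *link))
```

For example, with two parallel links `((0, 0), (10, 0))` and `((0, 5), (10, 5))`, the fix `(3, 1)` matches the first.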
NASA Astrophysics Data System (ADS)
He, Yaoyao; Yang, Shanlin; Xu, Qifa
2013-07-01
In order to solve the model of short-term cascaded hydroelectric system scheduling, a novel chaotic particle swarm optimization (CPSO) algorithm using an improved logistic map is introduced, which uses the water discharge as the decision variable combined with a death penalty function. According to the principle of maximum power generation, the proposed approach exploits the ergodicity, symmetry and stochastic properties of the improved logistic chaotic map to enhance the performance of the particle swarm optimization (PSO) algorithm. The new hybrid method has been examined and tested on two test functions and a practical cascaded hydroelectric system. The experimental results show the effectiveness and robustness of the proposed CPSO algorithm in comparison with other traditional algorithms.
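The abstract does not specify the exact "improved" logistic map, but the standard logistic map at r = 4, whose ergodicity on (0, 1) chaotic PSO variants exploit to diversify particle positions, can be sketched as:

```python
def logistic_map(x0, n, r=4.0):
    """Generate n iterates of the logistic map x_{k+1} = r * x_k * (1 - x_k).

    At r = 4.0 the map is chaotic and ergodic on (0, 1); CPSO-style
    algorithms use such sequences in place of uniform random numbers
    when initializing or perturbing particles.
    """
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

seq = logistic_map(0.3, 5)   # e.g., second iterate is 4 * 0.3 * 0.7 = 0.84
```

The seed `x0` must avoid the fixed points (e.g., 0, 0.5, 0.75) for the sequence to wander chaotically.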
A model-based 3D phase unwrapping algorithm using Gegenbauer polynomials.
Langley, Jason; Zhao, Qun
2009-09-07
The application of a two-dimensional (2D) phase unwrapping algorithm to a three-dimensional (3D) phase map may result in an unwrapped phase map that is discontinuous in the direction normal to the unwrapped plane. This work investigates the problem of phase unwrapping for 3D phase maps. The phase map is modeled as a product of three one-dimensional Gegenbauer polynomials. The orthogonality of Gegenbauer polynomials and their derivatives on the interval [-1, 1] is exploited to calculate the expansion coefficients. The algorithm was implemented using two well-known Gegenbauer polynomials: Chebyshev polynomials of the first kind and Legendre polynomials. Both implementations of the phase unwrapping algorithm were tested on 3D datasets acquired from a magnetic resonance imaging (MRI) scanner. The first dataset was acquired from a homogeneous spherical phantom. The second dataset was acquired using the same spherical phantom but magnetic field inhomogeneities were introduced by an external coil placed adjacent to the phantom, which provided an additional burden to the phase unwrapping algorithm. Then Gaussian noise was added to generate a low signal-to-noise ratio dataset. The third dataset was acquired from the brain of a human volunteer. The results showed that the Chebyshev implementation and the Legendre implementation of the phase unwrapping algorithm give similar results on the 3D datasets. Both implementations compare well to PRELUDE 3D, a 3D phase-unwrapping software package widely used in functional MRI.
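The 1D building block of the model above, expanding a smooth profile in Chebyshev polynomials (the Gegenbauer special case the paper implements) and using orthogonality to get the coefficients, can be sketched via Chebyshev-Gauss quadrature. This illustrates the expansion machinery only, not the full 3D unwrapping algorithm; the test profile is made up.

```python
import math

def cheb_coeffs(f, deg, n_nodes=64):
    """Chebyshev coefficients of f on [-1, 1] via discrete orthogonality.

    Uses Chebyshev-Gauss nodes x_k = cos(theta_k), theta_k = pi*(k+0.5)/N,
    where the discrete sums reproduce the orthogonality integrals exactly
    for polynomials of low enough degree.
    """
    thetas = [math.pi * (k + 0.5) / n_nodes for k in range(n_nodes)]
    fx = [f(math.cos(t)) for t in thetas]
    coeffs = []
    for n in range(deg + 1):
        c = 2.0 / n_nodes * sum(fx[k] * math.cos(n * thetas[k]) for k in range(n_nodes))
        coeffs.append(c / 2.0 if n == 0 else c)   # T_0 term gets half weight
    return coeffs

def cheb_eval(coeffs, x):
    """Evaluate sum_n c_n T_n(x), using T_n(x) = cos(n * arccos(x))."""
    t = math.acos(max(-1.0, min(1.0, x)))
    return sum(c * math.cos(n * t) for n, c in enumerate(coeffs))

# Smooth synthetic "phase profile" f(x) = 3x^2 - 1 = 0.5*T_0(x) + 1.5*T_2(x).
coeffs = cheb_coeffs(lambda x: 3 * x * x - 1, deg=4)
```

The recovered coefficients match the exact expansion (c0 = 0.5, c2 = 1.5, odd terms zero), which is the property the paper exploits per dimension before forming the 3D product model.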
Fringe pattern demodulation with a two-frame digital phase-locked loop algorithm.
Gdeisat, Munther A; Burton, David R; Lalor, Michael J
2002-09-10
A novel technique called a two-frame digital phase-locked loop for fringe pattern demodulation is presented. In this scheme, two fringe patterns with different spatial carrier frequencies are grabbed for an object. A digital phase-locked loop algorithm tracks and demodulates the phase difference between both fringe patterns by employing the wrapped phase components of one of the fringe patterns as a reference to demodulate the second fringe pattern. The desired phase information can be extracted from the demodulated phase difference. We tested the algorithm experimentally using real fringe patterns. The technique is shown to be suitable for noncontact measurement of objects with rapid surface variations, and it outperforms the Fourier fringe analysis technique in this aspect. Phase maps produced with this algorithm are noisy in comparison with phase maps generated with the Fourier fringe analysis technique.
Using the global positioning system to map disturbance patterns of forest harvesting machinery
T.P. McDonald; E.A. Carter; S.E. Taylor
2002-01-01
Abstract: A method was presented to transform sampled machine positional data obtained from a global positioning system (GPS) receiver into a two-dimensional raster map of the number of passes as a function of location. The effects of three sources of error in the transformation process were investigated: path sampling rate (receiver sampling frequency);...
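The core transformation the abstract describes, turning sampled GPS positions into a raster of traffic per cell, can be sketched minimally. Counting raw fixes per cell only approximates "number of passes" (a real implementation would segment the trajectory so several fixes from one pass count once); grid size and coordinates below are arbitrary.

```python
def pass_count_grid(points, cell=1.0, nx=10, ny=10):
    """Rasterize sampled (x, y) machine positions into a per-cell sample count.

    points: iterable of (x, y) positions in map units.
    cell:   raster cell size in the same units.
    Returns grid[row][col] of counts; out-of-extent fixes are dropped.
    """
    grid = [[0] * nx for _ in range(ny)]
    for x, y in points:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < nx and 0 <= j < ny:
            grid[j][i] += 1
    return grid
```

A higher receiver sampling frequency (the first error source studied) makes this per-cell count a denser, more faithful proxy for the machine's disturbance pattern.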
Quantum cluster variational method and message passing algorithms revisited
NASA Astrophysics Data System (ADS)
Domínguez, E.; Mulet, Roberto
2018-02-01
We present a general framework to study quantum disordered systems in the context of Kikuchi's cluster variational method (CVM). The method relies on the solution of message-passing-like equations for single instances or on the iterative solution of complex population dynamics algorithms for an average-case scenario. We first show how a standard application of Kikuchi's CVM can be easily translated to message passing equations for specific instances of the disordered system. We then present an "ad hoc" extension of these equations to a population dynamics algorithm representing an average-case scenario. At the Bethe level, these equations are equivalent to the dynamic population equations that can be derived from a proper cavity ansatz. However, at the plaquette approximation, the interpretation is more subtle and we discuss it, also taking into account previous results in classical disordered models. Moreover, we develop a formalism to properly deal with the average-case scenario using a replica-symmetric ansatz within this CVM for quantum disordered systems. Finally, we present and discuss numerical solutions of the different approximations for the quantum transverse Ising model and the quantum random field Ising model in two-dimensional lattices.
Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan
2014-01-01
Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images by the magnetic resonance (MR) multimodal features and obtain the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance and great potential for practical real-time clinical use.
Performance Analysis and Portability of the PLUM Load Balancing System
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1998-01-01
The ability to dynamically adapt an unstructured mesh is a powerful tool for solving computational problems with evolving physical features; however, an efficient parallel implementation is rather difficult. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive numerical computations in a message-passing environment. PLUM requires that all data be globally redistributed after each mesh adaption to achieve load balance. We present an algorithm for minimizing this remapping overhead by guaranteeing an optimal processor reassignment. We also show that the data redistribution cost can be significantly reduced by applying our heuristic processor reassignment algorithm to the default mapping of the parallel partitioner. Portability is examined by comparing performance on an SP2, an Origin2000, and a T3E. Results show that PLUM can be successfully ported to different platforms without any code modifications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley
2015-01-15
Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm ("Fast" and "EMPIRE10"). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d_E) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d_E, dose (D), dose standard deviation (SD_dose) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d_E across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d_E (0.42 Gy/mm), D (0.05 Gy/Gy), SD_dose (1.4 Gy/Gy), and the algorithm used (≤1 Gy).
Conclusions: An average error of <4 Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration, with the majority of points yielding dose-mapping error <2 Gy (approximately 3% of the total prescribed dose). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, resulting in the smallest errors in mapped dose. Dose differences following registration increased significantly with increasing spatial registration errors, dose, and dose gradient (i.e., SD_dose). This model provides a measurement of the uncertainty in the radiation dose when points are mapped between serial CT scans through deformable registration.
Growing a hypercubical output space in a self-organizing feature map.
Bauer, H U; Villmann, T
1997-01-01
Neural maps project data from an input space onto a neuron position in a (often lower dimensional) output space grid in a neighborhood preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve an optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. We here present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by an adaptation of the output space grid during learning. The GSOM restricts the output space structure to a general hypercubical shape, with the overall dimensionality of the grid and its extensions along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply our GSOM-algorithm to three examples, two of which involve real world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM-algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.
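The SOFM process that the GSOM extends can be sketched in its simplest form: a 1-D Kohonen map on scalar data, where each sample pulls its best-matching unit and, through a Gaussian neighborhood on the output grid, that unit's neighbors. This is the fixed-topology baseline only; the paper's contribution, growing the grid's dimensionality and extent during learning, is not shown. All hyperparameter values are illustrative.

```python
import math
import random

def som_1d(data, n_units=5, epochs=50, lr=0.5, sigma=1.0, seed=0):
    """Train a tiny 1-D Kohonen self-organizing feature map on scalar data.

    Each sample x finds its best-matching unit (BMU, nearest weight),
    then every unit i moves toward x with strength lr * h(i, BMU),
    where h is a Gaussian neighborhood on the output grid.
    """
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_units)]   # random initial weights in [0, 1]
    for _ in range(epochs):
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                w[i] += lr * h * (x - w[i])      # neighborhood-weighted update
    return w

weights = som_1d([0.1, 0.3, 0.5, 0.7, 0.9])
```

After training, neighboring units respond to neighboring inputs, which is the neighborhood-preservation property the paper quantifies.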
CFD study of mixing miscible liquid with high viscosity difference in a stirred tank
NASA Astrophysics Data System (ADS)
Madhania, S.; Cahyani, A. B.; Nurtono, T.; Muharam, Y.; Winardi, S.; Purwanto, W. W.
2018-03-01
The mixing of miscible liquids with a high viscosity difference plays a crucial role even though the liquids are mutually soluble. This paper describes the mixing behaviour of the water-molasses system in a conical-bottomed cylindrical stirred tank (D = 0.28 m and H = 0.395 m) equipped with a side-entry marine propeller (d = 0.036 m) under the turbulent regime using a three-dimensional, transient CFD simulation. The objective of this work is to compare the solution strategies applied in the computational analysis to capture the detailed phenomena of mixing two miscible liquids with a high viscosity difference. Four solution strategies were used: the RANS Standard k-ε (SKE) turbulence model coupled with the Multiple Reference Frame (MRF) method for impeller motion, the RANS Realizable k-ε (RKE) model combined with the MRF, the Large Eddy Simulation (LES) coupled with the Sliding Mesh (SM) method, and the LES-MRF combination. The transient calculations were conducted with Ansys Fluent version 17.1. The mixing behaviour and the propeller characteristics are compared and discussed in this work. The simulation results show differences in the flow pattern and the molasses distribution profile for each solution strategy. The variation of the flow pattern across the solution strategies shows an instability of the mixing process in the stirred tank. The LES-SM strategy shows a more realistic flow direction than the other solution strategies.
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.; Stevens, K.
1984-01-01
Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.
Profit through predictability: The MRF difference at optimax
NASA Astrophysics Data System (ADS)
Light, Brandon
2007-05-01
In the manufacturing business, there is one product that matters: money. Whether making shoelaces or aircraft carriers, a business that doesn't also make a profit doesn't stay around long. Being able to predict operational expenses is critical to determining a product's sale price. Priced too high, a product won't sell; priced too low, profit goes away. In the business of precision optics manufacturing, predictability has often been impossible or has come with large error bars. Manufacturing unpredictability made setting price a challenge. What if predictability could be improved by changing the polishing process? Would a predictable, deterministic process lead to profit? Optimax Systems has experienced exactly that. By incorporating Magnetorheological Finishing (MRF) into its finishing process, Optimax saw parts categorized financially as "high risk" become a routine product of higher quality, delivered on time and within budget. Using actual production figures, this presentation will show how much incorporating MRF reduced costs, improved output and increased quality, all at the same time.
NASA Technical Reports Server (NTRS)
Inobe, Manabu; Inobe, Ikuko; Adams, Gregory R.; Baldwin, Kenneth M.; Takeda, Shin'Ichi
2002-01-01
To clarify the role of gravity in the postnatal development of skeletal muscle, we exposed neonatal rats at 7 days of age to microgravity. After 16 days of spaceflight, tibialis anterior, plantaris, medial gastrocnemius, and soleus muscles were removed from the hindlimb musculature and examined for the expression of MyoD-family transcription factors such as MyoD, myogenin, and MRF4. For this purpose, we established a unique semiquantitative method, based on RT-PCR, using specific primers tagged with infrared fluorescence. The relative expression of MyoD in the tibialis anterior and plantaris muscles and that of myogenin in the plantaris and soleus muscles were significantly reduced (P < 0.001) in the flight animals. In contrast, MRF4 expression was not changed in any muscle. These results suggest that MyoD and myogenin, but not MRF4, are sensitive to gravity-related stimuli in some skeletal muscles during postnatal development.
Nondimensional scaling of magnetorheological rotary shear mode devices using the Mason number
NASA Astrophysics Data System (ADS)
Becnel, Andrew C.; Sherman, Stephen; Hu, Wei; Wereley, Norman M.
2015-04-01
Magnetorheological fluids (MRFs) exhibit rapidly adjustable viscosity in the presence of a magnetic field, and are increasingly used in adaptive shock absorbers for high speed impacts, corresponding to high fluid shear rates. However, MRF properties are typically measured at very low shear rates (γ̇ < 1000 s⁻¹) due to limited commercial rheometer capabilities. A custom high shear rate (γ̇ > 10,000 s⁻¹) Searle cell magnetorheometer, along with a full scale rotary-vane magnetorheological energy absorber (γ̇ > 25,000 s⁻¹), are employed to analyze MRF property scaling across shear rates using a nondimensional Mason number to generate an MRF master curve. Incorporating a Reynolds temperature correction factor, data from both experiments are shown to collapse to a single master curve, supporting the use of the Mason number to correlate low- and high-shear-rate characterization data.
NASA Astrophysics Data System (ADS)
Ji, Fang; Xu, Min; Wang, Baorui; Wang, Chao; Li, Xiaoyuan; Zhang, Yunfei; Zhou, Ming; Huang, Wen; Wei, Qilong; Tang, Guangping; He, Jianguo
2015-10-01
KDP is a common type of optic that is extremely difficult to polish by conventional routes. MRF is a local polishing technology based on material removal via shearing, with minimal normal load and sub-surface damage. In contrast to traditional modification of the abrasive, an MPEG soft coating is designed and prepared to modify the CIP surface to achieve a hardness matched with that of KDP, because the CIP inevitably takes part in material removal during finishing. Morphology and infrared spectra are examined to confirm the existence of a homogeneous coating, and the improvement in polishing quality due to MPEG is validated by analysis of roughness, turning grooves, and stress. The synthesized MPEG-coated CIP (MPEG-CIP) is chemically and physically compatible with KDP and can be removed after cleaning. Our research demonstrates the promising prospects of MPEG-CIP in KDP MRF.
Swayze, G.A.; Clark, R.N.; Goetz, A.F.H.; Chrien, T.H.; Gorelick, N.S.
2003-01-01
Estimates of spectrometer band pass, sampling interval, and signal-to-noise ratio required for identification of pure minerals and plants were derived using reflectance spectra convolved to AVIRIS, HYDICE, MIVIS, VIMS, and other imaging spectrometers. For each spectral simulation, various levels of random noise were added to the reflectance spectra after convolution, and then each was analyzed with the Tetracorder spectra identification algorithm [Clark et al., 2003]. The outcome of each identification attempt was tabulated to provide an estimate of the signal-to-noise ratio at which a given percentage of the noisy spectra were identified correctly. Results show that spectral identification is most sensitive to the signal-to-noise ratio at narrow sampling interval values but is more sensitive to the sampling interval itself at broad sampling interval values because of spectral aliasing, a condition when absorption features of different materials can resemble one another. The band pass is less critical to spectral identification than the sampling interval or signal-to-noise ratio because broadening the band pass does not induce spectral aliasing. These conclusions are empirically corroborated by analysis of mineral maps of AVIRIS data collected at Cuprite, Nevada, between 1990 and 1995, a period during which the sensor signal-to-noise ratio increased up to sixfold. There are values of spectrometer sampling and band pass beyond which spectral identification of materials will require an abrupt increase in sensor signal-to-noise ratio due to the effects of spectral aliasing. Factors that control this threshold are the uniqueness of a material's diagnostic absorptions in terms of shape and wavelength isolation, and the spectral diversity of the materials found in nature and in the spectral library used for comparison. Array spectrometers provide the best data for identification when they critically sample spectra. 
The sampling interval should not be broadened to increase the signal-to-noise ratio in a photon-noise-limited system when high levels of accuracy are desired. It is possible, using this simulation method, to select optimum combinations of band-pass, sampling interval, and signal-to-noise ratio values for a particular application that maximize identification accuracy and minimize the volume of imaging data.
A post-processing system for automated rectification and registration of spaceborne SAR imagery
NASA Technical Reports Server (NTRS)
Curlander, John C.; Kwok, Ronald; Pang, Shirley S.
1987-01-01
An automated post-processing system has been developed that interfaces with the raw image output of the operational digital SAR correlator. This system is designed for optimal efficiency by using advanced signal processing hardware and an algorithm that requires no operator interaction, such as the determination of ground control points. The standard output is a geocoded image product (i.e. resampled to a specified map projection). The system is capable of producing multiframe mosaics for large-scale mapping by combining images in both the along-track direction and adjacent cross-track swaths from ascending and descending passes over the same target area. The output products have absolute location uncertainty of less than 50 m and relative distortion (scale factor and skew) of less than 0.1 per cent relative to local variations from the assumed geoid.
Feasibility of a GNSS-Probe for Creating Digital Maps of High Accuracy and Integrity
NASA Astrophysics Data System (ADS)
Vartziotis, Dimitris; Poulis, Alkis; Minogiannis, Alexandros; Siozos, Panayiotis; Goudas, Iraklis; Samson, Jaron; Tossaint, Michel
The “ROADSCANNER” project addresses the need for Digital Maps (DM) of increased accuracy and integrity, utilizing the latest developments in GNSS, in order to provide the required datasets for novel applications, such as navigation-based Safety Applications, Advanced Driver Assistance Systems (ADAS) and Digital Automotive Simulations. The activity covered in the current paper is the feasibility study, preliminary tests, initial product design and development plan for an EGNOS-enabled vehicle probe. The vehicle probe will be used for generating high accuracy, high integrity and ADAS compatible digital maps of roads, employing a multiple-pass methodology supported by sophisticated refinement algorithms. Furthermore, the vehicle probe will be equipped with pavement scanning and other data fusion equipment, in order to produce 3D road surface models compatible with standards of road-tire simulation applications. The project was assigned to NIKI Ltd under the 1st Call for Ideas in the frame of the ESA - Greece Task Force.
Finland Validation of the New Blended Snow Product
NASA Technical Reports Server (NTRS)
Kim, E. J.; Casey, K. A.; Hallikainen, M. T.; Foster, J. L.; Hall, D. K.; Riggs, G. A.
2008-01-01
As part of an ongoing effort to validate satellite remote sensing snow products for the recently developed U.S. Air Force Weather Agency (AFWA) - NASA blended snow product, satellite and in-situ data for snow extent and snow water equivalent (SWE) are evaluated in Finland for the 2006-2007 snow season. Finnish Meteorological Institute (FMI) daily weather station data and Finnish Environment Institute (SYKE) bi-monthly snow course data are used as ground truth. Initial comparison results show positive agreement between the AFWA-NASA Snow Algorithm (ANSA) snow extent and SWE maps and in-situ data, with discrepancies in accordance with known AMSR-E and MODIS snow mapping limitations. Future ANSA product improvement plans include additional validation and the inclusion of fractional snow cover in the ANSA data product. Furthermore, the AMSR-E 19 GHz (horizontal channel) data, with the difference between ascending and descending satellite passes (Diurnal Amplitude Variations, DAV), will be used to detect the onset of melt, and QuikSCAT scatterometer data (14 GHz) will be used to map areas of actively melting snow.
PTBS segmentation scheme for synthetic aperture radar
NASA Astrophysics Data System (ADS)
Friedland, Noah S.; Rothwell, Brian J.
1995-07-01
The Image Understanding Group at Martin Marietta Technologies in Denver, Colorado has developed a model-based synthetic aperture radar (SAR) automatic target recognition (ATR) system using an integrated resource architecture (IRA). IRA, an adaptive Markov random field (MRF) environment, utilizes information from image, model, and neighborhood resources to create a discrete, 2D feature-based world description (FBWD). The IRA FBWD features are peak, target, background and shadow (PTBS). These features have been shown to be very useful for target discrimination. The FBWD is used to accrue evidence over a model hypothesis set. This paper presents the PTBS segmentation process utilizing two IRA resources. The image resource (IR) provides generic (the physics of image formation) and specific (the given image input) information. The neighborhood resource (NR) provides domain knowledge of localized FBWD site behaviors. A simulated annealing optimization algorithm is used to construct a "most likely" PTBS state. Results on simulated imagery illustrate the power of this technique to correctly segment PTBS features, even when vehicle signatures are immersed in heavy background clutter. These segmentations also suppress sidelobe effects and delineate shadows.
NASA Astrophysics Data System (ADS)
Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong
2018-06-01
An accurate algorithm by combing Gram-Schmidt orthonormalization and least square ellipse fitting technology is proposed, which could be used for phase extraction from two or three interferograms. The DC term of background intensity is suppressed by subtraction operation on three interferograms or by high-pass filter on two interferograms. Performing Gram-Schmidt orthonormalization on pre-processing interferograms, the phase shift error is corrected and a general ellipse form is derived. Then the background intensity error and the corrected error could be compensated by least square ellipse fitting method. Finally, the phase could be extracted rapidly. The algorithm could cope with the two or three interferograms with environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both of the numerical simulations and experiments.
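As a rough illustration of the idea (not the authors' code, and omitting the least-square ellipse-fitting correction step), Gram-Schmidt orthonormalization of two DC-suppressed interferograms yields a quadrature pair from which the wrapped phase follows by an arctangent:

```python
import numpy as np

def gram_schmidt_phase(i1, i2):
    """Estimate wrapped phase from two fringe patterns via Gram-Schmidt:
    orthogonalize the second pattern against the first to get a quadrature
    pair, then demodulate with an arctangent."""
    u1 = i1 - i1.mean()                                  # suppress DC term
    u2 = i2 - i2.mean()
    u2 = u2 - (np.sum(u1 * u2) / np.sum(u1 * u1)) * u1   # orthogonalize
    u1 = u1 / np.linalg.norm(u1)                         # unit energy
    u2 = u2 / np.linalg.norm(u2)
    return np.arctan2(-u2, u1)                           # wrapped phase

phi = np.linspace(0, 4 * np.pi, 512, endpoint=False)    # synthetic phase
i1 = 1.0 + np.cos(phi)                                  # interferogram 1
i2 = 1.0 + np.cos(phi + 1.0)                            # shifted by ~1 rad
est = gram_schmidt_phase(i1, i2)
err = np.abs(np.angle(np.exp(1j * (est - phi))))        # wrapped residual
```

On this clean synthetic example the recovered phase matches the true phase modulo 2π; the paper's ellipse-fitting step is what compensates residual background-intensity and phase-shift errors on real, noisy data.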
Use of MRF residue as alternative fuel in cement production.
Fyffe, John R; Breckel, Alex C; Townsend, Aaron K; Webber, Michael E
2016-01-01
Single-stream recycling has helped divert millions of metric tons of waste from landfills in the U.S., where recycling rates for municipal solid waste are currently over 30%. However, material recovery facilities (MRFs) that sort the municipal recycled streams do not recover 100% of the incoming material. Consequently, they landfill between 5% and 15% of total processed material as residue. This residue is primarily composed of high-energy-content non-recycled plastics and fiber. One possible end-of-life solution for these energy-dense materials is to process the residue into Solid Recovered Fuel (SRF) that can be used as an alternative energy resource capable of replacing or supplementing fuel resources such as coal, natural gas, petroleum coke, or biomass in many industrial and power production processes. This report addresses the energetic and environmental benefits and trade-offs of converting non-recycled post-consumer plastics and fiber derived from MRF residue streams into SRF for use in a cement kiln. An experimental test burn of 118 Mg of SRF in the precalciner portion of the cement kiln was conducted. The SRF was a blend of 60% MRF residue and 40% post-industrial waste products, producing an estimated 60% plastic and 40% fibrous material mixture. The SRF was fed into the kiln at 0.9 Mg/h for 24 h and then 1.8 Mg/h for the following 48 h. The emissions data recorded in the experimental test burn were used to perform the life-cycle analysis portion of this study. The analysis included the following steps: transportation, landfill, processing and fuel combustion at the cement kiln. The energy use and emissions at each step are tracked for two cases: (1) the Reference Case, where MRF residue is disposed of in a landfill and the cement kiln uses coal as its fuel source, and (2) the SRF Case, in which MRF residue is processed into SRF and used to offset some portion of coal use at the cement kiln.
The experimental test burn and accompanying analysis indicate that using MRF residue to produce SRF for use in cement kilns is likely an advantageous alternative to disposal of the residue in landfills. The use of SRF can offset fossil fuel use, reduce CO2 emissions, and divert energy-dense materials away from landfills. For this test-case, the use of SRF offset between 7700 and 8700 Mg of coal use, reduced CO2 emissions by at least 1.4%, and diverted over 7950 Mg of energy-dense materials away from landfills. In addition, emissions were reduced by at least 19% for SO2, while NOX emissions increased by between 16% and 24%. Changes in emissions of particulate matter, mercury, hydrogen chloride, and total-hydrocarbons were all less than plus or minus 2.2%, however these emissions were not measured at the cement kiln. Co-location of MRFs, SRF production facilities, and landfills can increase the benefits of SRF use even further by reducing transportation requirements. Copyright © 2015 Elsevier Ltd. All rights reserved.
Texture Analysis of Chaotic Coupled Map Lattices Based Image Encryption Algorithm
NASA Astrophysics Data System (ADS)
Khan, Majid; Shah, Tariq; Batool, Syeda Iram
2014-09-01
As of late, data security has become essential in many areas such as internet communication, multimedia systems, medical imaging, telemedicine and military communication. However, many existing approaches face issues such as a lack of robustness and security. In this letter, after examining the fundamental properties of chaotic trigonometric maps and coupled map lattices, we present a chaos-based image encryption algorithm built on coupled map lattices. The proposed mechanism reduces the periodic effects of the ergodic dynamical systems in chaos-based image encryption. To assess the security of images encrypted with this scheme, the correlation of adjacent pixels and texture characteristics were analyzed. The algorithm aims to minimize the problems that arise in image encryption.
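A coupled-map-lattice keystream can be sketched as follows. This is a hypothetical toy illustration of the general CML idea (logistic maps with diffusive nearest-neighbor coupling, here driving a simple XOR stream cipher), not the scheme analyzed in the paper, and it is not secure as written:

```python
import numpy as np

def cml_keystream(n_bytes, key=0.37, eps=0.1, sites=8, burn=200):
    """Keystream from a coupled map lattice: a ring of logistic maps
    f(x) = 4x(1-x), each diffusively coupled to its two neighbors."""
    x = (key * (np.arange(sites) + 1)) % 1.0        # seed lattice from key
    f = lambda v: 4.0 * v * (1.0 - v)
    out = []
    for t in range(burn + n_bytes):                 # burn-in hides the seed
        fx = f(x)
        x = (1 - eps) * fx + (eps / 2) * (np.roll(fx, 1) + np.roll(fx, -1))
        if t >= burn:
            out.append(int(x[t % sites] * 256) & 0xFF)
    return bytes(out)

def xor_cipher(data: bytes, key: float) -> bytes:
    ks = cml_keystream(len(data), key=key)
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"one row of image pixels"
ct = xor_cipher(msg, key=0.37)
pt = xor_cipher(ct, key=0.37)   # XOR stream cipher is its own inverse
```

Real chaos-based image ciphers, like the one in the paper, combine such keystreams with pixel permutation and diffusion stages and are evaluated by adjacent-pixel correlation and texture statistics.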
SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enright, S; Asprinio, A; Lu, L
2014-06-01
Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic™ EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse™ radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. The accuracy of the Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%–99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.
1987-06-30
nucleus reticularis gigantocellularis. No distinct tracts were reported in the brainstem as far rostral as the superior olivary complex. At the level...that stimulation of the PAG activates neurons which project to the MRF, specifically the nucleus reticularis gigantocellularis, nucleus reticularis magnocellularis and the nucleus raphe magnus. Neurons in the raphe magnus receive convergent input from the PAG and other MRF regions and, via
NASA Technical Reports Server (NTRS)
Mori, R. L.; Bergsman, A. E.; Holmes, M. J.; Yates, B. J.
2001-01-01
Changes in posture can affect the resting length of respiratory muscles, requiring alterations in the activity of these muscles if ventilation is to be unaffected. Recent studies have shown that the vestibular system contributes to altering respiratory muscle activity during movement and changes in posture. Furthermore, anatomical studies have demonstrated that many bulbospinal neurons in the medial medullary reticular formation (MRF) provide inputs to phrenic and abdominal motoneurons; because this region of the reticular formation receives substantial vestibular and other movement-related input, it seems likely that medial medullary reticulospinal neurons could adjust the activity of respiratory motoneurons during postural alterations. The objective of the present study was to determine whether functional lesions of the MRF affect inspiratory and expiratory muscle responses to activation of the vestibular system. Lidocaine or muscimol injections into the MRF produced a large increase in diaphragm and abdominal muscle responses to vestibular stimulation. These vestibulo-respiratory responses were eliminated following subsequent chemical blockade of descending pathways in the lateral medulla. However, inactivation of pathways coursing through the lateral medulla eliminated excitatory, but not inhibitory, components of vestibulo-respiratory responses. The simplest explanation for these data is that MRF neurons that receive input from the vestibular nuclei make inhibitory connections with diaphragm and abdominal motoneurons, whereas a pathway that courses laterally in the caudal medulla provides excitatory vestibular inputs to these motoneurons.
Using deconvolution to improve the metrological performance of the grid method
NASA Astrophysics Data System (ADS)
Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis
2013-06-01
The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved by the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementations. The results obtained show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking into account noise autocorrelation. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the last technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain field restoration are finally shown. They deal with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high and that actual noise in the actual strain maps must be more specifically characterized than in the current study to address higher noise levels with Wiener filtering.
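Of the techniques compared, the Richardson-Lucy algorithm admits a particularly compact sketch. The following is an assumed 1D illustration of its multiplicative update applied to a synthetic signal blurred by a known kernel (not the authors' grid-method data):

```python
import numpy as np

def richardson_lucy(observed, psf, iters=100):
    """Richardson-Lucy restoration: multiplicative updates that drive the
    estimate toward one whose blur best explains the observed signal."""
    est = np.full_like(observed, observed.mean())   # positive flat start
    psf_flip = psf[::-1]                            # adjoint of the blur
    for _ in range(iters):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # guard division
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

# synthetic piecewise signal blurred by a known Gaussian kernel
x = np.zeros(200)
x[60:80] = 1.0
x[120:125] = 2.0
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
psf /= psf.sum()
y = np.convolve(x, psf, mode="same")
restored = richardson_lucy(y, psf, iters=100)
```

The multiplicative form keeps the estimate non-negative, which is one reason it behaves more stably on noisy data than direct or regularized deconvolution.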
A tool to include gamma analysis software into a quality assurance program.
Agnew, Christina E; McGarry, Conor K
2016-03-01
To provide a tool to enable gamma analysis software algorithms to be included in a quality assurance (QA) program. Four image sets were created comprising two geometric images to independently test the distance to agreement (DTA) and dose difference (DD) elements of the gamma algorithm, a clinical step and shoot IMRT field and a clinical VMAT arc. The images were analysed using global and local gamma analysis with 2 in-house and 8 commercially available software packages, encompassing 15 software versions. The effect of image resolution on gamma pass rates was also investigated. All but one software package accurately calculated the gamma passing rate for the geometric images. Variation in global gamma passing rates of 1% at 3%/3mm and over 2% at 1%/1mm was measured between software packages and software versions with analysis of appropriately sampled images. This study provides a suite of test images and the gamma pass rates achieved for a selection of commercially available software. This image suite will enable validation of gamma analysis software within a QA program and provide a frame of reference by which to compare results reported in the literature from various manufacturers and software versions. Copyright © 2015. Published by Elsevier Ireland Ltd.
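The two elements tested by the geometric images, dose difference (DD) and distance to agreement (DTA), combine into a single gamma metric. A minimal 1D global-gamma sketch (an assumed implementation; commercial packages add interpolation and search optimizations) is:

```python
import numpy as np

def gamma_index(ref_dose, eval_dose, spacing, dd=0.03, dta=3.0):
    """Global 1D gamma: for each reference point, search the evaluated
    profile for the minimum combined DD/DTA metric; gamma <= 1 passes."""
    n = len(ref_dose)
    pos = np.arange(n) * spacing
    norm = dd * ref_dose.max()          # global dose-difference criterion
    gammas = np.empty(n)
    for i in range(n):
        ddose = (eval_dose - ref_dose[i]) / norm   # dose-difference term
        dist = (pos - pos[i]) / dta                # distance term (mm)
        gammas[i] = np.sqrt(ddose ** 2 + dist ** 2).min()
    return gammas

# Gaussian "dose profile" shifted by 1 mm: well within 3%/3mm criteria
ref = np.exp(-0.5 * ((np.arange(100) - 50) / 12.0) ** 2)
shifted = np.roll(ref, 1)
g = gamma_index(ref, shifted, spacing=1.0, dd=0.03, dta=3.0)
pass_rate = (g <= 1.0).mean()
```

With identical images the gamma is identically zero, and a pure 1 mm shift passes everywhere at 3%/3mm; discrepancies between packages arise mainly from interpolation and sampling choices, which is what the test-image suite probes.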
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically the convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
Fusion of infrared and visible images based on saliency scale-space in frequency domain
NASA Astrophysics Data System (ADS)
Chen, Yanfei; Sang, Nong; Dan, Zhiping
2015-12-01
A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. The focus of human attention is directed towards salient targets, which carry the most important information in the image. For the given registered infrared and visible images, firstly, visual features are extracted to obtain the input hypercomplex matrix. Secondly, the Hypercomplex Fourier Transform (HFT) is used to obtain the salient regions of the infrared and visible images respectively: the amplitude spectrum of the input hypercomplex matrix is convolved with a low-pass Gaussian kernel of an appropriate scale, which is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2D signal using the original phase and the amplitude spectrum filtered at a scale selected by minimizing saliency map entropy. Thirdly, the salient regions are fused with adaptive weighting fusion rules, and the non-salient regions are fused with a rule based on region energy (RE) and region sharpness (RS); the fused image is then obtained. Experimental results show that the presented algorithm preserves the rich spectral information of the visible image and effectively captures thermal target information at different scales of the infrared image.
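A grayscale simplification of the frequency-domain saliency step (smooth the amplitude spectrum, keep the original phase, reconstruct) can be sketched as follows; the paper operates on a hypercomplex matrix of visual features and selects the Gaussian scale by entropy minimization, both of which are omitted here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(img, sigma=3.0):
    """Frequency-domain saliency: low-pass the (centered) amplitude
    spectrum with a Gaussian, keep the original phase, reconstruct,
    and square the result."""
    F = np.fft.fft2(img)
    amp, phase = np.abs(F), np.angle(F)
    # smooth the amplitude spectrum; shift so DC sits at the center
    amp = np.fft.ifftshift(gaussian_filter(np.fft.fftshift(amp), sigma))
    s = np.abs(np.fft.ifft2(amp * np.exp(1j * phase))) ** 2
    return gaussian_filter(s, 2.0)   # mild spatial post-smoothing

img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0              # small bright target on dark background
sal = saliency_map(img)
```

On this toy scene the saliency energy concentrates on the small target; in the fusion pipeline such maps gate the adaptive weighting between the infrared and visible inputs.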
A pseudoinverse deformation vector field generator and its applications
Yan, C.; Zhong, H.; Murphy, M.; Weiss, E.; Siebers, J. V.
2010-01-01
Purpose: To present, implement, and test a self-consistent pseudoinverse displacement vector field (PIDVF) generator, which preserves the location of information mapped back-and-forth between image sets. Methods: The algorithm is an iterative scheme based on nearest neighbor interpolation and a subsequent iterative search. Performance of the algorithm is benchmarked using a lung 4DCT data set with six CT images from different breathing phases and eight CT images for a single prostate patient acquired on different days. A diffeomorphic deformable image registration is used to validate our PIDVFs. Additionally, the PIDVF is used to measure the self-consistency of two nondiffeomorphic algorithms which do not use a self-consistency constraint: the ITK Demons algorithm for the lung patient images and an in-house B-Spline algorithm for the prostate patient images. Both Demons and B-Spline have been QAed through contour comparison. Self-consistency is determined by using a DIR to generate a displacement vector field (DVF) between reference image R and study image S (DVF(R-S)). The same DIR is used to generate DVF(S-R). Additionally, our PIDVF generator is used to create PIDVF(S-R). Back-and-forth mapping of a set of points (used as surrogates of contours) using DVF(R-S) and DVF(S-R) is compared to back-and-forth mapping performed with DVF(R-S) and PIDVF(S-R). The Euclidean distances between the original unmapped points and the mapped points are used as a self-consistency measure. Results: Test results demonstrate that the consistency error observed in back-and-forth mappings can be reduced two to nine times in point mapping and 1.5 to three times in dose mapping when the PIDVF is used in place of the B-Spline algorithm. These self-consistency improvements are not affected by exchanging R and S. It is also demonstrated that differences between DVF(S-R) and PIDVF(S-R) can be used as a criterion to check the quality of the DVF.
Conclusions: Use of DVF and its PIDVF will improve the self-consistency of points, contour, and dose mappings in image guided adaptive therapy. PMID:20384247
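The self-consistency measure itself is straightforward to sketch: map points forward with one DVF, map back with the candidate inverse, and take the Euclidean residual. Below is a toy example with a constant translation field (purely illustrative, not the paper's clinical DVFs or its PIDVF iteration):

```python
import numpy as np

def round_trip_error(points, dvf_fwd, dvf_bwd):
    """Self-consistency: map points with the forward DVF, map back with
    the backward DVF, and return the per-point Euclidean residuals."""
    mapped = points + dvf_fwd(points)     # R -> S mapping
    back = mapped + dvf_bwd(mapped)       # S -> R mapping
    return np.linalg.norm(back - points, axis=1)

# toy translation field: forward shift +t, exact inverse -t
t = np.array([2.0, -1.0])
fwd = lambda p: np.broadcast_to(t, p.shape)
exact_inv = lambda p: np.broadcast_to(-t, p.shape)
sloppy_inv = lambda p: np.broadcast_to(-t * 0.9, p.shape)  # 10% inverse error

pts = np.random.default_rng(1).random((50, 2)) * 100.0     # points in mm
err_exact = round_trip_error(pts, fwd, exact_inv)
err_sloppy = round_trip_error(pts, fwd, sloppy_inv)
```

A registration pair that is self-consistent drives this residual toward zero, which is exactly the property the PIDVF generator enforces for point, contour, and dose mappings.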
Distributed Sensor Fusion for Scalar Field Mapping Using Mobile Sensor Networks.
La, Hung Manh; Sheng, Weihua
2013-04-01
In this paper, autonomous mobile sensor networks are deployed to measure a scalar field and build its map. We develop a novel method for multiple mobile sensor nodes to build this map using noisy sensor measurements. Our method consists of two parts. First, we develop a distributed sensor fusion algorithm by integrating two different distributed consensus filters to achieve cooperative sensing among sensor nodes. This fusion algorithm has two phases. In the first phase, the weighted average consensus filter is developed, which allows each sensor node to find an estimate of the value of the scalar field at each time step. In the second phase, the average consensus filter is used to allow each sensor node to find a confidence of the estimate at each time step. The final estimate of the value of the scalar field is iteratively updated during the movement of the mobile sensors via weighted average. Second, we develop the distributed flocking-control algorithm to drive the mobile sensors to form a network and track the virtual leader moving along the field when only a small subset of the mobile sensors know the information of the leader. Experimental results are provided to demonstrate our proposed algorithms.
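The second-phase average consensus filter can be illustrated with a minimal sketch (an assumed discrete-time form; the paper's filters additionally handle confidence weights and time-varying measurements during motion):

```python
import numpy as np

def average_consensus(values, adjacency, eps=0.2, steps=200):
    """Discrete-time average consensus: each node repeatedly moves toward
    the mean of its neighbors; states converge to the network average."""
    x = values.astype(float)
    deg = adjacency.sum(axis=1)               # node degrees
    for _ in range(steps):
        # x <- x + eps * (sum of neighbors - degree * self) = (I - eps*L) x
        x = x + eps * (adjacency @ x - deg * x)
    return x

# ring of 5 sensor nodes with noisy readings of the same field value
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[i, (i - 1) % 5] = 1.0
readings = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
consensus = average_consensus(readings, A, eps=0.2, steps=200)
```

For a symmetric adjacency matrix the update conserves the sum of the states, so all nodes converge to the exact average of the initial readings, which is what lets every mobile sensor agree on the field estimate using only local communication.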
UAVSAR: Airborne L-band Radar for Repeat Pass Interferometry
NASA Technical Reports Server (NTRS)
Moes, Timothy R.
2009-01-01
The primary objectives of the UAVSAR Project were to: a) develop a miniaturized polarimetric L-band synthetic aperture radar (SAR) for use on an unmanned aerial vehicle (UAV) or piloted vehicle; b) develop the associated processing algorithms for repeat-pass differential interferometric measurements using a single antenna; and c) conduct measurements of geophysical interest, particularly changes in rapidly deforming surfaces such as those associated with volcanoes or earthquakes. Two complete systems were developed. Operational Science Missions began on February 18, 2009 ... concurrent development and testing of the radar system continues.
Hyperspectral feature mapping classification based on mathematical morphology
NASA Astrophysics Data System (ADS)
Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli
2016-03-01
This paper proposes a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without prior information such as a spectral library, the spectral and spatial information can be used to realize hyperspectral feature mapping classification. Mathematical morphological erosion and dilation operations are performed to extract endmembers. The spectral feature mapping algorithm is then used to perform hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm, which is compared with the minimum Euclidean distance mapping algorithm, the minimum Mahalanobis distance mapping algorithm, the SAM algorithm, and the binary encoding mapping algorithm. The experimental results show that, under the same conditions, the proposed algorithm outperforms the other algorithms and achieves higher classification accuracy.
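Of the baselines named above, the spectral angle mapper (SAM) is the most self-contained: it assigns each pixel to the endmember whose spectrum forms the smallest angle with the pixel's spectrum, making it insensitive to overall illumination scaling. A minimal sketch (generic SAM, not the paper's proposed morphology-based method):

```python
import math

def spectral_angle(a, b):
    """Angle between two spectra; invariant to multiplying either by a scalar."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify_pixel(pixel, endmembers):
    """Assign the pixel to the endmember with the smallest spectral angle."""
    return min(range(len(endmembers)),
               key=lambda k: spectral_angle(pixel, endmembers[k]))
```

Because the angle ignores vector length, a pixel that is a brighter or darker copy of an endmember still maps to that endmember.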
NASA Astrophysics Data System (ADS)
Asal Kzar, Ahmed; Mat Jafri, M. Z.; Hwee San, Lim; Al-Zuky, Ali A.; Mutter, Kussay N.; Hassan Al-Saleh, Anwar
2016-06-01
There are many techniques that have been applied to the water quality problem, but remote sensing techniques have proven successful, especially when artificial neural networks are used as mathematical models with them. The Hopfield neural network (HNN) is a common, fast, simple, and efficient type of artificial neural network, but it has difficulty with images containing more than two colours, such as remote sensing images. This work attempts to solve this problem by modifying the network to handle colour remote sensing images for water quality mapping. A Feed-forward Hopfield Neural Network Algorithm (FHNNA) was developed and used with a colour satellite image from the Thailand Earth Observation System (THEOS) for TSS mapping in the Penang Strait, Malaysia, through the classification of TSS concentrations. The new algorithm is based on three modifications: using the HNN as a feed-forward network, considering the weights of bitplanes, and using a non-self architecture (zero diagonal of the weight matrix); in addition, it depends on validation data. The resulting map was colour-coded for visual interpretation. The efficiency of the new algorithm was demonstrated by the high correlation coefficient (R=0.979) and low root mean square error (RMSE=4.301) between the two groups into which the validation data were divided: one used for the algorithm and the other used for validating the results. The comparison was with the minimum distance classifier. Therefore, TSS mapping of polluted water in the Penang Strait, Malaysia, can be performed using FHNNA with remote sensing (THEOS). This is a new and useful application of the HNN, providing a new model for remote-sensing-based water quality mapping, an important environmental problem.
A trace map comparison algorithm for the discrete fracture network models of rock masses
NASA Astrophysics Data System (ADS)
Han, Shuai; Wang, Gang; Li, Mingchao
2018-06-01
Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem, as it determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation. Graphical validation, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN models is developed. Four main indicators, including total gray, gray grade curve, characteristic direction, and gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
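The abstract does not define "loop cosine similarity" precisely; one plausible reading (an assumption, not the paper's exact definition) is cosine similarity maximized over cyclic shifts, which makes the comparison of angle-binned profiles, such as Radon-transform projections indexed by angle, invariant to a rotation of the trace map:

```python
def cosine(a, b):
    """Plain cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def loop_cosine_similarity(a, b):
    """Cosine similarity maximized over all cyclic shifts of b.

    For profiles indexed by a periodic quantity (e.g. projection angle),
    this scores two maps as identical even if one is rotated.
    """
    n = len(b)
    return max(cosine(a, b[k:] + b[:k]) for k in range(n))
```

Under this reading, a profile compared against a cyclically shifted copy of itself scores exactly 1, whereas plain cosine similarity would penalize the shift.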
Li, Haisen S; Zhong, Hualiang; Kim, Jinkoo; Glide-Hurst, Carri; Gulam, Misbah; Nurushev, Teamour S; Chetty, Indrin J
2014-01-06
The direct dose mapping (DDM) and energy/mass transfer (EMT) mapping are two essential algorithms for accumulating the dose from different anatomic phases to the reference phase when there is organ motion or tumor/tissue deformation during the delivery of radiation therapy. DDM is based on interpolation of the dose values from one dose grid to another and thus lacks rigor in defining the dose when multiple dose values are mapped to one dose voxel in the reference phase due to tissue/tumor deformation. On the other hand, EMT counts the total energy and mass transferred to each voxel in the reference phase and calculates the dose by dividing the energy by the mass. It is therefore based on fundamentally sound physics principles. In this study, we implemented the two algorithms and integrated them within the Eclipse treatment planning system. We then compared the clinical dosimetric difference between the two algorithms for ten lung cancer patients receiving stereotactic radiosurgery treatment, by accumulating the delivered dose to the end-of-exhale (EE) phase. Specifically, the respiratory period was divided into ten phases and the dose to each phase was calculated, mapped to the EE phase, and then accumulated. The displacement vector field generated by Demons-based registration of the source and reference images was used to transfer the dose and energy. The DDM and EMT algorithms produced noticeably different cumulative dose in the regions with sharp mass density variations and/or high dose gradients. For the planning target volume (PTV) and internal target volume (ITV) minimum dose, the difference was up to 11% and 4%, respectively. This suggests that DDM might not be adequate for obtaining an accurate dose distribution of the cumulative plan; instead, EMT should be considered.
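The contrast between the two mapping rules can be made concrete with a toy sketch (hypothetical voxel data, not the Eclipse implementation): DDM carries dose values across the deformation directly, which is ill-defined when several source voxels land in one reference voxel, while EMT accumulates energy (dose × mass) and mass separately and divides only at the end:

```python
def ddm(dose_src, mapping, n_ref):
    """Direct dose mapping: copy dose values along the deformation.

    When several source voxels map to the same reference voxel, the
    result is ill-defined; here the last writer simply wins.
    """
    out = [0.0] * n_ref
    for v, d in zip(mapping, dose_src):
        out[v] = d
    return out

def emt(dose_src, mass_src, mapping, n_ref):
    """Energy/mass transfer: accumulate energy and mass, divide at the end."""
    energy = [0.0] * n_ref
    mass = [0.0] * n_ref
    for v, d, m in zip(mapping, dose_src, mass_src):
        energy[v] += d * m   # energy deposited = dose x mass
        mass[v] += m
    return [e / m if m > 0 else 0.0 for e, m in zip(energy, mass)]
```

When two source voxels with doses 2 Gy (mass 1) and 4 Gy (mass 3) compress into one reference voxel, EMT yields the physically meaningful mass-weighted dose (2·1 + 4·3)/4 = 3.5 Gy, whereas DDM returns whichever value arrived last.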
Development of a Two-Wheel Contingency Mode for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
Starin, Scott R.; ODonnell, James R., Jr.; Bauer, Frank H. (Technical Monitor)
2002-01-01
In the event of a failure of one of MAP's three reaction wheel assemblies (RWAs), it is not possible to achieve three-axis, full-state attitude control using the remaining two wheels. Hence, two of the attitude control algorithms implemented on the MAP spacecraft would no longer be usable in their current forms: Inertial Mode, used for slewing to and holding inertial attitudes, and Observing Mode, which implements the nominal dual-spin science mode. This paper describes the effort to create a complete strategy for using software algorithms to cope with an RWA failure. The discussion of the design process is divided into three main subtopics: performing orbit maneuvers to reach and maintain an orbit about the second Earth-Sun libration point in the event of an RWA failure, completing the mission using a momentum-bias two-wheel science mode, and developing a new thruster-based mode for adjusting the inertially fixed momentum bias. In this summary, the philosophies used in designing these changes are shown; the full paper will supplement these with algorithm descriptions and testing results.
Advances of the smooth variable structure filter: square-root and two-pass formulations
NASA Astrophysics Data System (ADS)
Gadsden, S. Andrew; Lee, Andrew S.
2017-01-01
The smooth variable structure filter (SVSF) has seen significant development and research activity in recent years. It is based on sliding mode concepts, which utilize a switching gain that brings an inherent amount of stability to the estimation process. In an effort to improve upon the numerical stability of the SVSF, a square-root formulation is derived. The square-root SVSF is based on Potter's algorithm. The proposed formulation is computationally more efficient and reduces the risks of failure due to numerical instability. The new strategy is applied on target tracking scenarios for the purposes of state estimation, and the results are compared with the popular Kalman filter. In addition, the SVSF is reformulated to present a two-pass smoother based on the SVSF gain. The proposed method is applied on an aerospace flight surface actuator, and the results are compared with the Kalman-based two-pass smoother.
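The square-root formulation rests on Potter's measurement update, which processes one scalar measurement at a time and propagates a square-root factor S of the covariance (P = S·Sᵀ) instead of P itself, improving numerical conditioning. A minimal sketch for a scalar measurement z = H·x + v with v ~ N(0, R) (the generic Potter/Kalman step the SVSF variant builds on, not the SVSF switching gain itself):

```python
import numpy as np

def potter_update(x, S, z, H, R):
    """Potter square-root measurement update for one scalar measurement.

    x: state estimate (n,), S: square-root covariance factor (n, n) with
    P = S @ S.T, z: scalar measurement, H: measurement row (n,), R: scalar
    measurement noise variance.
    """
    phi = S.T @ H
    a = 1.0 / (phi @ phi + R)             # innovation variance inverse
    gamma = a / (1.0 + np.sqrt(a * R))    # Potter's scalar
    K = a * (S @ phi)                     # equals the standard Kalman gain
    x_new = x + K * (z - H @ x)
    S_new = S - gamma * np.outer(S @ phi, phi)
    return x_new, S_new
```

One can verify algebraically (or numerically) that S_new @ S_new.T equals the conventional posterior covariance (I − K·H)·P, while only the factor S, which keeps P symmetric positive semidefinite by construction, is ever stored.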
Performance Measures for Adaptive Decisioning Systems
1991-09-11
set to hypothesis space mapping best approximates the known map. Two assumptions, a sufficiently representative training set and the ability of the...successful prediction of LINEXT performance. The LINEXT algorithm above performs the decision space mapping on the training-set elements exactly. For a
Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes.
Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning
2015-08-27
This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. The visual saliency scheme utilizing both color and depth cues is proposed to arouse the interests of the machine system for detecting unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from MRF are further refined by merging the labeled objects, which are spatially connected and have high correlation between color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. The experiments of object detection and manipulation performed on a mobile manipulator validate its effectiveness and practicability in robotic applications.
Shafrir, Shai N; Lambropoulos, John C; Jacobs, Stephen D
2007-08-01
We demonstrate the use of spots taken with magnetorheological finishing (MRF) for estimating subsurface damage (SSD) depth from deterministic microgrinding for three hard ceramics: aluminum oxynitride (Al(23)O(27)N(5)/ALON), polycrystalline alumina (Al(2)O(3)/PCA), and chemical vapor deposited (CVD) silicon carbide (Si(4)C/SiC). Using various microscopy techniques to characterize the surfaces, we find that the evolution of surface microroughness with the amount of material removed shows two stages. In the first, the damaged layer and SSD induced by microgrinding are removed, and the surface microroughness reaches a low value. Peak-to-valley (p-v) surface microroughness induced from grinding gives a measure of the SSD depth in the first stage. With the removal of additional material, a second stage develops, wherein the interaction of MRF and the material's microstructure is revealed. We study the development of this texture for these hard ceramics with the use of power spectral density to characterize surface features.
A scalable and practical one-pass clustering algorithm for recommender system
NASA Astrophysics Data System (ADS)
Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali
2015-12-01
KMeans clustering-based recommendation algorithms have been proposed claiming to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates with the arrival of new data, making them unsuitable for dynamic environments. Following this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
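The abstract does not spell out the One-Pass algorithm; a generic single-pass clustering scheme in the same spirit (an illustrative assumption, not the authors' exact method) assigns each arriving point to the nearest existing centroid within a distance threshold, updating that centroid incrementally, or opens a new cluster otherwise:

```python
def one_pass_cluster(points, threshold):
    """Single-pass clustering: each point is seen exactly once.

    A point joins the nearest centroid closer than `threshold` (which is
    then updated as a running mean), otherwise it founds a new cluster.
    """
    centroids, counts, labels = [], [], []
    for p in points:
        best, best_d = -1, threshold
        for k, c in enumerate(centroids):
            d = sum((pi - ci) ** 2 for pi, ci in zip(p, c)) ** 0.5
            if d < best_d:
                best, best_d = k, d
        if best < 0:
            centroids.append(list(p))
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            counts[best] += 1
            centroids[best] = [c + (pi - c) / counts[best]
                               for pi, c in zip(p, centroids[best])]
            labels.append(best)
    return labels, centroids
```

Because no point is revisited, new ratings can be absorbed as they arrive, which is exactly the property offline K-Means training lacks.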
Path Planning for Non-Circular, Non-Holonomic Robots in Highly Cluttered Environments.
Samaniego, Ricardo; Lopez, Joaquin; Vazquez, Fernando
2017-08-15
This paper presents an algorithm for finding a solution to the problem of planning a feasible path for a slender autonomous mobile robot in a large and cluttered environment. The presented approach is based on performing a graph search on a kinodynamic-feasible lattice state space of high resolution; however, the technique is applicable to many search algorithms. With the purpose of allowing the algorithm to consider paths that take the robot through narrow passes and close to obstacles, high resolutions are used for the lattice space and the control set. This introduces new challenges because one of the most computationally expensive parts of path search based planning algorithms is calculating the cost of each one of the actions or steps that could potentially be part of the trajectory. The reason for this is that the evaluation of each one of these actions involves convolving the robot's footprint with a portion of a local map to evaluate the possibility of a collision, an operation that grows exponentially as the resolution is increased. The novel approach presented here reduces the need for these convolutions by using a set of offline precomputed maps that are updated, by means of a partial convolution, as new information arrives from sensors or other sources. Not only does this improve run-time performance, but it also provides support for dynamic search in changing environments. A set of alternative fast convolution methods are also proposed, depending on whether the environment is cluttered with obstacles or not. Finally, we provide both theoretical and experimental results from different experiments and applications.
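The precomputed-map idea can be sketched in miniature (hypothetical grid data; the paper's lattice planner is far richer): the footprint cost of every robot placement is computed once, and when a single map cell changes, only the placements whose footprint overlaps that cell are patched via a partial convolution rather than recomputing everything:

```python
def full_scores(costmap, footprint):
    """Correlate the footprint with the costmap: one cost per placement."""
    H, W = len(costmap), len(costmap[0])
    fh, fw = len(footprint), len(footprint[0])
    out = [[0.0] * (W - fw + 1) for _ in range(H - fh + 1)]
    for r in range(H - fh + 1):
        for c in range(W - fw + 1):
            out[r][c] = sum(footprint[i][j] * costmap[r + i][c + j]
                            for i in range(fh) for j in range(fw))
    return out

def apply_update(scores, footprint, r0, c0, delta):
    """Cell (r0, c0) changed by delta: patch only overlapping placements."""
    fh, fw = len(footprint), len(footprint[0])
    for r in range(max(0, r0 - fh + 1), min(len(scores), r0 + 1)):
        for c in range(max(0, c0 - fw + 1), min(len(scores[0]), c0 + 1)):
            scores[r][c] += footprint[r0 - r][c0 - c] * delta
```

The patch touches at most fh × fw placements per changed cell, instead of re-evaluating every placement, which is what makes the approach viable for dynamic environments.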
Ni, Yu-Fei; Li, Jun; Wang, Ben-Fu; Jiang, Song-He; Chen, Yi; Zhang, Wei-Feng; Lian, Qing-Quan
2009-10-01
To observe the effect of electroacupuncture (EA) on bispectral index (BIS) and plasma beta-endorphin (beta-EP) level in patients undergoing colonoscopy. Sixty patients were equally randomized into EA group and control group with 30 cases in each. EA (2 Hz/100 Hz, 4-6 V) was applied to the right Zusanli (ST 36) and Shangjuxu (ST 37), and the left Yinlingquan (SP 9), Sanyinjiao (SP 6) and bilateral Hegu (LI 4) respectively 30 min before colonoscopy. The mean arterial pressure (MAP), heart rate (HR) and BIS in two groups were continuously monitored during the study. Plasma beta-EP concentration was detected by radioimmunoassay. The patient's adverse reactions (including pain, satisfaction degree, etc.) were evaluated by visual analog scale (VAS) and verbal stress scale (VSS). Self-comparison showed that MAP and HR in control group increased significantly during colonoscope's splenic flexure passing (P<0.05). Whereas the 2 indexes in EA group had no significant changes during colonoscope insertion, and its splenic flexure passing, hepatic flexure passing and post-enteroscopy (P>0.05). Comparison between two groups showed that MAP at the time-point of colonoscope insertion, and HR at the time-point of colonoscope's splenic flexure passing in EA group were significantly lower than those in control group (P<0.05). BIS values of EA group were significantly lower than those of control group at different time-points after colonoscope insertion (P<0.01). Plasma beta-EP concentrations at the time-points of colonoscope's hepatic flexure passing and post-enteroscopy were evidently increased in both groups in comparison with pre-enteroscopy (P<0.01), and beta-EP was significantly lower in EA group than that in control group at the time-point of colonoscope's hepatic flexure passing (P<0.05). The dosage of Midazolam used for conscious-sedation and the scores of VAS and VSS were also considerably lower in EA group than those in control group (P<0.05, P<0.01). 
No significant differences were found between the two groups in adverse reactions such as dizziness, nausea, vomiting and abdominal pain, but the patients' satisfaction degree in the EA group was evidently higher than that in the control group (P<0.05). Acupuncture analgesia can effectively lower colonoscopy patients' BIS values and plasma beta-EP levels, indicating attenuation of the patients' stress responses during colonoscopy after EA.
Optimal Co-segmentation of Tumor in PET-CT Images with Context Information
Song, Qi; Bai, Junjie; Han, Dongfeng; Bhatia, Sudershan; Sun, Wenqing; Rockey, William; Bayouth, John E.; Buatti, John M.
2014-01-01
PET-CT images have been widely used in clinical practice for radiotherapy treatment planning. Many existing segmentation approaches only work for a single imaging modality and thus suffer from the low spatial resolution of PET or the low contrast of CT. In this work we propose a novel method for the co-segmentation of the tumor in both PET and CT images, which makes use of the advantages of each modality: the functional information from PET and the anatomical structure information from CT. The approach formulates the segmentation problem as a minimization problem of a Markov Random Field (MRF) model, which encodes the information from both modalities. The optimization is solved using a graph-cut based method. Two subgraphs are constructed for the segmentation of the PET and the CT images, respectively. To achieve consistent results in the two modalities, an adaptive context cost is enforced by adding context arcs between the two subgraphs. An optimal solution can be obtained by solving a single maximum flow problem, which leads to simultaneous segmentation of the tumor volumes in both modalities. The proposed algorithm was validated in robust delineation of lung tumors on 23 PET-CT datasets and two head-and-neck cancer subjects. Both qualitative and quantitative results show significant improvement compared to the graph cut methods solely using PET or CT. PMID:23693127
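The role of the context arcs can be illustrated by brute-force enumeration on a toy 1-D "volume" (illustrative only; the paper solves this with a single max-flow computation, which scales to real images). Each modality contributes unary costs and within-modality smoothness, and a context term penalizes label disagreement between corresponding voxels:

```python
from itertools import product

def cosegment(pet_unary, ct_unary, pairwise_w, context_w):
    """Exhaustively minimize a tiny two-modality MRF over binary labels.

    pet_unary[i] / ct_unary[i] give (cost of label 0, cost of label 1)
    for voxel i; chain neighbors pay pairwise_w when their labels differ;
    corresponding PET/CT voxels pay context_w when they disagree.
    """
    n = len(pet_unary)
    best, best_cost = None, float("inf")
    for pet_lab in product((0, 1), repeat=n):
        for ct_lab in product((0, 1), repeat=n):
            cost = sum(pet_unary[i][pet_lab[i]] for i in range(n))
            cost += sum(ct_unary[i][ct_lab[i]] for i in range(n))
            cost += pairwise_w * sum(pet_lab[i] != pet_lab[i + 1]
                                     for i in range(n - 1))
            cost += pairwise_w * sum(ct_lab[i] != ct_lab[i + 1]
                                     for i in range(n - 1))
            cost += context_w * sum(pet_lab[i] != ct_lab[i] for i in range(n))
            if cost < best_cost:
                best, best_cost = (pet_lab, ct_lab), cost
    return best, best_cost
```

With a strong context weight, a voxel that PET labels confidently as tumor pulls the ambiguous CT label along with it, which is the co-segmentation effect the paper exploits; because the energy is submodular, the same minimum is found by one max-flow computation in practice.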
Koa-Wing, Michael; Nakagawa, Hiroshi; Luther, Vishal; Jamil-Copley, Shahnaz; Linton, Nick; Sandler, Belinda; Qureshi, Norman; Peters, Nicholas S; Davies, D Wyn; Francis, Darrel P; Jackman, Warren; Kanagaratnam, Prapa
2015-11-15
Ripple Mapping (RM) is designed to overcome the limitations of existing isochronal 3D mapping systems by representing the intracardiac electrogram as a dynamic bar on a surface bipolar voltage map that changes in height according to the electrogram voltage-time relationship, relative to a fiduciary point. We tested the hypothesis that standard approaches to atrial tachycardia CARTO™ activation maps were inadequate for RM creation and interpretation. From the results, we aimed to develop an algorithm to optimize RMs for future prospective testing on a clinical RM platform. CARTO-XP™ activation maps from atrial tachycardia ablations were reviewed by two blinded assessors on an off-line RM workstation. Ripple Maps were graded according to a diagnostic confidence scale (Grade I - high confidence with clear pattern of activation through to Grade IV - non-diagnostic). The RM-based diagnoses were corroborated against the clinical diagnoses. 43 RMs from 14 patients were classified as Grade I (5 [11.5%]); Grade II (17 [39.5%]); Grade III (9 [21%]) and Grade IV (12 [28%]). Causes of low gradings/errors included the following: insufficient chamber point density; window-of-interest<100% of cycle length (CL); <95% tachycardia CL mapped; variability of CL and/or unstable fiducial reference marker; and suboptimal bar height and scar settings. A data collection and map interpretation algorithm has been developed to optimize Ripple Maps in atrial tachycardias. This algorithm requires prospective testing on a real-time clinical platform. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Mapping bathymetry in an active surf zone with the WorldView2 multispectral satellite
NASA Astrophysics Data System (ADS)
Trimble, S. M.; Houser, C.; Brander, R.; Chirico, P.
2015-12-01
Rip currents are strong, narrow seaward flows of water that originate in the surf zones of many global beaches. They are related to hundreds of international drownings each year, but exact numbers are difficult to calculate due to logistical difficulties in obtaining accurate incident reports. Annual average rip current fatalities are estimated to be ~100, 53 and 21 in the United States (US), Costa Rica, and Australia respectively. Current warning systems (e.g. National Weather Service) do not account for fine resolution nearshore bathymetry because it is difficult to capture. The method shown here could provide frequent, high resolution maps of nearshore bathymetry at a scale required for improved rip prediction and warning. This study demonstrates a method for mapping bathymetry in the surf zone (20m deep and less), specifically within rip channels, because rips form at topographically low spots in the bathymetry as a result of feedback amongst waves, substrate, and antecedent bathymetry. The methods employ the Digital Globe WorldView2 (WV2) multispectral satellite and field measurements of depth to generate maps of the changing bathymetry at two embayed, rip-prone beaches: Playa Cocles, Puerto Viejo de Talamanca, Costa Rica, and Bondi Beach, Sydney, Australia. WV2 has a 1.1 day pass-over rate with 1.84m ground pixel resolution of 8 bands, including 'yellow' (585-625 nm) and 'coastal blue' (400-450 nm). The data is used to classify bottom type and to map depth to the return in multiple bands. The methodology is tested at each site for algorithm consistency between dates, and again for applicability between sites.
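The abstract does not state which depth-retrieval model is used; one widely used band-ratio formulation (a Stumpf-style log-ratio model, given here as an assumption rather than this study's method) predicts depth as a linear function of the ratio of log-transformed reflectances in two bands, calibrated against field depth measurements:

```python
import math

def fit_ratio_depth(blue, green, depths, n=1000.0):
    """Least-squares calibration of z = m1 * ln(n*Rb)/ln(n*Rg) + m0.

    blue/green are per-pixel reflectances in two bands, depths are the
    matching field depth measurements; n is a fixed scaling constant.
    """
    xs = [math.log(n * b) / math.log(n * g) for b, g in zip(blue, green)]
    m = len(xs)
    mx = sum(xs) / m
    mz = sum(depths) / m
    m1 = (sum((x - mx) * (z - mz) for x, z in zip(xs, depths))
          / sum((x - mx) ** 2 for x in xs))
    m0 = mz - m1 * mx
    return m1, m0
```

Once m1 and m0 are fitted at a few surveyed points, the same expression maps every satellite pixel to an estimated depth, which is how sparse field measurements can be extended to full surf-zone bathymetry.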
Fast object detection algorithm based on HOG and CNN
NASA Astrophysics Data System (ADS)
Lu, Tongwei; Wang, Dandan; Zhang, Yanduo
2018-04-01
In the field of computer vision, object classification and object detection are widely used in many fields. Traditional object detection has two main problems: the sliding-window region selection strategy has high time complexity and produces redundant windows, and the hand-crafted features are insufficiently robust. To solve these problems, a Region Proposal Network (RPN) is used to select candidate regions instead of the selective search algorithm. Compared with traditional algorithms and selective search algorithms, RPN has higher efficiency and accuracy. We combine the HOG feature and a convolutional neural network (CNN) to extract features, and use an SVM to classify. For TorontoNet, our algorithm's mAP is 1.6 percentage points higher; for OxfordNet, our algorithm's mAP is 1.3 percentage points higher.
Madan, Jason; Khan, Kamran A; Petrou, Stavros; Lamb, Sarah E
2017-05-01
Mapping algorithms are increasingly being used to predict health-utility values based on responses or scores from non-preference-based measures, thereby informing economic evaluations. We explored whether predictions in the EuroQol 5-dimension 3-level instrument (EQ-5D-3L) health-utility gains from mapping algorithms might differ if estimated using differenced versus raw scores, using the Roland-Morris Disability Questionnaire (RMQ), a widely used health status measure for low back pain, as an example. We estimated algorithms mapping within-person changes in RMQ scores to changes in EQ-5D-3L health utilities using data from two clinical trials with repeated observations. We also used logistic regression models to estimate response mapping algorithms from these data to predict within-person changes in responses to each EQ-5D-3L dimension from changes in RMQ scores. Predicted health-utility gains from these mappings were compared with predictions based on raw RMQ data. Using differenced scores reduced the predicted health-utility gain from a unit decrease in RMQ score from 0.037 (standard error [SE] 0.001) to 0.020 (SE 0.002). Analysis of response mapping data suggests that the use of differenced data reduces the predicted impact of reducing RMQ scores across EQ-5D-3L dimensions and that patients can experience health-utility gains on the EQ-5D-3L 'usual activity' dimension independent from improvements captured by the RMQ. Mappings based on raw RMQ data overestimate the EQ-5D-3L health utility gains from interventions that reduce RMQ scores. Where possible, mapping algorithms should reflect within-person changes in health outcome and be estimated from datasets containing repeated observations if they are to be used to estimate incremental health-utility gains.
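The raw-versus-differenced distinction can be sketched with hypothetical panel data (synthetic values, not the trial datasets): pooling raw scores mixes between-person differences into the slope, while differencing isolates the within-person change that an incremental health-utility estimate actually needs:

```python
import numpy as np

def raw_and_differenced_slopes(rmq, eq5d):
    """Compare two mapping regressions on two-timepoint panel data.

    rmq, eq5d: arrays of shape (n_people, 2) holding (baseline, follow-up)
    RMQ scores and EQ-5D utilities. Returns the slope from pooling raw
    observations and the slope from within-person differences.
    """
    raw_slope = np.polyfit(rmq.ravel(), eq5d.ravel(), 1)[0]
    diff_slope = np.polyfit(rmq[:, 1] - rmq[:, 0],
                            eq5d[:, 1] - eq5d[:, 0], 1)[0]
    return raw_slope, diff_slope
```

When person-level utility offsets are correlated with baseline RMQ (sicker patients have both higher RMQ and lower utility for reasons the RMQ does not capture), the raw slope is steeper than the true within-person effect, mirroring the paper's 0.037 versus 0.020 finding.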
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove the convergence of PAPA theoretically. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality.
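The MAP-EM baseline that PAPA is compared against builds on the classic ML-EM multiplicative update for Poisson data; a minimal sketch of that building block (without the TV prior or the preconditioning, so a simplification of both EM-TV and PAPA) is:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Classic ML-EM iteration for Poisson emission data y ~ Poisson(A x).

    Multiplicative update: x <- x / (A^T 1) * A^T (y / (A x)).
    Nonnegativity of x is preserved automatically.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity image A^T 1
    for _ in range(n_iter):
        x = x / sens * (A.T @ (y / (A @ x)))
    return x
```

With consistent data, the iterates converge to a nonnegative solution of A x = y; PAPA instead works with proximity operators of the TV term and uses this EM structure only as a preconditioner, which is where its speed advantage over plain MAP-EM comes from.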
Dakin, Helen; Abel, Lucy; Burns, Richéal; Yang, Yaling
2018-02-12
The Health Economics Research Centre (HERC) Database of Mapping Studies was established in 2013, based on a systematic review of studies developing mapping algorithms predicting EQ-5D. The Mapping onto Preference-based measures reporting Standards (MAPS) statement was published in 2015 to improve reporting of mapping studies. We aimed to update the systematic review and assess the extent to which recently-published studies mapping condition-specific quality of life or clinical measures to the EQ-5D follow the guidelines published in the MAPS Reporting Statement. A published systematic review was updated using the original inclusion criteria to include studies published by December 2016. We included studies reporting novel algorithms mapping from any clinical measure or patient-reported quality of life measure to either the EQ-5D-3L or EQ-5D-5L. Titles and abstracts of all identified studies and the full text of papers published in 2016 were assessed against the MAPS checklist. The systematic review identified 144 mapping studies reporting 190 algorithms mapping from 110 different source instruments to EQ-5D. Of the 17 studies published in 2016, nine (53%) had titles that followed the MAPS statement guidance, although only two (12%) had abstracts that fully addressed all MAPS items. When the full text of these papers was assessed against the complete MAPS checklist, only two studies (12%) were found to fulfil or partly fulfil all criteria. Of the 141 papers (across all years) that included abstracts, the items on the MAPS statement checklist that were fulfilled by the largest number of studies comprised having a structured abstract (95%) and describing target instruments (91%) and source instruments (88%). The number of published mapping studies continues to increase. Our updated database provides a convenient way to identify mapping studies for use in cost-utility analysis. Most recent studies do not fully address all items on the MAPS checklist.
Making a Magnetorheological Fluid from Mining Tailings
NASA Astrophysics Data System (ADS)
Quitian, G.; Saldarriaga, W.; Rojas, N.
2017-12-01
We obtained magnetite from mining tailings and used it to fabricate a magnetorheological fluid (MRF). Mineralogical and morphological characteristics of the recovered magnetite, as well as its particle size and geometry, were determined using X-ray diffraction (XRD) and energy dispersive spectrometry (EDS). Finally, the fabricated MRF was rheologically characterized in a device attached to a rheometer. Applying a magnetic field of 0.12 T increases the viscosity of the MRF by more than 400 pct. A structural formation should occur within the fluid by a reordering of particles into magnetic columns perpendicular to the flow direction; these structures give the fluid its increased viscosity. As the magnetic field increases, the structure formed becomes more resistant, resulting in a further increase in viscosity. Notably, with an applied field of 0.06 T or less, the MRF can reach viscosities spanning the working range of conventional oils (0.025 to 0.34 Pa s).
Zirconia coated carbonyl iron particle-based magnetorheological fluid for polishing
NASA Astrophysics Data System (ADS)
Shafrir, Shai N.; Romanofsky, Henry J.; Skarlinski, Michael; Wang, Mimi; Miao, Chunlin; Salzman, Sivan; Chartier, Taylor; Mici, Joni; Lambropoulos, John C.; Shen, Rui; Yang, Hong; Jacobs, Stephen D.
2009-08-01
Aqueous magnetorheological (MR) polishing fluids used in magnetorheological finishing (MRF) have a high solids concentration consisting of magnetic carbonyl iron (CI) particles and nonmagnetic polishing abrasives. The properties of MR polishing fluids are degraded over time by corrosion of the CI particles. Here we report on MRF spotting experiments performed on optical glasses using a zirconia-coated CI particle-based MR fluid. The zirconia-coated magnetic CI particles were prepared via sol-gel synthesis in kilogram quantities. The coating layer was ~50-100 nm thick, faceted in surface structure, and well adhered. Coated particles showed long-term stability against aqueous corrosion. "Free" nano-crystalline zirconia polishing abrasives were co-generated in the coating process, resulting in an abrasive-charged powder for MRF. A viable MR fluid was prepared simply by adding water. Spot polishing tests were performed on a variety of optical glasses over a period of 3 weeks with no signs of MR fluid degradation or corrosion. Stable material removal rates and smooth surfaces inside spots were obtained.
UAVSAR Program: Initial Results from New Instrument Capabilities
NASA Technical Reports Server (NTRS)
Lou, Yunling; Hensley, Scott; Moghaddam, Mahta; Moller, Delwyn; Chapin, Elaine; Chau, Alexandra; Clark, Duane; Hawkins, Brian; Jones, Cathleen; Marks, Phillip;
2013-01-01
UAVSAR is an imaging radar instrument suite that serves as NASA's airborne facility instrument to acquire scientific data for Principal Investigators, as well as a radar test-bed for new radar observation techniques and radar technology demonstration. Since commencing operational science observations in January 2009, the compact, reconfigurable, pod-based radar, flown underneath NASA Dryden's Gulfstream-III jet, has been acquiring L-band fully polarimetric SAR (POLSAR) data with repeat-pass interferometric (RPI) observations to provide measurements for science investigations in solid earth and cryospheric studies, vegetation mapping and land use classification, archaeological research, soil moisture mapping, geology and cold land processes. In the past year, we have made significant upgrades to add new instrument capabilities and new platform options to accommodate the increasing demand for UAVSAR to support scientific campaigns to measure subsurface soil moisture, acquire data in the polar regions, and for algorithm development, verification, and cross-calibration with other airborne/spaceborne instruments.
Special Issue on a Fault Tolerant Network on Chip Architecture
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Tinati, Melika; Khademzadeh, Ahmad; Ghavibazou, Maryam; Fekr, Atena Roshan
2010-06-01
In this paper, a fast and efficient spare-switch selection algorithm is presented for FERNA, a reliable NoC architecture based on a specific application mapped onto a mesh topology. Exploiting the ring concept used in FERNA, the algorithm matches the results of an exhaustive search at much lower run time while improving two parameters: the response time of the system and the extra communication cost. The inputs to the FERNA algorithm for minimizing these two parameters are derived from transaction-level simulation using SystemC TLM and from a mathematical formulation, respectively. The results demonstrate that improving these parameters raises the overall system reliability, which is calculated analytically. The mapping algorithm is also investigated as a factor affecting the extra bandwidth requirement and system reliability.
An adaptive SVSF-SLAM algorithm to improve the success and solving the UGVs cooperation problem
NASA Astrophysics Data System (ADS)
Demim, Fethi; Nemra, Abdelkrim; Louadj, Kahina; Hamerlain, Mustapha; Bazoula, Abdelouahab
2018-05-01
This paper presents a Decentralised Cooperative Simultaneous Localization and Mapping (DCSLAM) solution based on 2D laser data using an Adaptive Covariance Intersection (ACI). The ACI-DCSLAM algorithm is validated on a swarm of Unmanned Ground Vehicles (UGVs) receiving features, estimating the position and covariance of shared features before adding them to the global map. With the proposed solution, a group of UGVs is able to construct a large, reliable map and localise themselves within it without any user intervention. The most popular solutions to this problem are EKF-SLAM, nonlinear H-infinity SLAM and FastSLAM. The first suffers from two important problems: poor consistency caused by linearization and the need to compute Jacobians. The second, the H-infinity filter, is very promising because it makes no assumptions about noise characteristics, while the third is not suitable for real-time implementation. Therefore, a new alternative solution based on the smooth variable structure filter (SVSF) is adopted. A cooperative adaptive SVSF-SLAM algorithm is proposed in this paper to solve the UGV SLAM problem. Our main contribution consists in adapting the SVSF to solve the decentralised cooperative SLAM problem for multiple UGVs. The algorithms developed in this paper were implemented using two Pioneer mobile robots equipped with 2D laser telemetry sensors. Good results are obtained by the cooperative adaptive SVSF-SLAM algorithm compared with the cooperative EKF/H-infinity SLAM algorithms, especially when the noise is colored or affected by a variable bias. Simulation results confirm the efficiency of the proposed algorithm, which is more robust, stable and suited to real-time applications.
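The covariance-intersection fusion at the heart of ACI can be sketched for scalar estimates. This is a minimal illustration under stated assumptions: scalar states, and a simple heuristic weight favoring the more certain source (the paper's adaptive rule for choosing the weight is more elaborate).

```python
def covariance_intersection(x1, p1, x2, p2, omega):
    """Fuse two scalar estimates (x1, p1) and (x2, p2) with weight omega in [0, 1].

    Covariance intersection gives a consistent fused estimate even when the
    cross-correlation between the two sources is unknown -- the situation two
    UGVs face when sharing landmark estimates.
    """
    inv_p = omega / p1 + (1.0 - omega) / p2      # fused information
    p = 1.0 / inv_p                              # fused covariance
    x = p * (omega * x1 / p1 + (1.0 - omega) * x2 / p2)
    return x, p

def heuristic_omega(p1, p2):
    """A simple heuristic weight favoring the more certain source
    (a stand-in for the adaptive weight selection in ACI)."""
    return p2 / (p1 + p2)

# Two UGVs report the same landmark coordinate as 2.0 and 4.0, equally certain:
x, p = covariance_intersection(2.0, 1.0, 4.0, 1.0, heuristic_omega(1.0, 1.0))
# the fused estimate lands midway; the covariance does not shrink, reflecting
# that nothing is assumed about the correlation between the two sources
```

Note that with omega at either extreme the fusion simply returns one of the inputs, which is what makes the family of fused estimates consistent regardless of the unknown correlation.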
A class of parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, achieving significantly higher efficiency.
Aqil, M; Kita, I; Yano, A; Nishiyama, S
2006-01-01
It is widely accepted that an efficient flood alarm system may significantly improve public safety and mitigate the economic damage caused by inundations. In this paper, a modified adaptive neuro-fuzzy system is proposed that improves on the traditional neuro-fuzzy model. The new method employs a rule-correction-based algorithm to replace the error back-propagation algorithm that the traditional neuro-fuzzy method uses in the backward-pass calculation. The final value obtained during the backward-pass calculation using the rule-correction algorithm is then considered as a mapping function of the learning mechanism of the modified neuro-fuzzy system. The effectiveness of the proposed identification technique is demonstrated through a simulation study on the flood series of the Citarum River in Indonesia. The first four years of data (1987 to 1990) were used for model training/calibration, while the remaining data (1991 to 2002) were used for testing the model. The number of antecedent flows to include in the input variables was determined by two statistical methods, i.e. autocorrelation and partial autocorrelation between the variables. The performance accuracy of the model was evaluated in terms of two statistical indices, i.e. mean average percentage error and root mean square error. The algorithm was developed in a decision support system environment to enable users to process the data. The decision support system is found to be useful due to its interactive nature, flexibility in approach, and evolving graphical features, and can be adopted for any similar situation to predict streamflow. The main data processing includes gauging station selection, input generation, lead-time selection/generation, and length of prediction. This program enables users to process the flood data, to train/test the model using various input options, and to visualize results.
The program code consists of a set of files, which can be modified to suit other purposes. The results indicate that the modified neuro-fuzzy model applied to flood prediction reaches encouraging results for the river basin under examination. The comparison of the modified neuro-fuzzy predictions with the observed data was satisfactory, with testing-period errors varying between 2.632% and 5.560%. Thus, this program may also serve as a tool for real-time flood monitoring and process control.
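The two accuracy indices used to evaluate the model can be written down directly; a minimal sketch, with hypothetical observed and predicted flow values:

```python
import math

def mape(observed, predicted):
    """Mean absolute percentage error (%), the first index above."""
    return 100.0 / len(observed) * sum(
        abs((o - p) / o) for o, p in zip(observed, predicted))

def rmse(observed, predicted):
    """Root mean square error, in the units of the flow series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))

obs = [100.0, 120.0, 80.0]     # hypothetical observed flows
pred = [95.0, 126.0, 82.0]     # hypothetical model predictions
# mape(obs, pred) ≈ 4.17 %, rmse(obs, pred) ≈ 4.65
```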
System engineering approach to GPM retrieval algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, C. R.; Chandrasekar, V.
2004-01-01
System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. GV systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both No and Do at each range bin. More recently, Liao (2004) proposed a solution to the Do ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres.
With No and Do calculated at each bin, the rain rate can then be calculated based on a suitable rain-rate model. This paper develops a system engineering interface to the retrieval algorithms while remaining cognizant of system engineering issues, so that it can be used to bridge the divide between algorithm physics and overall mission requirements. Additionally, in line with the systems approach, a methodology is developed such that the measurement requirements pass through the retrieval model and other subsystems and manifest themselves as measurement and other system constraints. A systems model has been developed for the retrieval algorithm that can be evaluated through system-analysis tools such as MATLAB/Simulink.
Routh's algorithm - A centennial survey
NASA Technical Reports Server (NTRS)
Barnett, S.; Siljak, D. D.
1977-01-01
One hundred years have passed since the publication of Routh's fundamental work on determining the stability of constant linear systems. The paper outlines the algorithm and surveys its aspects and applications, including the distribution of zeros, the greatest common divisor, the abscissa of stability, continued fractions, canonical forms, the nonnegativity of polynomials and polynomial matrices, the absolute stability, optimality and passivity of dynamic systems, and the stability of two-dimensional circuits.
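The tabular test itself is compact enough to sketch; a minimal version of the Routh array for the regular case (no zero arising in the first column — the singular cases need the classical epsilon or auxiliary-polynomial workarounds, omitted here):

```python
def routh_first_column(coeffs):
    """First column of the Routh array for a polynomial given by its
    coefficients, highest degree first. Regular case only: assumes no zero
    appears in the first column during the recursion."""
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    width = len(rows[0])
    rows[1] += [0.0] * (width - len(rows[1]))      # pad the second row
    for _ in range(len(coeffs) - 2):
        prev, cur = rows[-2], rows[-1]
        new = [(cur[0] * prev[i + 1] - prev[0] * cur[i + 1]) / cur[0]
               for i in range(width - 1)]
        new.append(0.0)
        rows.append(new)
    return [r[0] for r in rows]

def is_hurwitz_stable(coeffs):
    """Stable iff there is no sign change in the first column
    (assuming a positive leading coefficient). Each sign change
    corresponds to one zero in the right half-plane."""
    return all(c > 0 for c in routh_first_column(coeffs))

# s^3 + 2s^2 + 3s + 1: first column 1, 2, 5/2, 1 -> stable
# s^3 +  s^2 +  s + 3: two sign changes -> two right-half-plane zeros
```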
Use of Multi-Resolution Wavelet Feature Pyramids for Automatic Registration of Multi-Sensor Imagery
NASA Technical Reports Server (NTRS)
Zavorin, Ilya; LeMoigne, Jacqueline
2003-01-01
The problem of image registration, or alignment of two or more images representing the same scene or object, has to be addressed in various disciplines that employ digital imaging. In the area of remote sensing, just like in medical imaging or computer vision, it is necessary to design robust, fast and widely applicable algorithms that would allow automatic registration of images generated by various imaging platforms at the same or different times, and that would provide sub-pixel accuracy. One of the main issues that needs to be addressed when developing a registration algorithm is what type of information should be extracted from the images being registered, to be used in the search for the geometric transformation that best aligns them. The main objective of this paper is to evaluate several wavelet pyramids that may be used both for invariant feature extraction and for representing images at multiple spatial resolutions to accelerate registration. We find that the band-pass wavelets obtained from the Steerable Pyramid due to Simoncelli perform better than two types of low-pass pyramids when the images being registered have a relatively small amount of nonlinear radiometric variation between them. Based on these findings, we propose a modification of a gradient-based registration algorithm that has recently been developed for medical data. We test the modified algorithm on several sets of real and synthetic satellite imagery.
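The way a pyramid accelerates registration can be sketched for pure integer translation. This is a minimal coarse-to-fine illustration only, not the paper's algorithm: a plain averaging pyramid stands in for the wavelet pyramids, and exhaustive SSD search stands in for the gradient-based optimization.

```python
def downsample(img):
    """One pyramid level: halve resolution by averaging 2x2 blocks."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(w)] for i in range(h)]

def mean_ssd(a, b, dy, dx):
    """Mean squared difference between a and b shifted by (dy, dx),
    computed over the overlapping region."""
    total, n = 0.0, 0
    for i in range(len(a)):
        for j in range(len(a[0])):
            y, x = i + dy, j + dx
            if 0 <= y < len(b) and 0 <= x < len(b[0]):
                total += (a[i][j] - b[y][x]) ** 2
                n += 1
    return total / max(n, 1)

def register_translation(ref, mov, levels=2, radius=2):
    """Coarse-to-fine integer-shift registration: search at the coarsest
    level, then double and refine the estimate at each finer level."""
    pyr_ref, pyr_mov = [ref], [mov]
    for _ in range(levels - 1):
        pyr_ref.append(downsample(pyr_ref[-1]))
        pyr_mov.append(downsample(pyr_mov[-1]))
    dy = dx = 0
    for r, m in zip(reversed(pyr_ref), reversed(pyr_mov)):
        dy, dx = 2 * dy, 2 * dx          # propagate to the finer level
        _, du, dv = min((mean_ssd(r, m, dy + u, dx + v), u, v)
                        for u in range(-radius, radius + 1)
                        for v in range(-radius, radius + 1))
        dy, dx = dy + du, dx + dv
    return dy, dx
```

The point of the pyramid is visible in the search budget: each level only examines a small window around the propagated estimate, instead of the full shift range at full resolution.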
Clustering of color map pixels: an interactive approach
NASA Astrophysics Data System (ADS)
Moon, Yiu Sang; Luk, Franklin T.; Yuen, K. N.; Yeung, Hoi Wo
2003-12-01
The demand for digital maps continues to grow as mobile electronic devices become more popular. Instead of creating the entire map from scratch, we may convert a scanned paper map into a digital one. Color clustering is the very first step of the conversion process. Currently, most existing clustering algorithms are fully automatic. They are fast and efficient but may not work well in map conversion because of the numerous ambiguities associated with printed maps. Here we introduce two interactive approaches for color clustering on the map: color clustering with pre-calculated index colors (PCIC) and color clustering with pre-calculated color ranges (PCCR). We also introduce a memory model that can enhance and integrate different image processing techniques for fine-tuning the clustering results. Problems and examples of the algorithms are discussed in the paper.
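In its simplest form, clustering against pre-calculated index colors reduces to nearest-palette-color assignment; a minimal sketch, where the palette and pixel values are hypothetical and the interactive pre-calculation step is assumed already done:

```python
def cluster_to_palette(pixels, palette):
    """Assign each RGB pixel the index of the nearest pre-calculated index
    color, using squared Euclidean distance in RGB space."""
    def nearest(p):
        return min(range(len(palette)),
                   key=lambda k: sum((a - b) ** 2 for a, b in zip(palette[k], p)))
    return [nearest(p) for p in pixels]

palette = [(0, 0, 0), (255, 0, 0), (0, 0, 255), (255, 255, 255)]  # hypothetical index colors
pixels = [(250, 10, 5), (12, 10, 240), (200, 200, 210)]           # scanned-map pixels
labels = cluster_to_palette(pixels, palette)  # → [1, 2, 3]
```

The interactive part of PCIC lies in how the palette is chosen and corrected by the user; the assignment step itself stays this simple.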
Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya
2014-01-01
Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727
Partial differential equation transform — Variational formulation and Fourier analysis
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Nonlinear partial differential equation (PDE) models are established approaches for image/signal processing, data analysis and surface construction. Most previous geometric PDEs are utilized as low-pass filters which give rise to image trend information. In an earlier work, we introduced mode decomposition evolution equations (MoDEEs), which behave like high-pass filters and are able to systematically provide intrinsic mode functions (IMFs) of signals and images. Due to their tunable time-frequency localization and perfect reconstruction, the operation of MoDEEs is called a PDE transform. By appropriate selection of PDE transform parameters, we can tune IMFs into trends, edges, textures, noise, etc., which can be further utilized in secondary processing for various purposes. This work introduces the variational formulation, performs the Fourier analysis, and demonstrates biomedical and biological applications of the proposed PDE transform. The variational formulation offers an algorithm to incorporate two image functions and two sets of low-pass PDE operators in the total energy functional. The two low-pass PDE operators have different signs, leading to energy disparity, while a coupling term, acting as a relative fidelity of the two image functions, is introduced to reduce the disparity of the two energy components. We construct variational PDE transforms by using the Euler-Lagrange equation and artificial time propagation. Fourier analysis of a simplified PDE transform is presented to shed light on the filter properties of high-order PDE transforms. Such an analysis also offers insight on the parameter selection of the PDE transform. The proposed PDE transform algorithm is validated by numerous benchmark tests. In one selected challenging example, we illustrate the ability of the PDE transform to separate two adjacent frequencies, sin(x) and sin(1.1x). Such an ability is due to the PDE transform's controllable frequency localization obtained by adjusting the order of the PDEs. 
The frequency selection is achieved either by diffusion coefficients or by propagation time. Finally, we explore a large number of practical applications to further demonstrate the utility of proposed PDE transform. PMID:22207904
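The low-pass role of geometric PDEs described above can be illustrated with the simplest possible case, the 1-D heat equation: the diffused output is the trend, and the residual (signal minus trend) plays the high-pass role. This is a sketch of the low-pass/high-pass split only, not the MoDEE scheme.

```python
def diffuse(signal, steps, dt=0.2):
    """Explicit forward-Euler heat equation u_t = u_xx with mirrored
    (Neumann) boundaries; stable for dt <= 0.5. Acts as a low-pass filter
    and conserves the total mass of the signal."""
    u = list(signal)
    for _ in range(steps):
        nxt = []
        for i in range(len(u)):
            left = u[max(i - 1, 0)]
            right = u[min(i + 1, len(u) - 1)]
            nxt.append(u[i] + dt * (left - 2.0 * u[i] + right))
        u = nxt
    return u

signal = [0.0, 0.0, 1.0, 0.0, 0.0]
trend = diffuse(signal, steps=1)                    # smoothed, mass-preserving
residual = [s - t for s, t in zip(signal, trend)]   # high-pass part
```

Raising the order of the spatial operator (e.g. to u_xxxx and beyond) sharpens the filter's frequency cut-off, which is the mechanism behind the transform's controllable frequency localization.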
A Two-Wheel Observing Mode for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
Starin, Scott R.; O'Donnell, James R., Jr.
2001-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE). Due to the MAP project's limited mass, power, and budget, a traditional reliability concept including fully redundant components was not feasible. The MAP design employs selective hardware redundancy, along with backup software modes and algorithms, to improve the odds of mission success. This paper describes the effort to develop a backup control mode, known as Observing II, that will allow the MAP science mission to continue in the event of a failure of one of its three reaction wheel assemblies. This backup science mode requires a change from MAP's nominal zero-momentum control system to a momentum-bias system. In this system, existing thruster-based control modes are used to establish a momentum bias about the sun line sufficient to spin the spacecraft up to the desired scan rate. Natural spacecraft dynamics exhibits spin and nutation similar to the nominal MAP science mode with different relative rotation rates, so the two reaction wheels are used to establish and maintain the desired nutation angle from the sun line. Detailed descriptions of the Observing II control algorithm and simulation results will be presented, along with the operational considerations of performing the rest of MAP's necessary functions with only two wheels.
Targeting accuracy of single-isocenter intensity-modulated radiosurgery for multiple lesions.
Calvo-Ortega, J F; Pozo, M; Moragues, S; Casals, J
2017-01-01
To investigate the targeting accuracy of intensity-modulated radiosurgery (IMRS) plans designed to simultaneously treat multiple brain metastases with a single isocenter. A home-made acrylic phantom able to support a film (EBT3) in its coronal plane was used. The phantom was CT scanned and three coplanar small targets (one central and two peripheral) were outlined in the Eclipse system. Peripheral targets were 6 cm from the central one. A reference IMRS plan was designed to simultaneously treat the three targets using a single isocenter located at the center of the central target. After positioning the phantom on the linac using the room lasers, a CBCT scan was acquired and the reference plan was mapped onto it by placing the planned isocenter at the intersection of the film landmarks marking the linac isocenter. The mapped plan was then recalculated and delivered. The film dose distribution was derived using a cloud computing application (www.radiochromic.com) that uses a triple-channel dosimetry algorithm. Dose distributions were compared using the gamma index (5%/1 mm) over a 5 × 5 cm² region centered on each target. The 2D shifts required to obtain the best gamma passing rates on the peripheral target regions were compared with those reported for the central target. The experiment was repeated ten times in different sessions. Average 2D shifts required to achieve optimal gamma passing rates (99%, 97%, 99%) were 0.7 mm (SD: 0.3 mm), 0.8 mm (SD: 0.4 mm) and 0.8 mm (SD: 0.3 mm) for the central and the two peripheral targets, respectively. No statistically significant differences (p > 0.05) were found in targeting accuracy between the central and the two peripheral targets. The study revealed a targeting accuracy within 1 mm for off-isocenter targets within 6 cm of the linac isocenter when a single-isocenter IMRS plan is used. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
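The gamma test used above combines a dose-difference tolerance with a distance-to-agreement. A minimal 1-D sketch with global normalization follows; the actual film analysis is 2-D, and the radiochromic.com implementation is not reproduced here.

```python
import math

def gamma_index_1d(ref_dose, eval_dose, spacing_mm, dose_tol=0.05, dta_mm=1.0):
    """Per-point gamma: minimum over the evaluated profile of the combined
    dose-difference / distance-to-agreement metric. dose_tol is a fraction
    of the maximum reference dose (global normalization)."""
    d_max = max(ref_dose)
    gammas = []
    for i, dr in enumerate(ref_dose):
        best = float("inf")
        for j, de in enumerate(eval_dose):
            dd = (de - dr) / (dose_tol * d_max)       # dose axis, in tolerances
            dx = (j - i) * spacing_mm / dta_mm        # space axis, in DTAs
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    """Percentage of points with gamma <= 1."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

A perfectly matching profile passes at 100%, while a uniform 20% dose error fails a 5%/1 mm criterion everywhere, since no spatial search can compensate a pure dose offset.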
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deffet, S; Macq, B; Farace, P
2016-06-15
Purpose: The conversion from Hounsfield units (HU) to stopping powers is a major source of range uncertainty in proton therapy (PT). Our contribution shows how proton radiographs (PR) acquired with a multi-layer ionization chamber in a PT center can be used for accurate patient positioning and subsequently for patient-specific optimization of the conversion from HU to stopping powers. Methods: A multi-layer ionization chamber was used to measure the integral depth-dose (IDD) of 220 MeV pencil beam spots passing through several anthropomorphic phantoms. The whole area of interest was imaged by repositioning the couch and acquiring a 45×45 mm² frame for each position. A rigid registration algorithm was implemented to correct the positioning error between the proton radiographs and the planning CT. After registration, the stopping power map obtained from the planning CT with the calibration curve of the treatment planning system was used together with the water equivalent thickness gained from two proton radiographs to generate a phantom-specific stopping power map. Results: Our results show that it is possible to perform a registration with submillimeter accuracy from proton radiography obtained by sending beamlets separated by more than 1 mm. This was made possible by the complex shape of the IDD due to the presence of lateral heterogeneities along the path of the beam. Submillimeter positioning was still possible with a 5 mm spot spacing. Phantom-specific stopping power maps obtained by minimizing the range error were cross-verified by the acquisition of an additional proton radiograph in which the phantom was positioned in a random but known manner. Conclusion: Our results indicate that a CT-PR registration algorithm together with range-error-based optimization can be used to produce a patient-specific stopping power map. 
Sylvain Deffet reports financial funding of his PhD thesis by Ion Beam Applications (IBA) during the conduct of the study and outside the submitted work. Francois Vander Stappen reports being employed by Ion Beam Applications (IBA) during the conduct of the study and outside the submitted work.
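The quantity being matched in the optimization above, the water-equivalent thickness of each beamlet, is a line integral of relative stopping power. A minimal sketch with hypothetical RSP values and step size:

```python
def water_equivalent_thickness(rsp_along_path, step_mm):
    """WET of one beamlet path: line integral of relative stopping power
    (dimensionless, water = 1.0) over geometric depth."""
    return step_mm * sum(rsp_along_path)

def range_error(wet_from_ct_mm, wet_from_pr_mm):
    """Per-beamlet discrepancy between the CT-predicted WET and the WET
    measured by proton radiography; minimizing this over all beamlets is
    what drives the phantom-specific calibration."""
    return wet_from_ct_mm - wet_from_pr_mm

# 50 mm of soft tissue at RSP 1.04 plus 20 mm of bone at RSP 1.6,
# sampled every 1 mm (hypothetical values):
path = [1.04] * 50 + [1.6] * 20
wet = water_equivalent_thickness(path, 1.0)   # ≈ 84 mm
```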
A closed-form solution to tensor voting: theory and applications.
Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard
2012-08-01
We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed the structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require the random sample consensus typically used in existing robust statistical techniques. We performed quantitative evaluation of its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.
Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki
2012-01-01
Background: For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. Results: We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performances compared with other existing grid layouts. Conclusions: Use of an approximate pattern matching algorithm quickly redistributes the laid-out nodes by fast, non-grid algorithms on the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486
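The redistribution of preprocessed nodes onto square grid points can be illustrated with a greedy nearest-free-point assignment. This is a simplified stand-in for the paper's approximate pattern matching step, with hypothetical node coordinates:

```python
def snap_to_grid(positions, grid_size):
    """Place each laid-out node on the nearest unoccupied square grid point,
    scanning outward in rings. Greedy: one node per grid point, positions
    preserved approximately."""
    taken, result = set(), {}
    for name, (x, y) in positions.items():
        gx, gy = round(x / grid_size), round(y / grid_size)
        for r in range(100):                      # ring radius, in grid steps
            ring = [(gx + dx, gy + dy)
                    for dx in range(-r, r + 1)
                    for dy in range(-r, r + 1)
                    if max(abs(dx), abs(dy)) == r
                    and (gx + dx, gy + dy) not in taken]
            if ring:
                best = min(ring, key=lambda c: (c[0] * grid_size - x) ** 2 +
                                               (c[1] * grid_size - y) ** 2)
                taken.add(best)
                result[name] = best
                break
    return result

# two nodes land near the same grid point; the second is pushed to a neighbor
layout = snap_to_grid({"a": (0.1, 0.1), "b": (0.2, 0.0), "c": (2.1, 0.0)}, 1.0)
```

The paper's pattern matching goes further by choosing the assignment globally rather than node by node, which is what preserves the topological relationships more faithfully.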
Virtual Network Embedding via Monte Carlo Tree Search.
Haeri, Soroush; Trajkovic, Ljiljana
2018-02-01
Network virtualization helps overcome shortcomings of the current Internet architecture. The virtualized network architecture enables coexistence of multiple virtual networks (VNs) on an existing physical infrastructure. The VN embedding (VNE) problem, which deals with the embedding of VN components onto a physical network, is known to be NP-hard. In this paper, we propose two VNE algorithms: MaVEn-M and MaVEn-S. MaVEn-M employs the multicommodity flow algorithm for virtual link mapping while MaVEn-S uses the shortest-path algorithm. They formalize the virtual node mapping problem by using the Markov decision process (MDP) framework and devise action policies (node mappings) for the proposed MDP using the Monte Carlo tree search algorithm. Service providers may adjust the execution time of the MaVEn algorithms based on the traffic load of VN requests. The objective of the algorithms is to maximize the profit of infrastructure providers. We develop a discrete event VNE simulator to implement and evaluate the performance of MaVEn-M, MaVEn-S, and several recently proposed VNE algorithms. We introduce profitability as a new performance metric that captures both acceptance and revenue-to-cost ratios. Simulation results show that the proposed algorithms find more profitable solutions than the existing algorithms. Given additional computation time, they further improve the embedding solutions.
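Of the two link-mapping choices, the shortest-path variant (MaVEn-S) is the easier to sketch: once two virtual nodes have been placed, their virtual link is routed over the cheapest substrate path. A minimal Dijkstra sketch over a hypothetical substrate graph; the MDP/MCTS node-mapping layer is omitted here.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over a substrate graph {node: {neighbor: cost}}; returns the
    cheapest path and its cost -- the virtual-link mapping step."""
    dist, prev = {src: 0.0}, {}
    heap, visited = [(0.0, src)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

substrate = {"a": {"b": 1.0, "c": 4.0},
             "b": {"a": 1.0, "c": 1.0},
             "c": {"a": 4.0, "b": 1.0}}
# a virtual link between nodes mapped to "a" and "c" is routed a -> b -> c
```

MaVEn-M replaces this step with a multicommodity flow formulation, trading extra computation for the ability to split link demand across several substrate paths.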
Handling Data Skew in MapReduce Cluster by Using Partition Tuning
Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these data has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and a partition tuning method to disperse key-value pairs across virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that the PTSH algorithm can handle data skew in MapReduce efficiently and improves the performance of MapReduce jobs in comparison with native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is well suited for association rule mining (ARM) on healthcare data. © 2017 Yufei Gao et al.
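The two-stage idea behind PTSH can be sketched as follows. This is an illustrative simplification, not the authors' code: keys are first hashed into many small "virtual partitions", whose observed sizes are then greedily packed onto reducers to balance load even when one key dominates.

```python
import heapq
import zlib
from collections import Counter

def two_stage_partition(keys, n_virtual, n_reducers):
    """Stage 1: hash keys into virtual partitions; stage 2: pack them onto reducers."""
    # Stage 1: deterministic hash into more partitions than reducers.
    sizes = Counter(zlib.crc32(k.encode()) % n_virtual for k in keys)
    # Stage 2: largest virtual partitions first, each to the least-loaded reducer.
    heap = [(0, r) for r in range(n_reducers)]   # (current load, reducer id)
    heapq.heapify(heap)
    assignment = {}
    for vp, size in sorted(sizes.items(), key=lambda kv: -kv[1]):
        load, r = heapq.heappop(heap)
        assignment[vp] = r
        heapq.heappush(heap, (load + size, r))
    loads = [0] * n_reducers
    for vp, size in sizes.items():
        loads[assignment[vp]] += size
    return assignment, loads

# Skewed input: one hot key accounts for 75% of all pairs.
keys = ["hot"] * 90 + [f"k{i}" for i in range(30)]
assignment, loads = two_stage_partition(keys, n_virtual=12, n_reducers=3)
```

Because all occurrences of one key must land in the same partition, the hot key still pins one reducer, but the remaining virtual partitions are spread to keep the others busy, which is the load-balancing effect the paper targets.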
Static vs. dynamic decoding algorithms in a non-invasive body-machine interface
Seáñez-González, Ismael; Pierella, Camilla; Farshchiansadegh, Ali; Thorp, Elias B.; Abdollahi, Farnaz; Pedersen, Jessica; Mussa-Ivaldi, Ferdinando A.
2017-01-01
In this study, we consider a non-invasive body-machine interface that captures body motions still available to people with spinal cord injury (SCI) and maps them into a set of signals for controlling a computer user interface while engaging in a sustained level of mobility and exercise. We compare the effectiveness of two decoding algorithms that transform a high-dimensional body-signal vector into a lower-dimensional control vector in 6 subjects with high-level SCI and 8 controls. One algorithm is based on a static map, set through principal component analysis (PCA), from the current body signals to the current value of the control vector; the other is based on a dynamic map, set through a Kalman filter, from a segment of body signals to the value and temporal derivatives of the control vector. SCI and control participants performed straighter and smoother cursor movements with the Kalman algorithm during center-out reaching, but their movements were faster and more precise when using PCA. All participants were able to use the interface's continuous, two-dimensional control to type on a virtual keyboard and play pong, and performance with both algorithms was comparable. However, seven of eight control participants preferred PCA as their method of virtual wheelchair control. The unsupervised PCA algorithm was easier to train and seemed sufficient to achieve a higher degree of learnability and perceived ease of use. PMID:28092564
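The static PCA decoder described above can be sketched in pure NumPy: high-dimensional body signals are projected onto the first two principal components to give a 2-D control vector. The signal dimensions and data here are synthetic placeholders; the study's calibration procedure and the Kalman alternative are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
signals = rng.normal(size=(500, 8))          # 500 samples of 8 body signals

mean = signals.mean(axis=0)
centered = signals - mean
# SVD of the centered data: rows of Vt are the principal directions,
# ordered by decreasing explained variance.
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
W = Vt[:2].T                                  # static map: 8 signals -> 2 controls

def decode(sample):
    """Map one body-signal vector to a 2-D cursor control vector."""
    return (sample - mean) @ W

control = decode(signals[0])
```

The map is "static" in the paper's sense: each control value depends only on the current body-signal vector, with no temporal state, in contrast to the Kalman filter's dynamic mapping.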
Wen, Qing; Kim, Chang-Sik; Hamilton, Peter W; Zhang, Shu-Dong
2016-05-11
Gene expression connectivity mapping has gained much popularity recently, with a number of successful applications in biomedical research testifying to its utility and promise. Previous methodological research in connectivity mapping mainly focused on two of the key components in the framework, namely the reference gene expression profiles and the connectivity mapping algorithms. The other key component, the query gene signature, has been left to users to construct without much consensus on how this should be done, although it is the issue most relevant to end users. As a key input to the connectivity mapping process, the gene signature is crucially important for returning biologically meaningful and relevant results. This paper formulates a standardized procedure for constructing high-quality gene signatures from a user's perspective. We describe a two-stage process for making quality gene signatures using gene expression data as initial inputs. First, a differential gene expression analysis compares two distinct biological states; only the genes that pass stringent statistical criteria are considered in the second stage of the process, which involves ranking genes by statistical as well as biological significance. We introduce a "gene signature progression" method as a standard procedure in connectivity mapping. Starting from the highest-ranked gene, we progressively determine the minimum length of the gene signature that allows connections to the reference profiles (drugs) to be established with a preset target false discovery rate. We use a lung cancer dataset and a breast cancer dataset as two case studies to demonstrate how this standardized procedure works, and we show that highly relevant and interesting biological connections are returned. Of particular note is gefitinib, identified as among the candidate therapeutics in our lung cancer case study.
Our gene signature was based on gene expression data from Taiwan female non-smoker lung cancer patients, while there is evidence from independent studies that gefitinib is highly effective in treating women, non-smoker or former light smoker, advanced non-small cell lung cancer patients of Asian origin. In summary, we introduced a gene signature progression method into connectivity mapping, which enables a standardized procedure for constructing high quality gene signatures. This progression method is particularly useful when the number of differentially expressed genes identified is large, and when there is a need to prioritize them to be included in the query signature. The results from two case studies demonstrate that the approach we have developed is capable of obtaining pertinent candidate drugs with high precision.
Managing coherence via put/get windows
Blumrich, Matthias A [Ridgefield, CT]; Chen, Dong [Croton on Hudson, NY]; Coteus, Paul W [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Heidelberger, Philip [Cortlandt Manor, NY]; Hoenicke, Dirk [Ossining, NY]; Ohmacht, Martin [Yorktown Heights, NY]
2011-01-11
A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
An image encryption algorithm based on 3D cellular automata and chaotic maps
NASA Astrophysics Data System (ADS)
Del Rey, A. Martín; Sánchez, G. Rodríguez
2015-05-01
A novel encryption algorithm to cipher digital images is presented in this work. The digital image is rendered into a three-dimensional (3D) lattice, and the protocol consists of two phases: a confusion phase, where 24 chaotic Cat maps are applied, and a diffusion phase, where a 3D cellular automaton is evolved. The encryption method is shown to be secure against the most important cryptanalytic attacks.
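The confusion phase relies on Cat maps being bijections of the pixel lattice. A minimal 2D sketch of the classic Arnold cat map, (x, y) -> (x + y, x + 2y) mod N, which shuffles pixels without losing any; the paper's 24 generalized Cat maps and the 3D cellular-automaton diffusion phase are not reproduced here:

```python
def cat_map(image):
    """Apply one iteration of Arnold's cat map to a square image (list of rows)."""
    n = len(image)
    out = [[None] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            # The matrix [[1, 1], [1, 2]] has determinant 1, so the map is
            # a bijection mod n: every pixel lands on a unique new position.
            out[(x + y) % n][(x + 2 * y) % n] = image[x][y]
    return out

img = [[4 * x + y for y in range(4)] for x in range(4)]   # 4x4 test "image"
shuffled = cat_map(img)
```

Because the map is invertible, decryption of the confusion phase simply applies the inverse matrix the same number of times.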
Assessing the external validity of algorithms to estimate EQ-5D-3L from the WOMAC.
Kiadaliri, Aliasghar A; Englund, Martin
2016-10-04
The use of mapping algorithms has been suggested as a solution for predicting health utilities when no preference-based measure is included in a study. However, the validity and predictive performance of these algorithms are highly variable, so assessing their accuracy and validity before using them in a new setting is important. The aim of the current study was to assess the predictive accuracy of three mapping algorithms for estimating the EQ-5D-3L from the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) among Swedish people with knee disorders. Two of these algorithms were developed using ordinary least squares (OLS) models and one using a mixture model. Data from 1078 subjects, mean (SD) age 69.4 (7.2) years, with frequent knee pain and/or knee osteoarthritis from the Malmö Osteoarthritis study in Sweden were used. The algorithms' performance was assessed using mean error, mean absolute error, and root mean squared error. Two types of prediction were estimated for the mixture model: weighted average (WA) and conditional on estimated component (CEC). The overall mean was overpredicted by one OLS model and underpredicted by the two other algorithms (P < 0.001). All predictions but the CEC predictions of the mixture model had a narrower range than the observed scores (22 to 90 %). All algorithms suffered from overprediction for severe health states and underprediction for mild health states, to a lesser extent for the mixture model. While the mixture model outperformed the OLS models at the extremes of the EQ-5D-3L distribution, it underperformed around the center of the distribution. While the algorithm based on the mixture model reflected the distribution of the EQ-5D-3L data more accurately than the OLS models, all algorithms suffered from systematic bias. This calls for caution in applying these mapping algorithms in a new setting, particularly in samples with milder knee problems than the original sample.
Assessing the impact of the choice of these algorithms on cost-effectiveness studies through sensitivity analysis is recommended.
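The three accuracy measures used above (mean error, mean absolute error, root mean squared error) are straightforward to compute from observed and predicted utilities; a minimal NumPy sketch:

```python
import numpy as np

def accuracy_metrics(observed, predicted):
    """Return the three standard prediction-error summaries."""
    err = np.asarray(predicted) - np.asarray(observed)
    return {
        "ME": err.mean(),                    # signed bias (over/underprediction)
        "MAE": np.abs(err).mean(),           # average magnitude of error
        "RMSE": np.sqrt((err ** 2).mean()),  # penalizes large errors more heavily
    }

m = accuracy_metrics(observed=[0.6, 0.8, 1.0], predicted=[0.7, 0.7, 1.0])
```

ME near zero with a large MAE is exactly the pattern of systematic but offsetting bias the study reports: overprediction for severe states canceling underprediction for mild ones in the mean.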
Wang, C.; Wei, Q. L.; Huang, W.; Luo, Q.; He, J. G.; Tang, G. P.
2013-07-01
CeO2 nanoparticles with modified surfaces and a mean size distribution ranging from 107.0 nm to 127.7 nm are used as the abrasive in a magnetorheological finishing (MRF) fluid. Slow-rotation dispersion without shear thinning performs better than fast emulsification dispersion. Stable D-shaped finishing spots and a high-quality, precisely processed surface with PV = 0.1λ, GRMS = 0.002λ/cm, and Rq = 0.83 nm are obtained on a 435 mm × 435 mm BK7 glass using a self-developed MRF apparatus.
Symmetric encryption algorithms using chaotic and non-chaotic generators: A review
Radwan, Ahmed G.; AbdElHaleem, Sherif H.; Abd-El-Hafiz, Salwa K.
2015-01-01
This paper summarizes the symmetric image encryption results of 27 different algorithms, which include substitution-only, permutation-only or both phases. The cores of these algorithms are based on several discrete chaotic maps (Arnold's cat map and a combination of three generalized maps), one continuous chaotic system (Lorenz) and two non-chaotic generators (fractals and chess-based algorithms). Each algorithm has been analyzed by the correlation coefficients between pixels (horizontal, vertical and diagonal), differential attack measures, Mean Square Error (MSE), entropy, sensitivity analyses and the 15 standard tests of the National Institute of Standards and Technology (NIST) SP-800-22 statistical suite. The analyzed algorithms include a set of new image encryption algorithms based on non-chaotic generators, using substitution only (fractals), permutation only (chess-based), or both. Moreover, two different permutation scenarios are presented, in which the permutation phase either has or does not have a relationship with the input image through an ON/OFF switch. Different encryption-key lengths and complexities are provided, from short to long keys, to resist brute-force attacks. In addition, the sensitivities of these different techniques to a one-bit change in the input parameters of the substitution key as well as the permutation key are assessed. Finally, a comparative discussion of this work versus much recent research, with respect to the generators used, type of encryption, and analyses performed, is presented to highlight the strengths and added contribution of this paper. PMID:26966561
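One of the evaluation criteria above, the correlation coefficient between adjacent pixels, can be sketched in a few lines: a good cipher image shows near-zero correlation between neighboring pixels, while natural images are strongly correlated. The test images below are synthetic stand-ins.

```python
import numpy as np

def horizontal_correlation(img):
    """Pearson correlation of each pixel with its right-hand neighbour."""
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

smooth = np.tile(np.arange(64), (64, 1))                     # smooth gradient rows
noise = np.random.default_rng(1).integers(0, 256, (64, 64))  # cipher-like noise
```

Vertical and diagonal variants follow by shifting along the other axes; the paper reports all three directions for each of the 27 algorithms.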
Metric Ranking of Invariant Networks with Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Changxia; Ge, Yong; Song, Qinbao
The management of large-scale distributed information systems relies on the effective use and modeling of monitoring data collected at various points in the distributed information systems. A promising approach is to discover invariant relationships among the monitoring data and generate invariant networks, where a node is a monitoring data source (metric) and a link indicates an invariant relationship between two monitoring data sources. Such an invariant network representation can help system experts to localize and diagnose system faults by examining broken invariant relationships and their related metrics, because system faults usually propagate among the monitoring data and eventually lead to some broken invariant relationships. However, at any one time, there are usually many broken links (invariant relationships) within an invariant network. Without proper guidance, it is difficult for system experts to manually inspect this large number of broken links. Thus, a critical challenge is how to effectively and efficiently rank the metrics (nodes) of invariant networks according to their anomaly levels. The ranked list of metrics provides system experts with useful guidance for localizing and diagnosing system faults. To this end, we propose to model the nodes and the broken links as a Markov Random Field (MRF), and develop an iterative algorithm to infer the anomaly of each node based on belief propagation (BP).  Finally, we validate the proposed algorithm on both real-world and synthetic data sets to illustrate its effectiveness.
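A toy sum-product belief propagation pass over a small invariant network illustrates the idea of ranking nodes by anomaly level. The pairwise potentials and priors below are illustrative assumptions, not the paper's calibrated values; states are 0 = normal, 1 = anomalous.

```python
PRIOR = [0.9, 0.1]                        # a priori, nodes are normal
PSI_BROKEN = [[0.2, 1.0], [1.0, 1.0]]     # broken link unlikely if both normal
PSI_INTACT = [[1.0, 0.3], [0.3, 0.3]]     # intact link favours both normal

def bp_anomaly(nodes, edges, iters=10):
    """edges: {(i, j): 'broken' or 'intact'}; returns anomaly belief per node."""
    psi = {e: (PSI_BROKEN if s == "broken" else PSI_INTACT) for e, s in edges.items()}
    nbrs = {n: [] for n in nodes}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    msgs = {(i, j): [1.0, 1.0] for a, b in edges for i, j in ((a, b), (b, a))}
    for _ in range(iters):
        new = {}
        for i, j in msgs:
            table = psi.get((i, j)) or psi.get((j, i))   # tables are symmetric
            out = []
            for xj in (0, 1):
                total = 0.0
                for xi in (0, 1):
                    term = PRIOR[xi] * table[xi][xj]
                    for k in nbrs[i]:
                        if k != j:
                            term *= msgs[(k, i)][xi]
                    total += term
                out.append(total)
            z = out[0] + out[1]
            new[(i, j)] = [out[0] / z, out[1] / z]
        msgs = new
    beliefs = {}
    for n in nodes:
        b = [PRIOR[0], PRIOR[1]]
        for k in nbrs[n]:
            b[0] *= msgs[(k, n)][0]
            b[1] *= msgs[(k, n)][1]
        beliefs[n] = b[1] / (b[0] + b[1])    # probability node n is anomalous
    return beliefs

# Node 0 touches two broken invariants; nodes 1 and 2 share an intact one.
anomaly = bp_anomaly([0, 1, 2], {(0, 1): "broken", (0, 2): "broken", (1, 2): "intact"})
```

Sorting nodes by these beliefs yields the kind of ranked list the paper hands to system experts: the node implicated in the most broken links, here node 0, rises to the top.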
Automated Detection of Synapses in Serial Section Transmission Electron Microscopy Image Stacks
Kreshuk, Anna; Koethe, Ullrich; Pax, Elizabeth; Bock, Davi D.; Hamprecht, Fred A.
2014-01-01
We describe a method for fully automated detection of chemical synapses in serial electron microscopy images with highly anisotropic axial and lateral resolution, such as images taken on transmission electron microscopes. Our pipeline starts from classification of the pixels based on 3D pixel features, which is followed by segmentation with an Ising model MRF and another classification step, based on object-level features. Classifiers are learned on sparse user labels; a fully annotated data subvolume is not required for training. The algorithm was validated on a set of 238 synapses in 20 serial 7197×7351 pixel images (4.5×4.5×45 nm resolution) of mouse visual cortex, manually labeled by three independent human annotators and additionally re-verified by an expert neuroscientist. The error rate of the algorithm (12% false negative, 7% false positive detections) is better than state-of-the-art, even though, unlike the state-of-the-art method, our algorithm does not require a prior segmentation of the image volume into cells. The software is based on the ilastik learning and segmentation toolkit and the vigra image processing library and is freely available on our website, along with the test data and gold standard annotations (http://www.ilastik.org/synapse-detection/sstem). PMID:24516550
SU-F-T-301: Planar Dose Pass Rate Inflation Due to the MapCHECK Measurement Uncertainty Function
Bailey, D; Spaans, J; Kumaraswamy, L
Purpose: To quantify the effect of the Measurement Uncertainty function on planar dosimetry pass rates, as analyzed with Sun Nuclear Corporation analytic software ("MapCHECK" or "SNC Patient"). This optional function is toggled on by default upon software installation, and automatically increases the user-defined dose percent difference (%Diff) tolerance for each planar dose comparison. Methods: Dose planes from 109 IMRT fields and 40 VMAT arcs were measured with the MapCHECK 2 diode array, and compared to calculated planes from a commercial treatment planning system. Pass rates were calculated within the SNC analytic software using varying calculation parameters, including Measurement Uncertainty on and off. By varying the %Diff criterion for each dose comparison performed with Measurement Uncertainty turned off, an effective %Diff criterion was defined for each field/arc corresponding to the pass rate achieved with MapCHECK Uncertainty turned on. Results: For 3%/3 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.8–1.1% on average, depending on plan type and calculation technique, for an average pass rate increase of 1.0–3.5% (maximum +8.7%). For 2%/2 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.7–1.2% on average, for an average pass rate increase of 3.5–8.1% (maximum +14.2%). The largest increases in pass rate are generally seen with poorly matched planar dose comparisons; the MapCHECK Uncertainty effect is markedly smaller as pass rates approach 100%. Conclusion: The Measurement Uncertainty function may substantially inflate planar dose comparison pass rates for typical IMRT and VMAT planes. The types of uncertainties incorporated into the function (and their associated quantitative estimates) as described in the software user's manual may not accurately estimate realistic measurement uncertainty for the user's measurement conditions. Pass rates listed in published reports or otherwise compared with the results of other users or vendors should clearly indicate whether the Measurement Uncertainty function is used.
NASA Astrophysics Data System (ADS)
Liu, Tao; Zhang, Wei; Yan, Shaoze
2015-10-01
In this paper, a multi-scale image enhancement algorithm based on low-pass filtering and nonlinear transformation is proposed for infrared testing images of de-bonding defects in solid propellant rocket motors. Infrared testing images with high noise levels and low contrast are the basis for identifying defects and calculating defect sizes. To improve image quality, according to the distribution properties of the detection image and within the framework of the stationary wavelet transform, the approximation coefficients at a suitable decomposition level are processed by index low-pass filtering using the Fourier transform; after that, a nonlinear transformation is applied to further improve image contrast. To verify the validity of the algorithm, it is applied to infrared testing images of two specimens with de-bonding defects: one made of a high-strength steel and the other of a carbon fiber composite. As the results show, in the images processed by the proposed enhancement algorithm, most of the noise is eliminated and the contrast between defect areas and normal areas is greatly improved; in addition, using the binarized version of the processed image, continuous defect edges can be extracted. All of this demonstrates the validity of the algorithm, which performs well for enhancing infrared thermography images.
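The two processing steps can be sketched as a Fourier-domain low-pass filter followed by a nonlinear (gamma) contrast transform. This omits the paper's wavelet decomposition stage, and the hard cutoff and gamma value are illustrative assumptions:

```python
import numpy as np

def lowpass_then_gamma(img, cutoff=0.2, gamma=0.5):
    """Suppress high spatial frequencies, then stretch contrast nonlinearly."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    f[radius > cutoff] = 0                     # zero out high-frequency noise
    smooth = np.real(np.fft.ifft2(np.fft.ifftshift(f)))
    smooth = np.clip(smooth, 0, None)
    norm = smooth / (smooth.max() + 1e-12)
    return norm ** gamma                       # gamma < 1 brightens mid-tones

rng = np.random.default_rng(0)
noisy = np.clip(0.5 + 0.3 * rng.normal(size=(64, 64)), 0, 1)
out = lowpass_then_gamma(noisy)
```

The smoothed, contrast-stretched result is what a subsequent binarization step would threshold to extract defect edges.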
NASA Astrophysics Data System (ADS)
Entekhabi, D.; Jagdhuber, T.; Das, N. N.; Baur, M.; Link, M.; Piles, M.; Akbar, R.; Konings, A. G.; Mccoll, K. A.; Alemohammad, S. H.; Montzka, C.; Kunstmann, H.
2016-12-01
The active-passive soil moisture retrieval algorithm of NASA's SMAP mission depends on robust statistical estimation of active-passive covariation (β) and vegetation structure (Γ) parameters in order to provide reliable global measurements of soil moisture at an intermediate resolution (9 km) compared to the native resolution of the radiometer (36 km) and radar (3 km) instruments. These parameters apply to the SMAP radiometer-radar combination over the period of record that was cut short with the end of the SMAP radar transmission. They also apply to the current SMAP radiometer and Sentinel-1A/B radar combination for high-resolution surface soil moisture mapping. However, the performance of the statistically based approach depends directly on the selection of a representative time frame in which these parameters can be estimated assuming dynamic soil moisture and stationary soil roughness and vegetation cover. Here, we propose a novel, data-driven and physics-based single-pass retrieval of active-passive microwave covariation and vegetation parameters for the SMAP mission. The algorithm does not depend on time series analyses and can be applied using as little as one pair of active-passive acquisitions. The algorithm stems from the physical link between microwave emission and scattering via conservation of energy. The formulation of the emission radiative transfer is combined with the Distorted Born Approximation of radar scattering for vegetated land surfaces. The two formulations are simultaneously solved for the covariation and vegetation structure parameters. Preliminary results from SMAP active-passive observations (April 13th to July 7th 2015) compare well with the time-series statistical approach and confirm the capability of this method to estimate these parameters. Moreover, the method is not restricted to a given frequency (it applies to both L-band and C-band combinations for the radar) or incidence angle (all angles, not just the fixed 40° incidence).
Therefore, the approach is applicable to the combination of SMAP and Sentinel-1A/B data for active-passive and high-resolution soil moisture estimation.
Converting Parkinson-Specific Scores into Health State Utilities to Assess Cost-Utility Analysis.
Chen, Gang; Garcia-Gordillo, Miguel A; Collado-Mateo, Daniel; Del Pozo-Cruz, Borja; Adsuar, José C; Cordero-Ferrera, José Manuel; Abellán-Perpiñán, José María; Sánchez-Martínez, Fernando Ignacio
2018-06-07
The aim of this study was to compare the Parkinson's Disease Questionnaire-8 (PDQ-8) with three multi-attribute utility (MAU) instruments (EQ-5D-3L, EQ-5D-5L, and 15D) and to develop mapping algorithms that could be used to transform PDQ-8 scores into MAU scores. A cross-sectional study was conducted. A final sample of 228 evaluable patients was included in the analyses. Sociodemographic and clinical data were also collected. Two EQ-5D questionnaires were scored using Spanish tariffs. Two models and three statistical techniques were used to estimate each model in the direct mapping framework for all three MAU instruments, including the most widely used ordinary least squares (OLS), the robust MM-estimator, and the generalized linear model (GLM). For both EQ-5D-3L and EQ-5D-5L, indirect response mapping based on an ordered logit model was also conducted. Three goodness-of-fit tests were employed to compare the models: the mean absolute error (MAE), the root-mean-square error (RMSE), and the intra-class correlation coefficient (ICC) between the predicted and observed utilities. Health state utility scores ranged from 0.61 (EQ-5D-3L) to 0.74 (15D). The mean PDQ-8 score was 27.51. The correlation between overall PDQ-8 score and each MAU instrument ranged from -0.729 (EQ-5D-5L) to -0.752 (EQ-5D-3L). A mapping algorithm based on PDQ-8 items had better performance than using the overall score. For the two EQ-5D questionnaires, in general, the indirect mapping approach had comparable or even better performance than direct mapping based on MAE. Mapping algorithms developed in this study enable the estimation of utility values from the PDQ-8. The indirect mapping equations reported for two EQ-5D questionnaires will further facilitate the calculation of EQ-5D utility scores using other country-specific tariffs.
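A direct OLS mapping from the eight PDQ-8 item scores to a utility value, as in the study's item-based model, reduces to a linear regression. The coefficients below are fit on synthetic data purely for illustration; they are not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)
items = rng.integers(0, 5, size=(228, 8)).astype(float)   # 228 patients, 8 items
# Hypothetical data-generating process: worse item scores lower utility.
true_w = -0.03 * np.ones(8)
utility = 0.95 + items @ true_w + 0.02 * rng.normal(size=228)

X = np.column_stack([np.ones(len(items)), items])          # intercept + 8 items
coef, *_ = np.linalg.lstsq(X, utility, rcond=None)

def predict_utility(item_scores):
    """Map one patient's 8 PDQ-8 item scores to a predicted utility."""
    return coef[0] + np.dot(coef[1:], item_scores)

mae = np.abs(X @ coef - utility).mean()
```

The MAE on the fitting sample is the first of the study's three goodness-of-fit measures; RMSE and ICC follow from the same residuals.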
Estimating forest species abundance through linear unmixing of CHRIS/PROBA imagery
NASA Astrophysics Data System (ADS)
Stagakis, Stavros; Vanikiotis, Theofilos; Sykioti, Olga
2016-09-01
The advancing technology of hyperspectral remote sensing offers the opportunity of accurate land cover characterization of complex natural environments. In this study, a linear spectral unmixing algorithm that incorporates a novel hierarchical Bayesian approach (BI-ICE) was applied on two spatially and temporally adjacent CHRIS/PROBA images over a forest in North Pindos National Park (Epirus, Greece). The scope is to investigate the potential of this algorithm to discriminate two different forest species (i.e. beech - Fagus sylvatica, pine - Pinus nigra) and produce accurate species-specific abundance maps. The unmixing results were evaluated in uniformly distributed plots across the test site using measured fractions of each species derived by very high resolution aerial orthophotos. Landsat-8 images were also used to produce a conventional discrete-type classification map of the test site. This map was used to define the exact borders of the test site and compare the thematic information of the two mapping approaches (discrete vs abundance mapping). The required ground truth information, regarding training and validation of the applied mapping methodologies, was collected during a field campaign across the study site. Abundance estimates reached very good overall accuracy (R2 = 0.98, RMSE = 0.06). The most significant source of error in our results was due to the shadowing effects that were very intense in some areas of the test site due to the low solar elevation during CHRIS acquisitions. It is also demonstrated that the two mapping approaches are in accordance across pure and dense forest areas, but the conventional classification map fails to describe the natural spatial gradients of each species and the actual species mixture across the test site. Overall, the BI-ICE algorithm presented increased potential to unmix challenging objects with high spectral similarity, such as different vegetation species, under real and not optimum acquisition conditions. 
Its full potential remains to be investigated in further and more complex study sites in view of the upcoming satellite hyperspectral missions.
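Fully constrained linear unmixing for two endmembers (beech, pine) can be sketched as a per-pixel least-squares solve followed by projection onto the non-negativity and sum-to-one constraints. The hierarchical Bayesian BI-ICE machinery of the study is not reproduced, and the endmember spectra below are synthetic.

```python
import numpy as np

# Columns: hypothetical reflectance spectra of the two endmembers (bands x 2).
E = np.array([[0.10, 0.40],
              [0.30, 0.20],
              [0.50, 0.45]])

def unmix(pixel, endmembers):
    """Estimate per-pixel endmember fractions under abundance constraints."""
    frac, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    frac = np.clip(frac, 0, None)              # non-negativity constraint
    return frac / frac.sum()                   # sum-to-one constraint

mixed = 0.7 * E[:, 0] + 0.3 * E[:, 1]          # pixel that is 70% beech, 30% pine
frac = unmix(mixed, E)
```

Mapping `frac` over every pixel yields the species-specific abundance maps that the discrete classification cannot provide across mixed stands.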
The subjective importance of noise spectral content
NASA Astrophysics Data System (ADS)
Baxter, Donald; Phillips, Jonathan; Denman, Hugh
2014-01-01
This paper presents secondary Standard Quality Scale (SQS2) rankings in overall quality JNDs for a subjective analysis of the 3 axes of noise, amplitude, spectral content, and noise type, based on the ISO 20462 softcopy ruler protocol. For the initial pilot study, a Python noise simulation model was created to generate the matrix of noise masks for the softcopy ruler base images with different levels of noise, different low pass filter noise bandwidths and different band pass filter center frequencies, and 3 different types of noise: luma only, chroma only, and luma and chroma combined. Based on the lessons learned, the full subjective experiment, involving 27 observers from Google, NVIDIA and STMicroelectronics, was modified to incorporate a wider set of base image scenes, and the removal of band pass filtered noise masks to ease observer fatigue. Good correlation was observed with the Aptina subjective noise study. The absence of tone mapping in the noise simulation model visibly reduced the contrast at high levels of noise, due to the clipping of the high levels of noise near black and white. Under the 34-inch viewing distance, no significant difference was found between the luma only noise masks and the combined luma and chroma noise masks. This was not the intuitive expectation. Two of the base images with large uniform areas, 'restaurant' and 'no parking', were found to be consistently more sensitive to noise than the texture rich scenes. Two key conclusions are (1) there are fundamentally different sensitivities to noise on a flat patch versus noise in real images and (2) magnification of an image accentuates visual noise in a way that is non-representative of typical noise reduction algorithms generating the same output frequency. Analysis of our experimental noise masks applied to a synthetic Macbeth ColorChecker Chart confirmed the color-dependent nature of the visibility of luma and chroma noise.
NASA Astrophysics Data System (ADS)
Chander, Shard; Ganguly, Debojyoti
2017-01-01
Water level was estimated, using the AltiKa radar altimeter onboard the SARAL satellite, over the Ukai reservoir using modified algorithms specifically for inland water bodies. The methodology was based on waveform classification, waveform retracking, and dedicated inland range correction algorithms. The 40-Hz waveforms were classified based on linear discriminant analysis and a Bayesian classifier. Waveforms were retracked using the Brown, Ice-2, threshold, and offset center of gravity methods. Retracking algorithms were implemented on full waveforms and subwaveforms (only one leading edge) for estimating the improvement in the retrieved range. European Centre for Medium-Range Weather Forecasts (ECMWF) operational and ECMWF re-analysis pressure fields, and global ionosphere maps, were used to accurately estimate the range corrections. Microwave and optical images were used for estimating the extent of the water body and the altimeter track location. Four global positioning system (GPS) field trips were conducted on the same day as the SARAL pass using two dual-frequency GPS receivers. One GPS was mounted close to the dam in static mode and the other was used on a moving vehicle within the reservoir in kinematic mode. An in situ gauge dataset was provided by the Ukai dam authority for the time period January 1972 to March 2015. The altimeter-retrieved water level results were then validated against the GPS survey and the in situ gauge dataset. With a good selection of the virtual station (waveform classification, backscattering coefficient), the Ice-2 retracker and the subwaveform retracker both work better, with an overall root-mean-square error <15 cm. The results support that the AltiKa dataset, due to its smaller footprint and the sharp trailing edge of the Ka-band waveform, can be utilized for more accurate water level information over inland water bodies.
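Of the retrackers listed, the threshold method is the simplest to state concretely. A minimal sketch (the 50% threshold and linear sub-gate interpolation are common conventions, not necessarily the exact variant used in this study):

```python
import numpy as np

def threshold_retrack(waveform, threshold=0.5):
    """Return the (fractional) gate index where the waveform's leading
    edge first crosses `threshold` times the peak power, using linear
    interpolation between gates."""
    w = np.asarray(waveform, dtype=float)
    level = threshold * w.max()
    i = np.nonzero(w >= level)[0][0]  # first gate at/above the level
    if i == 0:
        return 0.0
    # Linear interpolation between gates i-1 and i.
    return (i - 1) + (level - w[i - 1]) / (w[i] - w[i - 1])
```

The retrieved gate index converts to a range correction via the gate spacing; applying the same routine to a subwaveform (leading edge only) is what the abstract calls subwaveform retracking.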
Biological basis for space-variant sensor design I: parameters of monkey and human spatial vision
NASA Astrophysics Data System (ADS)
Rojer, Alan S.; Schwartz, Eric L.
1991-02-01
Biological sensor design has long provided inspiration for sensor design in machine vision. However, relatively little attention has been paid to the actual design parameters provided by biological systems, as opposed to the general nature of biological vision architectures. In the present paper we provide a review of current knowledge of primate spatial vision design parameters and present recent experimental and modeling work from our lab which demonstrates that a numerical conformal mapping, a refinement of our previous complex logarithmic model, provides the best current summary of this feature of the primate visual system. In particular, we review experimental and modeling studies which indicate that: (1) the global spatial architecture of primate visual cortex is well summarized by a numerical conformal mapping whose simplest analytic approximation is the complex logarithm function; (2) the columnar sub-structure of primate visual cortex can be well summarized by a model based on a band-pass filtered white noise. We also refer to ongoing work in our lab which demonstrates that the joint columnar/map structure of primate visual cortex can be modeled and summarized in terms of a new algorithm, the "proto-column" algorithm. This work provides a reference-point for current engineering approaches to novel architectures for
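The complex-logarithm approximation mentioned above is easy to state concretely. A minimal sketch, where the foveal parameter `a = 0.3` is an illustrative value rather than a measured one:

```python
import numpy as np

def complex_log_map(x, y, a=0.3):
    """Map retinal coordinates (x, y) to cortical coordinates via
    w = log(z + a), the simplest analytic approximation to the primate
    retinotopic map; `a` removes the singularity at the fovea (z = 0)."""
    z = x + 1j * y
    w = np.log(z + a)
    return w.real, w.imag  # (eccentricity-like, angle-like) coordinates
```

The logarithm compresses the periphery: equal steps in retinal eccentricity map to progressively smaller cortical distances, which is the space-variant property that motivates log-polar sensor designs.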
Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman
2015-01-01
A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. 
Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in comparable regional mean values to those from the maximum likelihood algorithm while reducing noise. Achieving robust performance in various noise-level simulation and patient studies, the WJE-MAP algorithm demonstrates its potential in clinical quantitative PET imaging.
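The intensity-only JE prior at the heart of both algorithms reduces to the Shannon entropy of a joint intensity histogram; a minimal sketch (the bin count and the absence of histogram smoothing are simplifications of what a MAP reconstruction would use):

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=32):
    """Shannon joint entropy (bits) of two same-shape images, estimated
    from their 2-D intensity histogram."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = h / h.sum()          # joint probability estimate
    p = p[p > 0]             # 0 log 0 := 0
    return float(-(p * np.log2(p)).sum())
```

When the two images are functionally similar the joint histogram is concentrated and JE is low; the WJE variant above applies the same measure to wavelet subbands rather than raw intensities.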
Image registration with auto-mapped control volumes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreibmann, Eduard; Xing Lei
2006-04-15
Many image registration algorithms rely on the use of homologous control points on the two input image sets to be registered. In reality, the interactive identification of the control points on both images is tedious, difficult, and often a source of error. We propose a two-step algorithm to automatically identify homologous regions that are used as a priori information during the image registration procedure. First, a number of small control volumes having distinct anatomical features are identified on the model image in a somewhat arbitrary fashion. Instead of attempting to find their correspondences in the reference image through user interaction, in the proposed method each of the control regions is mapped to the corresponding part of the reference image by using an automated image registration algorithm. A normalized cross-correlation (NCC) function or mutual information was used as the auto-mapping metric, and a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was employed to optimize the function to find the optimal mapping. For rigid registration, the transformation parameters of the system are obtained by averaging those derived from the individual control volumes. In our deformable calculation, the mapped control volumes are treated as the nodes or control points with known positions on the two images. If the number of control volumes is not enough to cover the whole image to be registered, additional nodes are placed on the model image and then located on the reference image in a manner similar to the conventional B-spline deformable calculation. For deformable registration, the established correspondence by the auto-mapped control volumes provides valuable guidance for the registration calculation and greatly reduces the dimensionality of the problem.
The two-step registration was applied to three rigid registration cases (two PET-CT registrations and a brain MRI-CT registration) and one deformable registration of the inhale and exhale phases of a lung 4D CT. Algorithm convergence was confirmed by starting the registration calculations from a large number of initial transformation parameters. An accuracy of ~2 mm was achieved for both deformable and rigid registration. The proposed image registration method greatly reduces the complexity involved in the determination of homologous control points and allows us to minimize the subjectivity and uncertainty associated with the current manual interactive approach. Patient studies have indicated that the two-step registration technique is fast, reliable, and provides a valuable tool to facilitate both rigid and nonrigid image registrations.
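The NCC auto-mapping metric used above can be sketched in a few lines; this is the metric only (in the paper it is coupled with an L-BFGS search over transform parameters), and the function name is illustrative:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-shape image volumes.
    Returns 1.0 for a perfect match up to affine intensity scaling."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

Because NCC is invariant to linear intensity changes, it suits mono-modal mapping of a control volume onto the reference image; the mutual-information alternative mentioned in the abstract would be used when the modalities differ.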
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serin, E.; Codel, G.; Mabhouti, H.
Purpose: In small-field geometries, electronic equilibrium can be lost, making it challenging for the dose-calculation algorithm to accurately predict the dose, especially in the presence of tissue heterogeneities. In this study, the dosimetric accuracy of the Monte Carlo (MC) advanced dose calculation and sequential algorithms of the Multiplan treatment planning system was investigated for small radiation fields incident on homogeneous and heterogeneous geometries. Methods: Small open fields of the fixed cones of a CyberKnife M6 unit, 100 to 500 mm², were used for this study. The fields were incident on an in-house phantom containing lung, air, and bone inhomogeneities, and also on a homogeneous phantom. Using the same film batch, the net OD to dose calibration curve was obtained using CK with the 60 mm fixed cone by delivering 0-800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. The dosimetric accuracy of the MC and sequential algorithms in the presence of the inhomogeneities was compared against EBT3 film dosimetry. Results: Open-field tests in a homogeneous phantom showed good agreement between the two algorithms and film measurement. For the MC algorithm, the minimum gamma analysis passing rates between measured and calculated dose distributions were 99.7% and 98.3% for homogeneous and inhomogeneous fields in the case of lung and bone, respectively. For the sequential algorithm, the minimum gamma analysis passing rates were 98.9% and 92.5% for homogeneous and inhomogeneous fields, respectively, for all cone sizes used. In the case of the air heterogeneity, the differences were larger for both calculation algorithms. Overall, when compared to measurement, MC had better agreement than the sequential algorithm. Conclusion: The Monte Carlo calculation algorithm in the Multiplan treatment planning system is an improvement over the existing sequential algorithm. Dose discrepancies were observed in the presence of air inhomogeneities.
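The gamma analysis used as the pass/fail criterion here can be sketched for 1-D profiles. This is a simplified global-gamma illustration (real QA software interpolates the evaluated distribution and works in 2-D or 3-D); the 3%/3 mm criteria match those quoted in such studies:

```python
import numpy as np

def gamma_pass_rate(x, dose_ref, dose_eval, dd=0.03, dta=3.0):
    """Global gamma analysis on 1-D dose profiles sampled at positions x
    (mm). For each reference point, take the minimum over evaluated
    points of sqrt((dose diff / tolerance)^2 + (distance / DTA)^2);
    the point passes when that minimum is <= 1. The dose tolerance is
    `dd` times the maximum reference dose."""
    tol_d = dd * dose_ref.max()
    passed = 0
    for xi, di in zip(x, dose_ref):
        g = np.sqrt(((dose_eval - di) / tol_d) ** 2 + ((x - xi) / dta) ** 2)
        passed += g.min() <= 1.0
    return passed / len(x)
```

A profile that differs from the reference by less than 3% everywhere passes at 100% regardless of the distance term, which is why small uniform-dose errors do not show up in the passing rate.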
Two fast approximate wavelet algorithms for image processing, classification, and recognition
NASA Astrophysics Data System (ADS)
Wickerhauser, Mladen V.
1994-07-01
We use large libraries of template waveforms with remarkable orthogonality properties to recast the relatively complex principal orthogonal decomposition (POD) into an optimization problem with a fast solution algorithm. Then it becomes practical to use POD to solve two related problems: recognizing or classifying images, and inverting a complicated map from a low-dimensional configuration space to a high-dimensional measurement space. In the case where the number N of pixels or measurements is more than 1000 or so, the classical O(N³) POD algorithm becomes very costly, but it can be replaced with an approximate best-basis method that has complexity O(N² log N). A variation of POD can also be used to compute an approximate Jacobian for the complicated map.
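For orientation, classical POD is the eigendecomposition of the snapshot covariance, computable via a thin SVD; this sketch shows only that O(N³)-class baseline, not the paper's fast wavelet best-basis approximation:

```python
import numpy as np

def pod_basis(snapshots, k):
    """Leading k POD modes of an (n_samples, N) snapshot matrix,
    computed via thin SVD of the mean-centered data (equivalent to PCA)."""
    X = snapshots - snapshots.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:k]  # rows are orthonormal modes in measurement space
```

The best-basis idea replaces this dense SVD with a search over a wavelet-packet library for a basis that nearly diagonalizes the covariance, which is where the O(N² log N) cost comes from.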
Technique for Chestband Contour Shape-Mapping in Lateral Impact
Hallman, Jason J; Yoganandan, Narayan; Pintar, Frank A
2011-01-01
The chestband transducer permits noninvasive measurement of transverse plane biomechanical response during blunt thorax impact. Although experiments may reveal complex two-dimensional (2D) deformation response to boundary conditions, biomechanical studies have heretofore employed only uniaxial chestband contour quantifying measurements. The present study described and evaluated an algorithm by which source subject-specific contour data may be systematically mapped to a target generalized anthropometry for computational studies of biomechanical response or anthropomorphic test dummy development. Algorithm performance was evaluated using chestband contour datasets from two rigid lateral impact boundary conditions: Flat wall and anterior-oblique wall. Comparing source and target anthropometry contours, peak deflections and deformation-time traces deviated by less than 4%. These results suggest that the algorithm is appropriate for 2D deformation response to lateral impact boundary conditions. PMID:21676399
Tissue-specific epigenetics in gene neighborhoods: myogenic transcription factor genes
Chandra, Sruti; Terragni, Jolyon; Zhang, Guoqiang; Pradhan, Sriharsa; Haushka, Stephen; Johnston, Douglas; Baribault, Carl; Lacey, Michelle; Ehrlich, Melanie
2015-01-01
Myogenic regulatory factor (MRF) genes, MYOD1, MYOG, MYF6 and MYF5, are critical for the skeletal muscle lineage. Here, we used various epigenome profiles from human myoblasts (Mb), myotubes (Mt), muscle and diverse non-muscle samples to elucidate the involvement of multigene neighborhoods in the regulation of MRF genes. We found more far-distal enhancer chromatin associated with MRF genes in Mb and Mt than previously reported from studies in mice. For the MYF5/MYF6 gene-pair, regions of Mb-associated enhancer chromatin were located throughout the adjacent 236-kb PTPRQ gene even though Mb expressed negligible amounts of PTPRQ mRNA. Some enhancer chromatin regions inside PTPRQ in Mb were also seen in PTPRQ mRNA-expressing non-myogenic cells. This suggests dual-purpose PTPRQ enhancers that upregulate expression of PTPRQ in non-myogenic cells and MYF5/MYF6 in myogenic cells. In contrast, the myogenic enhancer chromatin regions distal to MYOD1 were intergenic and up to 19 kb long. Two of them contain small, known MYOD1 enhancers, and one displayed an unusually high level of 5-hydroxymethylcytosine in a quantitative DNA hydroxymethylation assay. Unexpectedly, three regions of MYOD1-distal enhancer chromatin in Mb and Mt overlapped enhancer chromatin in umbilical vein endothelial cells, which might upregulate a distant gene (PIK3C2A). Lastly, genes surrounding MYOG were preferentially transcribed in Mt, like MYOG itself, and exhibited nearby myogenic enhancer chromatin. These neighboring chromatin regions may be enhancers acting in concert to regulate myogenic expression of multiple adjacent genes. Our findings reveal the very different and complex organization of gene neighborhoods containing closely related transcription factor genes. PMID:26041816
Hubscher, C H; Reed, W R; Kaddumi, E G; Armstrong, J E; Johnson, R D
2010-01-01
The specific white matter location of all the spinal pathways conveying penile input to the rostral medulla is not known. Our previous studies using rats demonstrated the loss of low but not high threshold penile inputs to medullary reticular formation (MRF) neurons after acute and chronic dorsal column (DC) lesions of the T8 spinal cord and loss of all penile inputs after lesioning the dorsal three-fifths of the cord. In the present study, select T8 lesions were made and terminal electrophysiological recordings were performed 45–60 days later in a limited portion of the nucleus reticularis gigantocellularis (Gi) and Gi pars alpha. Lesions included subtotal dorsal hemisections that spared only the lateral half of the dorsal portion of the lateral funiculus on one side, dorsal and over-dorsal hemisections, and subtotal transections that spared predominantly just the ventromedial white matter. Electrophysiological data for 448 single unit recordings obtained from 32 urethane-anaesthetized rats, when analysed in groups based upon histological lesion reconstructions, revealed (1) ascending bilateral projections in the dorsal, dorsolateral and ventrolateral white matter of the spinal cord conveying information from the male external genitalia to MRF, and (2) ascending bilateral projections in the ventrolateral white matter conveying information from the pelvic visceral organs (bladder, descending colon, urethra) to MRF. Multiple spinal pathways from the penis to the MRF may correspond to different functions, including those processing affective/pleasure/motivational, nociception, and mating-specific (such as for erection and ejaculation) inputs. PMID:20142271
Cost analysis for the implementation of a medication review with follow-up service in Spain.
Noain, Aranzazu; Garcia-Cardenas, Victoria; Gastelurrutia, Miguel Angel; Malet-Larrea, Amaia; Martinez-Martinez, Fernando; Sabater-Hernandez, Daniel; Benrimoj, Shalom I
2017-08-01
Background Medication review with follow-up (MRF) is a professional pharmacy service proven to be cost-effective. Its broader implementation is limited, mainly due to the lack of evidence-based implementation programs that include economic and financial analysis. Objective To analyse the costs and estimate the price of providing and implementing MRF. Setting Community pharmacy in Spain. Method Elderly patients using poly-pharmacy received a community pharmacist-led MRF for 6 months. The cost analysis was based on the time-driven activity based costing model and included the provider costs, initial investment costs and maintenance expenses. The service price was estimated using the labour costs, costs associated with service provision, potential number of patients receiving the service and mark-up. Main outcome measures Costs and potential price of MRF. Results A mean time of 404.4 (SD 232.2) was spent on service provision and was extrapolated to annual costs. Service provider cost per patient ranged from €196 (SD 90.5) to €310 (SD 164.4). The mean initial investment per pharmacy was €4594 and the mean annual maintenance costs €3,068. Largest items contributing to cost were initial staff training, continuing education and renting of the patient counselling area. The potential service price ranged from €237 to €628 per patient a year. Conclusion Time spent by the service provider accounted for 75-95% of the final cost, followed by initial investment costs and maintenance costs. Remuneration for professional pharmacy services provision must cover service costs and appropriate profit, allowing for their long-term sustainability.
Fabrication of high precision metallic freeform mirrors with magnetorheological finishing (MRF)
NASA Astrophysics Data System (ADS)
Beier, Matthias; Scheiding, Sebastian; Gebhardt, Andreas; Loose, Roman; Risse, Stefan; Eberhardt, Ramona; Tünnermann, Andreas
2013-09-01
The fabrication of complex-shaped metal mirrors for optical imaging is a classical application area of diamond machining techniques. Aspherical and freeform-shaped optical components up to several 100 mm in diameter can be manufactured with high precision in an acceptable amount of time. However, applications are naturally limited to the infrared spectral region due to scatter losses at shorter wavelengths caused by the remaining periodic diamond turning structure. Achieving diffraction-limited performance in the visible spectrum demands the application of additional polishing steps. Magnetorheological finishing (MRF) is a powerful tool to improve figure and finish of complex-shaped optics simultaneously in a single processing step. The application of MRF as a figuring tool for precise metal mirrors is a nontrivial task, since the technology was primarily developed for figuring and finishing a variety of other optical materials, such as glasses or glass ceramics. In the presented work, MRF is used as a figuring tool for diamond-turned aluminum lightweight mirrors with electroless nickel plating. It is applied as a direct follow-up process after diamond machining of the mirrors. A high-precision measurement setup, composed of an interferometer and an advanced computer-generated hologram with additional alignment features, allows for precise metrology of the freeform-shaped optics in short measuring cycles. Shape deviations of less than 150 nm PV / 20 nm rms are achieved reliably for freeform mirrors with apertures of more than 300 mm. Characterization of removable and induced spatial frequencies is carried out by investigating the power spectral density.
Siqin, Qimuge; Nishiumi, Tadayuki; Yamada, Takahisa; Wang, Shuiqing; Liu, Wenjun; Wu, Rihan; Borjigin, Gerelt
2017-12-01
The aim of this study was to determine the relationships among muscle fiber-type composition, fiber diameter, and myogenic regulatory factor (MRF) gene expression in different skeletal muscles during development in naturally grazing Wuzhumuqin sheep. Three major muscles (i.e. the Longissimus dorsi (LD), Biceps femoris (BF) and Triceps brachii (TB)) were obtained from 20 Wuzhumuqin sheep and 20 castrated rams at each of the following ages: 1, 3, 6, 9, 12 and 18 months. Muscle fiber-type composition and fiber diameter were measured using histochemistry and morphological analysis, and MRF gene expression levels were determined using real-time PCR. In the LD muscle, changes in the proportion of each of the different types of fiber (I, IIA and IIB) were relatively small. In the BF muscle, a higher proportion of type I and a 6.19-fold lower proportion of type IIA fibers were observed (P < 0.05). In addition, the compositions of type I and IIA fibers continuously changed in the TB muscle (P < 0.05). Moreover, muscle fiber diameter gradually increased throughout development (P < 0.05). Almost no significant difference was found in MRF gene expression patterns, which appeared to be relatively stable. These results suggest that changes in fiber-type composition and increases in fiber size may be mutually interacting processes during muscle development. © 2017 The Authors Animal Science Journal published by John Wiley & Sons Australia, Ltd on behalf of Japanese Society of Animal Science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, J; Lu, B; Yan, G
Purpose: To identify the weaknesses of the dose calculation algorithm in a treatment planning system for volumetric modulated arc therapy (VMAT) and sliding window (SW) techniques using a two-dimensional diode array. Methods: The VMAT quality assurance (QA) was implemented with a diode array using multiple partial arcs divided from a VMAT plan; each partial arc has the same segments and the original monitor units. Arc angles were less than ±30°. Multiple arcs were delivered through consecutive and repetitive gantry operation, clockwise and counterclockwise. Source-to-axis distance setups with effective depths of 10 and 20 cm were used for the diode array. To isolate dose errors arising in delivery of the VMAT fields, numerous fields having the same segments as the VMAT field were irradiated using the different delivery techniques of static and step-and-shoot. The dose distributions of the SW technique were evaluated by creating split fields having fine moving steps of the multi-leaf collimator leaves. Doses calculated using the adaptive convolution algorithm were analyzed against measured ones with a distance-to-agreement and dose difference of 3 mm and 3%. Results: While beam delivery through the static and step-and-shoot techniques showed a passing rate of 97 ± 2%, partial arc delivery of the VMAT fields brought the passing rate down to 85%. However, when leaf motion was restricted to less than 4.6 mm/°, the passing rate improved up to 95 ± 2%. Similar passing rates were obtained for both the 10 and 20 cm effective depth setups. The doses calculated for the SW technique showed a dose difference of over 7% at the final arrival point of the moving leaves. Conclusion: Error components in dynamic delivery of modulated beams were distinguished by using the suggested QA method. This partial arc method can be used for routine VMAT QA. An improved SW calculation algorithm is required to provide accurate estimated doses.
Principles for problem aggregation and assignment in medium scale multiprocessors
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1987-01-01
One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs are studied between balanced workload and the communication/synchronization costs. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.
2003-01-01
The proposed paper presents a variety of novel uses of Space-Filling-Curves (SFCs) for Cartesian mesh methods. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.
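The SFC-based reordering rests on computing a key per cell and sorting by it. A minimal Z-order (Morton) sketch for 2-D integer cell coordinates (the paper's curve choice and dimensionality may differ; this shows only the key construction):

```python
def morton_key(ix, iy, bits=16):
    """Interleave the bits of integer cell coordinates (ix, iy) to get a
    Z-order (Morton) key; sorting cells by this key lays them out along
    the space-filling curve, so contiguous key ranges form partitions."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)       # x bits -> even positions
        key |= ((iy >> b) & 1) << (2 * b + 1)   # y bits -> odd positions
    return key
```

Because nearby cells share key prefixes, cutting the sorted key list into equal pieces yields spatially compact subdomains, which is what makes the single-pass partitioner possible.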
Combined fabrication technique for high-precision aspheric optical windows
NASA Astrophysics Data System (ADS)
Hu, Hao; Song, Ci; Xie, Xuhui
2016-07-01
Specifications made on optical components are becoming more and more stringent with the performance improvement of modern optical systems. These strict requirements involve not only low-spatial-frequency surface accuracy and mid-and-high-spatial-frequency surface errors, but also surface smoothness and so on. This presentation mainly focuses on the fabrication process for a square aspheric window, which combines accurate grinding, magnetorheological finishing (MRF) and smoothing polishing (SP). In order to remove the low-spatial-frequency surface errors and subsurface defects left after accurate grinding, the deterministic polishing method MRF, with its high convergence and stable material removal rate, is applied. Then the SP technology with a pseudo-random path is adopted to eliminate the mid-and-high-spatial-frequency surface ripples and high slope errors, which are a shortcoming of MRF. Additionally, the coordinate measurement method and interferometry are combined in different phases. Acid-etched methods and ion beam figuring (IBF) are also investigated for observing and reducing the subsurface defects. Actual fabrication results indicate that the combined fabrication technique can lead to high machining efficiency in manufacturing high-precision, high-quality optical aspheric windows.
Locally adaptive MR intensity models and MRF-based segmentation of multiple sclerosis lesions
NASA Astrophysics Data System (ADS)
Galimzianova, Alfiia; Lesjak, Žiga; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga
2015-03-01
Neuroimaging biomarkers are an important paraclinical tool used to characterize a number of neurological diseases; however, their extraction requires accurate and reliable segmentation of normal and pathological brain structures. For MR images of healthy brains, the intensity models of normal-appearing brain tissue (NABT) in combination with Markov random field (MRF) models are known to give reliable and smooth NABT segmentation. However, the presence of pathology, MR intensity bias and natural tissue-dependent intensity variability altogether represent difficult challenges for a reliable estimation of a NABT intensity model based on MR images. In this paper, we propose a novel method for segmentation of normal and pathological structures in brain MR images of multiple sclerosis (MS) patients that is based on a locally-adaptive NABT model, a robust method for the estimation of model parameters, and an MRF-based segmentation framework. Experiments on multi-sequence brain MR images of 27 MS patients show that, compared to a whole-brain model and to the widely used Expectation-Maximization Segmentation (EMS) method, the locally-adaptive NABT model increases the accuracy of MS lesion segmentation.
Registration of TM data to digital elevation models
NASA Technical Reports Server (NTRS)
1984-01-01
Several problems arise when attempting to register LANDSAT Thematic Mapper data to U.S. Geological Survey digital elevation models (DEMs). The TM data are currently available only in a rotated variant of the Space Oblique Mercator (SOM) map projection. Geometric transforms are thus required to access TM data in the geodetic coordinates used by the DEMs. Due to positional errors in the TM data, these transforms require some sort of external control. The spatial resolution of TM data exceeds that of the most commonly available DEM data. Oversampling DEM data to TM resolution introduces systematic noise. Common terrain processing algorithms (e.g., slope computation) compound this problem by acting as high-pass filters.
Li, Jonathan; Samant, Sanjiv
2011-01-01
Two-dimensional array dosimeters are commonly used to perform pretreatment quality assurance procedures, which makes them highly desirable for measuring transit fluences for in vivo dose reconstruction. The purpose of this study was to determine whether in vivo dose reconstruction via transit dosimetry using a 2D array dosimeter is possible. To test the accuracy of measuring transit dose distributions using a 2D array dosimeter, we evaluated it against measurements made using ionization chamber and radiochromic film (RCF) profiles for various air gap distances (the distance from the exit side of the solid water slabs to the detector: 0 cm, 30 cm, 40 cm, 50 cm, and 60 cm) and solid water slab thicknesses (10 cm and 20 cm). The backprojection dose reconstruction algorithm was described and evaluated. The agreement between the ionization chamber and RCF profiles for the transit dose distribution measurements ranged from -0.2% to 4.0% (average 1.79%). Using the backprojection dose reconstruction algorithm, we found that, of the six conformal fields, four had a 100% gamma index passing rate (3%/3 mm gamma index criteria), and two had gamma index passing rates of 99.4% and 99.6%. Of the five IMRT fields, three had a 100% gamma index passing rate, and two had gamma index passing rates of 99.6% and 98.8%. It was found that a 2D array dosimeter could be used for backprojection dose reconstruction for in vivo dosimetry. PACS number: 87.55.N-
NASA Astrophysics Data System (ADS)
Oda, Hirokuni; Xuan, Chuang
2014-10-01
The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies have restored high-resolution paleomagnetic signals through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response for an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurements using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembled the "true" magnetization and successfully restored fine-scale magnetization variations, including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimate and can be resolved using the new algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of the error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth
Formulas for incremental or parallel computation of second-order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results and improve them with arbitrary-order, numerically stable one-pass formulas, which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four, as well as condition numbers and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate of the above formulas, utilizing the compound moments, in a practical large-scale scientific application.
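The one-pass and pairwise-update formulas surveyed above generalize the familiar second-order case. As a minimal illustration (the second-order special case only, not the paper's arbitrary-order formulation), the following sketch shows a numerically stable one-pass update of the count, mean, and sum of squared deviations, plus the pairwise combination used when merging partial results in parallel:

```python
def update(state, x):
    """Welford-style one-pass update. state = (n, mean, M2), where M2 is the
    running sum of squared deviations from the mean."""
    n, mean, M2 = state
    n += 1
    delta = x - mean
    mean += delta / n
    M2 += delta * (x - mean)
    return (n, mean, M2)

def merge(a, b):
    """Pairwise combination of two partial states, as used in parallel or
    distributed settings where each worker accumulates its own state."""
    na, ma, M2a = a
    nb, mb, M2b = b
    n = na + nb
    delta = mb - ma
    mean = ma + delta * nb / n
    M2 = M2a + M2b + delta * delta * na * nb / n
    return (n, mean, M2)
```

The variance is then `M2 / n`; merging the states of two halves of a data set reproduces the single-pass result over the whole set.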
Robust estimation of adaptive tensors of curvature by tensor voting.
Tong, Wai-Shun; Tang, Chi-Keung
2005-03-01
Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (the sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision and rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been validated by detailed quantitative experiments, performing better on a variety of standard error metrics (percentage error in curvature magnitude, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.
Gripping characteristics of an electromagnetically activated magnetorheological fluid-based gripper
NASA Astrophysics Data System (ADS)
Choi, Young T.; Hartzell, Christine M.; Leps, Thomas; Wereley, Norman M.
2018-05-01
The design and test of a magnetorheological fluid (MRF)-based universal gripper (MR gripper) are presented in this study. The MR gripper was developed to have a simple design while producing reliable gripping and handling of a wide range of simple objects. The MR gripper design consists of a bladder mounted atop an electromagnet, where the bladder is filled with an MRF formulated for long-term sedimentation stability, synthesized using a high-viscosity linear polysiloxane (HVLP) carrier fluid with a carbonyl iron particle (CIP) volume fraction of 35%. Two bladders were fabricated: a magnetizable bladder using a magnetorheological elastomer (MRE), and a passive (non-magnetizable) silicone rubber bladder. The holding force and applied (initial compression) force of the MR gripper for a bladder fill volume of 75% were experimentally measured, for both magnetizable and passive bladders, using a servohydraulic material testing machine for a range of objects. The gripping performance of the MR gripper using an MRE bladder was compared to that of the MR gripper using a passive bladder.
Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2013-01-01
In this paper, opposition-based harmony search (OHS) has been applied to the optimal design of linear-phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent, and an opposition-based approach is applied: during initialization, a randomly generated population of solutions is chosen, their opposite solutions are also considered, and the fitter of each pair is selected as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which yields the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm balances exploration and exploitation of the search space. Low-pass, high-pass, band-pass, and band-stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390
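The opposition-based initialization step described above can be sketched as follows; the helper name, bounds, and fitness function are illustrative assumptions, not taken from the paper:

```python
import random

def opposition_init(fitness, lb, ub, dim, pop_size, rng=None):
    """Opposition-based initialization sketch: for each random candidate x,
    also form its opposite (lb + ub - x) and keep whichever scores better
    under the (to-be-minimised) fitness function."""
    rng = rng or random.Random(0)
    population = []
    for _ in range(pop_size):
        x = [rng.uniform(lb, ub) for _ in range(dim)]
        x_opp = [lb + ub - xi for xi in x]          # point-wise opposite
        population.append(min((x, x_opp), key=fitness))
    return population
```

In the full OHS algorithm this a priori population then flows through the memory consideration, pitch adjustment, and generation-jumping rules; for FIR design the fitness would be the filter's error against the ideal frequency response.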
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
NASA Astrophysics Data System (ADS)
Bunnoon, Pituk; Chalermyanont, Kusumal; Limsakul, Chusak
2010-02-01
This paper proposes discrete wavelet transform and neural network algorithms to obtain the monthly peak load demand in mid-term load forecasting. The Daubechies-2 (db2) mother wavelet is employed to decompose the original signal into high-pass- and low-pass-filtered components before a feed-forward back-propagation neural network is used to determine the forecasting results. Historical data records from 1997-2007 of the Electricity Generating Authority of Thailand (EGAT) are used as reference. In this study, historical information on peak load demand (MW), mean temperature (Tmean), consumer price index (CPI), and industrial index (economic: IDI) is used as the feature inputs of the network. The experimental results show a Mean Absolute Percentage Error (MAPE) of approximately 4.32%. These forecasting results can be used for fuel planning and unit commitment of the power system in the future.
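The decomposition step can be illustrated with a single analysis level of the db2 wavelet, splitting a signal into low-pass (approximation) and high-pass (detail) halves. This is a rough sketch assuming circular boundary handling; the helper name and boundary convention are not from the paper:

```python
import math

# Daubechies-2 (db2) analysis filters; the high-pass filter follows from
# the quadrature-mirror relation g[k] = (-1)^k * h[3 - k].
_S3 = math.sqrt(3.0)
H = [(1 + _S3) / (4 * math.sqrt(2.0)),   # low-pass (scaling) filter
     (3 + _S3) / (4 * math.sqrt(2.0)),
     (3 - _S3) / (4 * math.sqrt(2.0)),
     (1 - _S3) / (4 * math.sqrt(2.0))]
G = [((-1) ** k) * H[3 - k] for k in range(4)]   # high-pass (wavelet) filter

def dwt_level(signal):
    """One level of the db2 DWT with circular (periodic) boundary handling:
    filter, then downsample by 2, returning (approximation, detail)."""
    n = len(signal)
    approx, detail = [], []
    for i in range(0, n, 2):
        approx.append(sum(H[k] * signal[(i + k) % n] for k in range(4)))
        detail.append(sum(G[k] * signal[(i + k) % n] for k in range(4)))
    return approx, detail
```

Each output half would then feed the neural network as a separate input channel; a constant signal yields zero detail coefficients, since the high-pass filter sums to zero.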
Parallel and fault-tolerant algorithms for hypercube multiprocessors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aykanat, C.
1988-01-01
Several techniques for increasing the performance of parallel algorithms on distributed-memory message-passing multiprocessor systems are investigated. These techniques are effectively implemented for the parallelization of the Scaled Conjugate Gradient (SCG) algorithm on a hypercube-connected message-passing multiprocessor. Significant performance improvement is achieved by using these techniques. The SCG algorithm is used for the solution phase of an FE modeling system. Almost linear speed-up is achieved, and it is shown that the hypercube topology is scalable for this class of FE problems. The SCG algorithm is also shown to be suitable for vectorization, and near-supercomputer performance is achieved on a vector hypercube multiprocessor by exploiting both parallelization and vectorization. Fault-tolerance issues for the parallel SCG algorithm and for the hypercube topology are also addressed.
Combined distributed and concentrated transducer network for failure indication
NASA Astrophysics Data System (ADS)
Ostachowicz, Wieslaw; Wandowski, Tomasz; Malinowski, Pawel
2010-03-01
In this paper, an algorithm for localising discontinuities in thin panels made of aluminium alloy is presented. The algorithm uses Lamb wave propagation for discontinuity localisation. Elastic waves were generated and received using piezoelectric transducers arranged in concentrated arrays distributed over the specimen surface, so that almost the whole specimen could be monitored by this combined distributed-concentrated transducer network. The excited elastic waves propagate and reflect from panel boundaries and from discontinuities existing in the panel. Wave reflections were registered by the piezoelectric transducers and used in a signal processing algorithm consisting of two parts: signal filtering and extraction of obstacle locations. The first part enhances the signals by removing noise; the second extracts features connected with wave reflections from discontinuities. The extracted features were the basis for creating damage influence maps, which indicate the intensity of elastic wave reflections corresponding to obstacle coordinates. The described signal processing algorithms were implemented in the MATLAB environment. It should be underlined that the results presented in this work are based solely on experimental signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badkul, R; Pokhrel, D; Jiang, H
2016-06-15
Purpose: Intra-fractional tumor motion due to respiration may potentially compromise dose delivery in SBRT of lung tumors. Even when sufficient margins are used to ensure there is no geometric miss of the target volume, a dose blurring effect may be present due to motion and could impact tumor coverage if motions are large. In this study we investigated the dose blurring effect for open fields as well as for lung SBRT patients planned using 2 non-coplanar dynamic conformal arcs (NCDCA) and a few conformal beams (CB), calculated with a Monte Carlo (MC) based algorithm, utilizing a phantom with a 2D diode array (MapCheck) and an ion chamber. Methods: SBRT lung patients were planned on the Brainlab iPlan system using a 4D-CT scan; the ITV was contoured on the MIP image set and verified on all breathing-phase image sets to account for breathing motion, and a 5 mm margin was then applied to generate the PTV. Plans were created using two NCDCA and 4-5 CB 6 MV photon beams calculated using the XVMC MC algorithm. Three SBRT patient plans were transferred to a phantom with MapCheck and a 0.125 cc ion chamber inserted in the middle of the phantom to calculate dose. Open fields of 3×3, 5×5, and 10×10 were also calculated on this phantom. The phantom was placed on a motion platform with motion varying over 5, 10, 20, and 30 mm with a cycle of 4 seconds. Measurements were carried out for the open fields as well as the 3 patient plans, both static and at the various degrees of motion. MapCheck planar dose and ion-chamber readings were collected and compared with static measurements and computed values to evaluate the dosimetric effect of motion on tumor coverage. Results: To eliminate the complexity of patient plans, 3 simple open fields were also measured to observe the dose blurring effect with the introduction of motion. All motion-measured ion-chamber values were normalized to the corresponding static value. For the 5×5 and 10×10 open fields, normalized central-axis ion-chamber values were 1.00 for all motions, but for 3×3 they were 1.00 up to 10 mm motion and 0.97 and 0.87 for 20 and 30 mm motion, respectively. For the SBRT plans, central-axis dose values were within 1% up to 10 mm motion but decreased by an average of 5% for 20 mm and 8% for 30 mm motion. MapCheck comparison with static measurements showed penumbra enlargement due to motion blurring at the field edges; for 3×3, 5×5, and 10×10, pass rates fell from 88% to 12%, 100% to 43%, and 100% to 63%, respectively, as motion increased from 5 to 30 mm. For the SBRT plans, the mean MapCheck pass rate decreased from 73.8% to 39.5% as motion increased from 5 mm to 30 mm. Conclusion: A dose blurring effect was seen in open fields as well as in SBRT lung plans using NCDCA with CB, which worsens with increasing respiratory motion and decreasing field size (tumor size). To reduce this effect, larger margins and appropriate motion reduction techniques should be utilized.
Simplifying and speeding the management of intra-node cache coherence
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY
2012-04-17
A method and apparatus for managing coherence between two processors of a two-processor node of a multiprocessor computer system. Generally, the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist and (b) is therefore able to respond instantly to read and write requests from the processing elements.
The Role of Nanodiamonds in the Polishing Zone During Magnetorheological Finishing (MRF)
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeGroote, J.E.; Marino, A.E.; Wilson, J.P.
2008-01-07
In this work we discuss the role that nanodiamond abrasives play in magnetorheological finishing. We hypothesize that, as the nanodiamond MR fluid is introduced to the magnetic field, the micron sized spherical carbonyl iron (CI) particles are pulled down towards the rotating wheel, leaving a thin layer of nanodiamonds at the surface of the stiffened MR fluid ribbon. Our experimental results shown here support this hypothesis. We also show that surface roughness values inside MRF spots show a strong correlation with the near surface mechanical properties of the glass substrates and with drag force.
Magnetic field effects on shear and normal stresses in magnetorheological finishing.
Lambropoulos, John C; Miao, Chunlin; Jacobs, Stephen D
2010-09-13
We use a recent experimental technique to measure in situ shear and normal stresses during magnetorheological finishing (MRF) of a borosilicate glass over a range of magnetic fields. At low fields shear stresses increase with magnetic field, but become field-independent at higher magnetic fields. Micromechanical models of formation of magnetic particle chains suggest a complex behavior of magnetorheological (MR) fluids that combines fluid- and solid-like responses. We discuss the hypothesis that, at higher fields, slip occurs between magnetic particle chains and the immersed glass part, while the normal stress is governed by the MRF ribbon elasticity.
Analysis of a new phase and height algorithm in phase measurement profilometry
NASA Astrophysics Data System (ADS)
Bian, Xintian; Zuo, Fen; Cheng, Ju
2018-04-01
Traditional phase measurement profilometry adopts divergent illumination to obtain the height distribution of a measured object accurately. However, the mapping relation between reference plane coordinates and phase distribution must be calculated before measurement, with the data stored in the computer in the form of a data table for later use. This study improves the distribution of the projected fringes and derives the phase-height mapping algorithm for the case where the two pupils of the projection and imaging systems are at unequal heights and the projection and imaging axes lie on different planes. With this algorithm, calculating the mapping relation between reference plane coordinates and phase distribution prior to measurement is unnecessary. Thus, the measurement process is simplified, and the construction of an experimental system is made easier. Computer simulation and experimental results confirm the effectiveness of the method.
Algorithms for optimization of branching gravity-driven water networks
NASA Astrophysics Data System (ADS)
Dardani, Ian; Jones, Gerard F.
2018-05-01
The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.
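The backtracking approach described above can be sketched in miniature. The example below reduces the problem to a serial run of pipes with a hypothetical head-loss model (loss proportional to L/d^5) and a per-metre cost table; the paper's algorithm handles branching networks and realistic hydraulics, so every name and constant here is illustrative:

```python
def min_cost_design(pipes, diameters, head_budget):
    """Backtracking search over discrete pipe diameters for a serial run of
    pipes. 'pipes' is a list of lengths (m); 'diameters' maps diameter (m)
    to cost per metre. Head loss per pipe is modelled as K * L / d**5
    (hypothetical friction model). Branches whose cumulative head loss
    exceeds the budget, or whose cost already matches the best design,
    are pruned without missing the global optimum."""
    K = 1e-9
    best = [float("inf"), None]

    def recurse(i, cost, head, chosen):
        if head > head_budget or cost >= best[0]:
            return                      # prune infeasible / dominated branch
        if i == len(pipes):
            best[0], best[1] = cost, list(chosen)
            return
        for d, unit_cost in sorted(diameters.items()):
            recurse(i + 1,
                    cost + unit_cost * pipes[i],
                    head + K * pipes[i] / d ** 5,
                    chosen + [d])

    recurse(0, 0.0, 0.0, [])
    return best[0], best[1]
```

With two pipes and two candidate diameters, the search correctly trades a larger (costlier) upstream pipe against the head budget, which is the essential pruning behaviour the study evaluates at much larger scale.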
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
NASA Astrophysics Data System (ADS)
Xu, S.; Wang, B.; Liu, J.
2015-02-01
In this article we propose two conformal mapping based grid generation algorithms for global ocean general circulation models (OGCMs). Contrary to conventional, analytical forms based dipolar or tripolar grids, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the basic grid design problem of pole relocation, these new algorithms also address more advanced issues such as smoothed scaling factor, or the new requirements on OGCM grids arisen from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithm could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling where complex land-ocean distribution is present.
Sensitivity in error detection of patient specific QA tools for IMRT plans
NASA Astrophysics Data System (ADS)
Lat, S. Z.; Suriyapee, S.; Sanghangthum, T.
2016-03-01
The high complexity of dose calculation in treatment planning and the need for accurate delivery of IMRT plans demand a high-precision verification method. The purpose of this study is to investigate the error detection capability of patient-specific QA tools for IMRT plans. Two H&N and two prostate IMRT plans were studied with the MapCHECK2 and portal dosimetry QA tools. Measurements were undertaken for the original plans and for modified plans with errors introduced. The intentional errors comprised prescribed dose (±2 to ±6%) and position shifts along the X-axis and Y-axis (±1 to ±5 mm). After measurement, gamma pass rates between original and modified plans were compared. The average gamma pass rates for the original H&N and prostate plans were 98.3% and 100% for MapCHECK2 and 95.9% and 99.8% for portal dosimetry, respectively. In the H&N plans, MapCHECK2 could detect position-shift errors starting from 3 mm, while portal dosimetry could detect errors starting from 2 mm. Both devices showed similar sensitivity in detecting position-shift errors in the prostate plans. For the H&N plans, MapCHECK2 could detect dose errors starting at ±4%, whereas portal dosimetry could detect them from ±2%. For the prostate plans, both devices could identify dose errors starting from ±4%. Sensitivity of error detection depends on the type of error and plan complexity.
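The gamma pass rates reported here combine a dose-difference and a distance-to-agreement (DTA) criterion. A simplified 1-D global gamma computation conveys the idea; this is a sketch, not the vendors' implementations, and the function name and normalization choice are assumptions:

```python
def gamma_pass_rate(ref, meas, spacing, dose_tol, dta_tol):
    """Simplified 1-D global gamma analysis. For each reference point, find
    the minimum generalized distance over all measured points, combining
    relative dose difference (normalized to the reference maximum) and
    spatial distance-to-agreement; gamma <= 1 counts as a pass."""
    max_dose = max(ref)
    passed = 0
    for i, d_ref in enumerate(ref):
        gamma_sq = min(
            ((j - i) * spacing / dta_tol) ** 2
            + ((d_meas - d_ref) / (dose_tol * max_dose)) ** 2
            for j, d_meas in enumerate(meas)
        )
        if gamma_sq <= 1.0:
            passed += 1
    return 100.0 * passed / len(ref)
```

With `dose_tol=0.03` and `dta_tol=3.0` (mm) this mirrors the 3%/3 mm criteria quoted in the abstract; real QA software extends the same minimization to 2-D planar dose arrays with sub-grid interpolation.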
Stereo-vision-based terrain mapping for off-road autonomous navigation
NASA Astrophysics Data System (ADS)
Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.
2009-05-01
Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as nogo regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
Gene-network inference by message passing
NASA Astrophysics Data System (ADS)
Braunstein, A.; Pagnani, A.; Weigt, M.; Zecchina, R.
2008-01-01
The inference of gene-regulatory processes from gene-expression data belongs to the major challenges of computational systems biology. Here we address the problem from a statistical-physics perspective and develop a message-passing algorithm which is able to infer sparse, directed and combinatorial regulatory mechanisms. Using the replica technique, the algorithmic performance can be characterized analytically for artificially generated data. The algorithm is applied to genome-wide expression data of baker's yeast under various environmental conditions. We find clear cases of combinatorial control, and enrichment in common functional annotations of regulated genes and their regulators.
Boyer, Nicole R S; Miller, Sarah; Connolly, Paul; McIntosh, Emma
2016-04-01
The Strengths and Difficulties Questionnaire (SDQ) is a behavioural screening tool for children. The SDQ is increasingly used as the primary outcome measure in population health interventions involving children, but it is not preference based; therefore, its role in allocative economic evaluation is limited. The Child Health Utility 9D (CHU9D) is a generic preference-based health-related quality-of-life measure. This study investigates the applicability of the SDQ outcome measure for use in economic evaluations and examines its relationship with the CHU9D by testing previously published mapping algorithms. The aim of the paper is to explore the feasibility of using the SDQ within economic evaluations of school-based population health interventions. Data were available from children participating in a cluster randomised controlled trial of the school-based Roots of Empathy programme in Northern Ireland. Utility was calculated using the original and alternative CHU9D tariffs along with two SDQ mapping algorithms. t tests were performed for pairwise differences in utility values from the preference-based tariffs and mapping algorithms. Mean (standard deviation) SDQ total difficulties and prosocial scores were 12 (3.2) and 8.3 (2.1), respectively. Utility values obtained from the original tariff, alternative tariff, and mapping algorithms using five and three SDQ subscales were 0.84 (0.11), 0.80 (0.13), 0.84 (0.05), and 0.83 (0.04), respectively. Each method for calculating utility produced statistically significantly different values except the original tariff and five SDQ subscale algorithm. Initial evidence suggests the SDQ and CHU9D are related in some of their measurement properties. The mapping algorithm using five SDQ subscales was found to be optimal in predicting mean child health utility. Future research valuing changes in the SDQ scores would contribute to this research.
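Mapping algorithms of the kind tested here typically predict a utility value as an additive function of subscale scores. A minimal sketch of that structure, where every coefficient below is invented for illustration; the published algorithms report their own estimated coefficients:

```python
# Illustrative additive mapping from the five SDQ subscale scores to a
# predicted CHU9D utility. All coefficients and the intercept are
# hypothetical placeholders, NOT the published mapping estimates.

def predict_utility(subscales, coefs, intercept):
    """Linear additive prediction: intercept + sum of coefficient * score."""
    return intercept + sum(coefs[name] * score
                           for name, score in subscales.items())

coefs = {"emotional": -0.010, "conduct": -0.008, "hyperactivity": -0.006,
         "peer": -0.009, "prosocial": 0.005}          # hypothetical weights
child = {"emotional": 2, "conduct": 3, "hyperactivity": 4,
         "peer": 3, "prosocial": 8}                   # example raw scores
u = predict_utility(child, coefs, intercept=0.95)     # utility on a 0-1 scale
```

The sign pattern reflects the instruments' conventions: higher difficulty subscale scores lower predicted utility, while a higher prosocial score raises it.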
Interval data clustering using self-organizing maps based on adaptive Mahalanobis distances.
Hajjar, Chantal; Hamdan, Hani
2013-10-01
The self-organizing map is a kind of artificial neural network used to map high-dimensional data into a low-dimensional space. This paper presents a self-organizing map for interval-valued data based on adaptive Mahalanobis distances, in order to cluster interval data with topology preservation. Two methods based on the batch training algorithm for self-organizing maps are proposed. The first method uses a common Mahalanobis distance for all clusters. In the second method, the algorithm starts with a common Mahalanobis distance and then switches to a different distance per cluster. This process allows a clustering better adapted to the given data set. The performance of the proposed methods is compared and discussed using artificial and real interval data sets. Copyright © 2013 Elsevier Ltd. All rights reserved.
On Feature Extraction from Large Scale Linear LiDAR Data
NASA Astrophysics Data System (ADS)
Acharjee, Partha Pratim
Airborne light detection and ranging (LiDAR) can generate co-registered elevation and intensity maps over large terrain. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two algorithms for feature extraction and demonstrated the use of the extracted features in practical applications. One of the developed algorithms can map still and flowing waterbody features, and the other can extract building features and estimate solar potential on rooftops and facades. Remote sensing capabilities, distinguishing characteristics of laser returns from water surfaces, and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation and updating, and hydro-flattening of LiDAR data for many other applications, are two leading needs of water surface mapping. These call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human interventions. This work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features, such as the flatness of the water surface and the sharp elevation change at the water-land interface, and optical properties, such as dropouts caused by specular reflection and bimodal intensity distributions, are among the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated by automated and intelligent windowing, by resolving boundary issues, and by integrating all results into a single output. This whole algorithm is developed as an ArcGIS toolbox using Python libraries.
Testing and validation are performed on large datasets to determine the effectiveness of the toolbox, and results are presented. Significant power demand is located in urban areas, where, theoretically, a large amount of building surface area is also available for solar panel installation. Therefore, property owners and power generation companies can benefit from a citywide solar potential map, which can provide the estimated annual solar energy available at a given location. An efficient solar potential measurement is a prerequisite for an effective solar energy system in an urban area. In addition, the solar potential calculation from rooftops and building facades could open up a wide variety of options for solar panel installations. However, complex urban scenes make it hard to estimate the solar potential, partly because of shadows cast by the buildings. LiDAR-based 3D city models could possibly be the right technology for solar potential mapping. However, most current LiDAR-based local solar potential assessment algorithms mainly address rooftop potential calculation, whereas building facades can contribute a significant amount of viable surface area for solar panel installation. In this work, we introduce a new algorithm to calculate the solar potential of both rooftops and building facades. The solar potential received by the rooftops and facades over the year is also investigated in the test area.
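One of the geometric cues the water-mapping abstract names is the flatness of the water surface. A minimal sketch of that single cue on a gridded elevation raster, flagging cells whose local elevation spread is near zero as candidate water; the window size and threshold are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

# Hypothetical sketch of one cue from the abstract: candidate water cells
# are those where the local elevation surface is nearly flat. A full
# pipeline would combine this with intensity dropouts, water-land elevation
# steps, and bimodal intensity tests; those cues are omitted here.

def flat_cells(elev, win=3, max_std=0.05):
    """Mark interior grid cells whose win x win neighborhood is near-flat."""
    rows, cols = elev.shape
    mask = np.zeros_like(elev, dtype=bool)
    r = win // 2
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            window = elev[i - r:i + r + 1, j - r:j + r + 1]
            mask[i, j] = window.std() < max_std   # flat => candidate water
    return mask

elev = np.ones((6, 6)) * 10.0            # flat lake surface at 10 m
elev[:, 4:] = [[12.0, 13.0]] * 6         # sloped bank on the right edge
mask = flat_cells(elev)                  # True over the lake interior only
```

The doubly nested loop is deliberately simple; at the "hundreds of thousands of square miles" scale the abstract mentions, this is exactly the kind of kernel that the described windowing and boundary-resolution machinery would tile across the dataset.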