Does preprocessing change nonlinear measures of heart rate variability?
Gomes, Murilo E D; Guimarães, Homero N; Ribeiro, Antônio L P; Aguirre, Luis A
2002-11-01
This work investigated whether methods used to produce a uniformly sampled heart rate variability (HRV) time series significantly change the deterministic signature underlying the dynamics of such signals and some nonlinear measures of HRV. Two preprocessing methods were used: convolution of inverse interval function values with a rectangular window, and cubic polynomial interpolation. The HRV time series were obtained from 33 Wistar rats submitted to autonomic blockade protocols and from 17 healthy adults. The analysis of determinism was carried out by the method of surrogate data sets and by nonlinear autoregressive moving average modelling and prediction. The scaling exponents alpha, alpha(1), and alpha(2) derived from detrended fluctuation analysis were calculated from raw HRV time series and the respective preprocessed signals. The cubic interpolation of HRV time series did not significantly change any nonlinear characteristic studied in this work, whereas the convolution method affected only the alpha(1) index. The results suggest that preprocessed time series may be used to study HRV in the field of nonlinear dynamics.
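As a concrete illustration of the second preprocessing method, the sketch below resamples an RR-interval series onto a uniform time grid by cubic interpolation. It is a minimal example, not the authors' exact procedure: the toy RR values, variable names, and the 4 Hz target rate are illustrative assumptions.

```python
# Hedged sketch: uniform resampling of an RR-interval series (tachogram) by
# cubic interpolation. The 4 Hz rate and the toy RR values are assumptions.
import numpy as np
from scipy.interpolate import CubicSpline

rr = np.array([0.81, 0.79, 0.83, 0.85, 0.80, 0.78, 0.82])  # RR intervals (s)
t_beats = np.cumsum(rr)            # time of occurrence of each beat
hr_inst = 60.0 / rr                # instantaneous heart rate (beats/min)

fs = 4.0                           # uniform resampling rate (Hz)
t_uniform = np.arange(t_beats[0], t_beats[-1], 1.0 / fs)
hr_uniform = CubicSpline(t_beats, hr_inst)(t_uniform)
```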
Review of image processing fundamentals
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1985-01-01
Image processing through convolution, transform coding, spatial frequency alterations, sampling, and interpolation is considered. It is postulated that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit and a Gaussian frequency response. An optimized cubic version of the interpolation continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination, by defocusing, of the extraneous high frequencies involved in the visible edges is necessary to allow the underlying object represented by the data values to be seen.
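The convolution-multiplication equivalence postulated above is easy to verify numerically; the following minimal check (arbitrary test arrays) compares a circular convolution computed in the spatial domain against pointwise multiplication of FFTs.

```python
# Numerical check of the convolution theorem for circular convolution.
import numpy as np

x = np.random.rand(64)
h = np.random.rand(64)

# Frequency domain: multiply the transforms, then invert.
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
# Spatial domain: direct circular-convolution sum.
direct = np.array([np.sum(x * np.roll(h[::-1], k + 1)) for k in range(64)])

assert np.allclose(via_fft, direct)
```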
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol; Padgett, Curtis; Zhu, David; Lung, Gerald; Howard, Ayanna
2000-01-01
A three-dimensional microelectronic device (3DANN-R) capable of performing general image convolution at a speed of 10^12 operations per second (ops) in a volume of less than 1.5 cubic centimeters has been successfully built under the BMDO/JPL VIGILANTE program. 3DANN-R was developed in partnership with Irvine Sensors Corp., Costa Mesa, California. It is a sugar-cube-sized, low-power image convolution engine whose core computation circuitry is capable of performing 64 image convolutions with large (64x64) windows at video frame rates. This paper explores potential applications of 3DANN-R such as target recognition, SAR and hyperspectral data processing, and general machine vision using real data, and discusses technical challenges in providing deployable systems for BMDO surveillance and interceptor programs.
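A back-of-envelope check of the quoted 10^12 ops/s figure, under illustrative assumptions not stated in the abstract (640x480 frames at 30 frames per second, one multiply and one add per kernel tap):

```python
# 64 simultaneous 64x64 convolutions at an assumed video format.
taps = 64 * 64                    # kernel taps per convolution
ops_per_pixel = 2 * taps * 64     # 64 parallel convolutions, 2 ops per tap
pixels_per_second = 640 * 480 * 30
print(ops_per_pixel * pixels_per_second)  # ~4.8e12, i.e. order 10^12 ops/s
```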
Scattering theory for the radial $\dot{H}^{1/2}$-critical wave equation with a cubic convolution
NASA Astrophysics Data System (ADS)
Miao, Changxing; Zhang, Junyong; Zheng, Jiqiang
2015-12-01
In this paper, we study the global well-posedness and scattering for the wave equation with a cubic convolution $\partial_t^2 u - \Delta u = \pm(|x|^{-3} * |u|^2)u$ in dimensions $d \geq 4$. We prove that if the radial solution $u$ with life-span $I$ obeys $(u, u_t) \in L_t^\infty(I; \dot{H}_x^{1/2}(\mathbb{R}^d) \times \dot{H}_x^{-1/2}(\mathbb{R}^d))$, then $u$ is global and scatters. By the strategy derived from concentration compactness, we show that the proof of the global well-posedness and scattering is reduced to disproving the existence of two scenarios: the soliton-like solution and the high-to-low frequency cascade. Making use of the no-waste Duhamel formula and the double Duhamel trick, we deduce that these two scenarios enjoy additional regularity by the bootstrap argument of [7]. This together with virial analysis implies that the energy of such scenarios is zero, and so we get a contradiction.
Solvability of a Nonlinear Integral Equation in Dynamical String Theory
NASA Astrophysics Data System (ADS)
Khachatryan, A. Kh.; Khachatryan, Kh. A.
2018-04-01
We investigate an integral equation of convolution type with a cubic nonlinearity on the entire real line. This equation has direct applications in open-string field theory and in p-adic string theory, where it describes nonlocal interactions. We prove that there exists a one-parameter family of bounded monotonic solutions and compute the limits of the constructed solutions at infinity.
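The abstract does not write the equation out. For orientation only, a standard convolution-type equation with cubic nonlinearity from p-adic string theory (the p = 3 case) has the form below; the kernel and normalization are assumptions drawn from the p-adic string literature, not from the paper itself.

```latex
% Illustrative form only; not taken from the paper.
u^{3}(x) \;=\; \frac{1}{\sqrt{4\pi}} \int_{-\infty}^{\infty}
  e^{-\frac{(x-t)^{2}}{4}}\, u(t)\,\mathrm{d}t, \qquad x \in \mathbb{R}.
```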
40 CFR 49.137 - Rule for air pollution episodes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... continue or reoccur over the next 24 hours. (A) Particulate matter (PM10): 350 micrograms per cubic meter, 24-hour average; (B) Carbon monoxide (CO): 17 milligrams per cubic meter (15 ppm), 8-hour average; (C) Sulfur dioxide (SO2): 800 micrograms per cubic meter (0.3 ppm), 24-hour average; (D) Ozone (O3): 400...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shyer, E.B.
The New York State Department of Environmental Conservation's Division of Mineral Resources is responsible for regulating the oil and gas industry and receiving operators' annual well production reports. In production year 1970, New York State operators reported 627 active gas wells with production of 3 billion cubic feet. Ten years later, in 1980, production had more than tripled to 15.5 billion cubic feet and reported active gas wells increased to 1,966. During 1990, reported gas production was 25 billion cubic feet from 5,536 active gas wells. The average production per gas well in 1970 was 4,773 thousand cubic feet. Average gas production per well peaked in 1978, when 1,431 active gas wells reported production of 14 billion cubic feet, an average of 9,821 thousand cubic feet per well. By 1994 the average production per well had decreased to 3,800 thousand cubic feet, a decrease of approximately 60%. The decrease in average well production is more a reflection of the majority of older wells reaching the lower end of their decline curves than of a decrease in overall per-well production. The number of completed gas wells increased following the rising price of gas. In 1970 gas was $0.30 per thousand cubic feet. By 1984 the price per thousand cubic feet had peaked at $4. After 1984 the price of gas started to decline while the number of active gas wells continued to increase. Sharp increases in gas production for certain counties, such as Steuben in 1972 and 1973 and Chautauqua in 1980-83, reflect the discoveries of new fields such as Adrian Reef and Bass Island, respectively. The Stagecoach Field, discovered in 1989 in Tioga County, is the newest high-producing field in New York State.
NASA Astrophysics Data System (ADS)
Wang, Jinliang; Wu, Xuejiao
2010-11-01
Geometric correction of imagery is a basic application of remote sensing technology. Its precision directly affects the accuracy and reliability of subsequent applications. The accuracy of geometric correction depends on many factors, including the correction model, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) The correction errors were more than one pixel, and some were several pixels, when the polynomial model was used. The correction accuracy was not stable when the Delaunay model was used. The correction errors were less than one pixel when the collinearity equation was used. (2) When 6, 9, 25, and 35 GCPs were selected randomly for geometric correction using the polynomial model, the best result was obtained with 25 GCPs. (3) Among the resampling methods, nearest neighbor gave the best image contrast and the fastest resampling, but poor continuity of pixel gray values; cubic convolution gave the worst contrast and the longest computation time. Overall, bilinear resampling produced the best result.
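A minimal sketch of the resampling comparison in (3), using scipy's spline orders as stand-ins (order 0 = nearest neighbor, 1 = bilinear, 3 = cubic; note scipy's order 3 is a cubic B-spline, not the Keys cubic convolution kernel). The test image, zoom factor, and contrast proxy are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(128, 128)

for order, name in [(0, "nearest"), (1, "bilinear"), (3, "cubic")]:
    out = ndimage.zoom(img, 2.0, order=order)   # resample to 2x size
    contrast = out.std() / out.mean()           # simple RMS-contrast proxy
    print(f"{name:8s} contrast = {contrast:.4f}")
```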
Chong, Ketpin; Deng, Yuru
2012-01-01
Biological membranes are generally perceived as phospholipid bilayer structures that delineate in a lamellar form the cell surface and intracellular organelles. However, much more complex and highly convoluted membrane organizations are ubiquitously present in many cell types under certain types of stress, states of disease, or in the course of viral infections. Their occurrence under pathological conditions makes such three-dimensionally (3D) folded and highly ordered membranes attractive biomarkers. They have also stimulated great biomedical interest in understanding the molecular basis of their formation. Currently, the analysis of such membrane arrangements, which include tubulo-reticular structures (TRS) or cubic membranes of various subtypes, is restricted to electron microscopic methods, including tomography. Preservation of membrane structures during sample preparation is the key to understanding their true 3D nature. This chapter discusses methods for appropriate sample preparation to successfully examine and analyze well-preserved, highly ordered membranes by electron microscopy. Processing methods and analysis conditions for green algae (Zygnema sp.) and amoeba (Chaos carolinense), mammalian cells in culture, and primary tissue cells are described. We also discuss methods to identify cubic membranes by transmission electron microscopy (TEM) with the aid of a direct template matching method and by computer simulation. A 3D analysis of cubic cell membrane topology by electron tomography is described, as well as scanning electron microscopy (SEM) to investigate surface contours of isolated mitochondria with cubic membrane arrangement.
40 CFR 49.125 - Rule for limiting the emissions of particulate matter.
Code of Federal Regulations, 2010 CFR
2010-07-01
... used exclusively for space heating with a rated heat input capacity of less than 400,000 British... average of 0.23 grams per dry standard cubic meter (0.1 grains per dry standard cubic foot), corrected to... boiler stack must not exceed an average of 0.46 grams per dry standard cubic meter (0.2 grains per dry...
Alcoholism Detection by Data Augmentation and Convolutional Neural Network with Stochastic Pooling.
Wang, Shui-Hua; Lv, Yi-Ding; Sui, Yuxiu; Liu, Shuai; Wang, Su-Jing; Zhang, Yu-Dong
2017-11-17
Alcohol use disorder (AUD) is an important brain disease that alters brain structure. Recently, scholars have tended to use computer vision techniques to detect AUD. We collected 235 subjects: 114 alcoholic and 121 non-alcoholic. Among the 235 images, 100 were used as the training set, with data augmentation; the remaining 135 images were used as the test set. We used a convolutional neural network (CNN) built from convolutional layers, rectified linear unit layers, pooling layers, fully connected layers, and a softmax layer. We also compared three pooling techniques: max pooling, average pooling, and stochastic pooling. The results showed that our method achieved a sensitivity of 96.88%, a specificity of 97.18%, and an accuracy of 97.04%, better than three state-of-the-art approaches. Stochastic pooling performed better than max pooling and average pooling, and a CNN with five convolution layers and two fully connected layers performed the best. The GPU yielded a 149× acceleration in training and a 166× acceleration in testing, compared to the CPU.
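For reference, a minimal numpy sketch of stochastic pooling as usually defined (Zeiler and Fergus): within each pooling window an activation is sampled with probability proportional to its value. The 2x2 window size and the toy feature map are assumptions; this is not the paper's implementation.

```python
import numpy as np

def stochastic_pool_2x2(a, rng):
    """Stochastic pooling over non-overlapping 2x2 windows of a 2-D map."""
    h, w = a.shape
    out = np.empty((h // 2, w // 2))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            win = a[i:i + 2, j:j + 2].ravel()
            s = win.sum()
            p = win / s if s > 0 else np.full(4, 0.25)
            out[i // 2, j // 2] = rng.choice(win, p=p)
    return out

rng = np.random.default_rng(0)
fmap = np.abs(rng.standard_normal((4, 4)))   # stands in for post-ReLU output
print(stochastic_pool_2x2(fmap, rng))
```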
Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method.
Li, Haisen S; Chetty, Indrin J; Solberg, Timothy D
2008-05-01
The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of the population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) were used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated by using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single-fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (> 30 fractions) regimen, the discrepancy in total dose due to the interplay effect was negligible.
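The core idea reduces to convolving each segment's static dose with a segment-specific motion PDF rather than a whole-fraction PDF. A minimal 1-D illustration with synthetic Gaussian profiles (the shapes and widths are placeholders, not clinical data):

```python
import numpy as np

x = np.linspace(-20, 20, 401)                 # position (mm)
dose_segment = np.exp(-0.5 * (x / 5.0) ** 2)  # static dose of one segment

def motion_pdf(mean, sigma):
    p = np.exp(-0.5 * ((x - mean) / sigma) ** 2)
    return p / p.sum()                        # normalize to a discrete PDF

# Segment-based convolution: PDF of motion during this segment only.
blurred_segment = np.convolve(dose_segment, motion_pdf(2.0, 1.0), mode="same")
# Average-based convolution: PDF averaged over the entire fraction.
blurred_average = np.convolve(dose_segment, motion_pdf(0.0, 3.0), mode="same")
```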
Convolutional neural network for road extraction
NASA Astrophysics Data System (ADS)
Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong
2017-11-01
In this paper, a convolutional neural network with large input blocks and small output blocks was used to extract roads. To reflect the complex road characteristics in the study area, a deep convolutional neural network (VGG19) was used for road extraction. Based on an analysis of the characteristics of different input block sizes, output block sizes, and their extraction effects, the votes of several deep convolutional neural networks were used as the final road prediction. The study image was a GF-2 panchromatic and multi-spectral fusion image of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve accuracy to some extent. The paper also gives some advice about the choice of input and output block sizes.
Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.
Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S
2010-03-01
This paper proposes a method for deblurring class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, images that are nominally identical projections are often grouped, aligned, and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. Such an inaccurate class average can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that estimates the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using Laguerre-Fourier expansions, and both the Hermite and Laguerre-Fourier expansions retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreiner, S.; Paschal, C.B.; Galloway, R.L.
Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing; the kernels are nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and real MR angiography data. The evaluation centered on how well an algorithm transfers an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm, and a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage of nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on the information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections will in general differ from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
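For reference, "cubic convolution interpolation" usually means the Keys kernel; a minimal 1-D version is sketched below. The parameter choice a = -0.5 is the common default and an assumption here, since the abstract does not state the kernel parameters.

```python
import numpy as np

def cubic_conv_kernel(s, a=-0.5):
    """Keys cubic convolution kernel with free parameter a."""
    s = np.abs(s)
    return np.where(
        s <= 1, (a + 2) * s**3 - (a + 3) * s**2 + 1,
        np.where(s < 2, a * s**3 - 5*a * s**2 + 8*a * s - 4*a, 0.0),
    )

def interp_cubic(samples, t):
    """Interpolate uniformly spaced samples at fractional position t."""
    i = int(np.floor(t))
    nodes = np.arange(i - 1, i + 3)             # four nearest sample nodes
    idx = np.clip(nodes, 0, len(samples) - 1)   # replicate edges
    return float(np.sum(samples[idx] * cubic_conv_kernel(t - nodes)))
```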
4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties
NASA Astrophysics Data System (ADS)
Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.
2018-05-01
4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements, in the bias and noise measures used, for five of the eight combinations of the four kinetic parameters for which parametric maps were created, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral-model-based 4D reconstruction produced the best results for 2/8. 2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty function improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.
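A minimal sketch of the spline-residue construction: a time-activity curve modeled as a weighted sum of convolutions of the arterial input function with cubic B-spline basis functions. The toy input function, knot placement, and weights are placeholders, not the paper's settings.

```python
import numpy as np
from scipy.interpolate import BSpline

t = np.linspace(0, 60, 601)            # minutes
dt = t[1] - t[0]
aif = t * np.exp(-t / 2.0)             # toy arterial input function

# Cubic B-spline basis on clamped knots over the imaged time-course.
knots = np.concatenate(([0, 0, 0], np.linspace(0, 60, 8), [60, 60, 60]))
n_basis = len(knots) - 4               # number of cubic basis functions
basis = [BSpline.basis_element(knots[i:i + 5], extrapolate=False)
         for i in range(n_basis)]

weights = np.random.default_rng(0).random(n_basis)   # placeholder weights
tac = sum(w * np.convolve(aif, np.nan_to_num(b(t)), mode="full")[:len(t)] * dt
          for w, b in zip(weights, basis))
```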
Whole stand volume tables for quaking aspen in the Rocky Mountains
Wayne D. Shepperd; H. Todd Mowrer
1984-01-01
Linear regression equations were developed to predict stand volumes for aspen given average stand basal area and average stand height. Tables constructed from these equations allow easy field estimation of gross merchantable cubic feet and board feet (Scribner Rule) per acre, and cubic meters per hectare, using simple prism cruise data.
Towards dropout training for convolutional neural networks.
Wu, Haibing; Gu, Xiaodong
2015-11-01
Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking an activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of the commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. By carefully designing dropout training simultaneously in max-pooling and fully connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at the pooling stage.
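For concreteness, a sketch of probabilistic weighted pooling as described in the paper's framing: at test time, the j-th largest activation in a window is weighted by the probability that it would be the surviving maximum under max-pooling dropout with retain probability p. The window values and p are illustrative assumptions.

```python
import numpy as np

def prob_weighted_pool(window, p=0.5):
    """Expected output of max-pooling dropout with retain probability p."""
    a = np.sort(window.ravel())[::-1]          # activations, descending
    probs = p * (1 - p) ** np.arange(a.size)   # j larger units all dropped
    return float(np.sum(probs * a))            # remaining mass maps to 0

win = np.array([[0.9, 0.3],
                [0.1, 0.6]])
print(prob_weighted_pool(win, p=0.5))
```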
Combined coding and delay-throughput analysis for fading channels of mobile satellite communications
NASA Technical Reports Server (NTRS)
Wang, C. C.; Yan, Tsun-Yee
1986-01-01
This paper presents an analysis of using punctured convolutional codes with Viterbi decoding to improve communications reliability. The punctured code rate is optimized so that the average delay is minimized, and the coding gain in terms of message delay is defined. Since a punctured convolutional code with interleaving is still inadequate to combat severe fading for short packets, the use of multiple copies of assignment and acknowledgment packets is suggested. The performance of this protocol in terms of average end-to-end delay is analyzed. It is shown that a replication of three copies for both assignment and acknowledgment packets is optimum for the cases considered.
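A minimal sketch of puncturing itself, the mechanism behind the optimized code rate: coded bits are deleted by a periodic pattern to raise the rate. The rate-1/2 mother-code framing and the rate-2/3 puncturing matrix are standard textbook choices, not taken from this paper.

```python
import numpy as np

# Puncturing matrix: columns = input-bit periods, rows = the two coded
# streams of a rate-1/2 code; a 0 marks a coded bit that is not sent.
P = np.array([[1, 1],
              [1, 0]])   # 3 bits sent per 2 input bits -> rate 2/3

def puncture(coded_pairs):
    """coded_pairs: shape (N, 2), one (c0, c1) pair per input bit."""
    out = []
    for k, (c0, c1) in enumerate(coded_pairs):
        col = k % P.shape[1]
        if P[0, col]: out.append(c0)
        if P[1, col]: out.append(c1)
    return np.array(out)

pairs = np.random.randint(0, 2, size=(8, 2))
print(len(puncture(pairs)) / len(pairs))   # 1.5 coded bits per input bit
```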
Code of Federal Regulations, 2011 CFR
2011-07-01
... limits HMIWI size Small Medium Large Averaging time 1 Method for demonstrating compliance 2 Particulate matter Milligrams per dry standard cubic meter (grains per dry standard cubic foot) 69 (0.03) 34 (0.015.../furans (grains per billion dry standard cubic feet) or nanograms per dry standard cubic meter TEQ (grains...
Code of Federal Regulations, 2011 CFR
2011-07-01
... HMIWI size Small Medium Large Averaging time 1 Method for demonstrating compliance 2 Particulate matter Milligrams per dry standard cubic meter (grains per dry standard cubic foot) 66 (0.029) 22 (0.0095) 18 (0.../furans (grains per billion dry standard cubic feet) or nanograms per dry standard cubic meter TEQ (grains...
Fine-resolution voxel S values for constructing absorbed dose distributions at variable voxel size.
Dieudonné, Arnaud; Hobbs, Robert F; Bolch, Wesley E; Sgouros, George; Gardin, Isabelle
2010-10-01
This article presents a revised voxel S values (VSVs) approach for dosimetry in targeted radiotherapy, allowing dose calculation for any voxel size and shape of a given SPECT or PET dataset. This approach represents an update to the methodology presented in MIRD pamphlet no. 17. VSVs were generated in soft tissue with a fine spatial sampling using the Monte Carlo (MC) code MCNPX for particle emissions of 9 radionuclides: (18)F, (90)Y, (99m)Tc, (111)In, (123)I, (131)I, (177)Lu, (186)Re, and (201)Tl. A specific resampling algorithm was developed to compute VSVs for desired voxel dimensions. The dose calculation was performed by convolution via a fast Hartley transform. The fine VSVs were calculated for cubic voxels of 0.5 mm for electrons and 1.0 mm for photons. Validation studies were done for (90)Y and (131)I VSV sets by comparing the revised VSV approach to direct MC simulations. The first comparison included 20 spheres with different voxel sizes (3.8-7.7 mm) and radii (4-64 voxels) and the second comparison a hepatic tumor with cubic voxels of 3.8 mm. MC simulations were done with MCNPX for both. The third comparison was performed on 2 clinical patients with the 3D-RD (3-Dimensional Radiobiologic Dosimetry) software using the EGSnrc (Electron Gamma Shower National Research Council Canada)-based MC implementation, assuming a homogeneous tissue-density distribution. For the sphere model study, the mean relative difference in the average absorbed dose was 0.20% ± 0.41% for (90)Y and -0.36% ± 0.51% for (131)I (n = 20). For the hepatic tumor, the difference in the average absorbed dose to tumor was 0.33% for (90)Y and -0.61% for (131)I and the difference in average absorbed dose to the liver was 0.25% for (90)Y and -1.35% for (131)I. The comparison with the 3D-RD software showed an average voxel-to-voxel dose ratio between 0.991 and 0.996. The calculation time was below 10 s with the VSV approach and 50 and 15 h with 3D-RD for the 2 clinical patients. This new VSV approach enables the calculation of absorbed dose based on a SPECT or PET cumulated activity map, with good agreement with direct MC methods, in a faster and more clinically compatible manner.
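The dose step reduces to a 3-D convolution of the cumulated-activity map with the VSV kernel. A minimal sketch with synthetic arrays, using scipy's FFT convolution as a stand-in for the paper's fast Hartley transform:

```python
import numpy as np
from scipy.signal import fftconvolve

activity = np.zeros((32, 32, 32))
activity[16, 16, 16] = 1.0                     # toy cumulated activity (MBq*s)

r = np.indices((9, 9, 9)) - 4                  # voxel offsets from kernel center
vsv = np.exp(-np.sqrt((r ** 2).sum(axis=0)))   # toy S-value kernel, not real data

dose = fftconvolve(activity, vsv, mode="same") # absorbed-dose map
```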
Maximum likelihood convolutional decoding (MCD) performance due to system losses
NASA Technical Reports Server (NTRS)
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
NASA Astrophysics Data System (ADS)
Zhang, Dongqing; Icke, Ilknur; Dogdas, Belma; Parimal, Sarayu; Sampath, Smita; Forbes, Joseph; Bagchi, Ansuman; Chin, Chih-Liang; Chen, Antong
2018-03-01
In the development of treatments for cardiovascular diseases, short-axis cardiac cine MRI is important for the assessment of various structural and functional properties of the heart. In short-axis cardiac cine MRI, cardiac properties including ventricle dimensions, stroke volume, and ejection fraction can be extracted based on accurate segmentation of the left ventricle (LV) myocardium. One of the most advanced segmentation methods is based on fully convolutional neural networks (FCN) and can successfully segment cardiac cine MRI slices. However, the temporal dependency between slices acquired at neighboring time points is not used. Here, based on our previously proposed FCN structure, we propose a new algorithm to segment the LV myocardium in porcine short-axis cardiac cine MRI by incorporating convolutional long short-term memory (Conv-LSTM) to leverage the temporal dependency. In this approach, instead of processing each slice independently as in a conventional CNN-based approach, the Conv-LSTM architecture captures the dynamics of cardiac motion over time. In a leave-one-out experiment on 8 porcine specimens (3,600 slices), the proposed approach was shown to be promising, achieving an average mean Dice similarity coefficient (DSC) of 0.84, Hausdorff distance (HD) of 6.35 mm, and average perpendicular distance (APD) of 1.09 mm when compared with manual segmentations, improving on our previous FCN-based approach (average mean DSC = 0.84, HD = 6.78 mm, and APD = 1.11 mm). Qualitatively, our model showed robustness against low image quality and complications in the surrounding anatomy due to its ability to capture the dynamics of cardiac motion.
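A minimal Keras sketch of the architectural idea — a Conv-LSTM layer letting each frame's segmentation use neighboring time points. Layer sizes, input shape, and the single-layer head are illustrative assumptions, not the authors' exact network.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 128, 128, 1)),   # (time, H, W, ch)
    tf.keras.layers.ConvLSTM2D(16, 3, padding="same",
                               return_sequences=True),  # temporal context
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Conv2D(1, 1, activation="sigmoid")),  # per-pixel mask
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```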
Atzori, Manfredo; Cognolato, Matteo; Müller, Henning
2016-01-01
Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to run several tests evaluating the effects of pre-processing, layer architecture, data augmentation, and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods, and that several factors (including pre-processing, the architecture of the net, and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks achieve higher accuracy on computer vision and object recognition tasks, which suggests it may be worth evaluating whether larger networks can increase sEMG classification accuracy too. PMID:27656140
Two-dimensional convolute integers for analytical instrumentation
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1982-01-01
As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown-mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass, and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterpart, that is, as a weighted nearest-neighbor moving average with zero phase shift, using convolute integer (universal number) weighting coefficients.
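A minimal sketch of the two-dimensional extension: build a 2-D Savitzky-Golay smoothing kernel by least-squares fitting a bivariate polynomial over a moving window, then apply it by convolution. The window half-width and polynomial order are illustrative choices.

```python
import numpy as np
from scipy.signal import convolve2d

def sg2d_kernel(half=2, order=2):
    """2-D Savitzky-Golay smoothing weights for a (2*half+1)^2 window."""
    ij = np.arange(-half, half + 1)
    J = np.array([[i**m * j**n for m in range(order + 1)
                               for n in range(order + 1 - m)]
                  for i in ij for j in ij])      # bivariate design matrix
    w = np.linalg.pinv(J)[0]                     # weights for the constant term
    return w.reshape(2 * half + 1, 2 * half + 1)

img = np.random.rand(64, 64)
smoothed = convolve2d(img, sg2d_kernel(), mode="same", boundary="symm")
```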
Soler-López, Luis R.; Santos, Carlos R.
2010-01-01
Laguna Grande is a 50-hectare lagoon in the municipio of Fajardo, located in the northeasternmost part of Puerto Rico. Hydrologic, water-quality, and biological data were collected in the lagoon between March 2007 and February 2009 to establish baseline conditions and determine the health of Laguna Grande on the basis of preestablished standards. In addition, a core of bottom material was obtained at one site within the lagoon to establish sediment depositional rates. Water-quality properties measured onsite (temperature, pH, dissolved oxygen, specific conductance, and water transparency) varied temporally rather than areally. All physical properties were in compliance with current regulatory standards established for Puerto Rico. Nutrient concentrations were very low and in compliance with current regulatory standards (less than 5.0 and 1.0 milligrams per liter for total nitrogen and total phosphorus, respectively). The average total nitrogen concentration was 0.28 milligram per liter, and the average total phosphorus concentration was 0.02 milligram per liter. Chlorophyll a was the predominant form of photosynthetic pigment in the water. The average chlorophyll-a concentration was 6.2 micrograms per liter. Bottom sediment accumulation rates were determined in sediment cores by modeling the downcore activities of lead-210 and cesium-137. Results indicated a sediment depositional rate of about 0.44 centimeter per year. At this rate of sediment accretion, the lagoon may become a marshland in about 700 to 900 years. About 86 percent of the community primary productivity in Laguna Grande was generated by periphyton, primarily algal mats and seagrasses, and the remaining 14 percent was generated by phytoplankton in the water column. Based on the diel studies the total average net community productivity equaled 5.7 grams of oxygen per cubic meter per day (2.1 grams of carbon per cubic meter per day). Most of this productivity was ascribed to periphyton and macrophytes, which produced 4.9 grams of oxygen per cubic meter per day (1.8 grams of carbon per cubic meter per day). Phytoplankton, the plant and algal component of plankton, produced about 0.8 gram of oxygen per cubic meter per day (0.3 gram of carbon per cubic meter per day). The total diel community respiration rate was 23.4 grams of oxygen per cubic meter per day. The respiration rate ascribed to plankton, which consists of all free floating and swimming organisms in the water column, composed 10 percent of this rate (2.9 grams of oxygen per cubic meter per day); respiration by all other organisms composed the remaining 90 percent (20.5 grams of oxygen per cubic meter per day). Plankton gross productivity was 3.7 grams of oxygen per cubic meter per day, equivalent to about 13 percent of the average gross productivity for the entire community (29.1 grams of oxygen per cubic meter per day). The average phytoplankton biomass values in Laguna Grande ranged from 6.0 to 13.6 milligrams per liter. During the study, Laguna Grande contained a phytoplankton standing crop of approximately 5.8 metric tons. Phytoplankton community had a turnover (renewal) rate of about 153 times per year, or roughly about once every 2.5 days. Fecal indicator bacteria concentrations ranged from 160 to 60,000 colonies per 100 milliliters. Concentrations generally were greatest in areas near residential and commercial establishments, and frequently exceeded current regulatory standards established for Puerto Rico.
Hedgecock, T. Scott
1999-01-01
A two-dimensional finite-element surface-water model was used to study the effects of U.S. Highway 231 and the proposed Montgomery Outer Loop on the water-surface elevations and flow distributions during flooding in the Catoma Creek and Little Catoma Creek Basins southeast of Montgomery, Montgomery County, Alabama. The effects of flooding were simulated for two scenarios--existing and proposed conditions--for the 100- and 500-year recurrence intervals. The first scenario was to model the existing bridge and highway configuration for U.S. Highway 231 and the existing ponds that lie just upstream from this crossing. The second scenario was to model the proposed bridge and highway configuration for the Montgomery Outer Loop and the Montgomery Loop Interchange at U.S. Highway 231 as well as the proposed modifications to the ponds upstream. Simulation of floodflow for Little Catoma Creek for the existing conditions at U.S. Highway 231 indicates that, for the 100-year flood, 54 percent of the flow (8,140 cubic feet per second) was conveyed by the northernmost bridge, 21 percent (3,130 cubic feet per second) by the middle bridge, and 25 percent (3,780 cubic feet per second) by the southernmost bridge. No overtopping of U.S. Highway 231 occurred. However, the levees of the catfish ponds immediately upstream from the crossing were completely overtopped. The average water-surface elevations for the 100-year flood at the upstream limits of the study reach for Catoma Creek and Little Catoma Creek were 216.9 and 218.3 feet, respectively. For the 500-year flood, the simulation indicates that 51 percent of the flow (11,200 cubic feet per second) was conveyed by the northernmost bridge, 25 percent (5,480 cubic feet per second) by the middle bridge, and 24 percent (5,120 cubic feet per second) by the southernmost bridge. The average water-surface elevations for the 500-year flood at the upstream limits of the study reach for Catoma Creek and Little Catoma Creek were 218.2 and 219.5 feet, respectively. For the 500-year flood, no overtopping of U.S. Highway 231 occurred. Simulation of the 100-year floodflow for Little Catoma Creek for the proposed conditions indicates that, for the existing bridges on U.S. Highway 231, 54 percent of the flow (8,190 cubic feet per second) was conveyed by the northernmost bridge, 22 percent (3,350 cubic feet per second) by the middle bridge, and 24 percent (3,490 cubic feet per second) by the southernmost bridge. The two proposed relief bridges on the Montgomery Outer Loop upstream from the proposed remaining catfish ponds conveyed about 7,750 cubic feet per second (3,400 cubic feet per second for the west relief bridge and 4,350 cubic feet per second for the east relief bridge) with an average depth of flow of about 7 feet. The average water-surface elevation at the upstream limit of the study reach for Little Catoma Creek was 218.8 feet, which is about 0.5 foot higher than the average water-surface elevation for the existing conditions. For the 100-year flood, there was no overtopping of either U.S. Highway 231 or the Montgomery Outer Loop. However, the levees of the proposed remaining catfish ponds were completely overtopped. For the Montgomery Outer Loop crossing of Catoma Creek, simulation of the 100-year floodflow indicates that about 58 percent of the flow (14,100 cubic feet per second) was conveyed by the proposed main channel bridge and 42 percent (10,200 cubic feet per second) by the proposed relief bridge.
The average water-surface elevation at the upstream limit of the study reach for Catoma Creek was 216.9 feet, which is the same as the water-surface elevation for the existing conditions. Results of model simulations for the 500-year flood for the proposed conditions indicate that there was no overtopping on either U.S. Highway 231 or the Montgomery Outer Loop. For the existing bridges on U.S. Highway 231, 42 percent of the flow (11,300 cubic feet per second) was conveyed by the northernmost bridge
Limnology of Laguna Tortuguero, Puerto Rico
Quinones-Marquez, Ferdinand; Fuste, Luis A.
1978-01-01
The principal chemical, physical, and biological characteristics, and the hydrology of Laguna Tortuguero, Puerto Rico, were studied during 1974-75. The lagoon, with an area of 2.24 square kilometers and a volume of about 2.68 million cubic meters, contains about 5 percent seawater. Drainage through a canal on the north side averages 0.64 cubic meters per second, flushing the lagoon about 7.5 times per year. Chloride and sodium are the principal ions in the water, ranging from 300 to 700 mg/liter and 150 to 400 mg/liter, respectively. Among the nutrients, nitrogen averages about 1.7 mg/liter, exceeding phosphorus in a weight ratio of 170:1. About 10 percent of the nitrogen and 40 percent of the phosphorus entering the lagoon is retained. The bottom sediments, with a volume of about 4.5 million cubic meters, average 0.8 and 0.014 percent nitrogen and phosphorus, respectively. (Woodard-USGS)
Drewes, P.A.; Conrads, P.A.
1995-01-01
The assimilative capacities of selected reaches of the Waccamaw River and the Atlantic Intracoastal Waterway near Myrtle Beach, South Carolina, were determined using results from water-quality simulations by the Branched Lagrangian Transport Model. The study area included tidally influenced sections of the Waccamaw River, the Pee Dee River, Bull Creek, and the Atlantic Intracoastal Waterway. Hydrodynamic data for the Branched Lagrangian Transport Model were simulated using the U.S. Geological Survey BRANCH one-dimensional unsteady- flow model. Assimilative capacities were determined for four locations using low-, medium-, and high- flow conditions and the average dissolved-oxygen concentration for a 7-day period. Results indicated that for the Waccamaw River near Conway, the ultimate oxygen demand is 370 to 6,740 pounds per day for 7-day average streamflows of 17 to 1,500 cubic feet per second. For the Waccamaw River at Bucksport, the ultimate oxygen demand is 580 to 7,300 pounds per day for 7-day average streamflows of 62 to 1,180 cubic feet per second. For the Atlantic Intracoastal Waterway near North Myrtle Beach, simulations indicate ultimate oxygen demand is 5,100 to 10,000 pounds per day for 7-day average streamflows of 110 to 465 cubic feet per second. The ultimate oxygen demand for the Waccamaw River near Murrells Inlet is 11,000 to 230,000 pounds per day for 7-day average streamflows of 2,240 to 13,700 cubic feet per second.
Sedimentation survey of Lago Cerrillos, Ponce, Puerto Rico, April-May 2008
Soler-López, Luis R.
2011-01-01
Lago Cerrillos dam, located in the municipality of Ponce in southern Puerto Rico, was constructed in 1991 as part of the multipurpose Rio Portugues and Bucana Project. This project provides flood protection, water supply, and recreation facilities for the municipio of Ponce. The reservoir had an original storage capacity of 38.03 million cubic meters at the maximum conservation pool elevation of 174.65 meters above mean sea level and a drainage area of 45.32 square kilometers. Sedimentation has reduced the storage capacity from 38.03 million cubic meters in 1991 to 37.26 million cubic meters in 2008, a total storage loss of about 2 percent. During July 29 to August 23, 2002, 8,492 cubic meters of sediment were removed from the Rio Cerrillos mouth of the reservoir. Taking this removed material into account, the total water-storage loss as of 2008 is 778,492 cubic meters, and the long-term annual water-storage capacity loss rate is about 45,794 cubic meters per year, or about 0.12 percent per year. The Lago Cerrillos net sediment-contributing drainage area has an average sediment yield of about 1,069 cubic meters per square kilometer per year. Sediment accumulation in Lago Cerrillos is not uniformly distributed and averages about 3 meters in thickness. This represents a sediment deposition rate of about 18 centimeters per year. On the basis of the 2008 reservoir storage capacity of 37.26 million cubic meters and a long-term sedimentation rate of 45,794 cubic meters per year, Lago Cerrillos is estimated to have a useful life of about 814 years, or until the year 2822.
NASA Astrophysics Data System (ADS)
Karasawa, Kenichi; Oda, Masahiro; Hayashi, Yuichiro; Nimura, Yukitaka; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Rueckert, Daniel; Mori, Kensaku
2015-03-01
Abdominal organ segmentations from CT volumes are now widely used in computer-aided diagnosis and surgery assistance systems. Among abdominal organs, the pancreas is especially difficult to segment because of large individual differences in its shape and position. In this paper, we propose a new pancreas segmentation method for 3D abdominal CT volumes using patient-specific weighted-subspatial probabilistic atlases. First, we normalize organ shapes in the training volumes and the input volume. We extract the volume of interest (VOI) of the pancreas from the training volumes and the input volume, and divide each training VOI and the input VOI into cubic regions. We use a nonrigid registration method to register the cubic regions of each training VOI to the corresponding regions of the input VOI. Based on the registration results, we calculate similarities between each cubic region of a training VOI and the corresponding region of the input VOI, and select the cubic regions of training volumes having the top N similarities in each cubic region. We then subspatially construct probabilistic atlases weighted by these similarities in each cubic region. After integrating the per-region probabilistic atlases into one, we perform a rough-to-precise segmentation of the pancreas using the atlas. The experiments showed that using the training volumes with the top N similarities in each cubic region led to good pancreas segmentation results: the Jaccard index and the average surface distance were 58.9% and 2.04 mm on average, respectively.
Distributions of Trace Gases and Aerosols during the Dry Biomass Burning Season in Southern Africa
NASA Technical Reports Server (NTRS)
Sinha, Parikhit; Hobbs, Peter V.; Yokelson, Robert J.; Blake, Donald R.; Gao, Song; Kirchstetter, Thomas W.
2003-01-01
Vertical profiles in the lower troposphere of temperature, relative humidity, sulfur dioxide (SO2), ozone (O3), condensation nuclei (CN), and carbon monoxide (CO), and horizontal distributions of twenty gaseous and particulate species, are presented for five regions of southern Africa during the dry biomass burning season of 2000. The regions are the semiarid savannas of northeast South Africa and northern Botswana, the savanna-forest mosaic of coastal Mozambique, the humid savanna of southern Zambia, and the desert of western Namibia. The highest average concentrations of carbon dioxide (CO2), CO, methane (CH4), O3, black particulate carbon, and total particulate carbon were in the Botswana and Zambia sectors (388 and 392 ppmv, 369 and 453 ppbv, 1753 and 1758 ppbv, 79 and 88 ppbv, 2.6 and 5.5 micrograms per cubic meter, and 13.2 and 14.3 micrograms per cubic meter, respectively). This was due to intense biomass burning in Zambia and surrounding regions. The South Africa sector had the highest average concentrations of SO2, sulfate particles, and CN (5.1 ppbv, 8.3 micrograms per cubic meter, and 6,400 per cubic centimeter, respectively), which derived from biomass burning and from electric generation plants and mining operations within this sector. Air quality in the Mozambique sector was similar to that in the neighboring South Africa sector. Over the arid Namibia sector there were polluted layers aloft, in which average SO2, O3, and CO mixing ratios (1.2 ppbv, 76 ppbv, and 310 ppbv, respectively) were similar to those measured over the other, more polluted sectors. This was due to transport of biomass smoke from regions of widespread savanna burning in southern Angola. Average concentrations over all sectors of CO2 (386 ± 8 ppmv), CO (261 ± 81 ppbv), SO2 (2.5 ± 1.6 ppbv), O3 (64 ± 13 ppbv), black particulate carbon (2.3 ± 1.9 micrograms per cubic meter), organic particulate carbon (6.2 ± 5.2 micrograms per cubic meter), total particle mass (26.0 ± 4.7 micrograms per cubic meter), and potassium particles (0.4 ± 0.1 microgram per cubic meter) were comparable to those in polluted urban air. Since the majority of the measurements in this study were obtained in locations well removed from industrial sources of pollution, the high average concentrations of pollutants reflect the effects of widespread biomass burning. On occasions, relatively thin (~0.5 km) layers of remarkably clean air were located at ~3 km above mean sea level, sandwiched between heavily polluted air. The data presented here can be used as inputs to, and for validation of, regional and global atmospheric chemical models.
Soler-López, Luis R.; Gómez-Gómez, Fernando; Rodríguez-Martínez, Jesús
2005-01-01
The Laguna de Las Salinas is a shallow, 35-hectare, hypersaline lagoon (depth less than 1 meter) in the municipio of Ponce, located on the southern coastal plain of Puerto Rico. Hydrologic, water-quality, and biological data were collected in the lagoon between January 2003 and September 2004 to establish baseline conditions. During the study period, rainfall was about 1,130 millimeters, with much of the rain recorded during three distinct intense events. The lagoon is connected to the sea by a shallow, narrow channel. Subtle tidal changes, combined with low rainfall and high evaporation rates, kept the lagoon at salinities above that of the sea throughout most of the study. Water-quality properties measured on-site (temperature, pH, dissolved oxygen, specific conductance, and Secchi disk transparency) exhibited temporal rather than spatial variation. Although all physical parameters were in compliance with current regulatory standards for Puerto Rico, hyperthermic and hypoxic conditions were recorded on isolated occasions. Nutrient concentrations were relatively low and in compliance with current regulatory standards (less than 5.0 and 1.0 milligrams per liter for total nitrogen and total phosphorus, respectively). The average total nitrogen concentration was 1.9 milligrams per liter and the average total phosphorus concentration was 0.4 milligram per liter. Total organic carbon concentrations ranged from 12.0 to 19.0 milligrams per liter. Chlorophyll a was the predominant form of photosynthetic pigment in the water. The average chlorophyll a concentration was 13.4 micrograms per liter. Chlorophyll b was detected (detection limit 0.10 microgram per liter) only twice during the study. About 90 percent of the primary productivity in the Laguna de Las Salinas was generated by periphyton such as algal mats and macrophytes such as seagrasses. Of the average net productivity of 13.6 grams of oxygen per cubic meter per day derived from the diel study, the periphyton and macrophytes produced 12.3 grams per cubic meter per day; about 1.3 grams (about 10 percent) were produced by the phytoplankton (the plant and algae component of plankton). The total respiration rate was 59.2 grams of oxygen per cubic meter per day. The respiration rate ascribed to the plankton (all organisms floating through the water column) averaged about 6.2 grams of oxygen per cubic meter per day (about 10 percent), whereas the respiration rate of all other organisms averaged 53.0 grams of oxygen per cubic meter per day (about 90 percent). Plankton gross productivity was 7.5 grams per cubic meter per day; the gross productivity of the entire community averaged 72.8 grams per cubic meter per day. Fecal coliform bacteria counts were generally less than 200 colonies per 100 milliliters; the highest concentration was 600 colonies per 100 milliliters.
Cubic-foot tree volumes and product recoveries for eastern redcedar in the Ozarks
Leland F. Hanks
1979-01-01
Tree volume tables and equations for eastern redcedar are presented for gross volume, cant volume, and volume of sawmill residue. These volumes, when multiplied by the average value per cubic foot of cants and residue, provide a way to estimate tree value.
NASA Astrophysics Data System (ADS)
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A.; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame.
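A minimal Keras sketch of the parameterization described above — regressing one lumen radius per angular column of a polar-coordinate frame with a linear output layer and an L2 loss. All shapes and layer sizes are illustrative assumptions, not the authors' network.

```python
import tensorflow as tf

n_depth, n_angles = 512, 256            # polar frame: (radial, angular)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_depth, n_angles, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPool2D((4, 1)),  # pool along depth, keep angles
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(n_angles, activation=None),  # radius per angle
])
model.compile(optimizer="adam", loss="mse")             # linear-regression head
```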
Robust hepatic vessel segmentation using multi deep convolution network
NASA Astrophysics Data System (ADS)
Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei
2017-03-01
Extraction of the blood vessels of an organ is a challenging task in medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by human experts. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from computed tomography (CT) images. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolutional neural networks that extract features from different planes of the CT data. The three networks share features at the first convolutional layer but separately learn their own features in the second layer; all three networks are joined again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conducted experiments on 12 CT volumes, with training data randomly generated from 5 CT volumes and the remaining 7 used for testing. Our network yields an average Dice coefficient of 0.830, whereas a 3D deep convolutional neural network yields around 0.7 and a multi-scale approach only 0.6.
Hydrology of the Little Androscoggin River Valley aquifer, Oxford County, Maine
Morrissey, D.J.
1983-01-01
The Little Androscoggin River valley aquifer, a 15-square-mile sand and gravel valley-fill aquifer in southwestern Maine, is the source of water for the towns of Norway, Oxford, and South Paris. Estimated inflows to the aquifer during the 1981 water year were 16.4 cubic feet per second from precipitation directly on the aquifer, 11.2 cubic feet per second from till-covered uplands adjacent to the aquifer, and 1.4 cubic feet per second from surface-water leakage. Outflows from the aquifer were 26.7 cubic feet per second to surface water and 2.3 cubic feet per second to wells. A finite-difference ground-water flow model was used to simulate conditions observed in the aquifer during 1981. Model simulations indicate that a 50 percent reduction of average 1981 recharge to the aquifer would cause water level declines of up to 20 feet in some areas. Model simulations of increased pumping at a high yield well in the northern part of the aquifer indicate that resulting changes in the water table will not be sufficient to intercept groundwater contaminated by a sludge disposal site. Water in the aquifer is low in dissolved solids (average for 38 samples was 67 mg/L), slightly acidic and soft. Ground-water contamination has occurred near a sludge-disposal site and in the vicinity of a sanitary landfill. Dissolved solids in ground water near the sludge disposal site were as much as ten times greater than average background values for the aquifer. (USGS)
Investigation of Methods to Eliminate Voltage Delay in Li/SOCl2 Cells.
1980-05-01
of storage at 55°C the surface was completely covered with cubic crystals averaging about 8 μm on an edge (Figure 24). The lithium surface stored at ... completely covered with cubic crystals, showing no smooth undercoating at all (Figure 25). The average crystal diameter was approximately 3.3 μm, with a ...
NASA Astrophysics Data System (ADS)
Fonseca, Pablo; Mendoza, Julio; Wainer, Jacques; Ferrer, Jose; Pinto, Joseph; Guerrero, Jorge; Castaneda, Benjamin
2015-03-01
Breast parenchymal density is considered a strong indicator of breast cancer risk and therefore useful for preventive tasks. Measurement of breast density is often qualitative and requires the subjective judgment of radiologists. Here we explore an automatic breast composition classification workflow based on convolutional neural networks for feature extraction in combination with a support vector machines classifier. This is compared to the assessments of seven experienced radiologists. The experiments yielded an average kappa value of 0.58 when using the mode of the radiologists' classifications as ground truth. Individual radiologist performance against this ground truth yielded kappa values between 0.56 and 0.79.
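A minimal sketch of the two-stage workflow described above (CNN activations feeding a support vector machine). The feature dimensionality, kernel choice, and four-class labeling are placeholders, not the study's settings; `features` stands in for activations extracted from a pretrained network.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: in the real workflow, `features` would hold CNN
# activations for each mammogram and `labels` the mode of the
# radiologists' density categories.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 4096))   # 200 images x 4096-dim CNN features
labels = rng.integers(0, 4, size=200)     # four assumed density classes

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, features, labels, cv=5)
print("mean CV accuracy:", scores.mean())
```

Agreement with the radiologists could then be scored with `sklearn.metrics.cohen_kappa_score`, matching the kappa values reported in the abstract.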
Experimental study of digital image processing techniques for LANDSAT data
NASA Technical Reports Server (NTRS)
Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.
1976-01-01
The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.
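A short sketch of the cubic convolution interpolation referenced throughout this collection, using Keys' piecewise-cubic kernel with the common parameter a = -0.5 (TRW's exact parameterization is not given here, so treat this as a generic illustration).

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel; nonzero on |x| < 2."""
    x = np.abs(x)
    out = np.zeros_like(x)
    m1 = x <= 1
    m2 = (x > 1) & (x < 2)
    out[m1] = (a + 2) * x[m1] ** 3 - (a + 3) * x[m1] ** 2 + 1
    out[m2] = a * x[m2] ** 3 - 5 * a * x[m2] ** 2 + 8 * a * x[m2] - 4 * a
    return out

def resample_1d(samples, t):
    """Resample a uniformly sampled signal at fractional position t
    using the 4 nearest neighbors, clamping at the edges."""
    i = int(np.floor(t))
    offsets = np.arange(i - 1, i + 3)
    idx = np.clip(offsets, 0, len(samples) - 1)
    w = keys_kernel(t - offsets.astype(float))
    return float(np.dot(samples[idx], w))

line = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])   # samples of x^2
print(resample_1d(line, 2.5))  # 6.25: the kernel reproduces quadratics exactly
```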
NASA Astrophysics Data System (ADS)
Dormer, James D.; Halicek, Martin; Ma, Ling; Reilly, Carolyn M.; Schreibmann, Eduard; Fei, Baowei
2018-02-01
Cardiovascular disease is a leading cause of death in the United States. The identification of cardiac diseases on conventional three-dimensional (3D) CT can have many clinical applications. An automated method that can distinguish between healthy and diseased hearts could improve diagnostic speed and accuracy when the only modality available is conventional 3D CT. In this work, we proposed and implemented convolutional neural networks (CNNs) to identify diseased hearts on CT images. Six patients with healthy hearts and six with previous cardiovascular disease events received chest CT. After the left atrium for each heart was segmented, 2D and 3D patches were created. A subset of the patches was then used to train separate convolutional neural networks using leave-one-out cross-validation of patient pairs. The results of the two neural networks were compared, with 3D patches producing the higher testing accuracy. The full list of 3D patches from the left atrium was then classified using the optimal 3D CNN model, and receiver operating characteristic (ROC) curves were produced. The final average area under the curve (AUC) from the ROC curves was 0.840 +/- 0.065 and the average accuracy was 78.9% +/- 5.9%. This demonstrates that the CNN-based method is capable of distinguishing healthy hearts from those with previous cardiovascular disease.
Eye and sheath folds in turbidite convolute lamination: Aberystwyth Grits Group, Wales
NASA Astrophysics Data System (ADS)
McClelland, H. L. O.; Woodcock, N. H.; Gladstone, C.
2011-07-01
Eye and sheath folds are described from the turbidites of the Aberystwyth Group, in the Silurian of west Wales. They have been studied at outcrop and on high resolution optical scans of cut surfaces. The folds are not tectonic in origin. They occur as part of the convolute-laminated interval of each sand-mud turbidite bed. The thickness of this interval is most commonly between 20 and 100 mm. Lamination patterns confirm previous interpretations that convolute lamination nucleated on ripples and grew during continued sedimentation of the bed. The folds amplified vertically and were sheared horizontally by continuing turbidity flow, but only to average values of about γ = 1. The strongly curvilinear fold hinges are due not to high shear strains, but to nucleation on sinuous or linguoid ripples. The Aberystwyth Group structures provide a warning that not all eye folds in sedimentary or metasedimentary rocks should be interpreted as sections through high shear strain sheath folds.
Zhang, Jianhua; Li, Sunan; Wang, Rubin
2017-01-01
In this paper, we deal with the Mental Workload (MWL) classification problem based on measured physiological data. First we discussed the optimal depth (i.e., the number of hidden layers) and parameter optimization algorithms for the Convolutional Neural Networks (CNN). The base CNNs designed were tested according to five classification performance indices, namely Accuracy, Precision, F-measure, G-mean, and required training time. We then developed an Ensemble Convolutional Neural Network (ECNN) to enhance the accuracy and robustness of the individual CNN model. For the ECNN design, three model aggregation approaches (weighted averaging, majority voting, and stacking) were examined, and a resampling strategy was used to enhance the diversity of the individual CNN models. The MWL classification performance comparison indicated that the proposed ECNN framework can effectively improve MWL classification performance and, unlike traditional machine learning methods, performs feature extraction and MWL classification entirely automatically.
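A minimal NumPy sketch of two of the aggregation approaches named above, weighted averaging and majority voting, applied to the softmax outputs of several member CNNs. Array shapes and weights are illustrative assumptions.

```python
import numpy as np

def weighted_average_vote(probas, weights):
    """probas: (n_models, n_samples, n_classes) softmax outputs."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    avg = np.tensordot(w, probas, axes=1)   # weighted mean over models
    return avg.argmax(axis=1)

def majority_vote(probas):
    votes = probas.argmax(axis=2)           # (n_models, n_samples) hard labels
    n_classes = probas.shape[2]
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes), 0, votes)
    return counts.argmax(axis=0)            # most-voted class per sample

# Three hypothetical member CNNs, 5 samples, 2 MWL classes
rng = np.random.default_rng(1)
p = rng.dirichlet([1, 1], size=(3, 5))      # shape (3, 5, 2)
print(weighted_average_vote(p, [0.5, 0.3, 0.2]))
print(majority_vote(p))
```

The third approach, stacking, would instead train a meta-classifier on the concatenated member outputs.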
3D and 4D magnetic susceptibility tomography based on complex MR images
Chen, Zikuan; Calhoun, Vince D
2014-11-11
Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
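A simplified sketch of the forward model underlying this inverse problem: the phase volume is the 3D convolution of the susceptibility map with the standard dipole kernel, implemented as a k-space product. The inversion below uses plain Landweber iteration for brevity; the patented scheme instead uses a TV-regularized split Bregman solver, which this sketch does not implement.

```python
import numpy as np

def dipole_kernel(shape):
    """Standard k-space dipole kernel D = 1/3 - kz^2/|k|^2 (B0 along z)."""
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        d = 1.0 / 3.0 - kz**2 / k2
    d[0, 0, 0] = 0.0                      # remove the 0/0 at the origin
    return d

def forward_phase(chi, d):
    """3D convolution implemented as a pointwise product in k-space."""
    return np.real(np.fft.ifftn(d * np.fft.fftn(chi)))

def landweber_inverse(phase, d, n_iter=100, step=1.0):
    """Unregularized iterative inversion: chi += step * D^T (phase - D chi)."""
    chi = np.zeros_like(phase)
    for _ in range(n_iter):
        resid = phase - forward_phase(chi, d)
        chi += step * np.real(np.fft.ifftn(d * np.fft.fftn(resid)))
    return chi

shape = (32, 32, 32)
d = dipole_kernel(shape)
truth = np.zeros(shape); truth[12:20, 12:20, 12:20] = 1.0   # toy chi map
chi_rec = landweber_inverse(forward_phase(truth, d), d)
print("mean abs error:", np.abs(chi_rec - truth).mean())
```

The residual error reflects the kernel's zero cone, which is exactly why regularization (TV in the invention) is needed.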
Estimating the board foot to cubic foot ratio
Steve P. Verrill; Victoria L. Herian; Henry N. Spelter
2004-01-01
Certain issues in recent softwood lumber trade negotiations have centered on the method for converting estimates of timber volumes reported in cubic meters to board feet. Such conversions depend on many factors; three of the most important of these are log length, diameter, and taper. Average log diameters vary by region and have declined in the western United States...
Streamflow from the United States into the Atlantic Ocean during 1931-1960
Bue, Conrad D.
1970-01-01
Streamflow from the United States into the Atlantic Ocean, between the international stream St. Croix River, inclusive, and Cape Sable, Fla., averaged about 355,000 cfs (cubic feet per second) during the 30-year period 1931-60, or roughly 20 percent of the water that, on the average, flows out of the conterminous United States. The area drained by streams flowing into the Atlantic Ocean is about 288,000 square miles, including the Canadian part of the St. Croix and Connecticut River basins, or a little less than 10 percent of the area of the conterminous United States. Hence, the average streamflow into the Atlantic Ocean, in terms of cubic feet per second per square mile, is about twice the national average of the flow that leaves the conterminous United States. Flow from about three-fourths of the area draining into the Atlantic Ocean is gaged at streamflow measuring stations of the U.S. Geological Survey. The remaining one-fourth of the drainage area consists mostly of low-lying coastal areas from which the flow was estimated, largely on the basis of nearby gaging stations. Streamflow, in terms of cubic feet per second per square mile, decreases rather progressively from north to south. It averages nearly 2 cfs along the Maine coast, about 1 cfs along the North Carolina coast, and about 0.9 cfs along the Florida coast.
Overview of surface-water resources at the U.S. Coast Guard Support Center Kodiak, Alaska, 1987-89
Solin, G.L.
1996-01-01
Hydrologic data at a U.S. Coast Guard Support Center on Kodiak Island, Alaska, were collected from 1987 through 1989 to determine hydrologic conditions and whether contamination of soils, ground water, or surface water had occurred. This report summarizes the surface-water-discharge data collected during the study and estimates peak, average, and low-flow values for Buskin River near its mouth. Water-discharge measurements were made at least once at 48 sites on streams in or near the Center. Discharges were measured in the Buskin River near its mouth five times during 1987-89 and ranged from 27 to 367 cubic feet per second. Tributaries of Buskin River below Buskin Lake that had discharges greater than 1 cubic foot per second include Bear Creek, Alder Creek, Magazine Creek, Devils Creek, and an outlet from Lake Louise. Streams having flows generally greater than 0.1 cubic foot per second but less than 1 cubic foot per second include an unnamed tributary to Buskin River, an unnamed tributary to Lake Catherine, and a drainage channel at Kodiak airport. Most other streams flowing into Buskin River, and all streams on Nyman Peninsula, usually had little or no flow except during periods of rainfall or snowmelt. During a low-flow period in February 1989, discharge measurements in Buskin River and its tributaries indicated that three reaches of Buskin River below Buskin Lake lost water to the ground-water system, whereas two reaches gained water; the net gain in streamflow attributed to ground-water inflow at a location near the mouth was estimated to be 2.2 cubic feet per second. The 100-year peak flow for Buskin River near its mouth was estimated to be 4,460 cubic feet per second. Average discharge was estimated to be 125 cubic feet per second, and the 7-day, 10-year low flow was estimated to be 5.8 cubic feet per second.
Sabokrou, Mohammad; Fayyaz, Mohsen; Fathy, Mahmood; Klette, Reinhard
2017-02-17
This paper proposes a fast and reliable method for anomaly detection and localization in video data showing crowded scenes. Time-efficient anomaly localization is an ongoing challenge and the subject of this paper. We propose a cubic-patch-based method, characterised by a cascade of classifiers, which makes use of an advanced feature-learning approach. Our cascade of classifiers has two main stages. First, a light but deep 3D auto-encoder is used for early identification of "many" normal cubic patches. This deep network operates on small cubic patches in the first stage, before carefully resizing the remaining candidates of interest and evaluating those at the second stage using a more complex and deeper 3D convolutional neural network (CNN). We divide the deep auto-encoder and the CNN into multiple sub-stages which operate as cascaded classifiers. Shallow layers of the cascaded deep networks (designed as Gaussian classifiers, acting as weak single-class classifiers) detect "simple" normal patches such as background patches, while more complex normal patches are detected at deeper layers. It is shown that the proposed novel technique (a cascade of two cascaded classifiers) performs comparably to current top-performing detection and localization methods on standard benchmarks, but generally outperforms those with respect to required computation time.
Liu, Yan; Stojadinovic, Strahinja; Hrycushko, Brian; Wardak, Zabi; Lau, Steven; Lu, Weiguo; Yan, Yulong; Jiang, Steve B; Zhen, Xin; Timmerman, Robert; Nedzi, Lucien; Gu, Xuejun
2017-01-01
Accurate and automatic brain metastases target delineation is a key step for efficient and effective stereotactic radiosurgery (SRS) treatment planning. In this work, we developed a deep learning convolutional neural network (CNN) algorithm for segmenting brain metastases on contrast-enhanced T1-weighted magnetic resonance imaging (MRI) datasets. We integrated the CNN-based algorithm into an automatic brain metastases segmentation workflow and validated on both Multimodal Brain Tumor Image Segmentation challenge (BRATS) data and clinical patients' data. Validation on BRATS data yielded average DICE coefficients (DCs) of 0.75±0.07 in the tumor core and 0.81±0.04 in the enhancing tumor, which outperformed most techniques in the 2015 BRATS challenge. Segmentation results of patient cases showed an average of DCs 0.67±0.03 and achieved an area under the receiver operating characteristic curve of 0.98±0.01. The developed automatic segmentation strategy surpasses current benchmark levels and offers a promising tool for SRS treatment planning for multiple brain metastases.
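For reference, the DICE coefficient (DC) reported above measures overlap between a predicted and a ground-truth binary mask. A minimal NumPy implementation with toy masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """DICE = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

a = np.zeros((64, 64), bool); a[10:30, 10:30] = True   # predicted lesion
b = np.zeros((64, 64), bool); b[12:32, 12:32] = True   # ground truth
print(round(dice_coefficient(a, b), 3))                 # 0.81
```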
Localization of lung fields in HRCT images using a deep convolution neural network
NASA Astrophysics Data System (ADS)
Kumar, Abhishek; Agarwala, Sunita; Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Nandi, Debashis; Garg, Mandeep; Khandelwal, Niranjan; Kalra, Naveen
2018-02-01
Lung field segmentation is a prerequisite step for the development of a computer-aided diagnosis system for interstitial lung diseases observed in chest HRCT images. Conventional methods of lung field segmentation rely on a large gray value contrast between lung fields and surrounding tissues. These methods fail on lung HRCT images with dense and diffused pathology. Efficient preprocessing could improve the accuracy of segmentation of pathological lung fields in HRCT images. In this paper, a convolution neural network is used for localization of lung fields in HRCT images. The proposed method provides an optimal bounding box enclosing the lung fields irrespective of the presence of diffuse pathology. The performance of the proposed algorithm is validated on 330 lung HRCT images obtained from the MedGift database on ZF and VGG networks. The model achieves a mean average precision of 0.94 with ZF net, and a slightly better mean average precision of 0.95 with VGG net.
Tissue classification and segmentation of pressure injuries using convolutional neural networks.
Zahia, Sofia; Sierra-Sosa, Daniel; Garcia-Zapirain, Begonya; Elmaghraby, Adel
2018-06-01
This paper presents a new approach for automatic tissue classification in pressure injuries. These wounds are localized skin damages which need frequent diagnosis and treatment. Therefore, reliable and accurate systems for segmentation and tissue type identification are needed in order to achieve better treatment results. Our proposed system is based on a Convolutional Neural Network (CNN) devoted to performing optimized segmentation of the different tissue types present in pressure injuries (granulation, slough, and necrotic tissues). A preprocessing step removes the flash light and creates a set of 5x5 sub-images which are used as input for the CNN network. The network output classifies every sub-image of the validation set into one of the three classes studied. The metrics used to evaluate our approach show an overall average classification accuracy of 92.01%, an average total weighted Dice Similarity Coefficient of 91.38%, and an average precision per class of 97.31% for granulation tissue, 96.59% for necrotic tissue, and 77.90% for slough tissue. Our system has been proven to make recognition of complicated structures in biomedical images feasible.
A 10-year analysis of South Carolina's industrial timber products output
Richard L. Welch; Thomas R. Bellamy
1979-01-01
The output of industrial timber products in South Carolina increased at an average annual rate of 2 percent between 1967 and 1976. Output from roundwood increased by 36 million cubic feet, while the output from plant byproducts increased 47 million cubic feet. Pulpwood was the leading roundwood product in the State throughout the period, followed by saw logs, and then...
Photoluminescence study of ZnS and ZnS:Pb nanoparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virpal,, E-mail: virpalsharma.sharma@gmail.com; Hastir, Anita; Kaur, Jasmeet
2015-05-15
Photoluminescence (PL) study of pure and 5 wt.% lead-doped ZnS prepared by the co-precipitation method was conducted at room temperature. The prepared nanoparticles were characterized by X-ray Diffraction (XRD), UV-Visible (UV-Vis) spectrophotometry, Photoluminescence (PL) and Raman spectroscopy. XRD patterns confirm the cubic structure of ZnS, and of PbS in the doped sample. The band gap energy value increased in the case of Pb-doped ZnS nanoparticles. The PL spectrum of pure ZnS was de-convoluted into two peaks centered at 399 nm and 441 nm, which were attributed to defect states of ZnS. In the doped sample, a shoulder peak at 389 nm and a broad peak centered at 505 nm were observed. This broad green emission peak originated from Pb-activated ZnS states.
Spatial Resolution Characterization for QuickBird Image Products 2003-2004 Season
NASA Technical Reports Server (NTRS)
Blonski, Slawomir
2006-01-01
This presentation focuses on spatial resolution characterization for QuickBird panchromatic images in 2003-2004 and presents data measurements and analysis of SSC edge target deployment and edge response extraction and modeling. The results of the characterization are shown as values of the Modulation Transfer Function (MTF) at the Nyquist spatial frequency and as the Relative Edge Response (RER) components. The results show that RER is much less sensitive to the accuracy of the curve fitting than the value of MTF at the Nyquist frequency. Therefore, the RER/edge response slope is a more robust estimator of digital image spatial resolution than the MTF. For the QuickBird panchromatic images, the RER is consistently equal to 0.5 for images processed with Cubic Convolution resampling and to 0.8 for MTF resampling.
Convolutional neural networks for vibrational spectroscopic data analysis.
Acquarelli, Jacopo; van Laarhoven, Twan; Gerretzen, Jan; Tran, Thanh N; Buydens, Lutgarde M C; Marchiori, Elena
2017-02-15
In this work we show that convolutional neural networks (CNNs) can be efficiently used to classify vibrational spectroscopic data and identify important spectral regions. CNNs are the current state-of-the-art in image classification and speech recognition and can learn interpretable representations of the data. These characteristics make CNNs a good candidate for reducing the need for preprocessing and for highlighting important spectral regions, both of which are crucial steps in the analysis of vibrational spectroscopic data. Chemometric analysis of vibrational spectroscopic data often relies on preprocessing methods involving baseline correction, scatter correction and noise removal, which are applied to the spectra prior to model building. Preprocessing is a critical step because even in simple problems using 'reasonable' preprocessing methods may decrease the performance of the final model. We develop a new CNN-based method and provide accompanying publicly available software. It is based on a simple CNN architecture with a single convolutional layer (a so-called shallow CNN). Our method outperforms standard classification algorithms used in chemometrics (e.g. PLS) in terms of accuracy when applied to non-preprocessed test data (86% average accuracy compared to the 62% achieved by PLS), and it achieves better performance even on preprocessed test data (96% average accuracy compared to the 89% achieved by PLS). For interpretability purposes, our method includes a procedure for finding important spectral regions, thereby facilitating qualitative interpretation of results.
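A minimal PyTorch sketch of a "shallow" CNN in the sense used above: a single 1D convolutional layer over the spectral axis followed by a linear classifier. Filter count, kernel width, spectrum length, and class count are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ShallowSpectralCNN(nn.Module):
    def __init__(self, n_wavenumbers=1000, n_classes=3, n_filters=8, k=21):
        super().__init__()
        self.conv = nn.Conv1d(1, n_filters, k, padding=k // 2)
        self.act = nn.ReLU()
        self.head = nn.Linear(n_filters * n_wavenumbers, n_classes)

    def forward(self, x):               # x: (batch, 1, n_wavenumbers)
        h = self.act(self.conv(x))      # the single convolutional layer
        return self.head(h.flatten(1))

model = ShallowSpectralCNN()
spectra = torch.randn(4, 1, 1000)       # 4 raw (non-preprocessed) spectra
print(model(spectra).shape)             # torch.Size([4, 3])
```

The learned convolution filters can be inspected directly (`model.conv.weight`), which is one route to the important-region analysis the abstract mentions.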
Electrically conductive material
Singh, Jitendra P.; Bosak, Andrea L.; McPheeters, Charles C.; Dees, Dennis W.
1993-01-01
An electrically conductive material for use in solid oxide fuel cells, electrochemical sensors for combustion exhaust, and various other applications possesses increased fracture toughness over available materials, while affording the same electrical conductivity. One embodiment of the sintered electrically conductive material consists essentially of cubic ZrO2 as a matrix and 6-19 wt.% monoclinic ZrO2 formed from particles having an average size equal to or greater than about 0.23 micron. Another embodiment of the electrically conductive material consists essentially of cubic ZrO2 as a matrix and 10-30 wt.% partially stabilized zirconia (PSZ) formed from particles having an average size of approximately 3 microns.
NASA Astrophysics Data System (ADS)
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
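A toy illustration of the reoptimization idea described above: the calculated profile is convolved with the same detector response function as the measurement, and the penumbra parameter is adjusted until the convolved profile matches. The one-parameter Gaussian beam model, the response width, and the field size are all assumptions for the sketch, not the commercial TPS beam model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize_scalar

x = np.linspace(-30, 30, 601)             # off-axis position (mm), 0.1 mm grid

def detector_response(width_mm=2.5):      # assumed chamber response width
    g = np.exp(-0.5 * (x / width_mm) ** 2)
    return g / g.sum()

def model_profile(penumbra_mm, field_half_mm=10.0):
    """Toy beam model: an ideal 20 mm field blurred by one penumbra parameter."""
    ideal = (np.abs(x) <= field_half_mm).astype(float)
    return gaussian_filter1d(ideal, penumbra_mm / (x[1] - x[0]))

# "Measured" profile: true penumbra of 3 mm, volume-averaged by the chamber.
measured = np.convolve(model_profile(3.0), detector_response(), mode="same")

def cost(penumbra_mm):
    # Convolve the calculated profile with the SAME detector response,
    # then compare with the measurement, as in the scheme above.
    calc = np.convolve(model_profile(penumbra_mm), detector_response(), mode="same")
    return float(np.sum((calc - measured) ** 2))

best = minimize_scalar(cost, bounds=(0.5, 8.0), method="bounded")
print(f"recovered penumbra parameter: {best.x:.2f} mm (true value 3.00)")
```

Because measurement and calculation see the identical volume averaging, the optimizer recovers the underlying penumbra without ever deconvolving the measurement.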
Conversion of board foot scaled logs to cubic meters in Washington State, 1970–1998
Henry Spelter
2002-01-01
The conversion factor generally used to convert logs measured in board feet to cubic meters has traditionally been set at 4.53. Because of the diminishing supply of old-growth, large-diameter trees, the average conversion factor has risen, as illustrated in this analysis of Washington state sawmill data over the period 1970-1998. Conversion factors for coastal and interior...
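A worked example of the conversion discussed above, taking the factor as cubic meters per thousand board feet (MBF), which is how such factors are usually quoted (an assumption here; the volume and alternative factors are hypothetical).

```python
# Board-foot-scaled log volume converted to cubic meters under the
# traditional factor and two hypothetical higher factors.
volume_mbf = 250.0                  # hypothetical volume, thousand board feet
for factor in (4.53, 5.0, 5.5):     # illustrative range, not the study's values
    print(f"factor {factor}: {volume_mbf * factor:,.1f} cubic meters")
```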
Influence of Bank Afforestation and Snag Angle-of-fall on Riparian Large Woody Debris Recruitment
Don C. Bragg; Jeffrey L. Kershner
2002-01-01
A riparian large woody debris (LWD) recruitment simulator (Coarse Woody Debris [CWD]) was used to test the impact of bank afforestation and snag fall direction on delivery trends. Combining all cumulative LWD recruitment across bank afforestation levels averaged 77.1 cubic meters per 100 meter reach (both banks forested) compared to 49.3 cubic meters per 100 meter...
Annual replenishment of bed material by sediment transport in the Wind River near Riverton, Wyoming
Smalley, M.L.; Emmett, W.W.; Wacker, A.M.
1994-01-01
The U.S. Geological Survey, in cooperation with the Wyoming Department of Transportation, conducted a study during 1985-87 to determine the annual replenishment of sand and gravel along a point bar in the Wind River near Riverton, Wyoming. Hydraulic-geometry relations determined from streamflow measurements; streamflow characteristics determined from 45 years of record at the study site; and analyses of suspended-sediment, bedload, and bed-material samples were used to describe river transport characteristics and to estimate the annual replenishment of sand and gravel. The Wind River is a perennial, snowmelt-fed stream. Average daily discharge at the study site is about 734 cubic feet per second, and bankfull discharge (recurrence interval about 1.5 years) is about 5,000 cubic feet per second. At bankfull discharge, the river is about 136 feet wide and has an average depth of about 5.5 feet and average velocity of about 6.7 feet per second. Stream slope is about 0.0010 foot per foot. Bed material sampled on the point bar before the 1986 high flows ranged from sand to cobbles, with a median diameter of about 22 millimeters. Data for sediment samples collected during water year 1986 were used to develop regression equations between suspended-sediment load and water discharge and between bedload and water discharge. Average annual suspended-sediment load was computed to be about 561,000 tons per year using the regression equation in combination with flow-duration data. The regression equation for estimating bedload was not used; instead, average annual bedload was computed as 1.5 percent of average annual suspended load, about 8,410 tons per year. This amount of bedload material is estimated to be in temporary storage along a reach containing seven riffles--a length of approximately 1 river mile. On the basis of bedload material sampled during the 1986 high flows, about 75 percent (by weight) is sand (2 millimeters in diameter or finer); median particle size is about 0.5 millimeter. About 20 percent (by weight) is medium gravel to small cobbles--12.7 millimeters (0.5 inch) or coarser. The bedload moves slowly (about 0.03 percent of the water speed) and briefly (about 10 percent of the time). The average travel distance of a median-sized particle is about 1 river mile per year. The study results indicate that the average replenishment rate of bedload material coarser than 12.7 millimeters is about 1,500 to 2,000 tons (less than 1,500 cubic yards) per year. Finer material (0.075 to 6.4 millimeters in diameter) is replenished at about 4,500 to 5,000 cubic yards per year. The total volume of potentially usable material would average about 6,000 cubic yards per year.
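A quick check of the bedload estimate quoted above, which was taken as 1.5 percent of the average annual suspended-sediment load:

```python
suspended_tons = 561_000                  # average annual suspended load
bedload_tons = 0.015 * suspended_tons     # 1.5 percent of suspended load
print(f"{bedload_tons:,.0f} tons per year")  # 8,415, reported as "about 8,410"
```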
NASA Astrophysics Data System (ADS)
Lee, Haeil; Lee, Hansang; Park, Minseok; Kim, Junmo
2017-03-01
Lung cancer is the most common cause of cancer-related death. To diagnose lung cancers in early stages, numerous studies and approaches have been developed for cancer screening with computed tomography (CT) imaging. In recent years, convolutional neural networks (CNN) have become one of the most common and reliable techniques in computer aided detection (CADe) and diagnosis (CADx) by achieving state-of-the-art-level performances for various tasks. In this study, we propose a CNN classification system for false positive reduction of initially detected lung nodule candidates. First, image patches of lung nodule candidates are extracted from CT scans to train a CNN classifier. To reflect the volumetric contextual information of lung nodules to 2D image patch, we propose a weighted average image patch (WAIP) generation by averaging multiple slice images of lung nodule candidates. Moreover, to emphasize central slices of lung nodules, slice images are locally weighted according to Gaussian distribution and averaged to generate the 2D WAIP. With these extracted patches, 2D CNN is trained to achieve the classification of WAIPs of lung nodule candidates into positive and negative labels. We used LUNA 2016 public challenge database to validate the performance of our approach for false positive reduction in lung CT nodule classification. Experiments show our approach improves the classification accuracy of lung nodules compared to the baseline 2D CNN with patches from single slice image.
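A minimal NumPy sketch of the weighted average image patch (WAIP) generation described above: adjacent axial slices of a nodule candidate are collapsed into one 2D patch, with slice weights drawn from a Gaussian centered on the middle slice. The slice count, patch size, and Gaussian width are illustrative assumptions.

```python
import numpy as np

def weighted_average_image_patch(slice_stack, sigma=1.0):
    """Collapse a stack of adjacent slices (n_slices, H, W) into a single
    2D patch using Gaussian weights centered on the middle slice."""
    n = slice_stack.shape[0]
    z = np.arange(n) - (n - 1) / 2.0          # slice offsets from center
    w = np.exp(-0.5 * (z / sigma) ** 2)
    w /= w.sum()                               # normalized Gaussian weights
    return np.tensordot(w, slice_stack, axes=1)

stack = np.random.rand(7, 64, 64)              # 7 slices around a candidate
waip = weighted_average_image_patch(stack)
print(waip.shape)                              # (64, 64)
```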
A Mathematical Motivation for Complex-Valued Convolutional Networks.
Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur
2016-05-01
A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
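A minimal sketch of one stage of the composition described above, operations (1)-(3), in 1D. Using a windowed complex exponential as the filter (as the abstract suggests) turns the stage into a crude windowed spectrum estimator at the chosen frequency; the window, frequency, and pooling width are illustrative assumptions.

```python
import numpy as np

def complex_convnet_layer(x, filt, pool=4):
    """(1) convolution with a complex-valued filter, (2) entrywise
    absolute value, (3) local averaging."""
    y = np.convolve(x, filt, mode="valid")       # complex convolution
    y = np.abs(y)                                # modulus nonlinearity
    n = (len(y) // pool) * pool
    return y[:n].reshape(-1, pool).mean(axis=1)  # local averaging

t = np.arange(32)
filt = np.hanning(32) * np.exp(2j * np.pi * 0.1 * t)   # windowed exponential
signal = np.cos(2 * np.pi * 0.1 * np.arange(256))      # energy at that frequency
print(complex_convnet_layer(signal, filt)[:4])          # near-constant response
```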
Weed Growth Stage Estimator Using Deep Convolutional Neural Networks.
Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl; Mathiassen, Solvejg Kopp; Somerville, Gayle J; Jørgensen, Rasmus Nyholm
2018-05-16
This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) from in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regard to soil types, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution and light conditions. The overall performance of this approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average 70% accuracy rate in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.
Tooth labeling in cone-beam CT using deep convolutional neural network for forensic identification
NASA Astrophysics Data System (ADS)
Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi
2017-03-01
In large disasters, dental records play an important role in forensic identification. However, filing dental charts for corpses is not an easy task for general dentists. Moreover, it is laborious and time-consuming work in cases of large-scale disasters. We have been investigating a tooth labeling method on dental cone-beam CT images for the purpose of automatic filing of dental charts. In our method, individual teeth in CT images are detected and classified into seven tooth types using a deep convolutional neural network. We employed a fully convolutional network using the AlexNet architecture for detecting each tooth and applied our previous method using regular AlexNet for classifying the detected teeth into the seven tooth types. From 52 CT volumes obtained by two imaging systems, five images each were randomly selected as test data, and the remaining 42 cases were used as training data. The result showed a tooth detection accuracy of 77.4% with an average of 5.8 false detections per image. The result indicates the potential utility of the proposed method for automatic recording of dental information.
Yarn-dyed fabric defect classification based on convolutional neural network
NASA Astrophysics Data System (ADS)
Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing
2017-09-01
Considering that manual inspection of the yarn-dyed fabric can be time consuming and inefficient, we propose a yarn-dyed fabric defect classification method by using a convolutional neural network (CNN) based on a modified AlexNet. CNN shows powerful ability in performing feature extraction and fusion by simulating the learning mechanism of human brain. The local response normalization layers in AlexNet are replaced by the batch normalization layers, which can enhance both the computational efficiency and classification accuracy. In the training process of the network, the characteristics of the defect are extracted step by step and the essential features of the image can be obtained from the fusion of the edge details with several convolution operations. Then the max-pooling layers, the dropout layers, and the fully connected layers are employed in the classification model to reduce the computation cost and extract more precise features of the defective fabric. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show promising performance with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.
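A rough PyTorch sketch of the normalization swap described above. Note that torchvision's stock AlexNet omits the original local response normalization (LRN) layers, so this sketch instead inserts BatchNorm2d after each convolution to illustrate the batch-normalized variant; the five-class head is an assumption, not the paper's configuration.

```python
import torch.nn as nn
from torchvision.models import alexnet

# Start from torchvision's AlexNet (randomly initialized, 5 assumed
# defect categories) and insert a BatchNorm2d after every Conv2d.
model = alexnet(num_classes=5)
features = []
for layer in model.features:
    features.append(layer)
    if isinstance(layer, nn.Conv2d):
        features.append(nn.BatchNorm2d(layer.out_channels))
model.features = nn.Sequential(*features)
print(model.features)   # Conv2d -> BatchNorm2d -> ReLU -> ... pattern
```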
NASA Technical Reports Server (NTRS)
Zhang, S. Y.; Wang, G. F.; Wu, Y. T.; Baldwin, K. M. (Principal Investigator)
1993-01-01
On a partition chromatographic column in which the support is Kieselguhr and the stationary phase is sulfuric acid solution (2 mol/L), three components of a compound theophylline tablet were simultaneously eluted by chloroform and three other components were simultaneously eluted by ammonia-saturated chloroform. The two mixtures were determined separately by a computer-aided convolution curve method. The corresponding average recovery and relative standard deviation of the six components were as follows: 101.6%, 1.46% for caffeine; 99.7%, 0.10% for phenacetin; 100.9%, 1.31% for phenobarbitone; 100.2%, 0.81% for theophylline; 99.9%, 0.81% for theobromine; and 100.8%, 0.48% for aminopyrine.
Sedimentation Survey of Lago El Guineo, Puerto Rico, October 2001
Soler-López, Luis R.
2003-01-01
Lago El Guineo has lost about 17.5 percent of its original storage capacity in 70 years because of sediment accumulation. The water volume has been reduced from 2.29 million cubic meters in 1931, to 2.03 million cubic meters in 1986, and to 1.89 million cubic meters in 2001. The average annual storage-capacity loss (equal to the sedimentation rate) of Lago El Guineo was 4,727 cubic meters for the period of 1931 to July 1986 (or 0.21 percent per year), increasing to 5,714 cubic meters for the period of 1931 to October 2001 (or 0.25 percent per year). Discrepancies that could lead to substantial errors in volume calculations in a small reservoir like Lago El Guineo were found when transferring the field-collected data into the geographic information system database for the 1:20,000-scale U.S. Geological Survey Jayuya, Puerto Rico quadrangle. After verification and validation of field data, the Lago El Guineo shoreline was rectified using digital aerial photographs and differential global positioning data.
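Recomputing the reported annual storage-capacity losses from the surveyed volumes above:

```python
v1931, v1986, v2001 = 2.29e6, 2.03e6, 1.89e6    # storage volumes, cubic meters
years_86, years_01 = 1986 - 1931, 2001 - 1931   # ~55 and ~70 years
rate_86 = (v1931 - v1986) / years_86            # 4,727 m^3 per year
rate_01 = (v1931 - v2001) / years_01            # 5,714 m^3 per year
print(f"{rate_86:,.0f} and {rate_01:,.0f} cubic meters per year")
print(f"{100 * rate_01 / v1931:.2f}% of original capacity per year")  # 0.25%
```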
Progress report of southeastern monazite exploration, 1952
Overstreet, W.C.; Theobald, P.K.; White, A.M.; Cuppels, N.P.; Caldwell, D.W.; Whitlow, J.W.
1953-01-01
Reconnaissance of placer monazite during the field season of 1952 covered 6,600 square miles drained by streams in the western Piedmont of Virginia, North Carolina, South Carolina, and Georgia. Emphasis during this investigation was placed on the area between the Savannah River at the border of South Carolina and Georgia and the Catawba River in North Carolina because it contains most of the placers formerly mined for monazite. Four other areas along the strike of the monazite-bearing crystalline rocks were also studied; they center around Mt. Airy, N.C., Athens, Ga., Griffin, Ga., and LaGrange, Ga. In the Savannah River-Catawba River district, studies indicate that even the highest grade stream deposits of more than 10 million cubic yards of alluvium contain less than 1 pound of monazite per cubic yard. The average grade of the better deposits is about 0.5 pound of monazite per cubic yard. Only trace amounts of niobium, tantalum, and tin have been detected in the placers. Tungsten is absent. Locally gold adds a few cents per cubic yard to the value of placer ground. The best deposits range in size from 1 to 5 million cubic yards and contain 1 to 2 pounds of monazite to the cubic yard. Hundreds of placers smaller than 1 million cubic yards exceed 2 pounds of monazite to the cubic yard and locally attain an average of 10 pounds. Monazite deposits around Athens, Ga., are similar to the smaller deposits in the central part of the Savannah River-Catawba River district. A few small, very low-grade monazite placers were found near Mt. Airy, N.C., Griffin, Ga., and LaGrange, Ga., but they are of no economic value. The larger the flood plain and the farther it lies from the source of the stream, the lower is the monazite content of the sediment. Monazite cannot be profitably mined from the crystalline rocks in the five areas. The alluvial placers are in stream sediments of post-Wisconsin age. Some pre-Wisconsin terrace gravel of small areal extent is exposed, but it contains only a small amount of monazite. Pre-Wisconsin to early post-Wisconsin colluvial sediments locally contain 2 pounds of monazite to the cubic yard. The mode of presentation of reports covering field work during the seasons of 1951-52 is given. No further reconnaissance will be undertaken in the western monazite belt.
John R. Brooks
2007-01-01
A technique for estimating stand average dominant height based solely on field inventory data is investigated. Using only 45.0919 percent of the largest trees per acre in the diameter distribution resulted in estimates of average dominant height that were within 4.3 feet of the actual value, when averaged over stands of very different structure and history. Cubic foot...
Electrically conductive material
Singh, J.P.; Bosak, A.L.; McPheeters, C.C.; Dees, D.W.
1993-09-07
An electrically conductive material is described for use in solid oxide fuel cells, electrochemical sensors for combustion exhaust, and various other applications; it possesses increased fracture toughness over available materials while affording the same electrical conductivity. One embodiment of the sintered electrically conductive material consists essentially of cubic ZrO2 as a matrix and 6-19 wt.% monoclinic ZrO2 formed from particles having an average size equal to or greater than about 0.23 micron. Another embodiment of the electrically conductive material consists essentially of cubic ZrO2 as a matrix and 10-30 wt.% partially stabilized zirconia (PSZ) formed from particles having an average size of approximately 3 microns. 8 figures.
XBRL: The New Language of Corporate Financial Reporting
ERIC Educational Resources Information Center
Lester, Wanda F.
2007-01-01
In its purest form, accounting is a method of communication, and many refer to it as the language of business. Although the average citizen might view accounting as a convoluted set of complex standards, the recent abuses of data have resulted in legislation and investor demands for timely and relevant information. In addition, global requirements…
DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection.
Ouyang, Wanli; Zeng, Xingyu; Wang, Xiaogang; Qiu, Shi; Luo, Ping; Tian, Yonglong; Li, Hongsheng; Yang, Shuo; Wang, Zhe; Li, Hongyang; Loy, Chen Change; Wang, Kun; Yan, Junjie; Tang, Xiaoou
2016-07-07
In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures and training strategies, and by adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean average precision obtained by RCNN [16], which was the state-of-the-art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provides a global view for people to understand the deep learning object detection pipeline.
High Fidelity Simulation of Transcritical Liquid Jet in Crossflow
NASA Astrophysics Data System (ADS)
Li, Xiaoyi; Soteriou, Marios
2017-11-01
Transcritical injection of liquid fuel occurs in many practical applications such as diesel, rocket and gas turbine engines. In these applications, the liquid fuel, with a supercritical pressure and a subcritical temperature, is introduced into an environment where both the pressure and temperature exceeds the critical point of the fuel. The convoluted physics of the transition from subcritical to supercritical conditions poses great challenges for both experimental and numerical investigations. In this work, numerical simulation of a binary system of a subcritical liquid injecting into a supercritical gaseous crossflow is performed. The spatially varying fluid thermodynamic and transport properties are evaluated using established cubic equation of state and extended corresponding state principles with established mixing rules. To efficiently account for the large spatial gradients in property variations, an adaptive mesh refinement technique is employed. The transcritical simulation results are compared with the predictions from the traditional subcritical jet atomization simulations.
Fowler, K.K.; Wilson, J.T.
1995-01-01
Surveys of the instream pond determined that the volume of sediment delivered to the pond from April 1993 to April 1994 was approximately 26,500 cubic feet. The average volume weight of the sediment was determined to be 102 pounds per cubic foot. The sediment yield for the upper reach of Juday Creek from April 1993 to April 1994 was estimated to be 48 tons per square mile.
Paul F. Doruska; David W. Patterson; Travis E. Posey
2006-01-01
A study was undertaken to investigate and report scaling factor variation for loblolly pine sawtimber in the Coastal Plain of Arkansas. Scaling factors for butt logs averaged 65.6 pounds per cubic foot for trees in stands of naturally seeded origin and 65.0 pounds per cubic foot for plantation trees. These scaling factors were not significantly different by stand...
Schauer, Kevin L.; Freund, Dana M.; Prenni, Jessica E.
2013-01-01
Metabolic acidosis is a relatively common pathological condition that is defined as a decrease in blood pH and bicarbonate concentration. The renal proximal convoluted tubule responds to this condition by increasing the extraction of plasma glutamine and activating ammoniagenesis and gluconeogenesis. The combined processes increase the excretion of acid and produce bicarbonate ions that are added to the blood to partially restore acid-base homeostasis. Only a few cytosolic proteins, such as phosphoenolpyruvate carboxykinase, have been determined to play a role in the renal response to metabolic acidosis. Therefore, further analysis was performed to better characterize the response of the cytosolic proteome. Proximal convoluted tubule cells were isolated from rat kidney cortex at various times after onset of acidosis and fractionated to separate the soluble cytosolic proteins from the remainder of the cellular components. The cytosolic proteins were analyzed using two-dimensional liquid chromatography and tandem mass spectrometry (MS/MS). Spectral counting along with average MS/MS total ion current were used to quantify temporal changes in relative protein abundance. In all, 461 proteins were confidently identified, of which 24 exhibited statistically significant changes in abundance. To validate these techniques, several of the observed abundance changes were confirmed by Western blotting. Data from the cytosolic fractions were then combined with previous proteomic data, and pathway analyses were performed to identify the primary pathways that are activated or inhibited in the proximal convoluted tubule during the onset of metabolic acidosis.
Volumetric multimodality neural network for brain tumor segmentation
NASA Astrophysics Data System (ADS)
Silvana Castillo, Laura; Alexandra Daza, Laura; Carlos Rivera, Luis; Arbeláez, Pablo
2017-11-01
Brain lesion segmentation is one of the hardest tasks to be solved in computer vision with an emphasis on the medical field. We present a convolutional neural network that produces a semantic segmentation of brain tumors, capable of processing volumetric data along with information from multiple MRI modalities at the same time. This results in the ability to learn from small training datasets and highly imbalanced data. Our method is based on DeepMedic, the state of the art in brain lesion segmentation. We develop a new architecture with more convolutional layers, organized in three parallel pathways with different input resolution, and additional fully connected layers. We tested our method over the 2015 BraTS Challenge dataset, reaching an average dice coefficient of 84%, while the standard DeepMedic implementation reached 74%.
Noise-enhanced convolutional neural networks.
Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart
2016-06-01
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives.
Topographic Map of Quadrangle 3568, Polekhomri (503) and Charikar (504) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
This map was produced from several larger digital datasets. Topography was derived from Shuttle Radar Topography Mission (SRTM) 85-meter digital data. Gaps in the original dataset were filled with data digitized from contours on 1:200,000-scale Soviet General Staff Sheets (1978-1997). Contours were generated by cubic convolution averaged over four pixels using TNTmips surface-modeling capabilities. Minor artifacts resulting from the auto-contouring technique are present. Streams were auto-generated from the SRTM data in TNTmips as flow paths. Flow paths were limited in number by their Horton value on a quadrangle-by-quadrangle basis. Peak elevations were averaged over an area measuring 85 m by 85 m (represented by one pixel), and they are slightly lower than the highest corresponding point on the ground. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Because cultural features were not derived from the SRTM base, they do not match it precisely. Province boundaries are not exactly located. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The open-file report (OFR) numbers for each quadrangle range in sequence from 1092 - 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Topographic Map of Quadrangle 3464, Shahrak (411) and Kasi (412) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
Topographic Map of Quadrangle 3364, Pasa-Band (417) and Kejran (418) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
Topographic Map of Quadrangle 3366, Gizab (513) and Nawer (514) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
Topographic Map of Quadrangle 3462, Herat (409) and Chesht-Sharif (410) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
Topographic Map of Quadrangle 3362, Shin-Dand (415) and Tulak (416) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
Topographic Map of Quadrangle 3670, Jam-Kashem (223) and Zebak (224) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
Topographic Map of Quadrangle 3466, Lal-Sarjangal (507) and Bamyan (508) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
Topographic Map of Quadrangle 3164, Lashkargah (605) and Kandahar (606) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
Topographic Map of Quadrangle 3162, Chakhansur (603) and Kotalak (604) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
Topographic Map of Quadrangle 3166, Jaldak (701) and Maruf-Nawa (702) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
Topographic Map of Quadrangle 3266, Ourzgan (519) and Moqur (520) Quadrangles, Afghanistan
Bohannon, Robert G.
2006-01-01
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
Average luminosity distance in inhomogeneous universes
NASA Astrophysics Data System (ADS)
Kostov, Valentin Angelov
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer rather than over all possible observers (cosmic averaging), so it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovae inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well-known argument for photon flux conservation, the average distance-modulus correction at low redshifts is not zero, owing to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that (1) have approximately constant densities in their interiors and walls and (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision, and it suggests a method to extract the background Hubble constant from low-redshift data without the need to correct for peculiar velocities.
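For reference, the standard relations behind the comparison can be written out; these are textbook formulas (the distance modulus and the Einstein-de Sitter luminosity distance), not expressions taken from the paper:

    \mu(z) = 5\log_{10}\!\frac{d_L(z)}{10\,\mathrm{pc}}, \qquad
    d_L^{\mathrm{EdS}}(z) = \frac{2c}{H_0}\,(1+z)\left(1-\frac{1}{\sqrt{1+z}}\right),

so the quantity under study is the direction-averaged correction \Delta\mu(z) = \langle\mu(z)\rangle_{\mathrm{dir}} - \mu^{\mathrm{EdS}}(z).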
Fifty-year flood-inundation maps for Nacaome, Honduras
Kresch, David L.; Mastin, M.C.; Olsen, T.D.
2002-01-01
After the devastating floods caused by Hurricane Mitch in 1998, maps of the areas and depths of 50-year-flood inundation at 15 municipalities in Honduras were prepared as a tool for agencies involved in reconstruction and planning. This report, which is one in a series of 15, presents maps of areas in the municipality of Nacaome that would be inundated by 50-year floods on Rio Nacaome, Rio Grande, and Rio Guacirope. Geographic Information System (GIS) coverages of the flood inundation are available on a computer in the municipality of Nacaome as part of the Municipal GIS project and on the Internet at the Flood Hazard Mapping Web page (http://mitchnts1.cr.usgs.gov/projects/floodhazard.html). These coverages allow users to view the flood inundation in much more detail than is possible using the maps in this report. Water-surface elevations for 50-year floods on Rio Nacaome, Rio Grande, and Rio Guacirope at Nacaome were computed using HEC-RAS, a one-dimensional, steady-flow, step-backwater computer program. The channel and floodplain cross sections used in HEC-RAS were developed from an airborne light-detection-and-ranging (LIDAR) topographic survey of the area and ground surveys at two bridges. The estimated 50-year-flood discharge for Rio Nacaome at Nacaome, 5,040 cubic meters per second, was computed as the drainage-area-adjusted weighted average of two independently estimated 50-year-flood discharges for the gaging station Rio Nacaome en Las Mercedes, located about 13 kilometers upstream from Nacaome. One of the discharges, 4,549 cubic meters per second, was estimated from a frequency analysis of the 16 years of peak-discharge record for the gage, and the other, 1,922 cubic meters per second, was estimated from a regression equation that relates the 50-year-flood discharge to drainage area and mean annual precipitation. The weighted average of the two discharges is 3,770 cubic meters per second. The 50-year-flood discharges for Rio Grande, 3,890 cubic meters per second, and Rio Guacirope, 1,080 cubic meters per second, were also computed by adjusting the weighted-average 50-year-flood discharge for the Rio Nacaome en Las Mercedes gaging station for the difference in drainage areas between the gage and these river reaches.
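A hedged sketch of the two-step arithmetic described above: a weighted average of the two independent estimates, followed by a drainage-area transfer to the ungaged reach. The weight and the area exponent b are illustrative assumptions, not values from the report.

    def weighted_discharge(q_gage, w_gage, q_regr):
        """Weighted average of a gage-based and a regression-based estimate;
        w_gage (0..1) would reflect the relative reliability of each."""
        return w_gage * q_gage + (1.0 - w_gage) * q_regr

    def area_adjust(q, a_site, a_gage, b=0.8):
        """Transfer a flood-discharge estimate between sites by the ratio of
        their drainage areas; the exponent b is an assumed regional value."""
        return q * (a_site / a_gage) ** b

    # e.g. weighted_discharge(4549, 0.7, 1922) -> roughly 3,761 m^3/s,
    # close to the 3,770 m^3/s weighted average quoted above.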
Recent trends and changes in freshwater discharge into Hudson, James, and Ungava Bays
NASA Astrophysics Data System (ADS)
Déry, S. J.; Stieglitz, M.; McKenna, E.; Wood, E. F.
2004-05-01
Recent trends and changes in the observed river discharge into Hudson, James, and Ungava Bays (HJUBs) for the period 1964-1994 will be presented. Forty-two rivers with outlets into these bays contribute on average 700 cubic kilometers of freshwater per year (= 0.02 sverdrups) to the Arctic Ocean. River discharge attains a mean annual peak of 4.2 cubic kilometers per day on average each 17 June for the system as a whole, whereas the minimum of 0.6 cubic kilometers per day occurs on average each 3 April. The Nelson River supplies as much as 30% of the daily discharge for the entire system during winter but diminishes in relative importance during spring and summer. Runoff rates per contributing area are highest (lowest) on the eastern (western) shores of Hudson and James Bays. Linear trend analyses reveal decreasing discharge in 38 of the 42 rivers over the 31-year period. By 1994, the total annual freshwater discharge into the Arctic Ocean had diminished by 110 cubic kilometers from its 1964 value, equivalent to a reduction of 0.0035 sverdrups. The annual peak discharge rates associated with snowmelt advanced by 16 days between 1964 and 1994 and diminished slightly in intensity. There is a direct correlation between the time of this hydrological event and the latitude of a river's mouth; the timing of the peak discharge rates varies by 5 days for each degree of latitude. Continental snowmelt induces a seasonal pulse of freshwater from HJUBs that is tracked along its path into the Labrador Current and that coincides with ocean salinity anomalies on the inner Newfoundland Shelf. The talk will end with a discussion of the implications of a changing freshwater regime in HJUBs.
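The trend statements above reduce to simple least-squares fits on annual series; a minimal sketch (function and variable names are ours):

    import numpy as np

    def discharge_trend(years, annual_km3):
        """Linear trend in annual discharge; returns the slope
        (km^3/yr per year) and the net change over the record."""
        slope, _intercept = np.polyfit(years, annual_km3, 1)
        return slope, slope * (years[-1] - years[0])

    # Applied river by river, a negative slope indicates declining discharge,
    # as reported for 38 of the 42 rivers over 1964-1994.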
NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin
2017-01-01
We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis of textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large images. First, we randomly collect image patches from an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. Finally, we adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
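A minimal sketch of the correlation-based selection step described above: hidden-unit weight vectors from the sparse autoencoder are kept only if they are not too correlated with any vector already kept. The greedy strategy and the threshold are assumed details.

    import numpy as np

    def select_weights(W, threshold=0.9):
        """Greedy selection over rows of W (one learned weight vector per
        hidden unit), dropping near-duplicates by absolute correlation."""
        corr = np.corrcoef(W)                 # hidden_units x hidden_units
        kept = []
        for i in range(W.shape[0]):
            if all(abs(corr[i, j]) < threshold for j in kept):
                kept.append(i)
        return W[kept], kept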
Convolutional Neural Network for Histopathological Analysis of Osteosarcoma.
Mishra, Rashika; Daescu, Ovidiu; Leavey, Patrick; Rakheja, Dinesh; Sengupta, Anita
2018-03-01
Pathologists often deal with high complexity and sometimes disagreement over osteosarcoma tumor classification due to cellular heterogeneity in the dataset. Segmentation and classification of histology tissue in H&E stained tumor image datasets is a challenging task because of intra-class variations, inter-class similarity, crowded context, and noisy data. In recent years, deep learning approaches have led to encouraging results in breast cancer and prostate cancer analysis. In this article, we propose a convolutional neural network (CNN) as a tool to improve the efficiency and accuracy of osteosarcoma tumor classification into tumor classes (viable tumor, necrosis) versus nontumor. The proposed CNN architecture contains eight learned layers: three sets of stacked two convolutional layers interspersed with max pooling layers for feature extraction, and two fully connected layers, with data augmentation strategies to boost performance. The use of a neural network results in a high average classification accuracy of 92%. We compare the proposed architecture with three existing and proven CNN architectures for image classification: AlexNet, LeNet, and VGGNet. We also provide a pipeline to calculate the percentage of necrosis in a given whole slide image. We conclude that the use of neural networks can assure both high accuracy and efficiency in osteosarcoma classification.
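A sketch of the stated layout: three blocks of two stacked convolutions, each followed by max pooling, then two fully connected layers. Channel widths, kernel sizes, and the 128x128 RGB input are assumptions, not the paper's exact values.

    import torch.nn as nn

    class OsteosarcomaCNN(nn.Module):
        def __init__(self, n_classes=3):   # viable tumor, necrosis, nontumor
            super().__init__()
            def block(c_in, c_out):
                return nn.Sequential(
                    nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2))
            self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
            self.classify = nn.Sequential(
                nn.Flatten(), nn.Linear(128 * 16 * 16, 256), nn.ReLU(),
                nn.Linear(256, n_classes))

        def forward(self, x):              # x: (batch, 3, 128, 128)
            return self.classify(self.features(x))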
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-18
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time-series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation.
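The framework's core idea, convolutional feature extraction feeding LSTM layers that model the temporal dynamics, can be sketched compactly; filter counts and layer sizes here are assumptions:

    import torch.nn as nn

    class ConvLSTMHAR(nn.Module):
        def __init__(self, n_channels, n_classes, hidden=128):
            super().__init__()
            self.conv = nn.Sequential(     # per-time-step feature extraction
                nn.Conv1d(n_channels, 64, 5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, 5, padding=2), nn.ReLU())
            self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):              # x: (batch, sensor_channels, time)
            f = self.conv(x).transpose(1, 2)   # -> (batch, time, 64)
            out, _ = self.lstm(f)
            return self.head(out[:, -1])       # classify from the last step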
Preliminary evaluation of the feasibility of artificial recharge in northern Qatar
Vecchioli, John
1976-01-01
Fresh ground water in northern Qatar occurs as a lens in limestone and dolomite of Eocene age. Natural recharge from precipitation averages 17×10⁶ cubic metres per year, whereas current discharge averages 26.6×10⁶ cubic metres per year. Depletion of storage is accompanied by a deterioration in quality due to encroachment of salty water from the Gulf and from underlying formations. Artificial recharge with desalted sea water to permit additional agricultural development appears technically feasible, but its practicability needs to be examined further. A hydrogeological appraisal including test drilling, geophysical logging, pumping tests, and a recharge test, coupled with engineering analysis of direct surface storage/distribution of desalted sea water versus aquifer storage/distribution, is recommended.
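The stated water balance implies an annual storage deficit; trivial arithmetic, our calculation:

    recharge_m3 = 17.0e6      # natural recharge, cubic metres per year
    discharge_m3 = 26.6e6     # current discharge, cubic metres per year
    deficit_m3 = discharge_m3 - recharge_m3   # 9.6e6 m^3/yr drawn from storage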
Detection and diagnosis of colitis on computed tomography using deep convolutional neural networks.
Liu, Jiamin; Wang, David; Lu, Le; Wei, Zhuoshi; Kim, Lauren; Turkbey, Evrim B; Sahiner, Berkman; Petrick, Nicholas A; Summers, Ronald M
2017-09-01
Colitis refers to inflammation of the inner lining of the colon that is frequently associated with infection and allergic reactions. In this paper, we propose deep convolutional neural network methods for lesion-level colitis detection and a support vector machine (SVM) classifier for patient-level colitis diagnosis on routine abdominal CT scans. The recently developed Faster Region-based Convolutional Neural Network (Faster RCNN) is utilized for lesion-level colitis detection. For each 2D slice, rectangular region proposals are generated by region proposal networks (RPN). Then, each region proposal is jointly classified and refined by a softmax classifier and bounding-box regressor. Two convolutional neural networks, the eight-layer ZF net and the 16-layer VGG net, are compared for colitis detection. Finally, for each patient, the detections on all 2D slices are collected and an SVM classifier is applied to develop a patient-level diagnosis. We trained and evaluated our method with 80 colitis patients and 80 normal cases using 4 × 4-fold cross validation. For lesion-level colitis detection, with ZF net, the mean average precisions (mAP) were 48.7% and 50.9% for RCNN and Faster RCNN, respectively. The detection system achieved sensitivities of 51.4% and 54.0% at two false positives per patient for RCNN and Faster RCNN, respectively. With VGG net, Faster RCNN increased the mAP to 56.9% and increased the sensitivity to 58.4% at two false positives per patient. For patient-level colitis diagnosis, with ZF net, the average areas under the ROC curve (AUC) were 0.978 ± 0.009 and 0.984 ± 0.008 for the RCNN and Faster RCNN methods, respectively. The difference was not statistically significant, with P = 0.18. At the optimal operating point, the RCNN method correctly identified 90.4% (72.3/80) of the colitis patients and 94.0% (75.2/80) of normal cases. The sensitivity improved to 91.6% (73.3/80) and the specificity improved to 95.0% (76.0/80) for the Faster RCNN method. With VGG net, Faster RCNN increased the AUC to 0.986 ± 0.007 and increased the diagnosis sensitivity to 93.7% (75.0/80); specificity was unchanged at 95.0% (76.0/80). Colitis detection and diagnosis by deep convolutional neural networks is accurate and promising for future clinical application. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
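A sketch of the patient-level stage only: per-slice detection confidences are pooled into a fixed-length vector and classified with an SVM. The top-k pooling is an assumed aggregation, not necessarily the paper's.

    import numpy as np
    from sklearn.svm import SVC

    def patient_vector(slice_scores, k=5):
        """Top-k detection confidences across all of a patient's slices."""
        s = np.sort(np.asarray(slice_scores, dtype=float))[::-1]
        return np.pad(s, (0, max(0, k - s.size)))[:k]

    # X = np.stack([patient_vector(s) for s in per_patient_scores])
    # clf = SVC(kernel="rbf", probability=True).fit(X, y)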
[An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].
Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu
2016-04-01
The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline interpolation is then fitted to the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the fitted curve from the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient from the presented algorithm reached 0.972.
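The described pipeline translates almost directly into code; a sketch under two simplifications (all derivative sign changes are used as fiducial points rather than per-beat extrema, and the filter order is assumed):

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import butter, filtfilt

    def remove_baseline(ecg, fs):
        # amplitude reference: 1.5 Hz high-pass version of the signal
        b, a = butter(2, 1.5 / (fs / 2), btype="highpass")
        hp = filtfilt(b, a, ecg)
        # fiducial points: local maxima/minima via derivative sign changes
        d = np.diff(ecg)
        fid = np.where(np.diff(np.sign(d)) != 0)[0] + 1
        # baseline amplitude at the fiducial points, then cubic-spline fit
        amp = ecg[fid] - hp[fid]
        baseline = CubicSpline(fid, amp)(np.arange(len(ecg)))
        return ecg - baseline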
Iron Oxide Nanospheres and Nanocubes for Magnetic Hyperthermia Therapy: A Comparative Study
NASA Astrophysics Data System (ADS)
Nemati, Z.; Das, R.; Alonso, J.; Clements, E.; Phan, M. H.; Srikanth, H.
2017-06-01
Improving the heating capacity of magnetic nanoparticles (MNPs) for hyperthermia therapy is an important but challenging task. Through a comparative study of the inductive heating properties of spherical and cubic Fe3O4 MNPs with two distinct average volumes (~7000 nm3 and ~80,000 nm3), we demonstrate that, for the small size (~7000 nm3), the cubic MNPs heat better than the spherical MNPs. However, the opposite trend is observed for the larger size (~80,000 nm3). The improvement in heating efficiency in small cubic MNPs (~7000 nm3) can be attributed to enhanced anisotropy and the formation of chain-like aggregates, whereas the decrease of the heating efficiency in large cubic MNPs (~80,000 nm3) has been attributed to stronger aggregation of particles. Physical motion is shown to contribute more to the heating efficiency for spherical than for cubic MNPs when dispersed in water. These findings are of crucial importance in understanding the role of shape anisotropy and optimizing the heating response of magnetic nanostructures for advanced hyperthermia.
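For scale, the two stated particle volumes convert to linear sizes as follows (simple geometry, our arithmetic):

    import numpy as np

    def equivalent_sizes(volume_nm3):
        """Cube edge and equal-volume sphere diameter for a given volume."""
        edge = volume_nm3 ** (1.0 / 3.0)
        diameter = (6.0 * volume_nm3 / np.pi) ** (1.0 / 3.0)
        return edge, diameter

    # equivalent_sizes(7000)  -> edge ~19.1 nm, diameter ~23.7 nm
    # equivalent_sizes(80000) -> edge ~43.1 nm, diameter ~53.5 nm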
Naqvi, Shahid A; D'Souza, Warren D
2005-04-01
Current methods to calculate dose distributions with organ motion can be broadly classified as "dose convolution" and "fluence convolution" methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality, and speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting at a random phase. For 2 cm motion amplitude, we found that a ten-fraction average of the film measurements agreed with the calculated infinite-fraction average to within 2 mm in the isodose curves. The results also corroborate the existing notion that the interfraction dose variability due to the interplay between the MLC motion and breathing motion averages out over typical multifraction treatments. Simulations with motion waveforms more representative of real breathing indicate that the motion can produce penumbral spreading asymmetric about the static dose distributions. Such calculations can help a clinician decide to use, for example, a larger margin in the superior direction than in the inferior direction. In the paper we demonstrate that a 15 min run on a single CPU can readily illustrate the effect of a patient-specific breathing waveform and can guide the physician in making informed decisions about margin expansion and dose escalation.
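A toy 1-D illustration of the continuum phase-sampling idea (no motion discretization). Note that this toy merely shifts a static profile, i.e., it makes the shift-invariant approximation the paper criticizes; the actual method re-traces photons for each sampled isocenter position instead.

    import numpy as np

    def motion_averaged_dose(static_dose, x, amplitude=2.0, n=1000, seed=0):
        """Average a static 1-D dose profile over sinusoidal motion by
        sampling phases from a continuum; x must be increasing."""
        rng = np.random.default_rng(seed)
        shifts = amplitude * np.sin(rng.uniform(0, 2 * np.pi, n))
        acc = np.zeros_like(static_dose, dtype=float)
        for s in shifts:
            acc += np.interp(x, x + s, static_dose)
        return acc / n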
Entanglement-assisted quantum convolutional coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
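Since the construction starts from two arbitrary classical binary convolutional codes, a minimal classical encoder is the natural reference point; below is a standard rate-1/2 encoder with the familiar (7,5) generators, not the quantum construction itself:

    def conv_encode(bits, generators=(0o7, 0o5), K=3):
        """Rate-1/2 binary convolutional encoder with constraint length K;
        each input bit yields one output bit per generator polynomial."""
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & ((1 << K) - 1)
            out.extend(bin(state & g).count("1") % 2 for g in generators)
        return out

    # conv_encode([1, 0, 1, 1]) -> [1, 1, 1, 0, 0, 0, 0, 1]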
Channel movement of meandering Indiana streams
Daniel, James F.
1971-01-01
Because yearly above-average discharge volumes were consistent, it was possible to develop a general relation between the increase in channel path length per thousand cubic-feet-per-second-days per square mile of drainage area above average discharge and the width-depth ratio of the channel. Little progress was made toward defining relationships for rotation and translation.
Xu, W; LeBeau, J M
2018-05-01
We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of ∼ 0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute force method while maintaining accuracy. The approach is thus suitable for automatically processing big, 4D STEM data. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction. Copyright © 2018 Elsevier B.V. All rights reserved.
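As a sketch of the measurement stage only (not the authors' released code, which is linked above), a compact CNN regressor mapping an aligned pattern to thickness and two tilt components; the depth, widths, and single-channel input are assumptions:

    import torch.nn as nn

    class PACBEDRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 3))          # thickness, tilt_x, tilt_y

        def forward(self, x):              # x: (batch, 1, H, W)
            return self.net(x)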
Masterson, John P.; Pope, Jason P.; Fienen, Michael N.; Monti, Jr., Jack; Nardi, Mark R.; Finkelstein, Jason S.
2016-08-31
The U.S. Geological Survey developed a groundwater flow model for the Northern Atlantic Coastal Plain aquifer system from Long Island, New York, to northeastern North Carolina as part of a detailed assessment of the groundwater availability of the area and included an evaluation of how these resources have changed over time from stresses related to human uses and climate trends. The assessment was necessary because of the substantial dependency on groundwater for agricultural, industrial, and municipal needs in this area. The three-dimensional groundwater flow model developed for this investigation used the numerical code MODFLOW–NWT to represent changes in groundwater pumping and aquifer recharge from predevelopment (before 1900) to future conditions, from 1900 to 2058. The model was constructed using existing hydrogeologic and geospatial information to represent the aquifer system geometry, boundaries, and hydraulic properties of the 19 separate regional aquifers and confining units within the Northern Atlantic Coastal Plain aquifer system and was calibrated using an inverse modeling parameter-estimation (PEST) technique. The parameter estimation process was achieved through history matching, using observations of heads and flows for both steady-state and transient conditions. A total of 8,868 annual water-level observations from 644 wells from 1986 to 2008 were combined into 29 water-level observation groups that were chosen to focus the history matching on specific hydrogeologic units in geographic areas in which distinct geologic and hydrologic conditions were observed. In addition to absolute water-level elevations, the water-level differences between individual measurements were also included in the parameter estimation process to remove the systematic bias caused by missing hydrologic stresses prior to 1986. The total average residual of –1.7 feet was normally distributed for all head groups, indicating minimal bias. The average absolute residual value of 12.3 feet is about 3 percent of the total observed water-level range throughout the aquifer system. Streamflow observation data of base flow conditions were derived for 153 sites from the U.S. Geological Survey National Hydrography Dataset Plus and National Water Information System. An average residual of about –8 cubic feet per second and an average absolute residual of about 21 cubic feet per second for a range of computed base flows of about 417 cubic feet per second were calculated for the 122 sites from the National Hydrography Dataset Plus. An average residual of about 10 cubic feet per second and an average absolute residual of about 34 cubic feet per second were calculated for the 568 flow measurements in the 31 sites obtained from the National Water Information System for a range in computed base flows of about 1,141 cubic feet per second. The numerical representation of the hydrogeologic information used in the development of this regional flow model was dependent upon how the aquifer system and simulated hydrologic stresses were discretized in space and time. Lumping hydraulic parameters in space and hydrologic stresses and time-varying observational data in time can limit the capabilities of this tool to simulate how the groundwater flow system responds to changes in hydrologic stresses, particularly at the local scale.
River gain and loss studies for the Red River of the North Basin, North Dakota and Minnesota
Williams-Sether, Tara
2004-01-01
The Dakota Water Resources Act passed by the U.S. Congress in 2000 authorized the Secretary of the Interior to conduct a comprehensive study of future water-quantity and -quality needs of the Red River of the North (Red River) Basin in North Dakota and of possible options to meet those water needs. To obtain the river gain and loss information needed to properly account for available streamflow within the basin, available river gain and loss studies for the Sheyenne, Turtle, Forest, and Park Rivers in North Dakota and the Wild Rice, Sand Hill, Clearwater, South Branch Buffalo, and Otter Tail Rivers in Minnesota were reviewed. Ground-water discharges for the Sheyenne River in a reach between Lisbon and Kindred, N. Dak., were about 28.8 cubic feet per second in 1963 and about 45.0 cubic feet per second in 1986. Estimated monthly net evaporation losses for additional flows to the Sheyenne River from the Missouri River ranged from 1.4 cubic feet per second in 1963 to 51.0 cubic feet per second in 1976. Maximum water losses for a reach between Harvey and West Fargo, N. Dak., for 1956-96 ranged from about 161 cubic feet per second for 1976 to about 248 cubic feet per second for 1977. Streamflow gains of 1 to 1.5 cubic feet per second per mile were estimated for the Wild Rice, Sand Hill, and Clearwater Rivers in Minnesota. The average ground-water discharge for a 5.2-mile reach of the Otter Tail River in Minnesota was about 14.1 cubic feet per second in August 1994. The same reach lost about 14.1 cubic feet per second between February 1994 and June 1994 and about 21.2 cubic feet per second between August 1994 and August 1995.
Mechanical and Thermophysical Properties of Cubic Rock-Salt AlN Under High Pressure
NASA Astrophysics Data System (ADS)
Lebga, Noudjoud; Daoud, Salah; Sun, Xiao-Wei; Bioud, Nadhira; Latreche, Abdelhakim
2018-03-01
Density functional theory, density functional perturbation theory, and the Debye model have been used to investigate the structural, elastic, sound-velocity, and thermodynamic properties of AlN with cubic rock-salt structure under high pressure, yielding the equilibrium structural parameters, equation of state, and elastic constants of this interesting material. The isotropic shear modulus, Pugh ratio, and Poisson's ratio were also investigated carefully. In addition, the longitudinal, transverse, and average elastic wave velocities, phonon contribution to the thermal conductivity, and interesting thermodynamic properties were predicted and analyzed in detail. The results demonstrate that the behavior of the elastic wave velocities under increasing hydrostatic pressure explains the hardening of the corresponding phonons. Based on the elastic stability criteria under pressure, it is found that AlN with cubic rock-salt structure is mechanically stable, even at pressures up to 100 GPa. Analysis of the Pugh ratio and Poisson's ratio revealed that AlN with cubic rock-salt structure behaves in a brittle manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, N; Najafi, M; Hancock, S
Purpose: Robust matching of ultrasound images is a challenging problem as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Thus our objective is to overcome this obstacle by designing and evaluating an image-block matching framework based on a two-channel deep convolutional neural network. Methods: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]. To obtain the similarity between two 3D image blocks A and B, the 3D image blocks are divided into 2D patches Ai and Bi. The similarity is then calculated as the average similarity score of Ai and Bi. The neural network was then trained with public non-medical image pairs, and subsequently evaluated on ultrasound image blocks for the following scenarios: (S1) same image blocks with/without shifts (A and A-shift-x); (S2) non-related random block pairs; (S3) ground truth registration matched pairs of different ultrasound images with/without shifts (A-i and A-reg-i-shift-x). Results: For S1 the similarity scores of A and A-shift-x were 32.63, 18.38, 12.95, 9.23, 2.15 and 0.43 for x ranging from 0 mm to 10 mm in 2 mm increments. For S2 the average similarity score for non-related block pairs was −1.15. For S3 the average similarity score of ground truth registration matched blocks A-i and A-reg-i-shift-0 (1≤i≤5) was 12.37. After translating A-reg-i-shift-0 by 0 mm, 2 mm, 4 mm, 6 mm, 8 mm, and 10 mm, the average similarity scores of A-i and A-reg-i-shift-x were 11.04, 8.42, 4.56, 2.27, and 0.29 respectively. Conclusion: The proposed method correctly assigns highest similarity to corresponding 3D ultrasound image blocks despite differences in image content and thus can form the basis for ultrasound image registration and tracking. [1] Zagoruyko, Komodakis, "Learning to compare image patches via convolutional neural networks", IEEE CVPR 2015, pp. 4353–4361.
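A hedged sketch of the block-similarity decomposition described in the Methods: the 3D block score is the average of per-slice 2D scores. The trained two-channel CNN is replaced here by a stand-in zero-normalized cross-correlation scorer, and all names and shapes are illustrative.

```python
import numpy as np

def ncc(a, b):
    """Stand-in similarity scorer: zero-normalized cross-correlation.
    The paper uses a trained two-channel CNN in this role instead."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def block_similarity(A, B, score=ncc):
    """Similarity of two 3-D blocks as the average similarity of their
    corresponding 2-D slices, mirroring the paper's decomposition."""
    assert A.shape == B.shape
    return np.mean([score(A[i], B[i]) for i in range(A.shape[0])])

# Toy check: a block matches itself better than a laterally shifted copy.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 32, 32))
print(block_similarity(A, A))                  # ~1.0
print(block_similarity(A, np.roll(A, 4, 2)))   # noticeably lower
```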
NASA Technical Reports Server (NTRS)
Haines, Jennifer C.; Chen, Lung-Wen A.; Taubman, Brett F.; Doddridge, Bruce G.; Dickerson, Russell R.
2007-01-01
Reliable determination of the effects of air quality on public health and the environment requires accurate measurement of PM2.5 mass and the individual chemical components of fine aerosols. This study seeks to evaluate PM2.5 measurements that are part of a newly established national network by comparing them with a more conventional sampling system. Experiments were carried out during 2002 at a suburban site in Maryland, United States, where two samplers from the U.S. Environmental Protection Agency (USEPA) Speciation Trends Network: Met One Speciation Air Sampling System STNS and Thermo Scientific Reference Ambient Air Sampler STNR, two Desert Research Institute Sequential Filter Samplers DRIF, and a continuous TEOM monitor (Thermo Scientific Tapered Element Oscillating Microbalance) were sampling air in parallel. These monitors differ not only in sampling configuration but also in protocol-specific sample analysis procedures. Measurements of PM2.5 mass and major contributing species were well correlated among the different methods with r-values > 0.8. Despite the good correlations, daily concentrations of PM2.5 mass and major contributing species were significantly different at the 95% confidence level from 5 to 100% of the time. Larger values of PM2.5 mass and individual species were generally reported from STNR and STNS. The January STNR average PM2.5 mass (8.8 µg per cubic meter) was 1.5 µg per cubic meter larger than the DRIF average mass. The July STNS average PM2.5 mass (27.8 µg per cubic meter) was 3.8 µg per cubic meter larger than the DRIF average mass. These differences can only be partially accounted for by known random errors. Variations in flow control, face velocity, and sampling artifacts likely influence the measurement of PM2.5 speciation and mass closure. Simple statistical tests indicate that the current uncertainty estimates used in the STN network may underestimate the actual uncertainty.
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
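For concreteness, a sketch of the background-correction conditional expectation under the exponential-normal convolution model, using the closed form commonly quoted for RMA. The parameter values below are toy numbers; estimating mu, sigma, and alpha well is precisely the issue the paper examines.

```python
import numpy as np
from scipy.stats import norm

def rma_background_correct(o, mu, sigma, alpha):
    """Conditional expectation E[S | O = o] under the convolution model
    O = S + N, with signal S ~ Exp(alpha) and noise N ~ Normal(mu, sigma^2).
    This is the estimator form usually quoted for RMA; how its parameters
    are estimated is what the paper scrutinizes."""
    a = o - mu - sigma ** 2 * alpha
    b = sigma
    num = norm.pdf(a / b) - norm.pdf((o - a) / b)
    den = norm.cdf(a / b) + norm.cdf((o - a) / b) - 1.0
    return a + b * num / den

pm = np.array([120.0, 300.0, 2500.0])    # toy PM intensities
print(rma_background_correct(pm, mu=100.0, sigma=30.0, alpha=1 / 1000))
```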
Myocardial scar segmentation from magnetic resonance images using convolutional neural network
NASA Astrophysics Data System (ADS)
Zabihollahy, Fatemeh; White, James A.; Ukwatta, Eranga
2018-02-01
Accurate segmentation of the myocardial fibrosis or scar may provide important advancements for the prediction and management of malignant ventricular arrhythmias in patients with cardiovascular disease. In this paper, we propose a semi-automated method for segmentation of myocardial scar from late gadolinium enhancement magnetic resonance image (LGE-MRI) using a convolutional neural network (CNN). In contrast to image intensity-based methods, CNN-based algorithms have the potential to improve the accuracy of scar segmentation through the creation of high-level features from a combination of convolutional, detection and pooling layers. Our developed algorithm was trained using 2,336,703 image patches extracted from 420 slices of five 3D LGE-MR datasets, then validated on 2,204,178 patches from a testing dataset of seven 3D LGE-MR images including 624 slices, all obtained from patients with chronic myocardial infarction. For evaluation of the algorithm, we compared the algorithm-generated segmentations to manual delineations by experts. Our CNN-based method reported an average Dice similarity coefficient (DSC), precision, and recall of 94.50 +/- 3.62%, 96.08 +/- 3.10%, and 93.96 +/- 3.75% as the accuracy of segmentation, respectively. As compared to several intensity threshold-based methods for scar segmentation, the results of our developed method have a greater agreement with manual expert segmentation.
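The three reported accuracy figures are standard overlap metrics; a small sketch of how they are computed from binary masks (not the authors' code):

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Dice similarity coefficient, precision, and recall for binary masks,
    the three accuracy figures reported in the abstract."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()     # true-positive pixel count
    return (2 * tp / (pred.sum() + truth.sum()),   # Dice
            tp / pred.sum(),                       # precision
            tp / truth.sum())                      # recall

# Toy masks: 12 of 16 predicted pixels overlap the 16 true pixels.
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), bool); truth[3:7, 2:6] = True
print([round(m, 3) for m in overlap_metrics(pred, truth)])  # [0.75, 0.75, 0.75]
```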
Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval.
Wei, Xiu-Shen; Luo, Jian-Hao; Wu, Jianxin; Zhou, Zhi-Hua
2017-06-01
Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone in the unsupervised retrieval task. We propose the selective convolutional descriptor aggregation (SCDA) method. The SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and reduced in dimensionality into a short feature vector using the best practices we found. The SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained data sets confirm the effectiveness of the SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval data sets, the SCDA achieves comparable retrieval results with the state-of-the-art general image retrieval approaches.
Quantization and training of object detection networks with low-precision weights and activations
NASA Astrophysics Data System (ADS)
Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie
2018-01-01
As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Evaluated on the tiny you-only-look-once (YOLO) and YOLO architectures, the proposed method achieves comparable accuracy to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
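A minimal sketch of symmetric uniform weight quantization, the kind of fixed-point mapping such methods optimize. The clipping rule and bit widths here are assumptions, not the paper's adaptive intervals derived from piecewise Gaussian fits.

```python
import numpy as np

def quantize_uniform(w, bits, clip=None):
    """Symmetric uniform quantizer mapping weights onto signed integer levels.
    'clip' plays the role of the distribution-adaptive interval the paper
    derives; here we default to max|w| as a naive stand-in."""
    clip = clip or np.abs(w).max()
    levels = 2 ** (bits - 1) - 1            # e.g. 7 levels each side for 4-bit
    step = clip / levels
    q = np.clip(np.round(w / step), -levels, levels)
    return q * step                          # dequantized ("fake quantized") weights

w = np.random.default_rng(1).normal(0.0, 0.05, 10000)
for b in (8, 4):
    err = np.mean((w - quantize_uniform(w, b)) ** 2)
    print(f"{b}-bit quantization MSE: {err:.2e}")
```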
Semantic segmentation of mFISH images using convolutional networks.
Pardo, Esteban; Morgado, José Mário T; Malpica, Norberto
2018-04-30
Multicolor in situ hybridization (mFISH) is a karyotyping technique used to detect major chromosomal alterations using fluorescent probes and imaging techniques. Manual interpretation of mFISH images is a time-consuming step that can be automated using machine learning; in previous works, pixel- or patch-wise classification was employed, overlooking spatial information which can help identify chromosomes. In this work, we propose a fully convolutional semantic segmentation network for the interpretation of mFISH images, which uses both spatial and spectral information to classify each pixel in an end-to-end fashion. The semantic segmentation network developed was tested on samples extracted from a public dataset using cross validation. Despite having no labeling information of the image it was tested on, our algorithm yielded an average correct classification ratio (CCR) of 87.41%. Previously, this level of accuracy was only achieved with state-of-the-art algorithms when classifying pixels from the same image in which the classifier had been trained. These results provide evidence that fully convolutional semantic segmentation networks may be employed in the computer-aided diagnosis of genetic diseases with improved performance over the current image analysis methods.
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-01
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. The results also show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation. PMID:26797612
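A hedged PyTorch sketch of the convolutional-plus-LSTM pattern the framework describes: temporal convolutions over raw multimodal sensor channels followed by recurrent layers. Layer counts, filter sizes, and the channel/class figures are illustrative placeholders, not the published architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMHAR(nn.Module):
    """Minimal DeepConvLSTM-style model: 1-D convolutions extract local
    features from raw sensor channels, then LSTM layers model the sequence."""
    def __init__(self, n_channels=113, n_classes=18, n_filters=64, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, n_filters, kernel_size=5), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=5), nn.ReLU(),
        )
        self.lstm = nn.LSTM(n_filters, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, channels)
        z = self.conv(x.transpose(1, 2))         # -> (batch, filters, time')
        out, _ = self.lstm(z.transpose(1, 2))    # -> (batch, time', hidden)
        return self.head(out[:, -1])             # classify from the last step

logits = ConvLSTMHAR()(torch.randn(4, 64, 113))  # a 64-sample sliding window
print(logits.shape)                              # torch.Size([4, 18])
```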
Automatic lung nodule graph cuts segmentation with deep learning false positive reduction
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang Bill; Qian, Wei
2017-03-01
To automatically detect lung nodules from CT images, we designed a two-stage computer-aided detection (CAD) system. The first stage is graph cuts segmentation to identify and segment the nodule candidates, and the second stage is a convolutional neural network for false positive reduction. The dataset contains 595 CT cases randomly selected from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI), and the 305 pulmonary nodules for which all four experienced radiologists reached diagnostic consensus were our detection targets. Considering each slice as an individual sample, 2844 nodules were included in our database. The graph cuts segmentation was conducted in a two-dimensional manner, and 2733 lung nodule ROIs were successfully identified and segmented. With false positive reduction by a seven-layer convolutional neural network, 2535 nodules remained detected while the false positive rate dropped to 31.6%. The average F-measure of segmented lung nodule tissue was 0.8501.
NASA Astrophysics Data System (ADS)
Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng
2018-02-01
The vibration signals collected from rolling bearing are usually complex and non-stationary with heavy background noise. Therefore, it is a great challenge to efficiently learn the representative fault features of the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearing. Firstly, CS is adopted for reducing the vibration data amount to improve analysis efficiency. Secondly, a new CDBN model is constructed with Gaussian visible units to enhance the feature learning ability for the compressed data. Finally, exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze the experimental rolling bearing vibration signals. The results confirm that the developed method is more effective than the traditional methods.
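The EMA step mentioned for improving generalization is, at its core, a one-line update; a sketch under the assumption that it is applied to model parameters during training:

```python
import random

def ema_update(avg_params, params, beta=0.999):
    """One exponential-moving-average step over a list of parameters:
    avg <- beta * avg + (1 - beta) * current. How the paper wires EMA into
    CDBN training is not specified here; this is just the update rule."""
    return [beta * a + (1.0 - beta) * p for a, p in zip(avg_params, params)]

# Toy usage: smoothing a noisy scalar "parameter" trajectory toward 1.0.
avg = [0.0]
for _ in range(1000):
    x = 1.0 + random.gauss(0.0, 0.3)
    avg = ema_update(avg, [x], beta=0.99)
print(round(avg[0], 3))   # ~1.0 after burn-in
```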
Towards dense volumetric pancreas segmentation in CT using 3D fully convolutional networks
NASA Astrophysics Data System (ADS)
Roth, Holger; Oda, Masahiro; Shimizu, Natsuki; Oda, Hirohisa; Hayashi, Yuichiro; Kitasaka, Takayuki; Fujiwara, Michitaka; Misawa, Kazunari; Mori, Kensaku
2018-03-01
Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-built 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture: one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 +/- 3.8 (range [79.8, 94.8])% in testing, achieving the new state-of-the-art performance in pancreas segmentation on this dataset.
Crowd density estimation based on convolutional neural networks with mixed pooling
NASA Astrophysics Data System (ADS)
Zhang, Li; Zheng, Hong; Zhang, Ying; Zhang, Dongming
2017-09-01
Crowd density estimation is an important topic in the fields of machine learning and video surveillance. Existing methods do not provide satisfactory classification accuracy; moreover, they have difficulty in adapting to complex scenes. Therefore, we propose a method based on convolutional neural networks (CNNs). The proposed method improves performance of crowd density estimation in two key ways. First, we propose a feature pooling method named mixed pooling to regularize the CNNs. It replaces deterministic pooling operations with a learned parameter that combines conventional max pooling and average pooling. Second, we present a classification strategy in which an image is divided into two cells that are categorized separately. The proposed approach was evaluated on three datasets: two ground truth image sequences and the University of California, San Diego, anomaly detection dataset. The results demonstrate that the proposed approach performs more effectively and easily than other methods.
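A minimal sketch of one plausible reading of mixed pooling: a single learnable scalar gating max pooling against average pooling. The paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedPool2d(nn.Module):
    """Mixed pooling: out = lam * maxpool(x) + (1 - lam) * avgpool(x),
    with lam a learned scalar squashed into (0, 1) by a sigmoid."""
    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.k, self.s = kernel_size, stride
        self._lam = nn.Parameter(torch.zeros(1))   # sigmoid(0) = 0.5 to start

    def forward(self, x):
        lam = torch.sigmoid(self._lam)
        return (lam * F.max_pool2d(x, self.k, self.s)
                + (1 - lam) * F.avg_pool2d(x, self.k, self.s))

print(MixedPool2d()(torch.randn(1, 8, 32, 32)).shape)  # (1, 8, 16, 16)
```

Because lam is a parameter, gradient descent chooses the max/average blend per layer instead of fixing it by hand.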
Tweaked residual convolutional network for face alignment
NASA Astrophysics Data System (ADS)
Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu
2017-08-01
We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional networks architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module predicts the landmarks quickly, but accurately enough to serve as a preliminary estimate, by taking a low-resolution version of the detected face holistically as the input. The following Residual Convolutional Networks (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.
Chemical Clarification Methods for Confined Dredged Material Disposal.
1983-07-01
[Fragment of the report's metric conversion table and notation: cubic yards × 0.7645549 = cubic metres; Fahrenheit degrees × 5/9 = Celsius degrees or kelvins; feet × 0.3048 = metres; feet per minute × 0.3048 = metres per minute; ... use zero if unknown in freshwater environments; S.G. = specific gravity of solids, use 2.67 if unknown; Wt. H2O = weight of wet sample and dish, g; ... 62.4 lb/ft³; v = average velocity, ft/sec; absolute viscosity = 2.36 × 10⁻⁵ at 60 °F. The duration t of the mixing is determined by t = L/v (Eq. 6).]
Adams, G.P.; Bergman, D.L.
1996-01-01
Ground water in 1,305 square miles of Quaternary alluvium and terrace deposits along the Cimarron River from Freedom to Guthrie, Oklahoma, is used for irrigation, municipal, stock, and domestic supplies. As much as 120 feet of clay, silt, sand, and gravel form an unconfined aquifer with an average saturated thickness of 28 feet. The 1985-86 water in storage, assuming a specific yield of 0.20, was 4.47 million acre-feet. The aquifer is bounded laterally and underlain by relatively impermeable Permian geologic units. Regional ground-water flow is generally southeast to southwest toward the Cimarron River, except where the flow direction is affected by perennial tributaries. Estimated average recharge to the aquifer is 207 cubic feet per second. Estimated average discharge from the aquifer by seepage and evapotranspiration is 173 cubic feet per second. Estimated 1985 discharge by withdrawals from wells was 24.43 cubic feet per second. Most water in the terrace deposits varied from a calcium bicarbonate to mixed bicarbonate type, with median dissolved-solids concentration of 538 milligrams per liter. Cimarron River water is a sodium chloride type with up to 16,600 milligrams per liter dissolved solids. A finite-difference ground-water flow model was developed and calibrated to test the conceptual model of the aquifer under steady-state conditions. The model was calibrated to match 1985-86 aquifer heads and discharge to the Cimarron River between Waynoka and Dover.
Landsat TM image maps of the Shirase and Siple Coast ice streams, West Antarctica
Ferrigno, Jane G.; Mullins, Jerry L.; Stapleton, Jo Anne; Bindschadler, Robert; Scambos, Ted A.; Bellisime, Lynda B.; Bowell, Jo-Ann; Acosta, Alex V.
1994-01-01
Fifteen 1:250,000- and one 1:1,000,000-scale Landsat Thematic Mapper (TM) image mosaic maps are currently being produced of the West Antarctic ice streams on the Shirase and Siple Coasts. Landsat TM images were acquired between 1984 and 1990 in an area bounded approximately by 78°-82.5°S and 120°-160°W. Landsat TM bands 2, 3 and 4 were combined to produce a single band, thereby maximizing data content and improving the signal-to-noise ratio. The summed single band was processed with a combination of high- and low-pass filters to remove longitudinal striping and normalize solar elevation-angle effects. The images were mosaicked and transformed to a Lambert conformal conic projection using a cubic-convolution algorithm. The projection transformation was controlled with ten weighted geodetic ground-control points and internal image-to-image pass points, with annotation of major glaciological features. The image maps are being published in two formats: conventional printed map sheets and on a CD-ROM.
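The cubic-convolution resampling mentioned here typically means the Keys (1981) piecewise-cubic kernel; a small sketch with the common a = -0.5 kernel and a 1-D interpolation helper (the map production itself used 2-D resampling):

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Keys cubic-convolution interpolation kernel; a = -0.5 is the usual
    choice. Support is |s| < 2, and the weights sum to 1."""
    s = np.abs(s)
    return np.where(s <= 1, (a + 2) * s**3 - (a + 3) * s**2 + 1,
           np.where(s < 2, a * (s**3 - 5 * s**2 + 8 * s - 4), 0.0))

def interp1d_cubic(samples, x):
    """Interpolate unit-spaced samples at fractional position x using the
    four nearest neighbours, clamped at the array ends."""
    i = int(np.floor(x))
    nbrs = np.arange(i - 1, i + 3)
    idx = np.clip(nbrs, 0, len(samples) - 1)
    return float(np.sum(samples[idx] * cubic_kernel(x - nbrs)))

y = np.sin(0.3 * np.arange(20))
print(interp1d_cubic(y, 7.4), np.sin(0.3 * 7.4))   # close agreement
```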
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
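As a concrete instance of the fundamentals discussed, a minimal rate-1/2 binary convolutional encoder with the textbook (7,5) octal generators and constraint length 3; the report's specific codes are not reproduced here.

```python
def conv_encode(bits, g=(0o7, 0o5), K=3):
    """Rate-1/2 binary convolutional encoder: shift each input bit into a
    K-bit register and emit one parity bit per generator polynomial."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):               # flush with K-1 zero tail bits
        state = ((state << 1) | b) & ((1 << K) - 1)
        for gen in g:
            out.append(bin(state & gen).count('1') & 1)  # parity of tapped bits
    return out

# The classic worked example: input 1011 encodes to 11 10 00 01 01 11.
print(conv_encode([1, 0, 1, 1]))
```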
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barraclough, Brendan; Lebron, Sharon; Li, Jonathan G.
2016-05-15
Purpose: To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). Methods: A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit "real" ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. Accuracy of the beam models was evaluated by assessing the 20%–80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. Results: The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Conclusions: Although all three DRFs were found adequate to represent the response of the studied ionization chambers, the Gaussian function was favored due to its superior overall performance. The geometry dependence of the DRFs can be significant for clinical applications involving small fields such as stereotactic radiotherapy.
Barraclough, Brendan; Li, Jonathan G; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2016-05-01
To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit "real" ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. Accuracy of the beam models was evaluated by assessing the 20%-80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Although all three DRFs were found adequate to represent the response of the studied ionization chambers, the Gaussian function was favored due to its superior overall performance. The geometry dependence of the DRFs can be significant for clinical applications involving small fields such as stereotactic radiotherapy.
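The core of the convolution-based approach can be sketched in a few lines: convolve a calculated profile with a candidate DRF and compare 20%-80% penumbra widths. The Gaussian sigma and the toy tanh-edge profile below are illustrative, not fitted values.

```python
import numpy as np

def gaussian_drf(taps_mm, sigma=2.0):
    """Gaussian detector response function; in practice sigma would be fitted
    by minimizing chamber-vs-diode profile differences, as in the paper."""
    k = np.exp(-0.5 * (taps_mm / sigma) ** 2)
    return k / k.sum()

x = np.arange(-40, 40, 0.1)                                        # position [mm]
calc = 0.5 * (np.tanh((x + 10) / 1.5) - np.tanh((x - 10) / 1.5))   # toy TPS profile
chamber_like = np.convolve(calc, gaussian_drf(np.arange(-8, 8.1, 0.1)), 'same')

def pw2080(p):
    """20%-80% width of the left (rising) penumbra, in mm."""
    p = p / p.max()
    rise = (x > -20) & (x < 0)          # strictly monotone segment of the edge
    return np.interp(0.8, p[rise], x[rise]) - np.interp(0.2, p[rise], x[rise])

print(f"calculated: {pw2080(calc):.2f} mm, "
      f"after DRF convolution: {pw2080(chamber_like):.2f} mm")
```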
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devpura, S; Li, H; Liu, C
Purpose: To correlate dose distributions computed using six algorithms for recurrent early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT), with outcome (local failure). Methods: Of 270 NSCLC patients treated with 12 Gy x 4, 20 were found to have local recurrence prior to the 2-year time point. These patients were originally planned with the 1-D pencil beam (1-D PB) algorithm. 4D imaging was performed to manage tumor motion. Regions of local failures were determined from follow-up PET-CT scans. Follow-up CT images were rigidly fused to the planning CT (pCT), and recurrent tumor volumes (Vrecur) were mapped to the pCT. Dose was recomputed, retrospectively, using five algorithms: 3-D PB, collapsed cone convolution (CCC), anisotropic analytical algorithm (AAA), AcurosXB, and Monte Carlo (MC). Tumor control probability (TCP) was computed using the Marsden model (1,2). Patterns of failure were classified as central, in-field, marginal, and distant for Vrecur ≥95% of prescribed dose, 95–80%, 80–20%, and ≤20%, respectively (3). Results: Average PTV D95 (dose covering 95% of the PTV) for 3-D PB, CCC, AAA, AcurosXB, and MC relative to 1-D PB were 95.3±2.1%, 84.1±7.5%, 84.9±5.7%, 86.3±6.0%, and 85.1±7.0%, respectively. TCP values for 1-D PB, 3-D PB, CCC, AAA, AcurosXB, and MC were 98.5±1.2%, 95.7±3.0%, 79.6±16.1%, 79.7±16.5%, 81.1±17.5%, and 78.1±20%, respectively. Patterns of local failures were similar for 1-D and 3-D PB plans, which predicted that the majority of failures occur in central-distal regions, with only ∼15% occurring distantly. However, with convolution/superposition and MC type algorithms, the majority of failures (65%) were predicted to be distant, consistent with the literature. Conclusion: Based on MC and convolution/superposition type algorithms, average PTV D95 and TCP were ∼15% lower than the planned 1-D PB dose calculation. Patterns of failure results suggest that MC and convolution/superposition type algorithms predict different outcomes for patterns of failure relative to PB algorithms. Work supported in part by Varian Medical Systems, Palo Alto, CA.
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
Rawson, Jack; Goss, Richard L.; Rathbun, Ira G.
1980-01-01
A three-phase study was conducted during July and August 1979 to determine the effects of varying release rates through the power-outlet works at Sam Rayburn Reservoir, eastern Texas, on aeration capacity of a 14-mile reach of the Angelina River below Sam Rayburn Dam. The dominant factors that affected the aeration capacity during the study time were time of travel and the dissolved-oxygen deficit of the releases. Aeration was low throughout the study but increased in response to increases in the dissolved-oxygen deficit and the duration of time that the releases were exposed to the atmosphere (time of travel). The average concentration of dissolved oxygen sustained by release of 8,800 cubic feet per second decreased from 5.0 milligrams per liter at a site near the power outlet to 4.8 milligrams per liter at a site about 14 miles downstream; the time of travel averaged about 8 hours. The average concentration of dissolved oxygen in flow sustained by releases of 2,200 cubic feet per second increased from 5.2 to 5.5 milligrams per liter; the time of travel averaged about 20 hours. (USGS)
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
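To see why the FFT shortcut fails here, consider a direct implementation of a dense space-varying 1-D convolution, where each output sample has its own kernel; the O(N*K) cost of this loop is what matrix source coding is designed to reduce. The slowly widening Gaussian blur is an illustrative stand-in for a stray-light-like kernel.

```python
import numpy as np

def space_varying_blur(x, sigma_of_i, radius=25):
    """Direct space-varying convolution: output i uses its own kernel h_i,
    so no single frequency-domain multiplication can implement it."""
    n = len(x)
    y = np.zeros(n)
    taps = np.arange(-radius, radius + 1)
    for i in range(n):
        h = np.exp(-0.5 * (taps / sigma_of_i(i)) ** 2)
        h /= h.sum()
        idx = np.clip(i + taps, 0, n - 1)        # clamp at the boundaries
        y[i] = np.dot(h, x[idx])
    return y

x = np.zeros(400); x[100] = x[300] = 1.0          # two unit impulses
y = space_varying_blur(x, lambda i: 1.0 + 4.0 * i / 400)
print(y[97:104].round(3))                         # narrow response at i=100
print(y[297:304].round(3))                        # wider response at i=300
```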
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngoi, Kuan Hoon; Chia, Chin-Hua, E-mail: chia@ukm.edu.my; Zakaria, Sarani
2015-09-25
We report on the effect of using reducing agents with different chain lengths on the synthesis of iron oxide nanoparticles by thermal decomposition of iron (III) acetylacetonate in 1-octadecene. This modification allows us to control the shape of the nanoparticles, yielding spherical and cubic iron oxide nanoparticles. Highly monodisperse 14 nm spherical nanoparticles are obtained with 1,2-dodecanediol, and cubic iron oxide nanoparticles with an average edge length of 14 nm are obtained with 1,2-tetradecanediol. Structural characterization by transmission electron microscopy (TEM) and X-ray diffraction (XRD) shows similar properties for the two particle shapes. Vibrating sample magnetometry (VSM) shows no significant difference between spherical and cubic nanoparticles, whose saturation magnetizations are 36 emu/g and 37 emu/g, respectively; both are superparamagnetic in nature.
A Firefly-Inspired Method for Protein Structure Prediction in Lattice Models
Maher, Brian; Albrecht, Andreas A.; Loomes, Martin; Yang, Xin-She; Steinhöfel, Kathleen
2014-01-01
We introduce a Firefly-inspired algorithmic approach for protein structure prediction over two different lattice models in three-dimensional space. In particular, we consider three-dimensional cubic and three-dimensional face-centred-cubic (FCC) lattices. The underlying energy models are the Hydrophobic-Polar (H-P) model, the Miyazawa–Jernigan (M-J) model and a related matrix model. The implementation of our approach is tested on ten H-P benchmark problems of length 48 and ten M-J benchmark problems of lengths ranging from 48 to 61. The key complexity parameter we investigate is the total number of objective function evaluations required to achieve the optimum energy values for the H-P model or competitive results in comparison to published values for the M-J model. For H-P instances and cubic lattices, where data for comparison are available, we obtain an average speed-up over eight instances of 2.1, leaving out two extreme values (otherwise, 8.8). For six M-J instances, data for comparison are available for cubic lattices and runs with a population size of 100, where, a priori, the minimum free energy is a termination criterion. The average speed-up over four instances is 1.2 (leaving out two extreme values, otherwise 1.1), which is achieved for a population size of only eight. The present study is a test case with initial results for ad hoc parameter settings, with the aim of justifying future research on larger instances within lattice model settings, eventually leading to the ultimate goal of implementations for off-lattice models. PMID:24970205
A firefly-inspired method for protein structure prediction in lattice models.
Maher, Brian; Albrecht, Andreas A; Loomes, Martin; Yang, Xin-She; Steinhöfel, Kathleen
2014-01-07
We introduce a Firefly-inspired algorithmic approach for protein structure prediction over two different lattice models in three-dimensional space. In particular, we consider three-dimensional cubic and three-dimensional face-centred-cubic (FCC) lattices. The underlying energy models are the Hydrophobic-Polar (H-P) model, the Miyazawa-Jernigan (M-J) model and a related matrix model. The implementation of our approach is tested on ten H-P benchmark problems of length 48 and ten M-J benchmark problems of lengths ranging from 48 to 61. The key complexity parameter we investigate is the total number of objective function evaluations required to achieve the optimum energy values for the H-P model or competitive results in comparison to published values for the M-J model. For H-P instances and cubic lattices, where data for comparison are available, we obtain an average speed-up over eight instances of 2.1, leaving out two extreme values (otherwise, 8.8). For six M-J instances, data for comparison are available for cubic lattices and runs with a population size of 100, where, a priori, the minimum free energy is a termination criterion. The average speed-up over four instances is 1.2 (leaving out two extreme values, otherwise 1.1), which is achieved for a population size of only eight. The present study is a test case with initial results for ad hoc parameter settings, with the aim of justifying future research on larger instances within lattice model settings, eventually leading to the ultimate goal of implementations for off-lattice models.
Learning Hierarchical Feature Extractors for Image Recognition
2012-09-01
... space as a natural criterion for devising better pools. Finally, we propose ways to make coding faster and more powerful through fast convolutional ... parameter is the set of pools over which the summary statistic is computed. We propose locality in feature configuration space as a natural criterion for ... [Figure residue: pooling (dotted lines) is consistently higher than average pooling (solid lines), but the gap is much less significant with the intersection kernel.]
NASA Astrophysics Data System (ADS)
Al-Hallaq, H. A.; Reft, C. S.; Roeske, J. C.
2006-03-01
The dosimetric effects of bone and air heterogeneities in head and neck IMRT treatments were quantified. An anthropomorphic RANDO phantom was CT-scanned with 16 thermoluminescent dosimeter (TLD) chips placed in and around the target volume. A standard IMRT plan generated with CORVUS was used to irradiate the phantom five times. On average, measured dose was 5.1% higher than calculated dose. Measurements were higher by 7.1% near the heterogeneities and by 2.6% in tissue. The dose difference between measurement and calculation was outside the 95% measurement confidence interval for six TLDs. Using CORVUS' heterogeneity correction algorithm, the average difference between measured and calculated doses decreased by 1.8% near the heterogeneities and by 0.7% in tissue. Furthermore, dose differences lying outside the 95% confidence interval were eliminated for five of the six TLDs. TLD doses recalculated by Pinnacle3's convolution/superposition algorithm were consistently higher than CORVUS doses, a trend that matched our measured results. These results indicate that the dosimetric effects of air cavities are larger than those of bone heterogeneities, thereby leading to a higher delivered dose compared to CORVUS calculations. More sophisticated algorithms such as convolution/superposition or Monte Carlo should be used for accurate tailoring of IMRT dose in head and neck tumours.
Sedimentation History of Lago Dos Bocas, Puerto Rico, 1942-2005
Soler-López, Luis R.
2007-01-01
The Lago Dos Bocas Dam, located in the municipality of Utuado in north central Puerto Rico, was constructed in 1942 for hydroelectric power generation. The reservoir had an original storage capacity of 37.50 million cubic meters and a drainage area of 440 square kilometers. In 1948, the construction of the Lago Caonillas Dam on the Rio Caonillas branch of Lago Dos Bocas reduced the natural sediment-contributing drainage area to 310 square kilometers; therefore, the Lago Caonillas Dam is considered an effective sediment trap. Sedimentation in Lago Dos Bocas reservoir has reduced the storage capacity from 37.50 million cubic meters in 1942 to 17.26 million cubic meters in 2005, which represents a storage loss of about 54 percent. The long-term annual water-storage capacity loss rate remained nearly constant at about 320,000 cubic meters per year to about 1997. The inter-survey sedimentation rate between 1997 and 1999, however, is higher than the long-term rate at about 1.09 million cubic meters per year. Between 1999 and 2005 the rate is lower than the long-term rate at about 0.13 million cubic meters per year. The Lago Dos Bocas effective sediment-contributing drainage area had an average sediment yield of about 1,400 cubic meters per square kilometer per year between 1942 and 1997. This rate increased substantially by 1999 to about 4,600 cubic meters per square kilometer per year, probably resulting from the historical magnitude floods caused by Hurricane Georges in 1998. Recent data indicate that the Lago Dos Bocas drainage area sediment yield decreased substantially to about 570 cubic meters per square kilometer per year, which is much lower than the 1942-1997 area normalized sedimentation rate of 1,235 cubic meters per square kilometer per year. The impact of Hurricane Georges on the basin sediment yield could have been the cause of this change, since the magnitude of the floods could have nearly depleted the Lago Dos Bocas drainage area of easily erodible and transportable bed sediment. This report summarizes the historical change in water-storage capacity of Lago Dos Bocas between 1942 and 2005.
Arkansas' timber resources updated, 1975
Roy C. Beltz
1975-01-01
The January 1, 1975, inventory is estimated to be 16.2 billion cubic feet of growing stock; about 60 percent is hardwoods. Both softwood and hardwood volume increased at an average rate of 1 percent per year since 1969.
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
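The complexity measure used in the article, trellis edges per encoded bit, is easy to compute for the conventional trellis; a sketch under the standard convention (2^nu states, 2^k branches leaving each state, n encoded bits per section):

```python
def conventional_trellis_edges_per_bit(k, n, nu):
    """Viterbi complexity of the conventional trellis of a rate-k/n
    convolutional code with total memory nu: each of the 2^nu states has
    2^k outgoing edges, and each trellis section carries n encoded bits.
    Minimal trellises, as the article shows, can do better than this."""
    return (2 ** nu) * (2 ** k) / n

# The classic rate-1/2, memory-2 code: 4 states x 2 edges / 2 bits = 4 edges/bit.
print(conventional_trellis_edges_per_bit(k=1, n=2, nu=2))
```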
Preparation and X-Ray diffraction studies of curium hydrides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, J.K.; Haire, R.G.
Curium hydrides were prepared by reaction of curium-248 metal with hydrogen and characterized by X-ray powder diffraction. Several of the syntheses resulted in a hexagonal compound with average lattice parameters of a0 = 0.3769(8) nm and c0 = 0.6732(12) nm. These products are considered to be CmH3−δ by analogy with the behavior of lanthanide-hydrogen and lighter actinide-hydrogen systems. Face-centered cubic products with an average lattice parameter of a0 = 0.5322(4) nm were obtained from other curium hydride preparations. This parameter is slightly smaller than that reported previously for cubic curium dihydride, CmH2+x (B.M. Bansal and D. Damien, Inorg. Nucl. Chem. Lett., 6, 603, 1970). The present results established a continuation of typical heavy trivalent lanthanide-like behavior of the transuranium actinide-hydrogen systems through curium.
Preparation and X-ray diffraction studies of curium hydrides
NASA Astrophysics Data System (ADS)
Gibson, J. K.; Haire, R. G.
1985-10-01
Curium hydrides were prepared by reaction of curium-248 metal with hydrogen and characterized by X-ray powder diffraction. Several of the syntheses resulted in a hexagonal compound with average lattice parameters of a0 = 0.3769(8) nm and c0 = 0.6732(12) nm. These products are considered to be CmH3−δ by analogy with the behavior of lanthanide-hydrogen and lighter actinide-hydrogen systems. Face-centered cubic products with an average lattice parameter of a0 = 0.5322(4) nm were obtained from other curium hydride preparations. This parameter is slightly smaller than that reported previously for cubic curium dihydride, CmH2+x (B. M. Bansal and D. Damien, Inorg. Nucl. Chem. Lett., 6, 603, 1970). The present results established a continuation of typical heavy trivalent lanthanide-like behavior of the transuranium actinide-hydrogen systems through curium.
Retention time and flow patterns in Lake Marion, South Carolina, 1984
Patterson, G.G.; Harvey, R.M.
1995-01-01
In 1984, six dye tracer tests were made on Lake Marion to determine flow patterns and retention times under conditions of high and low flow. During the high-flow tests, with an average inflow of about 29,000 cubic feet per second, the approximate travel time through the lake for the peak tracer concentration was 14 days. The retention time was about 20 days. During the low-flow tests, with an average inflow of about 9,000 cubic feet per second, the approximate travel time was 41 days, and the retention time was about 60 days. The primary factors controlling movement of water in the lake are lake inflow and outflow. The tracer cloud moved consistently downstream, slowing as the lake widened. Flow patterns in most of the coves, and in some areas along the northeastern shore, are influenced more by tributary inflow than by factors attributable to water from the main body of the lake.
Chapter 6: cubic membranes the missing dimension of cell membrane organization.
Almsherqi, Zakaria A; Landh, Tomas; Kohlwein, Sepp D; Deng, Yuru
2009-01-01
Biological membranes are among the most fascinating assemblies of biomolecules: a bilayer less than 10 nm thick, composed of rather small lipid molecules that are held together simply by noncovalent forces, defines the cell and discriminates between "inside" and "outside", survival, and death. Intracellular compartmentalization, governed by biomembranes as well, is a characteristic feature of eukaryotic cells, which allows them to fulfill multiple and highly specialized anabolic and catabolic functions in strictly controlled environments. Although cellular membranes are generally visualized as flat sheets or closely folded isolated objects, multiple observations also demonstrate that membranes may fold into "unusual", highly organized structures with 2D or 3D periodicity. The obvious correlation of highly convoluted membrane organizations with pathological cellular states, for example, as a consequence of viral infection, deserves close consideration. However, knowledge about formation and function of these highly organized 3D periodic membrane structures is scarce, primarily due to the lack of appropriate techniques for their analysis in vivo. Currently, the only direct way to characterize cellular membrane architecture is by transmission electron microscopy (TEM). However, deciphering the spatial architecture solely based on two-dimensionally projected TEM images is a challenging task and prone to artifacts. In this review, we will provide an update on the current progress in identifying and analyzing 3D membrane architectures in biological systems, with a special focus on membranes with cubic symmetry, and their potential role in physiological and pathophysiological conditions. Proteomics and lipidomics approaches in defined experimental cell systems may prove instrumental to understand formation and function of 3D membrane morphologies.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-23
... standard was set at 15 micrograms per cubic meter (µg/m³), based on the 3-year average of annual... 2.5 standard was set at 65 µg/m³, based on the 3-year average of the 98th percentile of 24... partially approve the submittal based on EPA's independent evaluation of Nevada's impact on receptor states...
Small Hardwoods Reduce Growth of Pine Overstory
Charles X. Grano
1970-01-01
Dense understory hardwoods materially decreased the growth of a 53-year-old and a 47-year-old stand of loblolly and shortleaf pines. Over a 14-year period, hardwood eradication with chemicals increased average annual yield from the 53-year-old stand by 14.3 cubic feet, or 123 board-feet per acre. In the 47-year-old stand the average annual treatment advantage was...
Improving energy efficiency in handheld biometric applications
NASA Astrophysics Data System (ADS)
Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.
2012-06-01
With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating point operations. If a given algorithm implemented integer convolution instead of floating point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared span 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable size looped convolution, static size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt, with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time for which convolution is responsible in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
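As a rough illustration of the integer-versus-floating-point trade-off measured above, the sketch below convolves an image with a scaled integer kernel so that the inner arithmetic stays in integer registers; the kernel, image size, and scale factor are illustrative assumptions, not the RED kernels or the paper's measurement setup.

```python
import numpy as np
from scipy.signal import convolve2d

# Synthetic 8-bit image and an example 3x3 smoothing kernel
# (illustrative only; RED's actual kernels are not reproduced here).
img = np.random.randint(0, 256, (480, 640)).astype(np.int64)
kernel_f = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]]) / 16.0

# Floating-point reference convolution.
out_float = convolve2d(img.astype(np.float64), kernel_f, mode='same')

# Fixed-point variant: scale the kernel to integers, convolve entirely
# in integer arithmetic, and divide the scale back out once at the end.
SCALE = 256
kernel_i = np.rint(kernel_f * SCALE).astype(np.int64)
out_int = convolve2d(img, kernel_i, mode='same') // SCALE

assert np.max(np.abs(out_int - out_float)) < 1  # quantization error only
```

On hardware without a fast floating-point unit, the integer path avoids costly float multiplies at the price of a bounded quantization error, which is the trade-off the paper quantifies in energy terms.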
Development of Ground Water in the Houston District, Texas, 1970-1974
Gabrysch, R.K.
1980-01-01
Total withdrawals of ground water in the Houston district increased 9 percent from about 488 million gallons per day (21.4 cubic meters per second) in 1970 to about 532 million gallons per day (23.3 cubic meters per second) in 1974. The average annual rate of increase from 1960 to 1969 was about 6.3 percent. During 1970-74, increases in pumpage occurred in the Houston, Katy, and NASA areas; decreases occurred in the Pasadena and Alta Loma areas; and the pumpage in the Baytown-La Porte and Texas City areas remained almost constant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuzmina, M S; Khazanov, E A
The problem of laser radiation propagation in a birefringent medium is solved with allowance made for thermally induced linear birefringence under the conditions of cubic nonlinearity. It is shown that at high average and peak radiation powers the degree of isolation in a Faraday isolator is noticeably reduced due to the cubic nonlinearity: by more than an order of magnitude when the B-integral is equal to unity. This effect is substantial for pulses with an energy of 0.2-3 J, duration of 10 ps to 4 ns, and pulse repetition rate of 0.2-40 kHz.
2010-09-01
ENVIRONMENTAL ASSESSMENT FOR... In the ... km2 (3,530 mi2) area that includes the NBAFS, less than two tornadoes occur per year. The localized area affected by a tornado averages only 0.29 km2 (0.11 mi2; Ramsdell and Andrews 1986) (ANL 2000). Air Quality: The State of New Hampshire Ambient Air Quality Standards (SAAQS) are ...
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied, which improves image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, making it well suited to processing images. Using a deep convolutional neural network is better than directly extracting visual features for image retrieval. However, the structure of a deep convolutional neural network is complex, so it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
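A minimal sketch of the two ingredients named above, PReLU activations plus an L1 penalty added to the training loss; the layer sizes, input resolution, and penalty weight are assumptions, since the abstract does not specify the architecture.

```python
import torch
import torch.nn as nn

class RetrievalCNN(nn.Module):
    # Convolutional feature extractor with PReLU activations; the
    # 224x224 input assumption and channel widths are illustrative.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 56 * 56, feat_dim)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def l1_penalty(model, lam=1e-5):
    # L1 regularization term added to the retrieval loss during
    # training; it discourages over-fitting by shrinking weights.
    return lam * sum(p.abs().sum() for p in model.parameters())

# During training: loss = task_loss + l1_penalty(model)
```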
Funkhouser, Jaysson E.; Barks, C. Shane
2003-01-01
A two-dimensional finite-element surface-water model was used to study the effects of the proposed modification to the U.S. Highway 79 corridor on flooding on the White River near Clarendon, Arkansas. The effects of floodflows were simulated for the following scenarios: existing, natural, and four proposed bridging alternatives. All of the scenarios were modeled with floods having the 5- and 100-year recurrence intervals (115,100 and 216,000 cubic feet per second). The simulated existing conditions included a 3,200-foot White River bridge located on the east side of the study area near Clarendon, Arkansas; a 3,700-foot First Old River bridge located 0.5 mile west of the White River bridge opening; and a 1,430-foot Roc Roe Bayou bridge located 1.6 miles west of the First Old River bridge. The simulated hypothetical natural conditions involved removing the U.S. Highway 79 and the Union Pacific Railroad embankments along the entire length of the flood plain. The primary purpose of model simulations for natural conditions was to calculate backwater data for the existing and proposed conditions. The four simulated hypothetical proposed alternatives involved a 1.8-mile White River bridge located on the east side of the study area near Clarendon, Arkansas; either a 1,400-foot relief bridge (Alternative 1) or a 1,545-foot relief bridge (Alternatives 2-4) located 0.25 mile west of the White River bridge opening; and three different Roc Roe Bayou bridge openings ranging from 1,540 to 3,475 feet in length located 0.9 mile west of the relief bridge (Alternatives 1-4). Simulation of the 5-year floodflow for the existing bridge openings indicates that about 57 percent (65,600 cubic feet per second) of flow was conveyed by the White River bridge, about 26 percent (29,900 cubic feet per second) by the First Old River bridge, and about 17 percent (19,600 cubic feet per second) by the Roc Roe Bayou bridge. Maximum depth-averaged point velocities for the White River, First Old River, and Roc Roe Bayou bridges were 3.6, 1.6, and 3.3 feet per second, respectively. For the 100-year floodflow, the simulation indicates that about 56 percent (123,100 cubic feet per second) of flow was conveyed by the White River bridge, about 26 percent (56,200 cubic feet per second) by the First Old River bridge, and about 19 percent (41,000 cubic feet per second) by the Roc Roe Bayou bridge. The maximum depth-averaged point velocities for the White River, First Old River, and Roc Roe Bayou bridges were 4.2, 2.2, and 4.1 feet per second, respectively. Simulation of the 5-year floodflow for the proposed U.S. Highway 79 alignment alternatives indicates that 76-78 percent (87,100-89,900 cubic feet per second) of the flow was conveyed by the proposed White River bridge, 6-7 percent (7,000-7,500 cubic feet per second) by the proposed relief bridge, and 13-16 percent (14,600-18,600 cubic feet per second) by the proposed Roc Roe Bayou bridge. For the 100-year floodflow, simulations predicted that 70-72 percent (151,200-155,600 cubic feet per second) of the flow was conveyed by the proposed White River bridge, 9-10 percent (19,800-20,700 cubic feet per second) by the proposed relief bridge, and 14-20 percent (30,700-43,000 cubic feet per second) by the proposed Roc Roe Bayou bridge.
Bohannon, Robert G.
2006-01-01
This map was produced from several larger digital datasets. Topography was derived from Shuttle Radar Topography Mission (SRTM) 85-meter digital data. Gaps in the original dataset were filled with data digitized from contours on 1:200,000-scale Soviet General Staff Sheets (1978-1997). Contours were generated by cubic convolution averaged over four pixels using TNTmips surface-modeling capabilities. Minor artifacts resulting from the auto-contouring technique are present. Streams were auto-generated from the SRTM data in TNTmips as flow paths. Flow paths were limited in number by their Horton value on a quadrangle-by-quadrangle basis. Peak elevations were averaged over an area measuring 85 m by 85 m (represented by one pixel), and they are slightly lower than the highest corresponding point on the ground. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Because cultural features were not derived from the SRTM base, they do not match it precisely. Province boundaries are not exactly located. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
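The cubic convolution resampling mentioned above weights the four nearest samples along each axis with a piecewise-cubic kernel. A minimal one-dimensional sketch, assuming Keys' standard kernel with a = -0.5 (the map text does not state which variant TNTmips uses):

```python
import numpy as np

def keys_kernel(s, a=-0.5):
    # Piecewise-cubic convolution kernel; nonzero only for |s| < 2,
    # so each output value averages 4 input samples per axis.
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * (s**3 - 5 * s**2 + 8 * s - 4)
    return 0.0

def cubic_interp(samples, x):
    # Interpolate uniformly spaced samples at fractional index x
    # (valid for 1 <= x <= len(samples) - 3).
    i = int(np.floor(x))
    return sum(samples[i + k] * keys_kernel(x - (i + k)) for k in (-1, 0, 1, 2))
```

Image resampling applies the same kernel separably in each direction, which matches the "averaged over four pixels" description in the record.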
Early Yields Of Slash Pine Planted On a Cutover Site At Various Spacings
W.F. Mann
1971-01-01
Tabulates basal areas, cordwood and cubic-foot volumes, average d.b.h., and diameter distributions for 14-year-old slash pine planted in central Louisiana. Also gives regression equations developed to predict these parameters.
NASA Astrophysics Data System (ADS)
Abozeed, Amina A.; Kadono, Toshiharu; Sekiyama, Akira; Fujiwara, Hidenori; Higashiya, Atsushi; Yamasaki, Atsushi; Kanai, Yuina; Yamagami, Kohei; Tamasaku, Kenji; Yabashi, Makina; Ishikawa, Tetsuya; Andreev, Alexander V.; Wada, Hirofumi; Imada, Shin
2018-03-01
We developed a method to experimentally quantify the fourth-order multipole moment of the rare-earth 4f orbital. Linear dichroism (LD) in the Er 3d₅/₂ core-level photoemission spectra of cubic ErCo₂ was measured using bulk-sensitive hard X-ray photoemission spectroscopy. Theoretical calculation reproduced the observed LD, and the result showed that the observed result does not contradict the suggested Γ₈⁽³⁾ ground state. Theoretical calculation further showed a linear relationship between the LD size and the size of the fourth-order multipole moment of the Er³⁺ ion, which is proportional to the expectation value ⟨O₄⁰ + 5O₄⁴⟩, where Oₙᵐ are the Stevens operators. These analyses indicate that the LD in 3d photoemission spectra can be used to quantify the average fourth-order multipole moment of rare-earth atoms in a cubic crystal electric field.
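For reference, the fourth-order Stevens operators entering the cubic combination quoted above have the standard textbook forms (a convention not restated in the abstract):

```latex
O_4^0 = 35 J_z^4 - \left[30 J(J+1) - 25\right] J_z^2
        + 3 J^2 (J+1)^2 - 6 J(J+1), \qquad
O_4^4 = \tfrac{1}{2}\left(J_+^4 + J_-^4\right),
```

so the measured quantity is the expectation value ⟨O₄⁰ + 5O₄⁴⟩ evaluated within the Er³⁺ J = 15/2 ground multiplet.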
NASA Technical Reports Server (NTRS)
Pavlov, Alexander A.
2011-01-01
In its motion through the Milky Way galaxy, the solar system encounters an average-density (≥330 H atoms/cubic cm) giant molecular cloud (GMC) approximately every 10⁸ years, and a dense (approx. 2 × 10³ H atoms/cubic cm) GMC every approx. 10⁹ years, and will inevitably encounter them in the future. However, there have been no studies linking such events with severe (snowball) glaciations in Earth history. Here we show that dramatic climate change can be caused by interstellar dust accumulating in Earth's atmosphere during the solar system's immersion into a dense (approx. 2 × 10³ H atoms/cubic cm) GMC. The stratospheric dust layer from such interstellar particles could provide enough radiative forcing to trigger the runaway ice-albedo feedback that results in global snowball glaciations. We also demonstrate that more frequent collisions with less dense GMCs could cause moderate ice ages.
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of hyperspectral images. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly sleeps neurons and contributes a modest improvement in classification accuracy. In addition, recent deep learning techniques such as the ReLU activation are utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
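A minimal sketch of a multi-scale convolution layer of the kind described: three parallel convolutions with different kernel sizes whose outputs are concatenated, followed by dropout. The specific sizes (1, 3, 5), channel counts, and dropout rate are assumptions, since the abstract states only that 3 kernel sizes are used.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    # Parallel convolutions at three kernel sizes; same-padding keeps
    # the spatial dimensions aligned so the branches can be concatenated.
    def __init__(self, c_in, c_out):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c_in, c_out, k, padding=k // 2) for k in (1, 3, 5)
        )
        self.act = nn.ReLU()
        self.drop = nn.Dropout(0.5)   # dropout against over-fitting

    def forward(self, x):
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return self.drop(self.act(y))
```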
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
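For readers unfamiliar with the (n,k) notation, the sketch below encodes with the simplest common case, a rate-1/2, i.e. (2,1), feedforward convolutional code using the classic (7,5) octal generator pair. It illustrates the object the article's algebraic machinery (degree, Forney indices, minimal generator matrices) describes, not the algorithms themselves.

```python
def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    # Rate-1/2 feedforward convolutional encoder over GF(2) with the
    # (7,5) octal generators, constraint length 3 (no trellis flush).
    K = len(gens[0])                       # constraint length
    state = [0] * (K - 1)                  # shift-register contents
    out = []
    for b in bits:
        window = [b] + state               # current bit + past bits
        for g in gens:                     # one output bit per generator
            out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
        state = window[:-1]                # shift the register
    return out

# Example: 4 information bits yield 8 code bits (rate 1/2).
print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
```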
Potential fault region detection in TFDS images based on convolutional neural network
NASA Astrophysics Data System (ADS)
Sun, Junhua; Xiao, Zhongwen
2016-10-01
In recent years, more than 300 sets of Trouble of Running Freight Train Detection System (TFDS) have been installed on railway to monitor the safety of running freight trains in China. However, TFDS is simply responsible for capturing, transmitting, and storing images, and fails to recognize faults automatically due to some difficulties such as such as the diversity and complexity of faults and some low quality images. To improve the performance of automatic fault recognition, it is of great importance to locate the potential fault areas. In this paper, we first introduce a convolutional neural network (CNN) model to TFDS and propose a potential fault region detection system (PFRDS) for simultaneously detecting four typical types of potential fault regions (PFRs). The experimental results show that this system has a higher performance of image detection to PFRs in TFDS. An average detection recall of 98.95% and precision of 100% are obtained, demonstrating the high detection ability and robustness against various poor imaging situations.
NASA Astrophysics Data System (ADS)
Chen, Zhongjing; Zhang, Xing; Pu, Yudong; Yan, Ji; Huang, Tianxuan; Jiang, Wei; Yu, Bo; Chen, Bolun; Tang, Qi; Song, Zifeng; Chen, Jiabin; Zhan, Xiayu; Liu, Zhongjie; Xie, Xufei; Jiang, Shaoen; Liu, Shenye
2018-02-01
The accuracy of the determination of the burn-averaged ion temperature of inertial confinement fusion implosions depends on the unfold process, including the deconvolution and convolution methods, and on the function, i.e., the detector response, used to fit the signals measured by neutron time-of-flight (nToF) detectors. The function given by Murphy et al. [Rev. Sci. Instrum. 68(1), 610-613 (1997)] has been widely used at Nova, Omega, and NIF. It has two components, fast and slow, and the contribution of scattered neutrons has not been explicitly considered. In this work, a new function based on Murphy's function has been employed to unfold nToF signals. The contribution of scattered neutrons is easily included by the convolution of a Gaussian response function with an exponential decay. The ion temperature is measured by nToF with the new function. Good agreement with the ion temperature determined by the deconvolution method has been achieved.
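A minimal numerical sketch of the response shape described, a Gaussian convolved with a one-sided exponential decay, the latter standing in for the scattered-neutron tail. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def ntof_response(t, t0, sigma, tau):
    # Gaussian impulse response convolved with an exponential decay;
    # sigma sets the prompt width, tau the length of the slow tail.
    gauss = np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    decay = np.exp(-(t - t[0]) / tau)      # one-sided decay from t[0]
    resp = np.convolve(gauss, decay)[: t.size]
    return resp / resp.sum()               # normalize to unit area

t = np.linspace(0.0, 200e-9, 2000)         # 200 ns window
r = ntof_response(t, t0=50e-9, sigma=2e-9, tau=10e-9)
```

Fitting the measured trace with this convolved shape, rather than the Gaussian alone, lets the tail absorb the scattered-neutron contribution while the Gaussian width carries the thermal broadening.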
Cubic phase stability, optical and magnetic properties of Cu-stabilized zirconia nanocrystals
NASA Astrophysics Data System (ADS)
Pramanik, Prativa; Singh, Sobhit; Joshi, Deep Chandra; Mallick, Ayan; Pisane, Kelly; Romero, Aldo H.; Thota, Subhash; Seehra, M. S.
2018-06-01
By means of experimental and ab initio investigations, we report on the cubic phase stability of Cu-doped zirconia (ZrO₂) at room temperature, and further characterize its structural, optical, and magnetic properties. Various compositions of Zr₁₋ₓCuₓO₂ (0.01 ≤ x ≤ 0.25) nanocrystallites of average size ∼16 nm were synthesized using a co-precipitation technique. Thermal analysis and the kinetics of crystallization revealed that the cubic phase at ambient temperature can be stabilized by using a critical calcination temperature of 500 °C for 8 h in air and a critical composition xc. For x < xc, some undigested monoclinic phase of ZrO₂ exists together with the cubic structure. However, for x > xc, monoclinic CuO emerges as a secondary phase, with shrinkage of the unit-cell volume as the Cu content increases. At x = 0.05 and a calcination temperature of 500 °C, we observe a high degree of cubic crystallinity, which breaks down into the monoclinic phase when the calcination temperature is increased beyond 550 °C. Electron magnetic resonance studies provide evidence for the substitution of Cu²⁺ (²D₅/₂, 3d⁹) ions at Zr⁴⁺ sites with g∥, g⊥, and average gₐ = (g∥ + 2g⊥)/3 ∼ 2.1. The temperature dependence of magnetic susceptibility measurements from 2 K to 300 K exhibits Curie–Weiss behaviour whose analysis using gₐ = 2.1 and spin S = 1/2 yields x = 0.028 and x = 0.068 for the nominal x = 0.05 and x = 0.20 samples, respectively. This magnetic analysis confirms the findings from X-ray diffraction that only a part of the Cu is successfully doped into the cubic phase of Cu-doped ZrO₂. The optical bandgap decreases with increasing x, which is due to the emergence of Cu d states at the Fermi level near the valence bands, thus making Cu-doped zirconia a hole-doped (p-type) semiconductor.
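The Cu fractions quoted above follow from the standard Curie–Weiss analysis, written here for reference (the temperature-independent offset χ₀ is an assumption about the fitting form):

```latex
\chi(T) = \chi_0 + \frac{C}{T - \theta}, \qquad
C = \frac{N \, x \, g_a^{2} \, \mu_B^{2} \, S(S+1)}{3 k_B},
```

so a fitted Curie constant C, together with gₐ = 2.1 and S = 1/2, determines the magnetically active Cu fraction x per formula unit.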
Variability of Fram Strait Ice Flux and North Atlantic Oscillation
NASA Technical Reports Server (NTRS)
Kwok, Ron
1999-01-01
An important term in the mass balance of Arctic Ocean sea ice is the ice export. We estimated the winter sea ice export through the Fram Strait using ice motion from satellite passive microwave data and ice thickness data from moored upward looking sonars. The average winter area flux over the 18-year record (1978-1996) is 670,000 square kilometers, approximately 7% of the area of the Arctic Ocean. The winter area flux ranges from a minimum of 450,000 square kilometers in 1984 to a maximum of 906,000 square kilometers in 1995. The daily, monthly, and interannual variabilities of the ice area flux are high. There is an upward trend in the ice area flux over the 18-year record. The average winter volume flux over the winters of October 1990 through May 1995 is 1,745 cubic kilometers, ranging from a low of 1,375 cubic kilometers in 1990 to a high of 2,791 cubic kilometers in 1994. The sea-level pressure gradient across the Fram Strait explains more than 80% of the variance in the ice flux over the 18-year record. We use the coefficients from the regression of the time series of area flux versus pressure gradient across the Fram Strait, together with ice thickness data, to estimate the summer area and volume flux. The average 12-month area flux and volume flux are 919,000 square kilometers and 2,366 cubic kilometers. We find a significant correlation (R = 0.86) between the area flux and positive phases of the North Atlantic Oscillation (NAO) index over the months of December through March. Correlation between our six years of volume flux estimates and the NAO index gives R = 0.56. During the high NAO years, a more intense Icelandic low increases the gradient in the sea-level pressure by almost 1 mbar across the Fram Strait, thus increasing the atmospheric forcing on ice transport. Correlation is reduced during the negative NAO years because of decreased dominance of this large-scale atmospheric pattern on the sea-level pressure gradient across the Fram Strait.
Image quality of mixed convolution kernel in thoracic computed tomography.
Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar
2016-11-01
The mixed convolution kernel alters its properties spatially according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
Serang, Oliver
2015-08-01
Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk(2) to nk log(k), and has potential application to the all-pairs shortest paths problem.
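A minimal sketch of the p-norm idea behind such an O(k log(k)) estimate: for large p, the generalized mean (Σᵢ (x[i]·y[m-i])ᵖ)^(1/p) approaches maxᵢ x[i]·y[m-i], and the inner sum is an ordinary convolution of the elementwise p-th powers, which the FFT computes in O(k log(k)). Using a single fixed p is a simplification; the published method refines the estimate further.

```python
import numpy as np

def fast_max_convolution(x, y, p=64):
    # Approximate r[m] = max_i x[i] * y[m-i] for nonnegative vectors
    # (e.g., probability mass functions). Inputs are rescaled to [0, 1]
    # so that raising to the p-th power stays numerically stable.
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    sx, sy = x.max(), y.max()
    n = len(x) + len(y) - 1
    # Ordinary (sum-)convolution of the p-th powers via the FFT.
    s = np.fft.irfft(np.fft.rfft((x / sx) ** p, n) *
                     np.fft.rfft((y / sy) ** p, n), n)
    # p-th root recovers an upper-bound-style estimate of the max.
    return sx * sy * np.clip(s, 0.0, None) ** (1.0 / p)
```

Because the max is multiplicative under rescaling, the result is recovered exactly up to the p-norm approximation error, which shrinks as p grows.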
2001-09-01
"Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE... In this dissertation, the bit error rates for serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation with...
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.
1976-01-01
The DSN telemetry system performance with convolutionally coded data using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. The results of both one- and two-way radio losses are included.
Control system of hexacopter using color histogram footprint and convolutional neural network
NASA Astrophysics Data System (ADS)
Ruliputra, R. N.; Darma, S.
2017-07-01
The development of unmanned aerial vehicles (UAVs) has been growing rapidly in recent years. Logic implemented in program algorithms is needed to make a smart system. By using visual input from a camera, a UAV is able to fly autonomously by detecting a target. However, some weaknesses arise because usage in an outdoor environment can change the target's color intensity. The color histogram footprint overcomes this problem because it divides color intensity into separate bins, which makes the detection tolerant to slight changes of color intensity. Template matching compares the detection result with a template of the reference image to determine the target position, which is used to position the vehicle in the middle of the target through visual feedback control based on a Proportional-Integral-Derivative (PID) controller. The color histogram footprint method localizes the target by calculating the back projection of its histogram. It has an average success rate of 77% from a distance of 1 meter. It can position itself in the middle of the target by using visual feedback control with an average positioning time of 73 seconds. After the hexacopter is in the middle of the target, a Convolutional Neural Network (CNN) classifies a number contained in the target image to determine a task depending on the classified number: either landing, yawing, or return to launch. The recognition result shows an optimum success rate of 99.2%.
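A minimal OpenCV sketch of the histogram back-projection step described above; the file names, hue-only histogram, and bin count are assumptions for illustration.

```python
import cv2

# Build a hue histogram of the target template (file names are
# placeholders), then back-project it onto each incoming frame:
# bright back-projection values mark pixels whose color matches
# the target's histogram footprint.
template = cv2.cvtColor(cv2.imread('target.png'), cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([template], [0], None, [32], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

frame = cv2.cvtColor(cv2.imread('frame.png'), cv2.COLOR_BGR2HSV)
backproj = cv2.calcBackProject([frame], [0], hist, [0, 180], scale=1)

# Smooth and take the peak as the target location estimate.
_, _, _, max_loc = cv2.minMaxLoc(cv2.GaussianBlur(backproj, (21, 21), 0))
err_x = max_loc[0] - frame.shape[1] // 2   # horizontal error for the PID
err_y = max_loc[1] - frame.shape[0] // 2   # vertical error for the PID
```

The offsets of the back-projection peak from the image center are the error signals a PID position controller would drive to zero.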
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-17
..., 60-foot-wide powerhouse to contain two turbine/ generating units with a total installed capacity of 12 megawatts, with a hydraulic capacity of 90 cubic feet per second, and an average hydraulic head of...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-09
..., 60-foot-wide powerhouse to contain two turbine/ generating units with a total installed capacity of 12 megawatts, with a hydraulic capacity of 90 cubic feet per second, and an average hydraulic head of...
Lewelling, B.R.
1997-01-01
A baseline study of the 241-square-mile Horse Creek basin was undertaken from October 1992 to February 1995 to assess the hydrologic and water-quality conditions of one of the last remaining undeveloped basins in west-central Florida. During the period of the study, much of the basin remained in a natural state, except for limited areas of cattle and citrus production and phosphate mining. Rainfall in 1993 and 1994 in the Horse Creek basin was 8 and 31 percent, respectively, above the 30-year long-term average. The lowest and highest maximum instantaneous peak discharges of the six daily discharge stations occurred at the Buzzard Roost Branch and the Horse Creek near Arcadia stations, with 185 and 4,180 cubic feet per second, respectively. The Horse Creek near Arcadia station had the lowest number of no-flow days with zero days, and the Brushy Creek station had the highest number with 113 days. During the study, the West Fork Horse Creek subbasin had the highest daily mean discharge per square mile with 30.6 cubic feet per second per square mile, and the largest runoff coefficient of 43.7 percent. The Buzzard Roost Branch subbasin had the lowest daily mean discharge per square mile with 5.05 cubic feet per second per square mile, and Brushy Creek and Brandy Branch shared the lowest runoff coefficient of 0.6 percent. Brandy Branch had the highest monthly mean runoff in both 1993 and 1994 with 11.48 and 19.28 inches, respectively. During the high-baseflow seepage run, seepage gains were 8.87 cubic feet per second along the 43-mile Horse Creek channel. However, during the low-baseflow seepage run, seepage losses were 0.88 cubic foot per second. Three methods were used to estimate average annual ground-water recharge in the Horse Creek basin: (1) well hydrograph, (2) chloride mass balance, and (3) streamflow hydrograph. Estimated average annual recharge using these three methods ranged from 3.6 to 8.7 inches. The high percentage of carbonate plus bicarbonate analyzed at the Carlton surficial aquifer well could indicate an upward ground-water flow from the underlying intermediate aquifer system. Based on constituent concentrations in water samples from the six daily discharge stations, concentrations generally are lower in the upper three subbasins, West Fork Horse Creek, Upper Horse Creek, and Brushy Creek, than in the lower three subbasins. Typically, concentrations were highest for major ions at Buzzard Roost Branch and nutrients at Brushy Creek.
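Of the three recharge estimates, the chloride mass-balance method has the simplest closed form, shown here for reference (a standard expression; the report's specific inputs are not reproduced):

```latex
R \;=\; P \,\frac{\mathrm{Cl}_P}{\mathrm{Cl}_{gw}},
```

where R is average annual recharge, P is average annual precipitation, Cl_P is the chloride concentration in precipitation, and Cl_gw is the chloride concentration in shallow groundwater.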
Enhanced online convolutional neural networks for object tracking
NASA Astrophysics Data System (ADS)
Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen
2018-04-01
In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and update of the convolution filters directly affect the precision of object tracking. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters with a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments of 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.
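The k-means++ filter initialization mentioned above can be approximated as follows. This is a hedged sketch: the patch size, sample count, and preprocessing are our assumptions, not the authors' exact procedure:

```python
# Hedged sketch: initializing convolution filters from k-means++ cluster
# centres of image patches.
import numpy as np
from sklearn.cluster import KMeans

def init_filters(image, num_filters=32, k=7, samples=2000, seed=0):
    """Cluster random k-by-k patches of a 2D grayscale image; the cluster
    centres become the initial convolution filters."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys = rng.integers(0, h - k, size=samples)
    xs = rng.integers(0, w - k, size=samples)
    patches = np.stack([image[y:y + k, x:x + k].ravel()
                        for y, x in zip(ys, xs)]).astype(float)
    patches -= patches.mean(axis=1, keepdims=True)  # remove patch brightness
    km = KMeans(n_clusters=num_filters, init="k-means++", n_init=10,
                random_state=seed).fit(patches)
    return km.cluster_centers_.reshape(num_filters, k, k)

filters = init_filters(np.random.rand(64, 64))
print(filters.shape)  # (32, 7, 7)
```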
James O. Howard
1971-01-01
A study conducted during 1969-70 in Oregon, Washington, and California indicates that the average net volume of logging residues ranged from 325 to 3,156 cubic feet per acre. The highest volume was on National Forests in the Douglas-fir region, which averaged 2.5 times the volume found on private lands. The lowest volumes of residue were found in the ponderosa pine region...
CNN-BLPred: a Convolutional neural network based predictor for β-Lactamases (BL) and their classes.
White, Clarence; Ismail, Hamid D; Saigo, Hiroto; Kc, Dukka B
2017-12-28
The β-Lactamase (BL) enzyme family is an important class of enzymes that plays a key role in bacterial resistance to antibiotics. As the number of newly identified BL enzymes is increasing daily, it is imperative to develop a computational tool to classify newly identified BL enzymes into one of their classes. There are two types of classification of BL enzymes: Molecular Classification and Functional Classification. Existing computational methods only address Molecular Classification, and their performance is unsatisfactory. We addressed the unsatisfactory performance of the existing methods by implementing a deep learning approach called the Convolutional Neural Network (CNN). We developed CNN-BLPred, an approach for the classification of BL proteins. CNN-BLPred uses Gradient Boosted Feature Selection (GBFS) in order to select the ideal feature set for each BL classification. Based on rigorous benchmarking of CNN-BLPred using both leave-one-out cross-validation and independent test sets, CNN-BLPred performed better than the other existing algorithms. Compared with other CNN architectures, a Recurrent Neural Network, and a Random Forest, the simple CNN architecture with only one convolutional layer performed best. After feature extraction, we were able to remove ~95% of the 10,912 features using Gradient Boosted Trees. During 10-fold cross-validation, we increased the accuracy of the classic BL predictions by 7%. We also increased the accuracy of Class A, Class B, Class C, and Class D performance by an average of 25.64%. The independent test results followed a similar trend. We implemented a deep learning algorithm known as the Convolutional Neural Network (CNN) to develop a classifier for BL classification. Combined with feature selection on an exhaustive feature set and balancing methods such as Random Oversampling (ROS), Random Undersampling (RUS) and the Synthetic Minority Oversampling Technique (SMOTE), CNN-BLPred performs significantly better than existing algorithms for BL classification.
Coronary artery calcification (CAC) classification with deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Liu, Xiuming; Wang, Shice; Deng, Yufeng; Chen, Kuan
2017-03-01
Coronary artery calcification (CAC) is a typical marker of coronary artery disease, which is one of the biggest causes of mortality in the U.S. This study evaluates the feasibility of using a deep convolutional neural network (DCNN) to automatically detect CAC in X-ray images. 1768 posteroanterior (PA) view chest X-ray images from Sichuan Province Peoples Hospital, China were collected retrospectively. Each image is associated with a corresponding diagnostic report written by a trained radiologist (907 normal, 861 diagnosed with CAC). One-quarter of the images were randomly selected as test samples; the rest were used as training samples. DCNN models consisting of 2, 4, 6, and 8 convolutional layers were designed using blocks of pre-designed CNN layers. Each block was implemented in Theano with Graphics Processing Units (GPU). Human-in-the-loop learning was also performed on a subset of 165 images with arteries framed by trained physicians. The results from the DCNN models were compared to the diagnostic reports. The average diagnostic accuracies for models with 2, 4, 6, and 8 layers were 0.85, 0.87, 0.88, and 0.89, respectively. The areas under the curve (AUC) were 0.92, 0.95, 0.95, and 0.96. As the models grew deeper, neither the AUC nor the diagnostic accuracy changed in a statistically significant way. The results of this study indicate that DCNN models have promising potential in the field of intelligent medical image diagnosis practice.
Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks
Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni
2015-01-01
Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient’s response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a “radiomics” approach whereby a large set of quantitative features is automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models. PMID:26355298
Fifty-year flood-inundation maps for Choluteca, Honduras
Kresch, David L.; Mastin, Mark C.; Olsen, T.D.
2002-01-01
After the devastating floods caused by Hurricane Mitch in 1998, maps of the areas and depths of 50-year-flood inundation at 15 municipalities in Honduras were prepared as a tool for agencies involved in reconstruction and planning. This report, which is one in a series of 15, presents maps of areas in the municipality of Choluteca that would be inundated by 50-year floods on Rio Choluteca and Rio Iztoca. Geographic Information System (GIS) coverages of the flood inundation are available on a computer in the municipality of Choluteca as part of the Municipal GIS project and on the Internet at the Flood Hazard Mapping Web page (http://mitchnts1.cr.usgs.gov/projects/floodhazard.html). These coverages allow users to view the flood inundation in much more detail than is possible using the maps in this report. Water-surface elevations for 50-year-floods on Rio Choluteca and Rio Iztoca at Choluteca were estimated using HEC-RAS, a one-dimensional, steady-flow, step-backwater computer program. The channel and floodplain cross sections used in HEC-RAS were developed from an airborne light-detection-and-ranging (LIDAR) topographic survey of the area. The estimated 50-year-flood discharge for Rio Choluteca at Choluteca is 4,620 cubic meters per second, which is the drainage-area-adjusted weighted-average of two independently estimated 50-year-flood discharges for the gaging station Rio Choluteca en Puente Choluteca. One discharge, 4,913 cubic meters per second, was estimated from a frequency analysis of the 17 years of peak discharge record for the gage, and the other, 2,650 cubic meters per second, was estimated from a regression equation that relates the 50-year-flood discharge to drainage area and mean annual precipitation. The weighted-average of the two discharges at the gage is 4,530 cubic meters per second. The 50-year-flood discharge for the study area reach of Rio Choluteca was estimated by multiplying the weighted discharge at the gage by the ratio of the drainage areas upstream from the two locations. The 50-year-flood discharge for Rio Iztoca, which was estimated from the regression equation, is 430 cubic meters per second.
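The drainage-area adjustment described above reduces to a simple ratio. Written out in our own notation (the report's symbols are not reproduced here):

```latex
% Transfer of the weighted 50-year discharge from the gage to the study reach:
Q_{50,\mathrm{site}} = Q_{50,\mathrm{gage}} \cdot \frac{A_{\mathrm{site}}}{A_{\mathrm{gage}}}
% With Q_{50,gage} = 4530 m^3/s and Q_{50,site} = 4620 m^3/s, the implied
% drainage-area ratio is 4620/4530, approximately 1.02.
```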
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Hunter, Craig A.
1999-01-01
An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle, for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring is convolution-location, Mach-number, boattail-angle, and NPR dependent. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.
Experimental study of current loss and plasma formation in the Z machine post-hole convolute
NASA Astrophysics Data System (ADS)
Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.
2017-01-01
The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H2O, H2, and hydrocarbons. Plasma densities increase from 1×10^16 cm^-3 (level of detectability) just before peak current to over 1×10^17 cm^-3 at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35-50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.
Quantifying Libya-4 Surface Reflectance Heterogeneity With WorldView-1, 2 and EO-1 Hyperion
NASA Technical Reports Server (NTRS)
Neigh, Christopher S. R.; McCorkel, Joel; Middleton, Elizabeth M.
2015-01-01
The land surface imaging (LSI) virtual constellation approach promotes the concept of increasing Earth observations from multiple but disparate satellites. We evaluated this through spectral and spatial domains, by comparing surface reflectance from 30-m Hyperion and 2-m resolution WorldView-2 (WV-2) data in the Libya-4 pseudoinvariant calibration site. We convolved and resampled Hyperion to WV-2 bands using both cubic convolution and nearest neighbor (NN) interpolation. Additionally, WV-2 and WV-1 same-date imagery were processed as a cross-track stereo pair to generate a digital terrain model to evaluate the effects from large (>70 m) linear dunes. Agreement was moderate to low on dune peaks between WV-2 and Hyperion (R^2 < 0.4) but higher in areas of lower elevation and slope (R^2 > 0.6). Our results provide a satellite sensor intercomparison protocol for an LSI virtual constellation at high spatial resolution, which should start with geolocation of pixels, followed by NN interpolation to avoid tall dunes that enhance surface reflectance differences across this internationally utilized site.
Men, Kuo; Dai, Jianrong; Li, Yexiong
2017-12-01
Delineation of the clinical target volume (CTV) and organs at risk (OARs) is very important for radiotherapy but is time-consuming and prone to inter-observer variation. Here, we proposed a novel deep dilated convolutional neural network (DDCNN)-based method for fast and consistent auto-segmentation of these structures. Our DDCNN method was an end-to-end architecture enabling fast training and testing. Specifically, it employed a novel multiple-scale convolutional architecture to extract multiple-scale context features in the early layers, which contain the original information on fine texture and boundaries and which are very useful for accurate auto-segmentation. In addition, it enlarged the receptive fields of dilated convolutions at the end of the networks to capture complementary context features. Then, it replaced the fully connected layers with fully convolutional layers to achieve pixel-wise segmentation. We used data from 278 patients with rectal cancer for evaluation. The CTV and OARs were delineated and validated by senior radiation oncologists in the planning computed tomography (CT) images. A total of 218 patients chosen randomly were used for training, and the remaining 60 for validation. The Dice similarity coefficient (DSC) was used to measure segmentation accuracy. Performance was evaluated on segmentation of the CTV and OARs. In addition, the performance of DDCNN was compared with that of U-Net. The proposed DDCNN method outperformed U-Net for all segmentations, and the average DSC value of DDCNN was 3.8% higher than that of U-Net. Mean DSC values of DDCNN were 87.7% for the CTV, 93.4% for the bladder, 92.1% for the left femoral head, 92.3% for the right femoral head, 65.3% for the intestine, and 61.8% for the colon. The test time was 45 s per patient for segmentation of all of the CTV, bladder, left and right femoral heads, colon, and intestine. We also compared our approach and results with those in the literature: our system showed superior performance and faster speed. These data suggest that DDCNN can be used to segment the CTV and OARs accurately and efficiently. It was invariant to the body size, body shape, and age of the patients. DDCNN could improve the consistency of contouring and streamline radiotherapy workflows. © 2017 American Association of Physicists in Medicine.
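The dilated convolutions at the heart of the DDCNN can be illustrated with a short PyTorch sketch. The channel counts, dilation schedule, and two-class head below are illustrative assumptions, not the paper's architecture:

```python
# Hedged sketch: dilation enlarges the receptive field with no extra weights.
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """3x3 convolutions whose receptive field grows with the dilation rate."""
    def __init__(self, ch=64):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, dilation=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        # 1x1 conv in place of fully connected layers -> pixel-wise output
        self.head = nn.Conv2d(ch, 2, kernel_size=1)

    def forward(self, x):
        return self.head(self.convs(x))

feat = torch.randn(1, 64, 128, 128)
print(DilatedBlock()(feat).shape)  # torch.Size([1, 2, 128, 128])
```

Setting the padding equal to the dilation rate keeps the spatial size fixed while the receptive field grows.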
Composite operators in cubic field theories and link-overlap fluctuations in spin-glass models
NASA Astrophysics Data System (ADS)
Altieri, Ada; Parisi, Giorgio; Rizzo, Tommaso
2016-01-01
We present a complete characterization of the fluctuations and correlations of the squared overlap in the Edwards-Anderson spin-glass model in zero field. The analysis reveals that the energy-energy correlation (and thus the specific heat) has a different critical behavior than the fluctuations of the link overlap, in spite of the fact that the average energy and average link overlap have the same critical properties. More precisely, the link-overlap fluctuations are larger than the specific heat according to a computation at first order in the 6 − ε expansion. An unexpected outcome is that the link-overlap fluctuations have a subdominant power-law contribution characterized by an anomalous logarithmic prefactor which is absent in the specific heat. In order to compute the ε expansion we consider the problem of the renormalization of quadratic composite operators in a generic multicomponent cubic field theory: the results obtained have a range of applicability beyond spin-glass theory.
Pauling, Linus
1989-01-01
Consideration of the relation between bond length and bond number and the average atomic volume for different ways of packing atoms leads to the conclusion that the average ligancy of atoms in a metal should increase when a phase change occurs on increasing the pressure. Minimum volume for each value of the ligancy results from triangular coordination polyhedra (with triangular faces), such as the icosahedron and the Friauf polyhedron. Electron transfer may permit atoms of an element to assume different ligancies. Application of these principles to Cs(IV) and Cs(V), which were previously assigned structures with ligancy 8 and 6, respectively, has led to the assignment to Cs(IV) of a primitive cubic unit cell with a = 16.11 Å and with about 122 atoms in the cube and to Cs(V) of a primitive cubic unit cell resembling that of Mg32(Al,Zn)49, with a = 16.97 Å and with 162 atoms in the cube. PMID:16578839
2015-12-15
Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network ... Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their ... detection accuracy and speed on the fine-grained Caltech UCSD bird dataset (Wah et al., 2011). Recently, Convolutional Neural Networks (CNNs), a deep ...
Witoonchart, Peerajak; Chongstitvatana, Prabhas
2017-08-01
In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing
2017-03-01
Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
Convolution-based estimation of organ dose in tube current modulated CT
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan
2016-05-01
Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_Organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_organ,convolution with the organ dose coefficients (h_Organ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using the Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and the dose simulated using the TCM Monte Carlo program was quantified. We further compared the convolution-based organ dose estimation method with two other strategies with different approaches of quantifying the irradiation field. The proposed convolution-based estimation method showed good accuracy with the organ dose simulated using the TCM Monte Carlo simulation. The average percentage error (normalized by CTDIvol) was generally within 10% across all organs and modulation profiles, except for organs located in the pelvic and shoulder regions. This study developed an improved method that accurately quantifies the irradiation field under TCM scans. The results suggested that organ dose could be estimated in real-time both prospectively (with the localizer information only) and retrospectively (with acquired CT data).
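A toy numerical version of the convolution step conveys the idea: the regional dose field is the TCM CTDIvol profile smoothed by an axial dose-spread kernel and weighted by the organ's z-distribution. The Gaussian kernel, the names, and the numbers below are our illustrative assumptions, not the paper's validated model:

```python
# Hedged sketch of convolution-based organ dose estimation under TCM.
import numpy as np

def organ_dose(tcm_ctdivol_z, organ_z_pdf, h_organ, kernel_sigma_mm=30.0, dz_mm=5.0):
    z = np.arange(-100, 101, dz_mm)                      # kernel support (mm)
    kernel = np.exp(-0.5 * (z / kernel_sigma_mm) ** 2)
    kernel /= kernel.sum()                               # unit-area spread kernel
    regional = np.convolve(tcm_ctdivol_z, kernel, mode="same")
    ctdivol_organ_conv = np.sum(regional * organ_z_pdf)  # organ-weighted field
    return h_organ * ctdivol_organ_conv                  # estimated organ dose

# Toy usage: 60 slices, organ centred mid-scan.
tcm = 10 + 3 * np.sin(np.linspace(0, 3, 60))             # CTDIvol per slice (mGy)
pdf = np.exp(-0.5 * ((np.arange(60) - 30) / 5) ** 2)
pdf /= pdf.sum()
print(organ_dose(tcm, pdf, h_organ=1.2))
```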
ERIC Educational Resources Information Center
Umar, A.; Yusau, B.; Ghandi, B. M.
2007-01-01
In this note, we introduce and discuss convolutions of two series. The idea is simple and can be introduced to higher secondary school classes, and it has the potential of providing a good background for the well-known convolution of functions.
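The convolution of two series is the Cauchy product: the n-th term of the product series is c_n = sum over k of a_k b_{n-k}. A minimal sketch:

```python
# Convolution (Cauchy product) of two finite coefficient sequences.
def convolve_series(a, b):
    n = len(a) + len(b) - 1
    c = [0.0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj  # term a_i * b_j contributes to c_{i+j}
    return c

# Example: (1 + x)^2 times (1 + x), coefficient-wise, gives (1 + x)^3.
print(convolve_series([1, 2, 1], [1, 1]))  # [1, 3, 3, 1]
```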
A fast complex integer convolution using a hybrid transform
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1978-01-01
It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q^2) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
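For contrast, the standard floating-point route to cyclic convolution, whose accuracy for long lengths motivates exact Galois-field transforms, looks like this (a sketch, not the paper's algorithm):

```python
# Cyclic convolution two ways: by definition, and via the convolution theorem.
import numpy as np

def cyclic_convolution(x, y):
    """Exact definition, O(n^2), for complex sequences of equal length n."""
    n = len(x)
    return np.array([sum(x[k] * y[(m - k) % n] for k in range(n))
                     for m in range(n)])

def cyclic_convolution_fft(x, y):
    """Elementwise product in the frequency domain equals cyclic convolution."""
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(y))

x = np.array([1 + 2j, 3 - 1j, 0 + 1j, 2 + 0j])
y = np.array([2 - 1j, 1 + 1j, -1 + 0j, 0 + 3j])
print(np.allclose(cyclic_convolution(x, y), cyclic_convolution_fft(x, y)))  # True
```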
Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network
1989-08-01
Convolutional Codes, in Proc. Int. Conf. Commun., 21.4.1-21.4.5, 1987. [27] J. Hagenauer, Rate Compatible Punctured Convolutional Codes, in Proc. Int. Conf. ... achieved by using a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable-rate code ... investigated the use of convolutional codes in Type II Hybrid ARQ protocols. The error
2008-09-01
Convolutional Encoder Block Diagram of code rate r = 1/2 and ... most commonly used along with block codes. They were introduced in 1955 by Elias [7]. Convolutional codes are characterized by the code rate r = k/n ... a convolutional code for r = 1/2 and κ = 3, namely [7 5], is used. Figure 2: Convolutional Encoder Block Diagram of code rate r = 1/2 and ...
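The encoder and puncturing ideas in these fragments can be made concrete. Below is a minimal sketch of a rate-1/2, constraint-length-3 encoder with the [7 5] octal generators named above, plus a puncturing pattern of our own choosing; tail bits and the matching depuncturing/Viterbi stage are omitted:

```python
# Hedged sketch: rate-1/2 convolutional encoding and puncturing.
def encode_k3(bits):
    """Generators g0 = 1+D+D^2 (octal 7) and g1 = 1+D^2 (octal 5);
    two output bits per input bit."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # g0 = 111
        out.append(b ^ s2)       # g1 = 101
        s1, s2 = b, s1
    return out

def puncture(coded, pattern=(1, 1, 1, 0)):
    """Keep coded bit i only where pattern[i mod len] == 1; this pattern
    keeps 3 of every 4 coded bits, raising the rate from 1/2 to 2/3."""
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

msg = [1, 0, 1, 1, 0, 0]   # message bits
coded = encode_k3(msg)     # 12 coded bits at rate 1/2
print(puncture(coded))     # 9 bits kept -> effective rate 2/3
```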
Soukup, W.G.; Gillies, D.C.; Myette, C.F.
1984-01-01
In the Cyrus-Benson area, model results indicate that under 1980 development and average areal recharge, dynamic equilibrium would be reached in less than 4 years and additional drawdown would be less than 2 feet. A 3-year drought coupled with increased pumping from irrigation wells operated during 1980 would lower water levels as much as 6 feet and reduce flow in the Chippewa River by about 26 cubic feet per second. At maximum hypothetical development in terms of the number of wells and normal areal recharge, water levels would be lowered as much as 9 feet and streamflow would be reduced about 12 cubic feet per second. At maximum hypothetical development, drought conditions and increased pumping would lower water levels as much as 12 feet and reduce flow in the Chippewa River by about 30 cubic feet per second, which equals about 75 percent of available streamflow at the 70-percent flow duration.
The Control Based on Internal Average Kinetic Energy in Complex Environment for Multi-robot System
NASA Astrophysics Data System (ADS)
Yang, Mao; Tian, Yantao; Yin, Xianghua
In this paper, a reference trajectory is designed according to the minimum energy consumed by the multi-robot system, using nonlinear programming and cubic spline interpolation. The control strategy is composed of two levels: the lower level is a simple PD controller, and the upper level is based on the internal average kinetic energy of the multi-robot system in a complex environment with velocity damping. Simulation tests verify the effectiveness of this control strategy.
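A cubic-spline reference trajectory of the kind described can be sketched with SciPy. The waypoints, timing, and unit mass below are illustrative, not the paper's optimized solution:

```python
# Hedged sketch: a C2-continuous reference trajectory through waypoints.
import numpy as np
from scipy.interpolate import CubicSpline

t_way = np.array([0.0, 2.0, 4.0, 6.0])                  # waypoint times (s)
xy_way = np.array([[0, 0], [1, 2], [3, 2.5], [4, 0]])   # waypoint positions (m)

spline = CubicSpline(t_way, xy_way, axis=0)             # smooth trajectory
t = np.linspace(0, 6, 61)
pos, vel = spline(t), spline(t, 1)                      # position, velocity refs
kinetic = 0.5 * 1.0 * np.sum(vel**2, axis=1)            # unit-mass kinetic energy
print(pos[30], vel[30], kinetic.mean())                 # mid-trajectory sample
```

The mean kinetic energy along the sampled trajectory is the kind of quantity a minimum-energy optimizer would trade off against waypoint timing.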
Protograph-Based Raptor-Like Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.
2014-01-01
Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo codes (RCPT) did not outperform convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a low number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, the strength of convolutional codes does not scale with the blocklength for a fixed number of states in the trellis.
Convolution of large 3D images on GPU and its decomposition
NASA Astrophysics Data System (ADS)
Karas, Pavel; Svoboda, David
2011-12-01
In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
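The core of the approach is the convolution theorem. A minimal NumPy stand-in for the paper's CUDA implementation (the GPU tiling and the decimation-in-frequency decomposition are omitted here):

```python
# Hedged sketch: 3D convolution via the convolution theorem.
import numpy as np

def fft_convolve3d(volume, kernel):
    """Linear 3D convolution: zero-pad to the full output size, multiply
    the spectra, and transform back."""
    shape = [v + k - 1 for v, k in zip(volume.shape, kernel.shape)]
    V = np.fft.rfftn(volume, shape)
    K = np.fft.rfftn(kernel, shape)
    return np.fft.irfftn(V * K, shape)  # product in frequency = convolution

vol = np.random.rand(64, 64, 64)
ker = np.ones((5, 5, 5)) / 125.0        # box blur
out = fft_convolve3d(vol, ker)
print(out.shape)                        # (68, 68, 68), full linear convolution
```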
Origin of Noncubic Scaling Law in Disordered Granular Packing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Chengjie; Li, Jindong; Kou, Binquan
Recent diffraction experiments on metallic glasses have unveiled an unexpected non-cubic scaling law between density and average interatomic distance, which led to speculation about the presence of fractal glass order. Using X-ray tomography we identify here a similar non-cubic scaling law in disordered granular packing of spherical particles. We find that the scaling law is directly related to the contact neighbors within the first nearest-neighbor shell, and therefore is closely connected to the phenomenon of jamming. The seemingly universal scaling exponent around 2.5 arises due to the isostatic condition with contact number around 6, and we argue that the exponent should not be universal.
Application of the algebraic RNG model for transition simulation. [renormalization group theory
NASA Technical Reports Server (NTRS)
Lund, Thomas S.
1990-01-01
The algebraic form of the RNG model of Yakhot and Orszag (1986) is investigated as a transition model for the Reynolds-averaged boundary layer equations. It is found that the cubic equation for the eddy viscosity contains both a jump discontinuity and one spurious root. An as-yet-unpublished transformation to a quartic equation is shown to remove the numerical difficulties associated with the discontinuity, but only at the expense of merging both the physical and spurious roots of the cubic. Jumps between the branches of the resulting multiple-valued solution are found to lead to oscillations in flat-plate transition calculations. Aside from the oscillations, the transition behavior is qualitatively correct.
Large-eddy simulation of plume dispersion within regular arrays of cubic buildings
NASA Astrophysics Data System (ADS)
Nakayama, H.; Jurcakova, K.; Nagai, H.
2011-04-01
Hazardous and flammable materials may be accidentally or intentionally released within populated urban areas. For the assessment of human health hazards from toxic substances, the existence of high concentration peaks in a plume should be considered. For the safety analysis of flammable gas, certain critical threshold levels should be evaluated. Therefore, in such a situation, not only average levels but also instantaneous magnitudes of concentration should be accurately predicted. In this study, we perform Large-Eddy Simulation (LES) of plume dispersion within regular arrays of cubic buildings with large obstacle densities and investigate the influence of the building arrangement on the characteristics of mean and fluctuating concentrations.
Availability of Ground-water in Marion County, Indiana
Meyer, William R.; Reussow, J.P.; Gillies, D.C.; Shampine, W.J.
1975-01-01
A series of model experiments were carried out to test the capacity of the aquifers to sustain increases in pumpage. In all of these, a new equilibrium was established within 6 years of simulated pumpage. In two of these experiments, water levels in the discharging wells were allowed to draw down to approximately half of the saturated thickness of the glacial-outwash aquifer. At this drawdown limit, the total discharge of wells in the system was found to be 59 million gallons per day (2.6 cubic metres per second), assuming that the streams were fully connected to the upper third of the glacial-outwash aquifer. In two other experiments, discharging wells were allowed to draw down an average of two-thirds of the saturated thickness of the glacial-outwash aquifer. At this limit, the total discharge was found to be 72 million gallons per day (3.2 cubic metres per second) using the conservative stream-aquifer connection, and 103 million gallons per day (4.5 cubic metres per second) assuming a full connection. Some dewatering of the aquifer was associated with the 72 million gallons per day (3.2 cubic metres per second) discharge. In all experiments, the amount that could be pumped from the confined aquifers without disturbing existing domestic wells was found to be small.
Havlik, M.E.; Marking, L.L.
1980-01-01
The Prairie du Chien dredge material site contains about 100,000 cubic meters of material dredged from the East Channel of the Mississippi River in 1976. Previous studies in that area suggested a rich molluscan fauna, but most studies were only qualitative or simply observations. Our study of this material was designed to determine the density and diversity of molluscan fauna, to assess changes in the fauna, to identify endemic species previously unreported, and to evaluate the status of the endangered Lampsilis higginsi. Ten cubic meters of dredge material were sieved to recover shells. Molluscan fauna at the site contained 38 species of naiades and up to 1,737 identifiable valves per cubic meter. The endangered L. higginsi ranked 18th in occurrence, accounted for only 0.52% of the identifiable shells, and averaged about three valves per cubic meter. From a total of 813 kg of naiades and gastropods, 6,339 naiad valves were identified. Five naiad species were collected at the site for the first time, and Epioblasma triquetra had not been reported previously in the Prairie du Chien area. Although the molluscan fauna has changed, the East Channel at Prairie du Chien is obviously suitable for L. higginsi.
CURRENT FLOW DATA FOR SELECTED USGS STREAM MONITORING STATIONS
This data set contains recent and historical stream flow data for USGS stations. Flow data (cubic feet per second) are available for the most recent 5-6 day period and are compared with long-term average values. Flow data were collected approximately hourly. Flood stage and the m...
CURRENT FLOW DATA FOR SELECTED USGS STREAM MONITORING STATIONS IN WASHINGTON STATE
This data set contains recent stream flow data for USGS stations in Washington State. Flow data (cubic feet per second) are available for the most recent 5-6 day period and are compared with long-term average values. Flow data were collected approximately hourly. Flood stage and ...
Development and application of deep convolutional neural network in target detection
NASA Astrophysics Data System (ADS)
Jiang, Xiaowei; Wang, Chunping; Fu, Qiang
2018-04-01
With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression abilities than traditional machine learning methods, enabling artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes some existing problems in current research, and finally discusses prospects for the future development of deep convolutional neural networks.
Barker, Rene A.; Braun, Christopher L.
2000-01-01
In June 1993, the Department of the Navy, Southern Division Naval Facilities Engineering Command (SOUTHDIV), began a Resource Conservation and Recovery Act (RCRA) Facility Investigation (RFI) of the Naval Weapons Industrial Reserve Plant (NWIRP) in north-central Texas. The RFI has found trichloroethene, dichloroethene, vinyl chloride, as well as chromium, lead, and other metallic residuum in the shallow alluvial aquifer underlying NWIRP. These findings and the possibility of on-site or off-site migration of contaminants prompted the need for a ground-water-flow model of the NWIRP area. The resulting U.S. Geological Survey (USGS) model: (1) defines aquifer properties, (2) computes water budgets, (3) delineates major flowpaths, and (4) simulates hydrologic effects of remediation activity. In addition to assisting with particle-tracking analyses, the calibrated model could support solute-transport modeling as well as help evaluate the effects of potential corrective action. The USGS model simulates steady-state and transient conditions of ground-water flow within a single model layer. The alluvial aquifer is within fluvial terrace deposits of Pleistocene age, which unconformably overlie the relatively impermeable Eagle Ford Shale of Late Cretaceous age. Over small distances and short periods, finer grained parts of the aquifer are separated hydraulically; however, most of the aquifer is connected circuitously through randomly distributed coarser grained sediments. The top of the underlying Eagle Ford Shale, a regional confining unit, is assumed to be the effective lower limit of ground-water circulation and chemical contamination. The calibrated steady-state model reproduces long-term average water levels within +5.1 or –3.5 feet of those observed; the standard error of the estimate is 1.07 feet with a mean residual of 0.02 foot. Hydraulic conductivity values range from 0.75 to 7.5 feet per day, and average about 4 feet per day. Specific yield values range from 0.005 to 0.15 and average about 0.08. Simulated infiltration rates range from 0 to 2.5 inches per year, depending mostly on local patterns of ground cover. Computer simulation indicates that, as of December 31, 1998, remediation systems at NWIRP were removing 7,375 cubic feet of water per day from the alluvial aquifer, with 3,050 cubic feet per day coming from aquifer storage. The resulting drawdown prevented 1,800 cubic feet per day of ground water from discharging into Cottonwood Bay, as well as inducing another 1,325 cubic feet per day into the aquifer from the bay. An additional 1,200 cubic feet of water per day (compared to pre-remediation conditions) was prevented from discharging into the west lagoon, east lagoon, Mountain Creek Lake, and Mountain Creek swale. Particle-tracking simulations, assuming an aquifer porosity of 0.15, were made to delineate flowpath patterns, or contaminant “capture zones,” resulting from 2.5- and 5-year periods of remediation activity at NWIRP. The resulting flowlines indicate three such zones, or areas from which ground water is simulated to have been removed during July 1996–December 1998, as well as extended areas from which ground water would be removed during the next 2.5 years (January 1999–June 2001). Simulation indicates that, as of December 31, 1998, the recovery trench was intercepting about 827 cubic feet per day of ground water that—without the trench—would have discharged into Cottonwood Bay.
During this time, the trench is simulated to have removed about 3,221 cubic feet per day of water from the aquifer, with about 934 cubic feet per day (29 percent) coming from the south (Cottonwood Bay) side of the trench.
Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification.
Sladojevic, Srdjan; Arsenovic, Marko; Anderla, Andras; Culibrk, Dubravko; Stefanovic, Darko
2016-01-01
The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification, using deep convolutional networks. The novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant diseases as well as healthy leaves, with the ability to distinguish plant leaves from their surroundings. According to our knowledge, this method for plant disease recognition has been proposed for the first time. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering images in order to create a database assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98% for separate class tests, and 96.3% on average.
Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng
2017-04-10
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
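The image-conversion step described above can be sketched simply. The free-flow speed and array shapes below are our assumptions, not values from the paper:

```python
# Hedged sketch: a time-space speed matrix packaged as a CNN input image.
import numpy as np

def to_traffic_image(speeds, v_free=120.0):
    """speeds: (num_timesteps, num_road_segments) -> float image in [0, 1]."""
    img = np.clip(speeds / v_free, 0.0, 1.0)
    return img[np.newaxis, np.newaxis, :, :]  # (batch, channel, time, space)

speeds = 60 + 30 * np.random.rand(144, 200)   # e.g. 10-min steps over a day
x = to_traffic_image(speeds)
print(x.shape)  # (1, 1, 144, 200), ready for 2D convolutional layers
```

The convolutional filters then pick up local time-space patterns such as the propagation of congestion waves across adjacent segments.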
Längkvist, Martin; Jendeberg, Johan; Thunberg, Per; Loutfi, Amy; Lidén, Mats
2018-06-01
Computed tomography (CT) is the method of choice for diagnosing ureteral stones - kidney stones that obstruct the ureter. The purpose of this study is to develop a computer-aided detection (CAD) algorithm for identifying a ureteral stone in thin-slice CT volumes. The challenge in CAD for urinary stones lies in the similarity in shape and intensity of stones with non-stone structures, and in how to efficiently deal with large high-resolution CT volumes. We address these challenges by using a Convolutional Neural Network (CNN) that works directly on the high-resolution CT volumes. The method is evaluated on a large database of 465 clinically acquired high-resolution CT volumes of the urinary tract, with labeling of ureteral stones performed by a radiologist. The best model, using 2.5D input data and anatomical information, achieved a sensitivity of 100% and an average of 2.68 false positives per patient on a test set of 88 scans. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Patel, Ajay; van de Leemput, Sil C.; Prokop, Mathias; van Ginneken, Bram; Manniesing, Rashindra
2017-03-01
Segmentation of anatomical structures is fundamental in the development of computer aided diagnosis systems for cerebral pathologies. Manual annotations are laborious, time consuming and subject to human error and observer variability. Accurate quantification of cerebrospinal fluid (CSF) can be employed as a morphometric measure for diagnosis and patient outcome prediction. However, segmenting CSF in non-contrast CT images is complicated by low soft tissue contrast and image noise. In this paper we propose a state-of-the-art method using a multi-scale three-dimensional (3D) fully convolutional neural network (CNN) to automatically segment all CSF within the cranial cavity. The method is trained on a small dataset comprised of four manually annotated cerebral CT images. Quantitative evaluation of a separate test dataset of four images shows a mean Dice similarity coefficient of 0.87 +/- 0.01 and mean absolute volume difference of 4.77 +/- 2.70 %. The average prediction time was 68 seconds. Our method allows for fast and fully automated 3D segmentation of cerebral CSF in non-contrast CT, and shows promising results despite a limited amount of training data.
Wang, Shui-Hua; Phillips, Preetha; Sui, Yuxiu; Liu, Bin; Yang, Ming; Cheng, Hong
2018-03-26
Alzheimer's disease (AD) is a progressive brain disease. The goal of this study is to provide a new computer-vision-based technique to detect it in an efficient way. Brain-imaging data of 98 AD patients and 98 healthy controls was collected and expanded using a data augmentation method. Then, a convolutional neural network (CNN), the most successful tool in deep learning, was used. An 8-layer CNN was created with an optimal structure obtained by experience. Three activation functions (AFs) were tested: sigmoid, rectified linear unit (ReLU), and leaky ReLU. Three pooling functions were also tested: average pooling, max pooling, and stochastic pooling. The numerical experiments demonstrated that leaky ReLU and max pooling gave the best performance, achieving a sensitivity of 97.96%, a specificity of 97.35%, and an accuracy of 97.65%. In addition, the proposed approach was compared with eight state-of-the-art approaches. The method increased the classification accuracy by approximately 5% compared to state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Meijs, Midas; Manniesing, Rashindra
2018-02-01
Segmentation of the arteries and veins of the cerebral vasculature is important for improved visualization and for the detection of vascular-related pathologies, including arteriovenous malformations. We propose a 3D fully convolutional neural network (CNN) using a time-to-signal image as input, with the distance to the center of gravity of the brain integrated as a spatial feature in the final layers of the CNN. The method was trained and validated on 6, and tested on 4, 4D CT patient imaging datasets. The reference standard was acquired through manual annotation by an experienced observer. Quantitative evaluation showed a mean Dice similarity coefficient of 0.94 +/- 0.03 and 0.97 +/- 0.01, and a mean absolute volume difference of 4.36 +/- 5.47 % and 1.79 +/- 2.26 % for artery and vein, respectively, with an overall accuracy of 0.96 +/- 0.02. The average calculation time per volume on the test set was approximately one minute. Our method shows promising results and enables fast and accurate segmentation of arteries and veins in full 4D CT imaging data.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...
Code of Federal Regulations, 2010 CFR
2010-07-01
...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...
NASA Astrophysics Data System (ADS)
Marcum, Richard A.; Davis, Curt H.; Scott, Grant J.; Nivin, Tyler W.
2017-10-01
We evaluated how deep convolutional neural networks (DCNN) could assist in the labor-intensive process of human visual searches for objects of interest in high-resolution imagery over large areas of the Earth's surface. Various DCNN were trained and tested using fewer than 100 positive training examples (China only) from a worldwide surface-to-air missile (SAM) site dataset. A ResNet-101 DCNN achieved a 98.2% average accuracy for the China SAM site data. The ResNet-101 DCNN was used to process ~19.6 M image chips over a large study area in southeastern China. DCNN chip detections (~9300) were postprocessed with a spatial clustering algorithm to produce a ranked list of ~2100 candidate SAM site locations. The combination of DCNN processing and spatial clustering effectively reduced the search area by ~660X (0.15% of the DCNN-processed land area). An efficient web interface was used to facilitate a rapid serial human review of the candidate SAM sites in the China study area. Four novice imagery analysts with no prior imagery analysis experience were able to complete a DCNN-assisted SAM site search in an average time of ~42 min. This search was ~81X faster than a traditional visual search over an equivalent land area of ~88,640 km2 while achieving nearly identical statistical accuracy (~90% F1).
Wang, Shuo; Zhou, Mu; Liu, Zaiyi; Liu, Zhenyu; Gu, Dongsheng; Zang, Yali; Dong, Di; Gevaert, Olivier; Tian, Jie
2017-08-01
Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make robust nodule segmentation difficult. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Network (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighbor voxels can vary according to their spatial locations. We describe this phenomenon by proposing a novel central pooling layer retaining much information on the voxel patch center, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling to facilitate the model training, where training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset including 893 nodules and an independent dataset with 74 nodules from Guangdong General Hospital (GDGH). We showed that CF-CNN achieved superior segmentation performance with average Dice scores of 82.15% and 80.02% for the two datasets, respectively. Moreover, we compared our results with the inter-radiologist consistency on the LIDC dataset, showing a difference in average Dice score of only 1.98%. Copyright © 2017. Published by Elsevier B.V.
Spotting L3 slice in CT scans using deep convolutional network and transfer learning.
Belharbi, Soufiane; Chatelain, Clément; Hérault, Romain; Adam, Sébastien; Thureau, Sébastien; Chastan, Mathieu; Modzelewski, Romain
2017-08-01
In this article, we present a complete automated system for spotting a particular slice in a complete 3D Computed Tomography exam (CT scan). Our approach does not require any assumptions about which part of the patient's body is covered by the scan. It relies on an original machine learning regression approach. Our models are learned using the transfer learning trick by exploiting deep architectures that have been pre-trained on the ImageNet database, and therefore require very little annotation for training. The whole pipeline consists of three steps: i) conversion of the CT scans into Maximum Intensity Projection (MIP) images, ii) prediction from a Convolutional Neural Network (CNN) applied in a sliding-window fashion over the MIP image, and iii) robust analysis of the prediction sequence to predict the height of the desired slice within the whole CT scan. Our approach is applied to the detection of the third lumbar vertebra (L3) slice, which has been found to be representative of whole-body composition. Our system is evaluated on a database collected in our clinical center, containing 642 CT scans from different patients. We obtained an average localization error of 1.91±2.69 slices (less than 5 mm) in an average time of less than 2.5 s per CT scan, allowing integration of the proposed system into daily clinical routines. Copyright © 2017 Elsevier Ltd. All rights reserved.
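Step (i), the MIP conversion, reduces to a max-projection along an axis. A minimal sketch; the axis conventions are our assumption, and the random values stand in for real Hounsfield data:

```python
# Hedged sketch: frontal and sagittal Maximum Intensity Projections.
import numpy as np

def mip_images(ct_volume_hu):
    """ct_volume_hu: (slices_z, rows_y, cols_x) in Hounsfield units."""
    frontal = ct_volume_hu.max(axis=1)   # project over y: one row per slice
    sagittal = ct_volume_hu.max(axis=2)  # project over x
    return frontal, sagittal

vol = np.random.randint(-1000, 1500, size=(120, 256, 256)).astype(np.int16)
frontal, sagittal = mip_images(vol)
print(frontal.shape, sagittal.shape)  # (120, 256) (120, 256)
```

Because each MIP row corresponds to one axial slice, a sliding-window CNN over the MIP directly yields a per-slice prediction sequence for step (iii).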
A spectral nudging method for the ACCESS1.3 atmospheric model
NASA Astrophysics Data System (ADS)
Uhe, P.; Thatcher, M.
2015-06-01
A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows for flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, reducing the time taken by the nudging scheme by a factor of 10-30 compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
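The 10-30x saving comes from separability: when a 2D filter factorizes into two 1D kernels, two cheap 1D passes reproduce the expensive 2D convolution. A minimal sketch follows; the kernel and grid are illustrative, and for the scheme's actual nudging filter the factorization is only an approximation, whereas the equality below is exact only for truly separable kernels.

```python
# For a filter equal to the outer product of two 1D kernels, two 1D
# convolutions give the same result as the full 2D convolution at roughly
# O(n*k) instead of O(n*k^2) cost per output row.
import numpy as np
from scipy.ndimage import convolve, convolve1d

field = np.random.rand(180, 360)             # toy atmospheric field
k = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
k /= k.sum()                                  # 1D smoothing kernel

two_d = convolve(field, np.outer(k, k), mode='wrap')
one_d = convolve1d(convolve1d(field, k, axis=0, mode='wrap'),
                   k, axis=1, mode='wrap')

assert np.allclose(two_d, one_d)              # identical result, far cheaper
```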
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Zheng, Bin; Huang, Xia; Qian, Wei
2017-03-01
Deep learning is a promising method in the medical image analysis field, but how to efficiently prepare the input images for deep learning algorithms remains a challenge. In this paper, we introduce a novel artificial multichannel region of interest (ROI) generation procedure for convolutional neural networks (CNNs). From the LIDC database, we collected 54,880 benign nodule samples and 59,848 malignant nodule samples based on the radiologists' annotations. The proposed CNN consists of three pairs of convolutional layers and two fully connected layers. For each original ROI, two new ROIs were generated: one contains the segmented nodule, highlighting the nodule shape, and the other contains the gradient of the original ROI, highlighting the textures. By combining the three channel images into a pseudo-color ROI, the CNN was trained and tested on the new multichannel ROIs (multichannel ROI II). For comparison, we generated another type of multichannel image by replacing the gradient image channel with an ROI containing a whitened background region (multichannel ROI I). Under 5-fold cross-validation, the CNN using multichannel ROI II achieved an ROI-based area under the curve (AUC) of 0.8823 ± 0.0177, compared to an AUC of 0.8484 ± 0.0204 for the original ROI. By averaging the ROI scores from one nodule, the lesion-based AUC using the multichannel ROI was 0.8793 ± 0.0210. Comparing the convolved feature maps from CNNs using different types of ROIs shows that multichannel ROI II contains more accurate nodule shapes and surrounding textures.
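The multichannel ROI construction is straightforward to express in code. The sketch below assumes a segmentation mask is already available and uses a gradient-magnitude image for the texture channel; the normalization and function names are illustrative choices, not the paper's exact recipe.

```python
# Illustrative construction of "multichannel ROI II": stack the original ROI,
# the segmented-nodule ROI (shape channel), and a gradient-magnitude ROI
# (texture channel) into one pseudo-color image for the CNN.
import numpy as np

def build_multichannel_roi(roi, nodule_mask):
    """roi: (H, W) grayscale patch; nodule_mask: boolean (H, W) array."""
    seg = np.where(nodule_mask, roi, 0.0)          # channel 2: nodule shape
    gy, gx = np.gradient(roi.astype(float))
    grad = np.hypot(gx, gy)                        # channel 3: edges/texture
    chans = [roi.astype(float), seg, grad]
    chans = [(c - c.min()) / (c.max() - c.min() + 1e-8) for c in chans]
    return np.stack(chans, axis=-1)                # (H, W, 3) pseudo-color ROI
```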
Topology reduction in deep convolutional feature extraction networks
NASA Astrophysics Data System (ADS)
Wiatowski, Thomas; Grohs, Philipp; Bölcskei, Helmut
2017-08-01
Deep convolutional neural networks (CNNs) used in practice employ potentially hundreds of layers and tens of thousands of nodes. Such network sizes entail significant computational complexity due to the large number of convolutions that need to be carried out; in addition, a large number of parameters needs to be learned and stored. Very deep and wide CNNs may therefore not be well suited to applications operating under severe resource constraints, as is the case, e.g., in low-power embedded and mobile platforms. This paper aims at understanding the impact of CNN topology, specifically depth and width, on the network's feature extraction capabilities. We address this question for the class of scattering networks that employ either Weyl-Heisenberg filters or wavelets, the modulus non-linearity, and no pooling. The exponential feature-map energy decay results of Wiatowski et al., 2017, are generalized to O(a^(-N)), where an arbitrary decay factor a > 1 can be realized through suitable choice of the Weyl-Heisenberg prototype function or the mother wavelet. We then show how networks of fixed (possibly small) depth N can be designed to guarantee that ((1 - ɛ) · 100)% of the input signal's energy is contained in the feature vector. Based on the notion of operationally significant nodes, we characterize, partly rigorously and partly heuristically, the topology-reducing effects of (effectively) band-limited input signals, band-limited filters, and feature map symmetries. Finally, for networks based on Weyl-Heisenberg filters, we determine the prototype function bandwidth that minimizes, for fixed network depth N, the average number of operationally significant nodes per layer.
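A short back-of-envelope consequence of the stated decay, as a hedged sketch: the constant C below is an assumption standing in for the (unstated) prefactor of the O(a^(-N)) bound.

```latex
% Assumed bound: the energy remaining beyond depth N is at most C a^{-N}, C > 0.
% Requiring that at most an \varepsilon fraction of the input energy is lost gives
\[
  C\,a^{-N} \le \varepsilon
  \quad\Longleftrightarrow\quad
  N \ge \log_a\!\left(\frac{C}{\varepsilon}\right),
\]
% so the depth needed to retain a (1-\varepsilon) fraction of the input
% signal's energy grows only logarithmically in 1/\varepsilon, and increasing
% the decay factor a shrinks the required depth in proportion to 1/\log a.
```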
Mechanical properties of Fe rich Fe-Si alloys: ab initio local bulk-modulus viewpoint
NASA Astrophysics Data System (ADS)
Bhattacharya, Somesh Kr; Kohyama, Masanori; Tanaka, Shingo; Shiihara, Yoshinori; Saengdeejing, Arkapol; Chen, Ying; Mohri, Tetsuo
2017-11-01
Fe-rich Fe-Si alloys show peculiar bulk-modulus changes depending on the Si concentration in the range of 0-15 at.% Si. In order to clarify the origin of this phenomenon, we have performed density-functional theory calculations of supercells of Fe-Si alloy models with various Si concentrations. We have applied our recent techniques of ab initio local energy and local stress, by which we can obtain a local bulk modulus of each atom or atomic group as a local constituent of the cell-averaged bulk modulus. A2-phase alloy models are constructed by introducing Si substitution into bcc Fe as uniformly as possible, so as to prevent Si atoms from neighboring one another, although Si concentrations above 6.25 at.% Si lead to contacts between SiFe8 cubic clusters via shared corner Fe atoms. For 12.5 at.% Si, in addition to an A2 model, we deal with partial D03 models containing local D03-like layers consisting of edge-shared SiFe8 cubic clusters. For the cell-averaged bulk modulus, we have successfully reproduced the Si-concentration dependence as a monotonic decrease up to 11.11 at.% Si and a recovery at 12.5 at.% Si. The analysis of the local bulk moduli of the SiFe8 cubic clusters and Fe regions is effective for understanding the variations of the cell-averaged bulk modulus. The local bulk moduli of the Fe regions become lower with increasing Si concentration, due to the suppression of bulk-like d-d bonding states in narrow Fe regions. For higher Si concentrations up to 11.11 at.% Si, corner-shared contacts or 1D chains of SiFe8 clusters lead to a remarkable reduction of the local bulk moduli of the clusters. At 12.5 at.% Si, on the other hand, two- or three-dimensional arrangements of corner- or edge-shared SiFe8 cubic clusters show greatly enhanced local bulk moduli, due to a quite different bonding nature with much stronger p-d hybridization. The relation among the local bulk moduli, local electronic and magnetic structures, and local configurations, such as the connectivity of SiFe8 clusters and the Fe-region sizes, has been analyzed. The ab initio local stress has opened the way to obtaining accurate local elastic properties reflecting local valence-electron behaviors.
Cross-Layer Design for Robust and Scalable Video Transmission in Dynamic Wireless Environment
2011-02-01
code rate convolutional codes or prioritized Rate-Compatible Punctured ... "New rate-compatible punctured convolutional codes for Viterbi decoding," IEEE Trans. Communications, Volume 42, Issue 12, pp. 3073-3079, Dec... Quality of service; RCPC: Rate-compatible and punctured convolutional codes; SNR: Signal to noise
A Video Transmission System for Severely Degraded Channels
2006-07-01
rate-compatible punctured convolutional codes (RCPC). By separating the SPIHT bitstream... June 2000. [170] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Transactions on... Farvardin [160] used rate-compatible convolutional codes. They noticed that for some transmission rates, one of their EEP schemes, which may
There is no MacWilliams identity for convolutional codes [transmission gain comparison]
NASA Technical Reports Server (NTRS)
Shearer, J. B.; Mceliece, R. J.
1977-01-01
An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.
Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di
2018-03-06
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction because of the fixed convolutional kernel size in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms the state-of-the-art methods.
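The competitive multi-scale building module can be sketched compactly: parallel convolutions at several kernel sizes whose outputs compete through an element-wise maximum. This is a hedged illustration of the idea, not the authors' exact architecture; the channel counts and scale set are assumptions.

```python
# Sketch (PyTorch) of maxout-style competition among multi-scale filters:
# at each spatial location, the best-matching kernel size wins.
import torch
import torch.nn as nn

class CompetitiveMultiScaleConv(nn.Module):
    def __init__(self, in_ch, out_ch, scales=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in scales
        )

    def forward(self, x):
        # Stack branch outputs and keep, per pixel and channel, the maximum
        # response across the candidate scales.
        return torch.stack([b(x) for b in self.branches], dim=0).max(dim=0).values

y = CompetitiveMultiScaleConv(1, 16)(torch.randn(1, 1, 32, 32))  # -> (1, 16, 32, 32)
```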
Satellite image maps of Pakistan
1997-01-01
Georeferenced Landsat satellite image maps of Pakistan are now being made available for purchase from the U.S. Geological Survey (USGS). The first maps to be released are a series of Multi-Spectral Scanner (MSS) color image maps compiled from Landsat scenes taken before 1979. The Pakistan image maps were originally developed by USGS as an aid for geologic and general terrain mapping in support of the Coal Resource Exploration and Development Program in Pakistan (COALREAP). COALREAP, a cooperative program between the USGS, the United States Agency for International Development, and the Geological Survey of Pakistan, was in effect from 1985 through 1994. The Pakistan MSS image maps (bands 1, 2, and 4) are available as a full-country mosaic of 72 Landsat scenes at a scale of 1:2,000,000, and in 7 regional sheets covering various portions of the entire country at a scale of 1:500,000. The scenes used to compile the maps were selected from imagery available at the EROS Data Center (EDC), Sioux Falls, S. Dak. Where possible, preference was given to cloud-free and snow-free scenes that displayed similar stages of seasonal vegetation development. The data for the MSS scenes were resampled from the original 80-meter resolution to 50-meter picture elements (pixels) and digitally transformed to a geometrically corrected Lambert conformal conic projection. The cubic convolution algorithm was used during rotation and resampling. The 50-meter pixel size allows for such data to be imaged at a scale of 1:250,000 without degradation; for cost and convenience considerations, however, the maps were printed at 1:500,000 scale. The seven regional sheets have been named according to the main province or area covered. The 50-meter data were averaged to 150-meter pixels to generate the country image on a single sheet at 1:2,000,000 scale.
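The cubic convolution resampling mentioned above is conventionally implemented with Keys' piecewise-cubic kernel. A minimal hedged sketch follows; a = -0.5 is the common parameter choice, though the exact value used in the USGS processing is not stated here, and the function names are illustrative.

```python
# Keys' cubic convolution kernel and 1D interpolation at a fractional index.
import numpy as np

def keys_kernel(s, a=-0.5):
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def cubic_interp(samples, x):
    """Interpolate uniformly spaced samples at fractional index x (1 <= x < len-2)."""
    i = int(np.floor(x))
    # Four-point weighted sum over the two neighbors on each side.
    return sum(samples[i + m] * keys_kernel(x - (i + m)) for m in (-1, 0, 1, 2))

row = np.array([10.0, 12.0, 15.0, 13.0, 9.0, 8.0])
print(cubic_interp(row, 2.4))   # value between samples 2 and 3
```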
Container-Grown Longleaf Pine Seedling Quality
Mark J. Hainds; James P. Barnett
2004-01-01
This study examines the comparative hardiness of various classes or grades of container-grown longleaf pine (Pinus palustris Mill.) seedlings. Most container longleaf seedlings are grown in small ribbed containers averaging 5 to 7 cubic inches in volume and 3 to 6 inches in depth. Great variability is often exhibited in typical lots of container-...
Integrated hydrologic model of Pajaro Valley, Santa Cruz and Monterey Counties, California
Hanson, Randall T.; Schmid, Wolfgang; Faunt, Claudia C.; Lear, Jonathan; Lockwood, Brian
2014-01-01
The HS-ASR was simulated for the years 2002–09, and replaced about 1,290 acre-ft of coastal pumpage. This was combined with the simulation of an additional 6,200 acre-ft of deliveries from supplemental wells, recycled water, and city connection deliveries through the CDS that also supplanted some coastal pumpage. Total simulated deliveries were 7,350 acre-ft of the 7,500 acre-ft of reported deliveries for the period 2002–09. The completed CDS should be capable of delivering about 8.8 million cubic meters (7,150 acre-ft) of water per year to coastal farms within the Pajaro Valley, if all the local supply components were fully available for this purpose. This would represent about 15 percent of the 48,300 acre-ft (59.6 million cubic meters) average agricultural pumpage for the period 2005 to 2009. Combined with the potential capture and reuse of some of the return flows and tile-drain flows, this could represent an almost 70 percent reduction of average overdraft for the entire valley and a large part of the coastal pumpage that induces seawater intrusion.
Investigation of LiF, Mg and Ti (TLD-100) Reproducibility
Sadeghi, M.; Sina, S.; Faghihi, R.
2015-01-01
LiF:Mg,Ti cubical TLD chips (known as TLD-100) are widely used for dosimetry purposes. The repeatability of TL dosimetry was investigated by exposing the chips to doses of 81, 162, and 40.5 mGy with 662 keV photons from Cs-137. A group of 40 cubical TLD chips was randomly selected from a batch, and the values of the Element Correction Coefficient (ECC) were obtained 4 times by irradiating them to doses of 81 mGy (twice), 162 mGy, and 40.5 mGy. The results of this study indicate that the average reproducibility of the ECC calculation for the 40 TLDs is 1.5%, while the values for individual chips do not exceed 5%. PMID:26688801
Deep architecture neural network-based real-time image processing for image-guided radiotherapy.
Mori, Shinichiro
2017-08-01
To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. The ground-truth image was generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method to the input image. Network models were trained to keep the quality of the output image, produced from the unprocessed input image, close to that of the ground-truth image. For image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality; however, this did not allow real-time imaging. After adding a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions and rCNNs with >12 convolutions achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
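The rCAE described above pairs a pooling layer with an upsampling layer around a few convolutions and learns a residual correction. The following is a minimal sketch under assumed channel counts, kernel sizes, and depth; it is not the paper's exact model.

```python
# Tiny residual convolutional autoencoder (rCAE-style) for image denoising:
# one pooling/upsampling pair, a few convolutions, residual output.
import torch
import torch.nn as nn

class TinyRCAE(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling layer
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='nearest'),  # upsampling layer
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # residual: the network predicts a correction

out = TinyRCAE()(torch.randn(1, 1, 64, 64))   # output has the input's size
```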
Wright, Gavin; Harrold, Natalie; Bownes, Peter
2018-01-01
Aims: To compare the accuracies of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods: Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results: Both algorithms matched point-dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs 4.0%), with no discernible differences in relative dose distribution accuracy. In our study, convolution-calculated plans yielded D99% values 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions: Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; therefore, its implementation may require a re-evaluation of prescription doses. PMID:29657896
2011-05-01
rate convolutional codes or the prioritized Rate-Compatible Punctured ... Quality of service; RCPC: Rate-compatible and punctured convolutional codes; SNR: Signal to noise ratio; SSIM... Convolutional (RCPC) codes. The RCPC codes achieve UEP by puncturing off different amounts of coded bits of the parent code. The
Convolution Operation of Optical Information via Quantum Storage
NASA Astrophysics Data System (ADS)
Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan
2017-06-01
We propose a novel method to achieve optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of the 4f imaging system, the optical convolution of the two input images can be achieved in the image plane.
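The 4f geometry computes a convolution because each lens performs a Fourier transform and the medium in the confocal plane multiplies the two spectra. A hedged numerical analog of the same identity (the convolution theorem) is sketched below; the array sizes are arbitrary.

```python
# Numerical analog of the 4f convolution: multiplying spectra in the
# "Fourier plane" and transforming back equals circular convolution.
import numpy as np

n = 32
a = np.random.rand(n, n)                 # input image 1
b = np.random.rand(n, n)                 # input image 2

conv_via_fft = np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

# Direct circular convolution for comparison.
direct = np.zeros_like(a)
for i in range(n):
    for j in range(n):
        direct += a[i, j] * np.roll(np.roll(b, i, axis=0), j, axis=1)

assert np.allclose(conv_via_fft, direct)
```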
NASA Astrophysics Data System (ADS)
Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
We address a method for estimating the isometric muscle tension of fingers, as fundamental research toward a neural signal-based finger prosthesis. We utilize needle electromyogram (EMG) signals, which carry approximately the same information as peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array detected from the needle EMG signals; it estimates the probability density of spike timing in the muscle. In this convolution, we hypothesize that each motor unit in a muscle fires independently according to the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed a good correlation between the estimated and actual muscle tension, with correlation coefficients >0.9 in 59% and >0.8 in 89% of all trials.
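The two-stage convolution is easy to sketch numerically. All constants below (sampling rate, smoothing width, and the toy twitch model) are illustrative assumptions, not values from the study.

```python
# Stage 1: smooth the detected spike train with a normal density to estimate
# the spike-timing probability density. Stage 2: convolve with an isometric
# twitch impulse response to obtain the summed muscle tension.
import numpy as np

fs = 1000.0                                    # assumed sampling rate (Hz)
spikes = np.zeros(3000)                        # 3 s of detected EMG spikes
spikes[np.random.default_rng(0).integers(0, 3000, 60)] = 1.0

sigma = 0.020 * fs                             # 20 ms normal-density smoother
g = np.exp(-0.5 * (np.arange(-100, 101) / sigma) ** 2)
g /= g.sum()
rate = np.convolve(spikes, g, mode='same')     # stage 1: spike-time density

t = np.arange(0, 0.3, 1 / fs)
twitch = (t / 0.05) * np.exp(1 - t / 0.05)     # stage 2 kernel: toy twitch
tension = np.convolve(rate, twitch, mode='full')[:len(spikes)]
```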
High Performance Implementation of 3D Convolutional Neural Networks on a GPU.
Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie
2017-01-01
Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
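The 1D identity underlying the WMFA can be shown in a few lines. Winograd's F(2,3) computes two outputs of a 3-tap filter with 4 multiplications instead of 6; the nested 2D/3D versions used for CNN layers are built from this same transform. The sketch below is a standard textbook form, not the authors' GPU implementation.

```python
# Winograd F(2,3): two outputs of a 3-tap correlation with 4 multiplies.
import numpy as np

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 outputs."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0]); g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
assert np.allclose(winograd_f23(d, g), direct)
```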
Convoluted nozzle design for the RL10 derivative 2B engine
NASA Technical Reports Server (NTRS)
1985-01-01
The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications, the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump-cooled metal nozzle and radiation-cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.
Sim, K S; Teh, V; Tey, Y C; Kho, T K
2016-11-01
This paper introduces a new technique to improve scanning electron microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with a convolution operator. Using this new proposed technique, the modified MPHE performs better than the original MPHE. In addition, the sub-blocking method incorporates a convolution operator that helps remove the blocking effect from SEM images. Hence, by using the convolution operator, the blocking effect is effectively removed by properly distributing suitable pixel values over the whole image. Overall, SUB-B-MPHE with convolution outperforms the other methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc.
A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution
NASA Astrophysics Data System (ADS)
Zuo, B.; Hu, X.; Li, H.
2011-12-01
A model-enhancement technique is proposed to sharpen the edges and details of geophysical inversion models without introducing any additional information. First, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. A method approximating the PSF (point spread function) by convolution with the inversion MRM (model resolution matrix) is designed to demonstrate the correctness of the deconvolution model-enhancement method. Then, a total-variation regularized blind-deconvolution enhancement algorithm for geophysical inversion models is proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution, and Alumbaugh et al. proposed that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both 1D linear and 2D magnetotelluric inversion examples are used to analyze the validity of the theory and the algorithm. To test the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered; the convolution-approximation error ratio is only 0.15%. A 2D synthetic model-enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhanced result is closer to the actual model than the original inversion model according to the numerical statistical analysis. Moreover, artifacts in the inversion model are suppressed. The overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1, which illustrates that more information and finer structure of the actual model are recovered by the proposed enhancement algorithm. Using the proposed enhancement method can help us gain clearer insight into inversion results and make better-informed decisions.
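The core idea, treating the PSF as a known low-pass blur and deconvolving the inversion model, can be sketched with the simplest regularized variant. The paper uses total-variation blind deconvolution; the Wiener-style filter below is a hedged stand-in, and the assumption that the PSF is centered and given is an illustration.

```python
# Hedged sketch: sharpen an inversion model by regularized (Wiener-style)
# deconvolution with an assumed-known, centered PSF of the same shape.
import numpy as np

def wiener_deconvolve(model, psf, eps=1e-2):
    """eps is the regularization; larger values suppress noise amplification."""
    H = np.fft.fft2(np.fft.ifftshift(psf))     # PSF spectrum (psf centered)
    M = np.fft.fft2(model)
    enhanced = np.conj(H) * M / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(enhanced))
```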
NASA Astrophysics Data System (ADS)
Lewtas, Joellen; Goto, Sumio; Williams, Katherine; Chuang, Jane C.; Petersen, Bruce A.; Wilson, Nancy K.
The mutagenicity of indoor air particulate matter was measured in a pilot field study of homes in Columbus, Ohio, during the winter of 1984. The study was conducted in eight all-natural-gas homes and two all-electric homes. Particulate matter and semi-volatile organic compounds were collected indoors using a medium-volume sampler. A micro-forward mutation bioassay employing Salmonella typhimurium strain TM677 was used to quantify the mutagenicity in solvent extracts of microgram quantities of indoor air particles. The mutagenicity was quantified in terms of both mutation frequency per mg of organic matter extracted and per cubic meter of air sampled. The combustion-source variables explored in this study included woodburning in fireplaces and cigarette smoking. Homes in which cigarette smoking occurred had the highest concentrations of mutagenicity per cubic meter of air. The average indoor air mutagenicity per cubic meter was highly correlated with the number of cigarettes smoked. When the separate sampling periods in each room were compared, the mutagenicity in the kitchen samples was the most highly correlated with the number of cigarettes smoked.
Percolation of disordered jammed sphere packings
NASA Astrophysics Data System (ADS)
Ziff, Robert M.; Torquato, Salvatore
2017-02-01
We determine the site and bond percolation thresholds for a system of disordered jammed sphere packings in the maximally random jammed state, generated by the Torquato-Jiao algorithm. For the site threshold, which gives the fraction of conducting versus non-conducting spheres necessary for percolation, we find p_c = 0.3116(3), consistent with the 1979 value of Powell, 0.310(5), and identical within errors to the threshold for the simple-cubic lattice, 0.311608, which shares the same average coordination number of 6. In terms of the volume fraction φ, the threshold corresponds to a critical value φ_c = 0.199. For the bond threshold, which apparently was not measured before, we find p_c = 0.2424(3). To find these thresholds, we considered two shape-dependent universal ratios involving the size of the largest cluster, fluctuations in that size, and the second moment of the size distribution; we confirmed the ratios' universality by also studying the simple-cubic lattice with a similar cubic boundary. The results are applicable to many problems including conductivity in random mixtures, glass formation, and drug loading in pharmaceutical tablets.
Scalable Video Transmission Over Multi-Rate Multiple Access Channels
2007-06-01
“Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE... source encoded using the MPEG-4 video codec. The source-encoded bitstream is then channel encoded with Rate-Compatible Punctured Convolutional (RCPC... Clark, and J. M. Geist, “Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding,” IEEE Transactions on
Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization
2009-01-01
Rate Compatible Punctured Convolutional (RCPC) codes for channel... vol. 44, pp. 2943–2959, November 1998. [22] J. Hagenauer, “Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE... coding rate for H.264/AVC video compression is determined. At the data link layer, the Rate-Compatible Punctured Convolutional (RCPC) channel coding
The general theory of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Stanley, R. P.
1993-01-01
This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
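A concrete instance of the objects the theory describes may help: a rate-1/2 feedforward convolutional encoder with the classic (7, 5) octal generators, i.e. g1 = 1 + D + D^2 and g2 = 1 + D^2. This is a standard textbook encoder, sketched here for illustration; it is not tied to the article's new results.

```python
# Rate-1/2 convolutional encoder with constraint length 3 and generators
# (7, 5) in octal: each input bit produces two coded bits.
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)   # k-bit shift register
        out.append(bin(state & g1).count('1') % 2)    # parity vs generator 1
        out.append(bin(state & g2).count('1') % 2)    # parity vs generator 2
    return out

print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
```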
Eu3+-doped (Y0.5La0.5)2O3: new nanophosphor with the bixbyite cubic structure
NASA Astrophysics Data System (ADS)
Đorđević, Vesna; Nikolić, Marko G.; Bartova, Barbora; Krsmanović, Radenka M.; Antić, Željka; Dramićanin, Miroslav D.
2013-01-01
A new red sesquioxide phosphor, Eu3+-doped (Y0.5La0.5)2O3, was synthesized in the form of nanocrystalline powder with excellent structural ordering in the cubic bixbyite type, with nanoparticle sizes ranging between 10 and 20 nm. Photoluminescence measurements show strong, Eu3+-characteristic red emission (CIE color coordinates x = 0.66 and y = 0.34) with an average 5D0 emission lifetime of about 1.3 ms. The maximum splitting of the 7F1 manifold of the Eu3+ ion emission is directly proportional to the crystal-field strength parameter, and the experimental results show perfect agreement with theoretical values for pure cubic sesquioxides. This could be used as an indicator of complete dissolution of Y2O3 and La2O3, showing that (Y0.5La0.5)2O3:Eu3+ behaves as a new bixbyite-structure oxide, M2O3, where M acts as an ion having the average ionic radius of the constituent Y3+ and La3+. Emission properties of this new phosphor were documented with detailed assignments of Eu3+ energy levels at 10 K and at room temperature. Second-order crystal-field parameters were found to be B_2^0 = -66 cm^-1 and B_2^2 = -665 cm^-1 at 10 K, and B_2^0 = -78 cm^-1 and B_2^2 = -602 cm^-1 at room temperature, while for the crystal-field strength a value of 1495 cm^-1 was calculated at 10 K and 1355 cm^-1 at room temperature.
Zhang, Dongdong; Bai, Fang; Sun, Liping; Wang, Yong; Wang, Jinguo
2017-01-01
The compression properties and electrical conductivity of in-situ 20 vol.% nano-sized TiCx/Cu composites fabricated via combustion synthesis and hot pressing in the Cu-Ti-CNT system were investigated for various particle sizes and morphologies. The cubic-TiCx/Cu composite had higher ultimate compression strength (σUCS), yield strength (σ0.2), and electrical conductivity than the spherical-TiCx/Cu composite. The σUCS, σ0.2, and electrical conductivity of the cubic-TiCx/Cu composite increased by 4.37%, 20.7%, and 17.8% compared with those of the spherical-TiCx/Cu composite (526 MPa, 183 MPa, and 55.6% International Annealed Copper Standard, IACS). The spherical-TiCx/Cu composite with an average particle size of ~94 nm exhibited higher ultimate compression strength, yield strength, and electrical conductivity than the spherical-TiCx/Cu composite with an average size of 46 nm. The σUCS, σ0.2, and electrical conductivity of the spherical-TiCx/Cu composite with an average size of ~94 nm increased by 17.8%, 33.9%, and 62.5% compared with those of the spherical-TiCx/Cu composite (417 MPa, 121 MPa, and 40.3% IACS) with a particle size of 49 nm, respectively. Cubic-shaped TiCx particles with sharp corners and edges led to stress/strain localization, which enhanced the compression strength of the composites. The agglomeration of small spherical-TiCx particles led to a reduction in the compression strength of the composites. PMID:28772859
Rose, D. V.; Madrid, E. A.; Welch, D. R.; ...
2015-03-04
Numerical simulations of a vacuum post-hole convolute driven by magnetically insulated vacuum transmission lines (MITLs) are used to study current losses due to charged-particle emission from the MITL-convolute-system electrodes. This work builds on the results of a previous study [E.A. Madrid et al., Phys. Rev. ST Accel. Beams 16, 120401 (2013)] and adds realistic power pulses, Ohmic heating of anode surfaces, and a model for the formation and evolution of cathode plasmas. The simulations suggest that modestly larger anode-cathode gaps in the MITLs upstream of the convolute result in significantly less current loss. In addition, longer pulse durations lead to somewhat greater current loss due to cathode-plasma expansion. These results can be applied to the design of future MITL-convolute systems for high-current pulsed-power systems.
Classification of urine sediment based on convolution neural network
NASA Astrophysics Data System (ADS)
Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian
2018-04-01
By designing a new convolutional neural network framework, this paper removes the constraints of the original framework, which requires large numbers of training samples of the same size. The input images are shifted and cropped to generate sub-graphs of equal size, and dropout is then applied to the generated sub-graphs, increasing the diversity of the samples and preventing overfitting. Proper subsets of equal size are randomly selected from the sub-graph set, such that no two subsets are identical, and are used as input layers for the convolutional neural network. Through the convolutional layers, pooling, the fully connected layer, and the output layer, we obtain the classification loss rates of the test and training sets. In an experiment classifying red blood cells, white blood cells, and calcium oxalate crystals in urine sediment, a classification accuracy of 97% or more was achieved.
Chemical Shift Encoded Water–Fat Separation Using Parallel Imaging and Compressed Sensing
Sharma, Samir D.; Hu, Houchun H.; Nayak, Krishna S.
2013-01-01
Chemical shift encoded techniques have received considerable attention recently because they can reliably separate water and fat in the presence of off-resonance. The insensitivity to off-resonance requires that data be acquired at multiple echo times, which increases the scan time as compared to a single echo acquisition. The increased scan time often requires that a compromise be made between the spatial resolution, the volume coverage, and the tolerance to artifacts from subject motion. This work describes a combined parallel imaging and compressed sensing approach for accelerated water–fat separation. In addition, the use of multiscale cubic B-splines for B0 field map estimation is introduced. The water and fat images and the B0 field map are estimated via an alternating minimization. Coil sensitivity information is derived from a calculated k-space convolution kernel and l1-regularization is imposed on the coil-combined water and fat image estimates. Uniform water–fat separation is demonstrated from retrospectively undersampled data in the liver, brachial plexus, ankle, and knee as well as from a prospectively undersampled acquisition of the knee at 8.6x acceleration. PMID:22505285
Patrick D. Miles; David Heinzen; Manfred E. Mielke; Christopher W. Woodall; Brett J. Butler; Ron J. Piva; Dacia M. Meneguzzo; Charles H. Perry; Dale D. Gormanson; Charles J. Barnett
2011-01-01
The second full annual inventory of Minnesota's forests reports 17 million acres of forest land with an average volume of more than 1,000 cubic feet per acre. Forest land is dominated by the aspen forest type, which occupies nearly 30 percent of the total forest land area. Twenty-eight percent of forest land consists of sawtimber, 35 percent poletimber, 35 percent...
Thomas A. Albright; William H. McWilliams; Richard H. Widmann; Brett J. Butler; Susan J. Crocker; Cassandra M. Kurtz; Shawn Lehman; Tonya W. Lister; Patrick D. Miles; Randall S. Morin; Rachel Riemann; James E. Smith
2017-01-01
This report summarizes the third cycle of annualized inventory of Pennsylvania with field data collected from 2009 through 2014. Pennsylvania has 16.9 million acres of forest land dominated by sawtimber stands of oak/hickory and maple/beech/birch forest-type groups. Volumes continue to increase as the forests age with an average of 2,244 cubic feet per acre on...
Randall S. Morin; Chuck J. Barnett; Gary J. Brand; Brett J. Butler; Robert De Geus; Mark H. Hansen; Mark A. Hatfield; Cassandra M. Kurtz; W. Keith Moser; Charles H. Perry; Ron Piva; Rachel Riemann; Richard Widmann; Sandy Wilmot; Chris W. Woodall
2011-01-01
The first full annual inventory of Vermont's forests reports more than 4.5 million acres of forest land with an average volume of more than 2,200 cubic feet per acre. Forest land is dominated by the maple/beech/birch forest-type group, which occupies 70 percent of total forest land area. Sixty-three percent of forest land consists of large-diameter trees, 27...
Randall S. Morin; Chuck J. Barnett; Gary J. Brand; Brett J. Butler; Grant M. Domke; Susan Francher; Mark H. Hansen; Mark A. Hatfield; Cassandra M. Kurtz; W. Keith Moser; Charles H. Perry; Ron Piva; Rachel Riemann; Chris W. Woodall
2011-01-01
The first full annual inventory of New Hampshire's forests reports nearly 4.8 million acres of forest land with an average volume of nearly 2,200 cubic feet per acre. Forest land is dominated by the maple/beech/birch forest-type group, which occupies 53 percent of total forest land area. Fifty-seven percent of forest land consists of large-diameter trees, 32...
A preview of New Hampshire's forest resource
Joseph E. Barnard; Teresa M. Bowers
1974-01-01
Forest continues to be the dominant land use in New Hampshire. Three inventories of the State between 1948 and 1973 show little change in the total forest area but significant shifts in forest type and stand size. Average volume per acre has increased to over 1,400 cubic feet and 2,785 board feet. Growth continues to exceed removals.
Charles H. Perry; Vern A. Everson; Brett J. Butler; Susan J. Crocker; Sally E. Dahir; Andrea L. Diss-Torrance; Grant M Domke; Dale D. Gormanson; Sarah K. Herrick; Steven S. Hubbard; Terry R. Mace; Patrick D. Miles; Mark D. Nelson; Richard B. Rodeout; Luke T. Saunders; Kirk M. Stueve; Barry T. Wilson; Christopher W. Woodall
2012-01-01
The second full annual inventory of Wisconsin's forests reports more than 16.7 million acres of forest land with an average volume of more than 1,400 cubic feet per acre. Forest land is dominated by the oak/hickory forest-type group, which occupies slightly more than one quarter of the total forest land area; the maple/beech/birch forest-type group occupies an...
Patrick D. Miles; Curtis L. VanderSchaaf; Charles Barnett; Brett J. Butler; Susan J. Crocker; Dale D. Gormanson; Cassandra M. Kurtz; Tonya W. Lister; William H. McWilliams; Randall S. Morin; Mark D. Nelson; Charles H. (Hobie) Perry; Rachel I. Riemann; James E. Smith; Brian F. Walters; Jim Westfall; Christopher W. Woodall
2016-01-01
The third full annual inventory of Minnesota forests reports 17.4 million acres of forest land with an average live tree volume of 1,096 cubic feet per acre. Forest land is dominated by the aspen forest type, which occupies 29 percent of the total forest land area. Twenty-eight percent of forest land consists of sawtimber, 35 percent poletimber, 36 percent sapling/...
Dacia M Meneguzzo; Susan J. Crocker; Mark D. Nelson; Charles J. Barnett; Brett J. Butler; Grant M. Domke; Mark H. Hansen; Mark A. Hatfield; Greg C. Liknes; Andrew J. Lister; Tonya W. Lister; Ronald J. Piva; Barry T. (Ty) Wilson; Christopher W. Woodall
2012-01-01
The second full annual inventory of Nebraska's forests reports more than 1.5 million acres of forest land and 39 tree species. Forest land is dominated by the elm/ash/cottonwood and oak/hickory forest types, which occupy nearly half of the total forest land area. The volume of growing stock on timberland currently totals 1.1 billion cubic feet. The average annual...
Mark D. Nelson; Matt Brewer; Christopher W. Woodall; Charles H. Perry; Grant M. Domke; Ronald J. Piva; Cassandra M. Kurtz; W. Keith Moser; Tonya W. Lister; Brett J. Butler; Dacia M. Meneguzzo; Patrick D. Miles; Charles J. Barnett; Dale Gormanson
2011-01-01
The second full annual inventory of Iowa's forests (2004-2008) reports more than 3 million acres of forest land, almost all of which is timberland (98 percent), with an average volume of more than 1,000 cubic feet of growing stock per acre. American elm and eastern hophornbeam are the most numerous tree species, but silver maple and bur oak predominate in terms of...
Gus Raeker; W. Keith Moser; Brett J. Butler; John Fleming; Dale D. Gormanson; Mark H. Hansen; Cassandra M. Kurtz; Patrick D. Miles; Mike Morris; Thomas B. Treiman
2011-01-01
The second full annual inventory of Missouri's forests (2004-2008) reports more than 15 million acres of forest land, almost all of which is timberland (98 percent), with an average volume of more than 1,117 cubic feet of growing stock per acre. White oak and black oak are the most abundant in terms of live tree volume. Eighty-three percent of the State's...
Susan J. Crocker; Mark D. Nelson; Charles J. Barnett; Brett J. Butler; Grant M. Domke; Mark H. Hansen; Mark A. Hatfield; Tonya W. Lister; Dacia M. Meneguzzo; Ronald J. Piva; Barry T. Wilson; Christopher W. Woodall
2013-01-01
The second full annual inventory of Illinois' forests, completed in 2010, reports more than 4.8 million acres of forest land and 97 tree species. Forest land is dominated by oak/hickory and elm/ash/cottonwood forest-type groups, which occupy 93 percent of total forest land area. The volume of growing stock on timberland totals 7.2 billion cubic feet. The average...
77 FR 66149 - Significant New Use Rules on Certain Chemical Substances
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-02
... ecological structure-activity relationship (EcoSAR) analysis of test data on analogous esters, EPA predicts... milligram/cubic meter (mg/m^3) as an 8-hour time-weighted average. In addition, based on EcoSAR analysis of... the PMN substance via the inhalation route. In addition, based on EcoSAR analysis of test data on...
Howle, James F.; Alpers, Charles N.; Bawden, Gerald W.; Bond, Sandra
2016-07-28
High-resolution ground-based light detection and ranging (lidar), also known as terrestrial laser scanning, was used to quantify the volume of mercury-contaminated sediment eroded from a stream cutbank at Stocking Flat along Deer Creek in the Sierra Nevada foothills, about 3 kilometers west of Nevada City, California. Terrestrial laser scanning was used to collect sub-centimeter, three-dimensional images of the complex cutbank surface, which could not be mapped non-destructively or in sufficient detail with traditional surveying techniques.The stream cutbank, which is approximately 50 meters long and 8 meters high, was surveyed on four occasions: December 1, 2010; January 20, 2011; May 12, 2011; and February 4, 2013. Volumetric changes were determined between the sequential, three-dimensional lidar surveys. Volume was calculated by two methods, and the average value is reported. Between the first and second surveys (December 1, 2010, to January 20, 2011), a volume of 143 plus or minus 15 cubic meters of sediment was eroded from the cutbank and mobilized by Deer Creek. Between the second and third surveys (January 20, 2011, to May 12, 2011), a volume of 207 plus or minus 24 cubic meters of sediment was eroded from the cutbank and mobilized by the stream. Total volumetric change during the winter and spring of 2010–11 was 350 plus or minus 28 cubic meters. Between the third and fourth surveys (May 12, 2011, to February 4, 2013), the differencing of the three-dimensional lidar data indicated that a volume of 18 plus or minus 10 cubic meters of sediment was eroded from the cutbank. The total volume of sediment eroded from the cutbank between the first and fourth surveys was 368 plus or minus 30 cubic meters.
Physical and hydrologic characteristics of Matlacha Pass, southwestern Florida
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, R.L.; Russell, G.M.
1994-03-01
Matlacha Pass is part of the connected inshore waters of the Charlotte Harbor estuary in southwestern Florida. Bathymetry indicates that depths in the main channel of the pass range from 4 to 14 feet below sea level. The channel averages about 8 feet deep in the northern part of the pass and about 5 feet deep in the southern part. Additionally, depths average about 4 feet in a wide section of the middle of the pass and about 2 feet along the mangrove swamps near the shoreline. Tidal flow within Matlacha Pass varies depending on aquatic vegetation densities, oyster beds, and tidal flats. Surface-water runoff occurs primarily during the wet season (May to September), with most of the flow entering the Matlacha Pass through two openings in the spreader canal system near the city of Matlacha. Freshwater flow into the pass from the north Cape Coral spreader canal system averaged 113 cubic feet per second from October 1987 to September 1992. Freshwater inflow from the Aries Canal of the south Cape Coral spreader canal system averaged 14.1 cubic feet per second from October 1989 to September 1992. Specific conductance throughout Matlacha Pass ranged from less than 1,000 to 57,000 microsiemens per centimeter. Specific conductance, collected from a continuous monitoring data logger in the middle of the pass from February to September 1992, averaged 36,000 microsiemens per centimeter at 2 feet below the water surface and 40,000 microsiemens per centimeter at 2 feet above the bottom. During both the wet and dry seasons, specific conductance indicated that the primary mixing of tidal waters and freshwater inflow occurs in the mangrove swamps along the shoreline.
Grannemann, N.G.
1984-01-01
Sands Plain, a 225-square mile area, is near the Marquette iron-mining district in Michigan's Upper Peninsula. Gribben Basin, a settling basin for disposal of waste rock particles from iron-ore concentration, is in the western part. Because Sands Plain is near iron-ore deposits, but not underlain by them, parts of the area are being considered as sites for additional tailings basins. Glacial deposits, as much as 500 feet thick, comprise the principal aquifer. Most ground water flows through the glacial deposits and discharges in a series of nearly parallel tributaries to the Chocolay River, which flows into Lake Superior. Ninety-five percent of the discharge of these streams is ground-water runoff. The aquifer is recharged by precipitation at an average rate of 15 inches per year and by streamflow losses from the upper reaches of Goose Lake Outlet at an average rate of 2 inches per year. Precipitation collected at two sites had mean pH values of 4.0; rates of deposition of sulfate and total dissolved nitrogen were estimated to be 17.4 and 5.8 pounds per acre per year, respectively. Dissolved-solids concentrations in water from streams ranged from 82 to 143 milligrams per liter; sulfate ranged from 4.2 to 10 milligrams per liter. Calcium and bicarbonate were the principal dissolved substances. Highest dissolved-solids concentrations in water from wells in glacial deposits were found in a major buried valley east of Goose Lake Outlet. These concentrations ranged from 14 to 246 milligrams per liter; sulfate concentrations ranged from 0.9 to 53 milligrams per liter. Because of the high ground-water component of streamflow, mean concentrations of total nitrogen and trace metals in surface water do not differ significantly from mean concentrations in ground water. A two-dimensional digital model of ground-water flow was used to simulate water levels and ground-water runoff under steady-state and transient conditions. Predictive simulations with the steady-state model were made to determine the effects of continued operation of Gribben tailings basin and construction and operation of four hypothetical tailings basins. Operation of Gribben Basin has decreased the average rate of ground-water flow to Goose Lake Outlet by 0.9 to 1.6 cubic feet per second but has increased the average rate of ground-water flow to Warner Creek by about 0.2 cubic foot per second. Continued filling of the tailings basin to its design capacity is expected to cause a slight increase in leakage from the basin to Goose Lake Outlet. Four hypothetical tailings basins, comprising a total of 11 square miles, were simulated by successively adding one more basin to the previous basin configuration. Net ground-water flow to streams was reduced by the simulated basins. The magnitude of these reductions depends on engineering decisions about the method of basin construction and a better understanding of the hydraulic properties of the materials used to seal the basin perimeters. The maximum total reduction in ground-water runoff due to construction and operation of 11 square miles of tailings basins is about 18 cubic feet per second compared to flow simulated by a steady-state simulation without tailings basins. If bottom sealing, rather than slurry wall construction, is used for one of the hypothetical basins, the total maximum reduction is 7.5 cubic feet per second. Under some assumed conditions, leakage from the tailings basins may slightly increase ground-water flow to Goose Lake Outlet and Warner Creek.
The maximum probable leakage from all tailings basins is about 7 cubic feet per second; the minimum probable leakage is about 0.7 cubic foot per second.
Linear diffusion-wave channel routing using a discrete Hayami convolution method
Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin
2014-01-01
The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computational demand of evaluating such convolutions by numerical integration, it is often advantageous to use a discrete convolution instead of integrating the continuous functions. This approach greatly reduces...
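The discrete routing step can be sketched directly: the outflow hydrograph is the inflow convolved with a discretized Hayami diffusion-wave kernel. The kernel form below is one common statement of Hayami's impulse response, and the celerity, diffusivity, reach length, time step, and inflow pulse are all illustrative values.

```python
# Discrete diffusion-wave channel routing: outflow = inflow (*) Hayami kernel.
import numpy as np

def hayami_kernel(t, x=2000.0, c=1.0, D=500.0):
    """Hayami impulse response (an inverse-Gaussian density); t in seconds."""
    t = np.maximum(t, 1e-9)                    # guard against division by zero
    return (x / (2.0 * t * np.sqrt(np.pi * D * t))
            * np.exp(-((x - c * t) ** 2) / (4.0 * D * t)))

dt = 60.0                                      # 1-minute time step
t = np.arange(1, 601) * dt
h = hayami_kernel(t) * dt                      # discretized kernel weights
inflow = np.exp(-((t - 3600.0) / 900.0) ** 2)  # toy inflow pulse (m^3/s)
outflow = np.convolve(inflow, h)[:len(t)]      # routed outflow hydrograph
```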
NASA Technical Reports Server (NTRS)
Reichelt, Mark
1993-01-01
In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras
NASA Astrophysics Data System (ADS)
Angel, Eitan
2010-09-01
In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.
Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems.
Oh, Sang-Il; Kang, Hang-Bong
2017-01-22
To understand driving environments effectively, sensor-based intelligent vehicle systems must achieve accurate detection and classification of objects. Object detection is performed to localize objects, whereas object classification recognizes object classes from the detected object regions. For accurate object detection and classification, fusing information from multiple sensors is a key component of the representation and perception processes. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers on 3D point clouds and image data using a convolutional neural network (CNN). The unary classifiers for the two sensors are five-layer CNNs that use more than two pre-trained convolutional layers to consider local to global features as the data representation. To represent data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on the object candidate regions generated by object proposal generation, realizing color flattening and semantic grouping for charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on the KITTI benchmark dataset to detect and classify three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than previous methods. Our proposed method extracted approximately 500 proposals on a 1226 × 370 image, whereas the original selective search method extracted approximately 10^6 × n proposals. We obtained a classification performance of 77.72% mean average precision over all classes at the moderate detection level of the KITTI benchmark dataset.
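A minimal sketch of the decision-level fusion step, assuming the fusion is a convex combination of the two unary classifiers' posteriors (the paper's exact fusion rule may differ); the logits and the weight w_camera are invented:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def decision_level_fusion(logits_camera, logits_lidar, w_camera=0.5):
    """Fuse per-class scores from two independent unary classifiers by a
    weighted average of their posteriors (one simple fusion rule)."""
    fused = (w_camera * softmax(logits_camera)
             + (1.0 - w_camera) * softmax(logits_lidar))
    return fused.argmax(axis=-1), fused

# Three classes: car, pedestrian, cyclist (one candidate region).
cam = np.array([[2.1, 0.3, -0.5]])   # hypothetical CNN logits from image data
lid = np.array([[1.0, 1.8, -0.2]])   # hypothetical CNN logits from point clouds
label, scores = decision_level_fusion(cam, lid)
print(label, scores)
```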
Cruz-Roa, Angel; Gilmore, Hannah; Basavanhally, Ajay; Feldman, Michael; Ganesan, Shridar; Shih, Natalie; Tomaszewski, John; Madabhushi, Anant; González, Fabio
2018-01-01
Precise detection of invasive cancer on whole-slide images (WSI) is a critical first step in the digital pathology tasks of diagnosis and grading. Convolutional neural networks (CNN) are the most popular representation learning method for computer vision tasks and have been successfully applied in digital pathology, including tumor and mitosis detection. However, CNNs are typically only tenable with relatively small image sizes (200 × 200 pixels). Only recently have fully convolutional networks (FCN) been able to deal with larger image sizes (500 × 500 pixels) for semantic segmentation. Hence, the direct application of CNNs to WSI is not computationally feasible because, for a WSI, a CNN would require billions or trillions of parameters. To alleviate this issue, this paper presents a novel method, High-throughput Adaptive Sampling for whole-slide Histopathology Image analysis (HASHI), which involves: i) a new efficient adaptive sampling method based on probability gradient and quasi-Monte Carlo sampling, and ii) a powerful representation learning classifier based on CNNs. We applied HASHI to automated detection of invasive breast cancer on WSI. HASHI was trained and validated using three different data cohorts involving nearly 500 cases and then independently tested on 195 studies from The Cancer Genome Atlas. The results show that (1) the adaptive sampling method is an effective strategy to deal with WSI without compromising prediction accuracy, obtaining results comparable to dense sampling (∼6 million samples in 24 hours) with far fewer samples (∼2,000 samples in 1 minute), and (2) on an independent test dataset, HASHI is effective and robust to data from multiple sites, scanners, and platforms, achieving an average Dice coefficient of 76%.
Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houshmand, Monireh; Hosseini-Khayat, Saied
2011-02-15
Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a ''pearl-necklace'' encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
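The graph computation at the heart of this algorithm, finding the weight of the longest path in a weighted DAG, can be sketched as follows. The toy graph and its weights are invented, and the paper's reduction from gate strings and commutativity relations to graph edges is not reproduced here:

```python
from collections import defaultdict

def longest_path_weight(n_nodes, edges):
    """Weight of the longest path in a DAG given as (u, v, w) edges.

    In the paper, nodes would stand for gate strings of the pearl-necklace
    encoder, edges for noncommutative pairs, and the longest-path weight
    equals the minimal encoder memory; this is only the graph step.
    """
    adj = defaultdict(list)
    indeg = [0] * n_nodes
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    order = [u for u in range(n_nodes) if indeg[u] == 0]  # Kahn's algorithm
    dist = [0] * n_nodes
    for u in order:                    # `order` grows as nodes become ready
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)
            indeg[v] -= 1
            if indeg[v] == 0:
                order.append(v)
    return max(dist)

edges = [(0, 1, 2), (0, 2, 1), (1, 3, 3), (2, 3, 1)]  # toy DAG
print(longest_path_weight(4, edges))  # -> 5
```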
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
Stochastic reaction-diffusion algorithms for macromolecular crowding
NASA Astrophysics Data System (ADS)
Sturrock, Marc
2016-06-01
Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
Continuum modeling of three-dimensional truss-like space structures
NASA Technical Reports Server (NTRS)
Nayfeh, A. H.; Hefzy, M. S.
1978-01-01
A mathematical and computational analysis capability has been developed for calculating the effective mechanical properties of three-dimensional periodic truss-like structures. Two models are studied in detail. The first, called the octetruss model, is a three-dimensional extension of a two-dimensional model, and the second is a cubic model. Symmetry considerations are employed as a first step to show that the specific octetruss model has four independent constants and that the cubic model has two. The actual values of these constants are determined by averaging the contributions of each rod element to the overall structure stiffness. The individual rod member contribution to the overall stiffness is obtained by a three-dimensional coordinate transformation. The analysis shows that the effective three-dimensional elastic properties of both models are relatively close to each other.
Classification of breast cancer cytological specimen using convolutional neural network
NASA Astrophysics Data System (ADS)
Żejmo, Michał; Kowal, Marek; Korbicz, Józef; Monczak, Roman
2017-01-01
The paper presents a deep learning approach for automatic classification of breast tumors based on fine needle cytology. The main aim of the system is to distinguish benign from malignant cases based on microscopic images. The experiment was carried out on cytological samples derived from 50 patients (25 benign cases + 25 malignant cases) diagnosed in the Regional Hospital in Zielona Góra. To classify microscopic images, we used convolutional neural networks (CNN) of two types: GoogLeNet and AlexNet. Due to the very large size of cytological specimen images (on average 200000 × 100000 pixels), they were divided into smaller patches of size 256 × 256 pixels. Breast cancer classification is usually based on morphometric features of nuclei. Therefore, training and validation patches were selected using a Support Vector Machine (SVM) so that a suitable amount of cell material was depicted. Neural classifiers were tuned using a GPU-accelerated implementation of the gradient descent algorithm. Training error was defined as a cross-entropy classification loss. Classification accuracy was defined as the percentage ratio of successfully classified validation patches to the total number of validation patches. The best accuracy rate of 83% was obtained by the GoogLeNet model. We observed that more of the misclassified patches belonged to malignant cases.
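For scale, a short sketch of the patch tiling implied by the abstract: dividing an average 200000 × 100000 pixel specimen image into non-overlapping 256 × 256 patches yields roughly 3 × 10^5 candidate patches before the SVM-based selection step (the non-overlapping tiling is an assumption for this estimate):

```python
def patch_corners(image_h, image_w, size=256):
    """Yield top-left corners of non-overlapping size x size patches."""
    for y in range(0, image_h - size + 1, size):
        for x in range(0, image_w - size + 1, size):
            yield y, x

n = sum(1 for _ in patch_corners(100_000, 200_000))
print(n)   # 304,590 patches for an average-sized cytological image
```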
Classification of volcanic ash particles using a convolutional neural network and probability.
Shoji, Daigo; Noguchi, Rina; Otsuki, Shizuka; Hino, Hideitsu
2018-05-25
Analyses of volcanic ash are typically performed either by qualitatively classifying ash particles by eye or by quantitatively parameterizing their shape and texture. While complex shapes can be classified through qualitative analyses, the results are subjective due to the difficulty of categorizing complex shapes into a single class. Although quantitative analyses are objective, a selection of shape parameters is required. Here, we applied a convolutional neural network (CNN) to the classification of volcanic ash. First, we defined four basal particle shapes (blocky, vesicular, elongated, rounded) generated by different eruption mechanisms (e.g., brittle fragmentation), and then trained the CNN using particles composed of only one basal shape. The CNN could recognize the basal shapes with over 90% accuracy. Using the trained network, we classified ash particles composed of multiple basal shapes based on the output of the network, which can be interpreted as a mixing ratio of the four basal shapes. Clustering of samples by the averaged probabilities and their intensity is consistent with the eruption type. The mixing ratio output by the CNN can be used to quantitatively classify complex shapes in nature without forced categorization and without the need for shape parameters, which may lead to a new taxonomy.
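A small sketch of the interpretation used here: reading the CNN's softmax outputs as mixing ratios of the four basal shapes and averaging them over a sample's particles, as in the paper's clustering step. The per-particle probabilities below are invented:

```python
import numpy as np

# Columns: blocky, vesicular, elongated, rounded (invented probabilities).
probs = np.array([
    [0.70, 0.10, 0.15, 0.05],   # particle 1
    [0.20, 0.55, 0.15, 0.10],   # particle 2
    [0.65, 0.05, 0.20, 0.10],   # particle 3
])
print(probs.mean(axis=0))       # sample-level averaged mixing ratio
```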
Processing of chromatic information in a deep convolutional neural network.
Flachot, Alban; Gegenfurtner, Karl R
2018-04-01
Deep convolutional neural networks are a class of machine-learning algorithms capable of solving non-trivial tasks, such as object recognition, with human-like performance. Little is known about the exact computations that deep neural networks learn, and to what extent these computations are similar to the ones performed by the primate brain. Here, we investigate how color information is processed in the different layers of the AlexNet deep neural network, originally trained on object classification of over 1.2M images of objects in their natural contexts. We found that the color-responsive units in the first layer of AlexNet learned linear features and were broadly tuned to two directions in color space, analogously to what is known of color responsive cells in the primate thalamus. Moreover, these directions are decorrelated and lead to statistically efficient representations, similar to the cardinal directions of the second-stage color mechanisms in primates. We also found, in analogy to the early stages of the primate visual system, that chromatic and achromatic information were segregated in the early layers of the network. Units in the higher layers of AlexNet exhibit on average a lower responsivity for color than units at earlier stages.
A convolutional neural network for intracranial hemorrhage detection in non-contrast CT
NASA Astrophysics Data System (ADS)
Patel, Ajay; Manniesing, Rashindra
2018-02-01
The assessment of the presence of intracranial hemorrhage is a crucial step in the work-up of patients requiring emergency care. Fast and accurate detection of intracranial hemorrhage can aid treating physicians by not only expediting and guiding diagnosis, but also supporting choices for secondary imaging, treatment and intervention. However, the automatic detection of intracranial hemorrhage is complicated by the variation in appearance on non-contrast CT images as a result of differences in etiology and location. We propose a method using a convolutional neural network (CNN) for the automatic detection of intracranial hemorrhage. The method is trained on a dataset comprised of cerebral CT studies for which the presence of hemorrhage has been labeled for each axial slice. A separate test dataset of 20 images is used for quantitative evaluation and shows a sensitivity of 0.87, specificity of 0.97 and accuracy of 0.95. The average processing time for a single three-dimensional (3D) CT volume was 2.7 seconds. The proposed method is capable of fast and automated detection of intracranial hemorrhages in non-contrast CT without being limited to a specific subtype of pathology.
Pappas, E; Maris, T G; Papadakis, A; Zacharopoulou, F; Damilakis, J; Papanikolaou, N; Gourtsoyiannis, N
2006-10-01
The aim of this work is to investigate experimentally the detector size effect on narrow beam profile measurements. Polymer gel and magnetic resonance imaging dosimetry was used for this purpose. Profile measurements (Pm(s)) of a 5 mm diameter 6 MV stereotactic beam were performed using polymer gels. Eight measurements of the profile of this narrow beam were performed using, correspondingly, eight different detector sizes. This was achieved using high spatial resolution (0.25 mm) two-dimensional measurements and eight different signal integration volumes, A × A × slice thickness, simulating detectors of different size. "A" ranged from 0.25 to 7.5 mm, representing the detector size. The gel-derived profiles exhibited increased penumbra width with increasing detector size, for sizes >0.5 mm. By extrapolating the gel-derived profiles to zero detector size, the true profile (Pt) of the studied beam was derived. The same polymer gel data were also used to simulate a small-volume ion chamber profile measurement of the same beam, in terms of volume averaging. The comparison between these results and the actual corresponding small-volume chamber profile measurements performed in this study reveals that the penumbra broadening caused by both volume averaging and electron transport alterations (present in actual ion chamber profile measurements) is much more intense than that resulting from volume-averaging effects alone (present in gel-derived profiles simulating ion chamber profile measurements). Therefore, not only the detector size but also its composition and tissue equivalency proves to be an important factor for correct narrow beam profile measurements. Additionally, the convolution kernels related to each detector size and to the air ion chamber were calculated using the corresponding profile measurements (Pm(s)), the gel-derived true profile (Pt), and convolution theory. The response kernels of any desired detector can be derived, allowing the elimination of the errors associated with narrow beam profile measurements.
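The volume-averaging part of this effect is easy to reproduce numerically: convolving a "true" narrow-beam profile with rectangular kernels of increasing width A broadens the 20-80% penumbra, as seen in the gel-derived profiles. The profile shape and penumbra parameter below are illustrative, not the measured Pt of the paper:

```python
import numpy as np

dx = 0.05                                  # grid spacing, mm
x = np.arange(-10.0, 10.0 + dx, dx)        # lateral position, mm
sigma = 0.4                                # illustrative intrinsic penumbra, mm
# Illustrative "true" 5 mm flat-top profile with smooth edges.
pt = 0.5 * (np.tanh((x + 2.5) / sigma) - np.tanh((x - 2.5) / sigma))

def measured_profile(profile, width_mm):
    """Volume averaging: convolve with a rectangular detector kernel."""
    n = max(1, int(round(width_mm / dx)))
    kernel = np.ones(n) / n
    return np.convolve(profile, kernel, mode="same")

for a in (0.25, 1.0, 3.0, 7.5):            # detector sizes from the study, mm
    pm = measured_profile(pt, a)
    pm /= pm.max()                         # normalize to the central value
    left = pm[: x.size // 2]               # rising (left) edge of the profile
    w = x[np.searchsorted(left, 0.8)] - x[np.searchsorted(left, 0.2)]
    print(f"A = {a:4.2f} mm -> 20-80% penumbra width ~ {w:.2f} mm")
```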
Averaging of elastic constants for polycrystals
Blaschke, Daniel N.
2017-10-13
Many materials of interest are polycrystals, i.e., aggregates of single crystals. Randomly distributed orientations of single crystals lead to macroscopically isotropic properties. In this paper, we briefly review strategies for calculating effective isotropic second and third order elastic constants from the single crystal ones. Our main emphasis is on single crystals of cubic symmetry. Specifically, the averaging of third order elastic constants has not been particularly successful in the past, and discrepancies have often been attributed to texturing of polycrystals as well as to uncertainties in the measurement of elastic constants of both poly and single crystals. While this may well be true, we also point out shortcomings in the theoretical averaging framework.
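For second-order constants of a cubic crystal, the standard Voigt (uniform strain) and Reuss (uniform stress) averages, and their Hill mean, reduce to closed forms; the sketch below evaluates them, with approximate copper constants used purely as sample input. The paper's main concern, third-order averaging, is not covered here:

```python
def cubic_voigt_reuss_hill(c11, c12, c44):
    """Isotropic second-order averages for a cubic single crystal.

    Standard Voigt/Reuss/Hill formulas; units follow the inputs (e.g. GPa).
    """
    k = (c11 + 2.0 * c12) / 3.0                  # bulk modulus (Voigt = Reuss)
    g_voigt = (c11 - c12 + 3.0 * c44) / 5.0      # uniform-strain bound
    g_reuss = (5.0 * c44 * (c11 - c12)
               / (4.0 * c44 + 3.0 * (c11 - c12)))  # uniform-stress bound
    g_hill = 0.5 * (g_voigt + g_reuss)           # Hill average
    return k, g_voigt, g_reuss, g_hill

# Approximate single-crystal constants of copper, GPa (sample input only).
print(cubic_voigt_reuss_hill(168.4, 121.4, 75.4))
```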
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
...punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit... likely to be isolated and be correctable by the convolutional decoder. ... binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data...
Using convolutional decoding to improve time delay and phase estimation in digital communications
Ormesher, Richard C [Albuquerque, NM]; Mason, John J [Albuquerque, NM]
2010-01-26
The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.
Single image super-resolution based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia
2018-03-01
We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses 5 convolution layers, whose kernel sizes include 5×5, 3×3 and 1×1. In our proposed network, we use residual learning and combine convolution kernels of different sizes at the same layer. The experimental results show that our proposed method performs better than existing methods in reconstruction quality indices and human visual assessment on benchmark images.
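A hedged PyTorch sketch of the kind of network described: five convolutional layers with 5×5, 3×3 and 1×1 kernels and a global residual connection, so the network only predicts the detail added to the interpolated LR input. Layer widths are invented, and the paper's combination of different kernel sizes within the same layer is not reproduced:

```python
import torch
import torch.nn as nn

class SISRNet(nn.Module):
    """Minimal five-layer SR network with a global residual connection."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, 1), nn.ReLU(inplace=True),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):          # x: bicubically upsampled LR image
        return x + self.body(x)    # residual learning: predict the detail

y = SISRNet()(torch.randn(1, 1, 32, 32))
print(y.shape)   # torch.Size([1, 1, 32, 32])
```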
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
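For contrast with syndrome decoding, a minimal hard-decision Viterbi decoder for a small rate-1/2, constraint-length-3 convolutional code (4 trellis states) is sketched below. The Wyner-Ash rate-3/4 code and its 7-state error trellis are not reproduced, and the generator and bit-order conventions here are one common choice:

```python
import itertools

G = [0b111, 0b101]           # generator taps of a rate-1/2, K = 3 code
K = 3
N_STATES = 1 << (K - 1)      # 4 trellis states

def encode_bit(state, bit):
    reg = (bit << (K - 1)) | state                    # shift register
    out = [bin(reg & g).count("1") & 1 for g in G]    # two parity bits
    return out, reg >> 1                              # next state

def viterbi(received):
    metric = [0.0] + [float("inf")] * (N_STATES - 1)  # start in state 0
    paths = [[] for _ in range(N_STATES)]
    for r in received:                                # r: received bit pair
        new_metric = [float("inf")] * N_STATES
        new_paths = [[] for _ in range(N_STATES)]
        for s, b in itertools.product(range(N_STATES), (0, 1)):
            out, ns = encode_bit(s, b)
            m = metric[s] + sum(o != x for o, x in zip(out, r))
            if m < new_metric[ns]:                    # keep the survivor
                new_metric[ns] = m
                new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(N_STATES), key=lambda s: metric[s])]

state, tx = 0, []
for b in [1, 0, 1, 1]:                                # encode four bits
    out, state = encode_bit(state, b)
    tx.append(out)
tx[1][0] ^= 1                                         # single channel error
print(viterbi(tx))                                    # -> [1, 0, 1, 1]
```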
Molecular graph convolutions: moving beyond fingerprints
NASA Astrophysics Data System (ADS)
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán
2017-01-01
Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable to combine information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network.
Face recognition: a convolutional neural-network approach.
Lawrence, S; Giles, C L; Tsoi, A C; Back, A D
1997-01-01
We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
NASA Astrophysics Data System (ADS)
Schanz, Martin; Ye, Wenjing; Xiao, Jinyou
2016-04-01
Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh compared to the convolution quadrature method to obtain the same level of accuracy. If fast methods like the fast multipole method are further used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
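A toy sketch of the exponential-window idea: sample F(s) along the line s = σ + iω, invert with a discrete Fourier transform, and undo the e^(-σt) damping afterwards. The test transform F(s) = 1/(s+1)^2 (with known inverse t·e^(-t)) and the window parameter σ are invented for the check; the paper's BEM setting is not reproduced:

```python
import numpy as np

def inverse_laplace_fft(F, t_max, n, sigma):
    """Invert a Laplace transform by FFT with an exponential window.

    Samples F(s) at s = sigma + i*omega_k; the inverse DFT then yields the
    damped function f(t)*exp(-sigma*t) on the periodic window [0, t_max).
    """
    dt = t_max / n
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)    # complex frequencies
    f_damped = np.fft.ifft(F(sigma + 1j * omega)).real / dt
    t = np.arange(n) * dt
    return t, f_damped * np.exp(sigma * t)           # undo the window

t, f = inverse_laplace_fft(lambda s: 1.0 / (s + 1.0) ** 2,
                           t_max=10.0, n=4096, sigma=0.5)
err = np.max(np.abs(f[:2048] - t[:2048] * np.exp(-t[:2048])))
print(err)   # small on the first half of the window
```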
ASIC-based architecture for the real-time computation of 2D convolution with large kernel size
NASA Astrophysics Data System (ADS)
Shao, Rui; Zhong, Sheng; Yan, Luxin
2015-12-01
Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage and to reduce off-chip bandwidth, a data cache and reuse scheme is proposed: multi-block SPRAM caches image blocks, and on-chip ping-pong buffering takes full advantage of data reuse in the convolution calculation, leading to a new ASIC data-scheduling scheme and overall architecture. Experimental results show that the architecture achieves real-time convolution with kernels up to 40 × 32 and improves the utilization of on-chip memory bandwidth and on-chip memory resources; the results also show that the architecture maximizes data throughput while reducing the need for off-chip memory bandwidth.
Fifty-year flood-inundation maps for Juticalpa, Honduras
Kresch, David L.; Mastin, M.C.; Olsen, T.D.
2002-01-01
After the devastating floods caused by Hurricane Mitch in 1998, maps of the areas and depths of 50-year-flood inundation at 15 municipalities in Honduras were prepared as a tool for agencies involved in reconstruction and planning. This report, which is one in a series of 15, presents maps of areas in the municipality of Juticalpa that would be inundated by a 50-year flood of Rio Juticalpa. Geographic Information System (GIS) coverages of the flood inundation are available on a computer in the municipality of Juticalpa as part of the Municipal GIS project and on the Internet at the Flood Hazard Mapping Web page (http://mitchnts1.cr.usgs.gov/projects/floodhazard.html). These coverages allow users to view the flood inundation in much more detail than is possible using the maps in this report. Water-surface elevations for a 50-year-flood on Rio Juticalpa at Juticalpa were estimated using HEC-RAS, a one-dimensional, steady-flow, step-backwater computer program. The channel and floodplain cross sections used in HEC-RAS were developed from an airborne light-detection-and-ranging (LIDAR) topographic survey of the area. The estimated 50-year-flood discharge for Rio Juticalpa at Juticalpa, 1,360 cubic meters per second, was computed as the drainage-area-adjusted weighted average of two independently estimated 50-year-flood discharges for the gaging station Rio Juticalpa en El Torito, located about 2 kilometers upstream from Juticalpa. One discharge, 1,551 cubic meters per second, was estimated from a frequency analysis of the 33 years of peak-discharge record for the gage, and the other, 486 cubic meters per second, was estimated from a regression equation that relates the 50-year-flood discharge to drainage area and mean annual precipitation. The weighted-average of the two discharges at the gage is 1,310 cubic meters per second. The 50-year flood discharge for the study area reach of Rio Juticalpa was estimated by multiplying the weighted discharge at the gage by the ratio of the drainage areas upstream from the two locations.
2007-06-01
...Hamming distance between all pairs of non-zero paths. Table 2 lists the best (maximum free distance) rate r=2/3 punctured convolutional code information weight structure (from [12])...
A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE
NASA Technical Reports Server (NTRS)
Truong, T. K.
1994-01-01
This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
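For reference, the operation the program computes, a 2-D cyclic convolution, can be checked against a plain FFT implementation; this sketch does not reproduce the polynomial-transform factorization itself:

```python
import numpy as np

def cyclic_convolve_2d(a, b):
    """2-D cyclic (circular) convolution via the convolution theorem."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8))
b = rng.standard_normal((8, 8))
c = cyclic_convolve_2d(a, b)

# Direct check of one output sample against the cyclic-convolution sum.
i, j = 3, 5
direct = sum(a[m, n] * b[(i - m) % 8, (j - n) % 8]
             for m in range(8) for n in range(8))
print(np.isclose(c[i, j], direct))   # True
```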
NASA Technical Reports Server (NTRS)
Asbury, Scott C.; Hunter, Craig A.
1999-01-01
An investigation was conducted in the model preparation area of the Langley 16-Foot Transonic Tunnel to determine the effects of convoluted divergent-flap contouring on the internal performance of a fixed-geometry, nonaxisymmetric, convergent-divergent exhaust nozzle. Testing was conducted at static conditions using a sub-scale nozzle model with one baseline and four convoluted configurations. All tests were conducted with no external flow at nozzle pressure ratios from 1.25 to approximately 9.50. Results indicate that baseline nozzle performance was dominated by unstable, shock-induced, boundary-layer separation at overexpanded conditions. Convoluted configurations were found to significantly reduce, and in some cases totally alleviate separation at overexpanded conditions. This result was attributed to the ability of convoluted contouring to energize and improve the condition of the nozzle boundary layer. Separation alleviation offers potential for installed nozzle aeropropulsive (thrust-minus-drag) performance benefits by reducing drag at forward flight speeds, even though this may reduce nozzle thrust ratio as much as 6.4% at off-design conditions. At on-design conditions, nozzle thrust ratio for the convoluted configurations ranged from 1% to 2.9% below the baseline configuration; this was a result of increased skin friction and oblique shock losses inside the nozzle.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo I.; Shlesinger, Michael F.
2012-01-01
We introduce and explore a Stochastic Flow Cascade (SFC) model: a general statistical model for the unidirectional flow through a tandem array of heterogeneous filters. Examples include the flow of: (i) liquid through heterogeneous porous layers; (ii) shocks through tandem shot noise systems; (iii) signals through tandem communication filters. The SFC model combines the Langevin equation, convolution filters and moving averages, and Poissonian randomizations. A comprehensive analysis of the SFC model is carried out, yielding closed-form results. Lévy laws are shown to universally emerge from the SFC model, and characterize both heavy tailed retention times (Noah effect) and long-ranged correlations (Joseph effect).
Grozdov, D S; Kolotov, V P; Lavrukhin, Yu E
2016-04-01
A method for estimating the full-energy-peak efficiency in the space around a scintillation detector, including the presence of a collimator, has been developed. It is based on a mathematical convolution of the experimental results with subsequent data extrapolation. The efficiency data showed an average uncertainty of less than 10%. Software to calculate the integral efficiency for a nuclear power plant plume was elaborated. The paper also provides results of nuclear power plant plume height estimation by analysis of the spectral data.
Susan J. Crocker; Mark D. Nelson; Charles J. Barnett; Gary J. Brand; Brett J. Butler; Grant M. Domke; Mark H. Hansen; Mark A. Hatfield; Tonya W. Lister; Dacia M. Meneguzzo; Charles H. Perry; Ronald J. Piva; Barry T. Wilson; Christopher W. Woodall; Bill Zipse
2011-01-01
The first full annual inventory of New Jersey's forests reports more than 2.0 million acres of forest land and 83 tree species. Forest land is dominated by oak-hickory forest types in the north and pitch pine forest types in the south. The volume of growing stock on timberland has been rising since 1956 and currently totals 3.4 billion cubic feet. The average...
Christopher W. Woodall; Mark N. Webb; Barry T. Wilson; Jeff Settle; Ron J. Piva; Charles H. Perry; Dacia M. Meneguzzo; Susan J. Crocker; Brett J. Butler; Mark Hansen; Mark Hatfield; Gary Brand; Charles Barnett
2011-01-01
The second full annual inventory of Indiana's forests reports more than 4.75 million acres of forest land with an average volume of more than 2,000 cubic feet per acre. Forest land is dominated by the white oak/red oak/hickory forest type, which occupies nearly a third of the total forest land area. Seventy-six percent of forest land consists of sawtimber, 16...
David E. Haugen; Robert Harsel; Aaron Bergdahl; Tom Claeys; Christopher W. Woodall; Barry T. Wilson; Susan J. Crocker; Brett J. Butler; Cassandra M. Kurtz; Mark A. Hatfield; Charles H. Barnett; Grant Domke; Dan Kaisershot; W. Keith Moser; Andrew J. Lister; Dale D. Gormanson
2013-01-01
The second annual inventory of North Dakota's forests reports more than 772,000 acres of forest land with an average volume of more than 921 cubic feet per acre. Forest land is dominated by the bur oak forest type, which occupies more than a third of the total forest land area. The poletimber stand-size class represents 39 percent of forest land, followed by...
Randall S. Morin; Gregory W. Cook; Charles J. Barnett; Brett J. Butler; Susan J. Crocker; Mark A. Hatfield; Cassandra M. Kurtz; Tonya W. Lister; William G. Luppold; William H. McWilliams; Patrick D. Miles; Mark D. Nelson; Charles H. (Hobie) Perry; Ronald J. Piva; James E. Smith; Jim Westfall; Richard H. Widmann; Christopher W. Woodall
2016-01-01
The annual inventory of West Virginia's forests, completed in 2013, covers nearly 12.2 million acres of forest land with an average volume of more than 2,300 cubic feet per acre. This report is based on data collected from 2,808 plots located across the State. Forest land is dominated by the oak/hickory forest-type group, which occupies 74 percent of total forest...
Initial thinning effects in 70- to 150-year-old Douglas-fir--western Oregon and Washington.
Richard L. Williamson; Frank E. Price
1971-01-01
Vigorous, mature (post-rotation age) Douglas-fir stands will probably exist for another 50 years or more on some properties in western Oregon and Washington. Intermediate harvests in the form of thinnings were analyzed on nine study areas ranging from 70 to 150 years old when thinned. Recoverable cubic-volume growth, averaging 81 percent of normal...
Economic benefits of reducing fire-related sediment in southwestern fire-prone ecosystems
John Loomis; Pete Wohlgemuth; Armando González-Cabán; Don English
2003-01-01
A multiple regression analysis of fire interval and resulting sediment yield (controlling for relief ratio, rainfall, etc.) indicates that reducing the fire interval from the current average 22 years to a prescribed fire interval of 5 years would reduce sediment yield by 2 million cubic meters in the 86.2 square kilometer southern California watershed adjacent to and...
Changes in product recovery between live and dead lodgepole pine: a compendium.
Thomas D. Fahey; Thomas A. Snellgrove; Marlin E. Plank
1986-01-01
Six studies were used to compare differences in recovery of volume and value among live, recent dead, and older dead lodgepole pine (Pinus contorta Dougl. ex Loud.) in the Western United States. The products studied included boards, random-length dimension, studs, and veneer. For the average-size log (12 cubic feet), absolute values were highest for boards, followed by...
Low-flow study for southwest Ohio streams
Webber, Earl E.; Mayo, Ronald I.
1971-01-01
Low-flow discharges at 60 sites on streams in the Little Miami River, Mill Creek, Great Miami River and Wabash River basins are presented in this report. The average annual minimum flows in cubic feet per second (cfs) for a 7-day period of 10-year frequency and a 1-day period of 30-year frequency are computed for each of the 60 sites.
Mark D. Nelson; Charles J. Barnett; Matt Brewer; Brett J. Butler; Susan J. Crocker; Grant M. Domke; Dale D. Gormanson; Cassandra M. Kurtz; Tonya W. Lister; Stephen Matthews; William H. McWilliams; Dacia M. Meneguzzo; Patrick D. Miles; Randall S. Morin; Ronald J. Piva; Rachel Riemann; James E. Smith; Brian F. Walters; Jim Westfall; Christopher W. Woodall
2016-01-01
The third full annual inventory of Iowa's forests (2009-2013) indicates that just under 3 million acres of forest land exists in the State, 81 percent of which is in family forest ownership. Almost all of Iowa's forest land is timberland (96 percent), with an average volume of more than 1,000 cubic feet of growing stock per acre on timberland and more than 1,...
Thirty-five-year growth of ponderosa pine saplings in response to thinning and understory removal.
P.H. Cochran; James W. Barrett
1999-01-01
Diameter increments for individual trees increased curvilinearly and stand basal area increments decreased curvilinearly as spacing increased from 6.6 to 26.4 feet. Average height growth of all trees increased linearly, and stand cubic volume growth decreased linearly as spacing increased. Large differences in tree sizes developed over the 35 years of study with...
Growth Comparisons of Planted Sweetgum and Sycamore
R. M. Krinard
1988-01-01
From age 18 through age 23, average annual growth of planted sweetgum (Liquidambar styraciflua L.) on Commerce silt loam exceeded growth of sweetgum on Sharkey clay by about 45 percent in diameter at breast height (d.b.h.) and height, 75 percent in basal area, and more than three times in cubic volume. At age 18 on the Commerce soil, sycamore (
A preview of New Jersey's forest resource
Joseph E. Barnard; Teresa M. Bowers
1973-01-01
The recently completed forest survey of New Jersey indicates that 54 percent of the land area has tree cover on it. Thirty-eight percent of the state is classified as commercial forest land. Total growing-stock volume has increased, although the softwood component of the resource has decreased in both cubic-foot volume and area occupied by the softwood types. Average...
Control of the morphology of porous hydrogels from polymer structures
NASA Astrophysics Data System (ADS)
Esquirol, Anne-Laure
This master's thesis presents a new fabrication method to prepare hydrogels with fully interconnected and tunable macropore networks prepared from co-continuous polymer blends. The main contributions are: (1) a hydrogel fabrication process providing a high level of control over the average pore diameter, the pore volume fraction and the pore interconnectivity; (2) the microstructural characterization of porous hydrogels with new techniques such as X-ray microtomography; and (3) the preparation of porous gels with industrial equipment such as extruders and injection-molding presses. The development and improvement of methods and techniques to prepare porous polymers and porous gels have been intensive areas of research in materials science over the past 20 years because of their potential use in fields as diverse as high-performance membranes and filtration devices, supports for catalysis and biochemical reactions, encapsulating devices for drug release, and scaffolds for cell seeding and proliferation. For this last application, in tissue engineering, some typical parameters related to porosity must be rigorously controlled: (1) the average pore diameter; (2) the pore volume fraction; (3) the pore interconnectivity. Porous hydrogels are excellent candidates due to their similarities with the extracellular matrix (composition, mechanical properties and diffusion properties). A number of methods and techniques have been developed and studied to prepare gels comprising microstructured 3-D networks of (more or less) interconnected pores (also sometimes called microfluidic gels or (macro)porous gels). Poly(L-lactide) (PLA) porous materials were prepared from immiscible, co-continuous binary polystyrene/poly(L-lactide) (PS/PLA) blends at 50/50 vol% by two methods: (1) an internal mixer (cubic samples with 0.8 mm sides) and (2) extrusion followed by injection molding, which allows the fabrication of bars with larger dimensions (0.95 cm x 1.25 cm x 6.3 cm). Quiescent annealing of the binary blends was performed at 190 °C to tune the characteristic dimensions of the co-continuous morphology: (1) 0, 10, 30, 60 and 90 min for the cubic samples and (2) 0, 10, 20 and 30 min for the bars. Afterwards, the PLA phase was isolated by selective solvent extraction of the PS phase to obtain porous PLA molds. Gravimetric analyses demonstrated a co-continuity greater than 95% for the cubic samples and greater than 85% for the bars. This morphology was analyzed by scanning electron microscopy (SEM) for each annealing time (for the cubic samples). Image analysis performed on the SEM micrographs demonstrated that the average pore diameter can range from 3 μm to over 400 μm and that the specific interfacial area ranges from 5800 cm-1 to 45 cm-1 for annealing times going from 0 min to 90 min. The porosity of the bars was observed by X-ray microtomography, which shows that the average pore diameter ranges from 10 μm to 500 μm (annealing from 10 min to 30 min). Solutions of agar or alginate were subsequently injected into the porous PLA molds using a manual injection system, followed by in situ gelation. Visual inspections and optical microscope observations show complete injection for molds with average pore sizes over 20 μm (cubic samples) and over 300 μm (for bars). These observations are also supported by the characterization of the gel morphology. The second polymer phase (PLA) was subsequently dissolved using a second selective solvent, leaving only the porous gel structures.
X-ray microtomography analyses, which provide 2-D and 3-D images, demonstrated that the morphologies of the porous gels are similar to the microstructures of the PLA molds. For example, porous gels prepared with cubic PLA molds annealed for 60 min show an average pore size of about 285 μm (compared to 200 μm for the PLA molds) and a specific interfacial area of 70 cm-1 (compared to 100 cm-1 for the PLA molds). Similar results were obtained for the porous gels prepared with the porous PLA bars (qualitative observation). The effectiveness of two sterilization methods was proven on nutrient agar (NA) and Brain Heart Infusion (BHI), with no appearance of bacterial colonies. The first method is freeze-drying followed by an oven treatment at 120 °C in a sterile environment. The porous gel morphology was characterized by X-ray microtomography before and after freeze-drying, and after rehydration, demonstrating the conservation of the macroscopic dimensions of the gels and of their morphologies and porosities. The second method is successive baths in an ethanol solution. Finally, mechanical compression tests showed that porous gels, as expected, have a lower compressive resistance than non-porous hydrogels. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Zhang, Yan; Chen, Hua-Xin; Duan, Li; Fan, Ji-Bin; Ni, Lei; Ji, Vincent
2018-07-01
Using density-functional perturbation theory, we systematically investigate the Born effective charges and dielectric properties of the cubic, tetragonal, monoclinic, ortho-I (Pbca), ortho-II (Pnma) and ortho-III (Pca21) phases of ZrO2. The magnitudes of the Born effective charges of the Zr and oxygen atoms are greater than their nominal ionic valences (+4 for Zr and -2 for oxygen), indicating a strong dynamic charge transfer from Zr atoms to O atoms and mixed covalent-ionic bonding in all six phases of ZrO2. For all six phases of ZrO2, the electronic contributions ε_ij^∞ to the static dielectric constant are rather small (ranging from 5 to 6.5) and neither strongly anisotropic nor strongly dependent on the structural phase, while the ionic contributions ε_ij^ion to the static dielectric constant are large and both anisotropic and dependent on the structural phase. The average dielectric constant ε̄_0 of the six ZrO2 phases decreases in the sequence tetragonal, cubic, ortho-II (Pnma), ortho-I (Pbca), ortho-III (Pca21), monoclinic. Thus, among the six phases of ZrO2, the tetragonal and cubic phases are the two most suitable phases to replace SiO2 as the gate dielectric material in modern integrated-circuit technology. Furthermore, for tetragonal ZrO2 the best orientation is [100].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linaburg, Matthew R.; McClure, Eric T.; Majher, Jackson D.
The structures of the lead halide perovskites CsPbCl3 and CsPbBr3 have been determined from X-ray powder diffraction data to be orthorhombic with Pnma space group symmetry. Their structures are distorted from the cubic structure of their hybrid analogs, CH3NH3PbX3 (X = Cl, Br), by tilts of the octahedra (Glazer tilt system a⁻b⁺a⁻). Substitution of the smaller Rb+ for Cs+ increases the octahedral tilting distortion and eventually destabilizes the perovskite structure altogether. To understand this behavior, bond valence parameters appropriate for use in chloride and bromide perovskites have been determined for Cs+, Rb+, and Pb2+. As the tolerance factor decreases, the band gap increases, by 0.15 eV in Cs1-xRbxPbCl3 and 0.20 eV in Cs1-xRbxPbBr3, upon going from x = 0 to x = 0.6. The band gap shows a linear dependence on tolerance factor, particularly for the Cs1-xRbxPbBr3 system. Comparison with the cubic perovskites CH3NH3PbCl3 and CH3NH3PbBr3 shows that the band gaps of the methylammonium perovskites are anomalously large for APbX3 perovskites with a cubic structure. This comparison suggests that the local symmetry of CH3NH3PbCl3 and CH3NH3PbBr3 deviates significantly from the cubic symmetry of the average structure.
Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L
2018-04-01
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
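A minimal numpy illustration of atrous convolution in one dimension: the same three kernel weights are applied with holes of size rate - 1 between taps, enlarging the field of view from 3 to 5 samples at rate 2 without adding parameters (correlation orientation, as in CNN layers):

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution, correlation form, 'valid' output."""
    k = kernel.size
    span = (k - 1) * rate + 1              # effective field of view
    out = np.zeros(x.size - span + 1)
    for i in range(out.size):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(16, dtype=float) ** 2        # quadratic ramp as test signal
kernel = np.array([1.0, -2.0, 1.0])        # second-difference weights
print(atrous_conv1d(x, kernel, rate=1))    # constant 2: ordinary convolution
print(atrous_conv1d(x, kernel, rate=2))    # constant 8: same weights, wider view
```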
Hydrology of the Cave Springs area near Chattanooga, Hamilton County, Tennessee
Bradfield, Arthur D.
1992-01-01
The hydrology of Cave Springs, the second largest spring in East Tennessee, was investigated from July 1987 to September 1989. Wells near the spring supply about 5 million gallons per day of potable water to people in Hamilton County near Chattanooga. Discharge from the spring averaged about 13.5 cubic feet per second (8.72 million gallons per day) during the study period. Withdrawals by the Hixson Utility District from wells upgradient from the outflow averaged 8.6 cubic feet per second (5.54 million gallons per day). Aquifer tests using wells intersecting a large solution cavity supplying water to the spring showed a drawdown of less than 3 feet with a discharge of 9,000 gallons per minute or 20 cubic feet per second. Temperature and specific conductance of ground water near the spring outflow were monitored hourly. Temperatures ranged from 13.5 to 18.2 degrees Celsius and fluctuated seasonally in response to climate. Specific-conductance values ranged from 122 to 405 microsiemens per centimeter at 25 degrees Celsius, but were generally between 163 and 185 microsiemens per centimeter. The drainage area of the basin recharging the spring system was estimated to be 10 square miles. A potentiometric map of the recharge basin was developed from water levels measured at domestic and test wells in August 1989. Aquifer tests at five test wells in the study area indicated that specific-capacity values for these wells ranged from 4.1 to 261 gallons per minute per foot of drawdown. Water-quality characteristics of ground water in the area were used in conjunction with potentiometric-surface maps to delineate the approximate area contributing recharge to Cave Springs.
NASA Astrophysics Data System (ADS)
Pfeiffer, Andrew; Wohl, Ellen
2018-01-01
We used 48 reach-scale measurements of large wood and wood-associated sediment and coarse particulate organic matter (CPOM) storage within an 80 km2 catchment to examine spatial patterns of storage relative to stream order. Wood, sediment, and CPOM are not distributed uniformly across the drainage basin. Third- and fourth-order streams (23% of total stream length) disproportionately store wood and coarse and fine sediments: 55% of total wood volume, 78% of coarse sediment, and 49% of fine sediment, respectively. Fourth-order streams store 0.8 m3 of coarse sediment and 0.2 m3 of fine sediment per cubic meter of wood. CPOM storage is highest in first-order streams (60% of storage in 47% of total network stream length). First-order streams can store up to 0.3 m3 of CPOM for each cubic meter of wood. Logjams in third- and fourth-order reaches are primary sediment storage agents, whereas roots in small streams may be more important for storage of CPOM. We propose the large wood particulate storage index to quantify average volume of sediment or CPOM stored by a cubic meter of wood.
Percolation Network Study on the Gas Apparent Permeability of Rock
NASA Astrophysics Data System (ADS)
Wang, Y.; Tang, Y. B.; Li, M.
2017-12-01
We modeled single-phase gas transport behavior in monomodal porous media using percolation networks. Unlike the liquid absolute permeability, which is related only to the topology and morphology of the pore space, the gas permeability depends on pore pressure as well. A published gas flow conductance model, including the usual viscous flow, slip flow and Knudsen diffusion in a cylindrical pipe, was used to simulate gas flow in 3D simple cubic, body-centered cubic and face-centered cubic networks with different hydraulic radii, different coordination numbers, and different pipe radius distributions under different average pore pressures. The simulation results showed that the gas apparent permeability kapp obeys the 'universal' scaling law (independent of network lattice) kapp ∝ (z − zc)^β, where the exponent β is related to the pore radius distribution, z is the coordination number, and zc = 1.5. Following up on Bernabé et al.'s (2010) study of the effects of pore connectivity and pore size heterogeneity on liquid absolute permeability, a gas apparent permeability kapp model and a new joint gas-liquid permeability (i.e., kapp/k∞) model, which can explain the Klinkenberg phenomenon, were proposed. We satisfactorily tested the models by comparison with published experimental data on glass beads and other datasets.
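As a minimal sketch of how the reported scaling law kapp ∝ (z − zc)^β with zc = 1.5 could be fitted to simulation output, the following Python fragment uses synthetic (z, kapp) pairs in place of the paper's network data; the amplitude and exponent values are placeholders, not results from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

zc = 1.5                                   # critical coordination number from the abstract

def scaling_law(z, A, beta):
    return A * (z - zc) ** beta

# synthetic data standing in for network-simulation results
rng = np.random.default_rng(0)
z = np.array([2.0, 2.5, 3.0, 4.0, 5.0, 6.0])
k_app = 0.8 * (z - zc) ** 1.9 * (1 + rng.normal(0, 0.02, z.size))

(A, beta), _ = curve_fit(scaling_law, z, k_app, p0=(1.0, 2.0))
print(f"A = {A:.3f}, beta = {beta:.3f}")   # beta should recover ~1.9 here
```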
Synthesis of nano-scale fast ion conducting cubic Li7La3Zr2O12.
Sakamoto, Jeff; Rangasamy, Ezhiylmurugan; Kim, Hyunjoung; Kim, Yunsung; Wolfenstine, Jeff
2013-10-25
A solution-based process was investigated for synthesizing cubic Li7La3Zr2O12 (LLZO), which is known to exhibit an unprecedented combination of fast ionic conductivity and stability in air and against Li. Sol-gel chemistry was developed to prepare solid metal-oxide networks consisting of 10 nm cross-links that formed the cubic LLZO phase at 600 °C. Sol-gel LLZO powders were sintered into 96% dense pellets using an induction hot press that applied pressure while heating. After sintering, the average LLZO grain size was 260 nm, which is 13 times smaller than LLZO prepared using a solid-state technique. The total ionic conductivity was 0.4 mS cm(-1) at 298 K, which is the same as solid-state synthesized LLZO. Interestingly, despite the same room temperature conductivity, the sol-gel LLZO total activation energy is 0.41 eV, which is 1.6 times higher than that observed in solid-state LLZO (0.26 eV). We believe the nano-scale grain boundaries give rise to unique transport phenomena that are more sensitive to temperature when compared to the conventional solid-state LLZO.
Phase stability and mechanical properties of Mo1-xNx with 0 ≤ x ≤ 1
NASA Astrophysics Data System (ADS)
Balasubramanian, Karthik; Huang, Liping; Gall, Daniel
2017-11-01
First-principles density-functional calculations coupled with the USPEX evolutionary phase-search algorithm are employed to calculate the convex hull of the Mo-N binary system. Eight molybdenum nitride compound phases are found to be thermodynamically stable: tetragonal β-Mo3N, hexagonal δ-Mo3N2, cubic γ-Mo11N8, orthorhombic ɛ-Mo4N3, cubic γ-Mo14N11, monoclinic σ-MoN and σ-Mo2N3, and hexagonal δ-MoN2. The convex hull is a straight line for 0 ≤ x ≤ 0.44 such that bcc Mo and the five listed compound phases with x ≤ 0.44 are predicted to co-exist in thermodynamic equilibrium. Comparing the convex hulls of cubic and hexagonal Mo1-xNx indicates that cubic structures are preferred for molybdenum rich (x < 0.3) compounds, and hexagonal phases are favored for nitrogen rich (x > 0.5) compositions, while similar formation enthalpies for cubic and hexagonal phases at intermediate x = 0.3-0.5 imply that kinetic factors play a crucial role in the phase formation. The volume per atom Vo of the thermodynamically stable Mo1-xNx phases decreases from 13.17 to 9.56 Å3 as x increases from 0.25 to 0.67, with plateaus at Vo = 11.59 Å3 for hexagonal and cubic phases and Vo = 10.95 Å3 for orthorhombic and monoclinic phases. The plateaus are attributed to the changes in the average coordination numbers of molybdenum and nitrogen atoms, which increase from 2 to 6 and decrease from 6 to 4, respectively, indicating an increasing covalent bonding character with increasing x. The change in bonding character and the associated phase change from hexagonal to cubic/orthorhombic to monoclinic cause steep increases in the isotropic elastic modulus E = 387-487 GPa, the shear modulus G = 150-196 GPa, and the hardness H = 14-24 GPa in the relatively narrow composition range x = 0.4-0.5. This also causes a drop in Poisson's ratio from 0.29 to 0.24 and an increase in Pugh's ratio from 0.49 to 0.64, indicating a ductile-to-brittle transition between x = 0.44 and 0.5.
A separable two-dimensional discrete Hartley transform
NASA Technical Reports Server (NTRS)
Watson, A. B.; Poirson, A.
1985-01-01
Bracewell has proposed the Discrete Hartley Transform (DHT) as a substitute for the Discrete Fourier Transform (DFT), particularly as a means of convolution. Here, it is shown that the most natural extension of the DHT to two dimensions fails to be separable in the two dimensions, and is therefore inefficient. An alternative separable form is considered, and the corresponding convolution theorem is derived. That the DHT is unlikely to provide faster convolution than the DFT is also discussed.
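A short numerical sketch of the one-dimensional Hartley convolution theorem referred to above (the standard single-sequence form, not the separable 2-D variant derived in the paper): the DHT is computed from the FFT, and circular convolution is carried out through products of transforms and their index-reversed copies.

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via the FFT: H(k) = Re F(k) - Im F(k)."""
    F = np.fft.fft(x)
    return F.real - F.imag

def idht(X):
    return dht(X) / len(X)                 # the DHT is its own inverse up to 1/N

def dht_circular_convolve(x, y):
    X, Y = dht(x), dht(y)
    Xr = np.roll(X[::-1], 1)               # X(N-k), index-reversed spectrum
    Yr = np.roll(Y[::-1], 1)
    Z = 0.5 * (X * (Y + Yr) + Xr * (Y - Yr))
    return idht(Z)

x, y = np.random.rand(8), np.random.rand(8)
ref = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))   # DFT-based reference
assert np.allclose(dht_circular_convolve(x, y), ref)
```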
Iterative deep convolutional encoder-decoder network for medical image segmentation.
Jung Uk Kim; Hak Gu Kim; Yong Man Ro
2017-07-01
In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which enables precise localization of regions of interest (ROIs), including complex shapes or detailed textures, in medical images in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework yields excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.
Reconfigurable Gabor Filter For Fingerprint Recognition Using FPGA Verilog
NASA Astrophysics Data System (ADS)
Rosshidi, H. T.; Hadi, A. R.
2009-06-01
This paper presents an implementation of a Gabor filter for fingerprint recognition using Verilog HDL. The work demonstrates the application of the Gabor filter technique to enhance the fingerprint image. The incoming signal, in the form of image pixels, is filtered (convolved) by the Gabor filter to delineate the ridge and valley regions of the fingerprint. This is done with a real-time convolver based on a Field Programmable Gate Array (FPGA) that performs the convolution operation. The main characteristics of the proposed approach are the use of memory to store the incoming image pixels and the Gabor filter coefficients before the convolution takes place. The result is the signal convolved with the Gabor coefficients.
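The filtering mathematics behind the FPGA design can be sketched in Python (the hardware itself is in Verilog, and all kernel parameters below are illustrative choices, not values from the paper):

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.5, theta=0.0, lam=6.0):
    """Real Gabor kernel: a cosine carrier of wavelength lam along direction
    theta, windowed by a Gaussian of width sigma."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def convolve2d(img, k):
    """Direct 2-D convolution, the operation the FPGA convolver implements."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='edge')
    out = np.zeros_like(img, dtype=float)
    kf = k[::-1, ::-1]                      # flip the kernel for true convolution
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kf)
    return out

ridge_enhanced = convolve2d(np.random.rand(32, 32), gabor_kernel(theta=np.pi / 4))
```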
Foltz, T M; Welsh, B M
1999-01-01
This paper uses the fact that the discrete Fourier transform diagonalizes a circulant matrix to provide an alternate derivation of the symmetric convolution-multiplication property for discrete trigonometric transforms. Derived in this manner, the symmetric convolution-multiplication property extends easily to multiple dimensions using the notion of block circulant matrices and generalizes to multidimensional asymmetric sequences. The symmetric convolution of multidimensional asymmetric sequences can then be accomplished by taking the product of the trigonometric transforms of the sequences and then applying an inverse trigonometric transform to the result. An example is given of how this theory can be used for applying a two-dimensional (2-D) finite impulse response (FIR) filter with nonlinear phase which models atmospheric turbulence.
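The property this derivation rests on is easy to verify numerically; a minimal sketch with illustrative values:

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([4.0, 1.0, 2.0, 3.0])
C = circulant(c)                 # C[i, j] = c[(i - j) mod N]
lam = np.fft.fft(c)              # the DFT of the first column gives the eigenvalues

x = np.random.rand(4)
# multiplying by C (a circular convolution) equals pointwise multiplication
# of spectra followed by an inverse DFT
assert np.allclose(C @ x, np.real(np.fft.ifft(lam * np.fft.fft(x))))
```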
Molecular graph convolutions: moving beyond fingerprints
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-01-01
Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
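As a hedged illustration of the idea (a generic graph-convolution update in the spirit of the paper, not the authors' exact architecture), one step of message passing over a molecular adjacency matrix can be written as:

```python
import numpy as np

def graph_conv(A, H, W):
    """One generic graph-convolution step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-connections
    d = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # symmetric degree normalization
    return np.maximum(0.0, d @ A_hat @ d @ H @ W)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # a 3-atom chain
H = np.eye(3)                                      # one-hot atom features
W = np.random.default_rng(0).normal(size=(3, 4))
H1 = graph_conv(A, H, W)                           # updated per-atom features
```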
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
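For readers unfamiliar with the inner code in such schemes, here is a minimal Python sketch of a conventional rate-1/2 convolutional encoder using the classic (7,5) generators; this is a generic illustration, not the byte-oriented unit-memory code proposed in these reports.

```python
def conv_encode(bits, g1=0o7, g2=0o5, K=3):
    """Rate-1/2 convolutional encoder: two parity streams from a K-bit window."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # shift in the new bit
        out.append(bin(state & g1).count('1') % 2)    # parity under generator g1
        out.append(bin(state & g2).count('1') % 2)    # parity under generator g2
    return out

print(conv_encode([1, 0, 1, 1]))   # 8 output bits for 4 input bits (rate 1/2)
```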
Unusual inhomogeneous microstructures in charge glass state of PbCrO3
NASA Astrophysics Data System (ADS)
Kurushima, Kosuke; Tsukasaki, Hirofumi; Ogata, Takahiro; Sakai, Yuki; Azuma, Masaki; Ishii, Yui; Mori, Shigeo
2018-05-01
We investigated the microstructures and local structures of perovskite PbCrO3, which shows a metal-to-insulator transition and a 9.8% volume collapse, by electron diffraction, high-resolution transmission electron microscopy (TEM), and high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM). It is revealed that the charge glass state is characterized by the unique coexistence of the crystalline state with a cubic symmetry on average and the noncrystalline state. HAADF-STEM observation at atomic resolution revealed that Pb ions were displaced from the ideal A site position of the cubic perovskite structure, which gives rise to characteristic diffuse scatterings around the fundamental Bragg reflections. These structural inhomogeneities are crucial to the understanding of the unique physical properties in the charge glass state of PbCrO3.
Digital-model simulation of the Toppenish alluvial aquifer, Yakima Indian Reservation, Washington
Bolke, E.L.; Skrivan, James A.
1981-01-01
Increasing demands for irrigating additional lands and proposals to divert water from the Yakima River by water users downstream from the Yakima Indian Reservation have made an accounting of water availability important for present-day water management in the Toppenish Creek basin. A digital model was constructed and calibrated for the Toppenish alluvial aquifer to help fulfill this need. The average difference between observed and model-calculated aquifer heads was about 4 feet. Results of model analysis show that the net gain from the Yakima River to the aquifer is 90 cubic feet per second, and the net loss from the aquifer to Toppenish Creek is 137 cubic feet per second. Water-level declines of about 5 feet were calculated for an area near Toppenish in response to a hypothetical tenfold increase in 1974 pumping rates. (USGS)
Channel change and bed-material transport in the Lower Chetco River, Oregon
Wallick, J. Rose; Anderson, Scott W.; Cannon, Charles; O'Connor, Jim E.
2010-01-01
The lower Chetco River is a wandering gravel-bed river flanked by abundant and large gravel bars formed of coarse bed-material sediment. Since the early twentieth century, the large gravel bars have been a source of commercial aggregate for which ongoing permitting and aquatic habitat concerns have motivated this assessment of historical channel change and sediment transport rates. Analysis of historical channel change and bed-material transport rates for the lower 18 kilometers shows that the upper reaches of the study area are primarily transport zones, with bar positions fixed by valley geometry and active bars mainly providing transient storage of bed material. Downstream reaches, especially near the confluence of the North Fork Chetco River, are zones of active sedimentation and channel migration.Multiple analyses, supported by direct measurements of bedload during winter 2008–09, indicate that since 1970 the mean annual flux of bed material into the study reach has been about 40,000–100,000 cubic meters per year. Downstream tributary input of bed-material sediment, probably averaging 5–30 percent of the influx coming into the study reach from upstream, is approximately balanced by bed-material attrition by abrasion. Probably little bed material leaves the lower river under natural conditions, with most net influx historically accumulating in wider and more dynamic reaches, especially near the North Fork Chetco River confluence, 8 kilometers upstream from the Pacific Ocean.The year-to-year flux, however, varies tremendously. Some years may have less than 3,000 cubic meters of bed material entering the study area; by contrast, some high-flow years, such as 1982 and 1997, likely have more than 150,000 cubic meters entering the reach. For comparison, the estimated annual volume of gravel extracted from the lower Chetco River for commercial aggregate during 2000–2008 has ranged from 32,000 to 90,000 cubic meters and averaged about 59,000 cubic meters per year. Mined volumes probably exceeded 140,000 cubic meters per year for several years in the late 1970s.Repeat surveys and map analyses indicate a reduction in bar area and sinuosity between 1939 and 2008, chiefly in the period 1965–95. Repeat topographic and bathymetric surveys show channel incision for substantial portions of the study reach, with local areas of bed lowering by as much as 2 meters. A specific gage analysis at the upstream end of the study reach indicates that incision and narrowing followed aggradation culminating in the late 1970s. These observations are all consistent with a reduction of sediment supply relative to transport capacity since channel surveys in the late 1970s, probably owing to a combination of (1) bed sediment removal and (2) transient river adjustments to large sediment volumes brought by floods such as those in 1964 and, to a lesser extent, 1996.
Channel change and bed-material transport in the Lower Chetco River, Oregon
Wallick, J. Rose; Anderson, Scott W.; Cannon, Charles; O'Connor, Jim E.
2009-01-01
The lower Chetco River is a wandering gravel-bed river flanked by abundant and large gravel bars formed of coarse bed-material sediment. The large gravel bars have been a source of commercial aggregate since the early twentieth century for which ongoing permitting and aquatic habitat concerns have motivated this assessment of historical channel change and sediment transport rates. Analysis of historical channel change and bed-material transport rates for the lower 18 kilometers show that the upper reaches of the study area are primarily transport zones, with bar positions fixed by valley geometry and active bars mainly providing transient storage of bed material. Downstream reaches, especially near the confluence of the North Fork Chetco River, have been zones of active sedimentation and channel migration.Multiple analyses, supported by direct measurements of bedload during winter 2008–09, indicate that since 1970 the mean annual flux of bed material into the study reach has been about 40,000–100,000 cubic meters per year. Downstream tributary input of bed-material sediment, probably averaging 5–30 percent of the influx coming into the study reach from upstream, is approximately balanced by bed-material attrition by abrasion. Probably very little bed material leaves the lower river under natural conditions, with most of the net influx historically accumulating in wider and more dynamic reaches, especially near the North Fork Chetco River confluence, 8 kilometers upstream from the Pacific Ocean.The year-to-year flux, however, varies tremendously. Some years probably have less than 3,000 cubic meters of bed-material entering the study area; by contrast, some high-flow years, such as 1982 and 1997, likely have more than 150,000 cubic meters entering the reach. For comparison, the estimated annual volume of gravel extracted from the lower Chetco River for commercial aggregate during 2000–2008 has ranged from 32,000 to 90,000 cubic meters and averaged about 59,000 cubic meters per year. Mined volumes probably exceeded 140,000 cubic meters per year for several years in the late 1970s.Repeat surveys and map analyses indicate a reduction in bar area and sinuosity between 1939 and 2008, chiefly in the period 1965–95. Repeat topographic and bathymetric surveys show channel incision for substantial portions of the study reach, with local areas of bed lowering by as much as 2 meters. A specific gage analysis at the upstream end of the study reach indicates that incision and narrowing followed aggradation culminating in the late 1970s. These observations are all consistent with a reduction of sediment supply relative to transport capacity since channel surveys in the late 1970s, probably owing to a combination of (1) bed-sediment removal and (2) transient river adjustments to large sediment volumes brought by floods such as those in 1964, and to a lesser extent, 1996.
Channel Change and Bed-Material Transport in the Lower Chetco River, Oregon
NASA Astrophysics Data System (ADS)
O'Connor, J. E.; Wallick, R.; Anderson, S.; Cannon, C.
2009-12-01
The Chetco River drains 914 square kilometers of the Klamath Mountains in far southwestern Oregon. For its lowermost 18 km, it is a wandering gravel-bed river flanked by abundant and large gravel bars formed of coarse bed-material sediment. The large gravel bars have been a source of commercial aggregate since the early twentieth century for which ongoing permitting and aquatic habitat concerns have motivated an assessment of historical channel change and sediment transport rates. Analysis of historical channel change and bed-material transport rates for the lower 18 kilometers show that the upper reaches of the study area are primarily transport zones, with bar positions fixed by valley geometry and active bars mainly providing transient storage of bed material. Downstream reaches, especially near the confluence of the North Fork Chetco River, have been zones of active sedimentation and channel migration. Multiple analyses, supported by direct measurements of bedload during winter 2008-09, indicate that since 1970 the mean annual flux of bed material into the study reach has been about 40,000-100,000 cubic meters per year. Downstream tributary input of bed-material sediment, probably averaging 5-30 percent of the influx coming into the study reach from upstream, is approximately balanced by bed-material attrition by abrasion. Probably very little bed material leaves the lower river under natural conditions, with most of the net influx historically accumulating in wider and more dynamic reaches, especially near the North Fork Chetco River confluence, 8 kilometers upstream from the Pacific Ocean. The year-to-year flux, however, varies tremendously. Some years probably have less than 3,000 cubic meters of bed-material entering the study area; by contrast, some high-flow years, such as 1982 and 1997, likely have more than 150,000 cubic meters entering the reach. For comparison, the estimated annual volume of gravel extracted from the lower Chetco River for commercial aggregate during 2000-2008 has ranged from 32,000 to 90,000 cubic meters and averaged about 59,000 cubic meters per year. Mined volumes probably exceeded 140,000 cubic meters per year for several years in the late 1970s. Repeat surveys and map analyses indicate a reduction in bar area and sinuosity between 1939 and 2008, chiefly in the period 1965-95. Repeat topographic and bathymetric surveys show channel incision for substantial portions of the study reach, with local areas of bed lowering by as much as 2 meters. A specific gage analysis at the upstream end of the study reach indicates that incision and narrowing followed aggradation culminating in the late 1970s. These observations are all consistent with a reduction of sediment supply relative to transport capacity since channel surveys in the late 1970s, probably owing to a combination of (1) bed-sediment removal and (2) transient river adjustments to large sediment volumes brought by floods such as those in 1964, and to a lesser extent, 1996.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morelock, Cody R.; Gallington, Leighanne C.; Wilkinson, Angus P., E-mail: angus.wilkinson@chemistry.gatech.edu
2015-02-15
With the goal of thermal expansion control, the synthesis and properties of Sc1−xAlxF3 were investigated. The solubility limit of AlF3 in ScF3 at ∼1340 K is ∼50%. Solid solutions (x ≤ 0.50) were characterized by synchrotron powder diffraction at ambient pressure between 100 and 900 K and at pressures <0.414 GPa while heating from 298 to 523 K. A phase transition from cubic to rhombohedral is observed. The transition temperature increases smoothly with Al3+ content, approaching 500 K at the solid solubility limit, and also upon compression at fixed Al3+ content. The slope of the pressure–temperature phase boundary is ∼0.5 K MPa−1, which is steep relative to that for most symmetry-lowering phase transitions in perovskites. The volume coefficient of thermal expansion (CTE) for the rhombohedral phase is strongly positive, but the cubic-phase CTE varies from negative (x<0.15) to near-zero (x=0.15) to positive (x>0.20) between ∼600 and 800 K. The cubic solid solutions elastically stiffen on heating, while Al3+ substitution causes softening at a given temperature. - Graphical abstract: The cubic-phase coefficient of thermal expansion for Sc1−xAlxF3 (solubility limit ∼50% at ∼1340 K) becomes more positive with increased Al3+ substitution, but the average isothermal bulk modulus decreases (elastic softening). - Highlights: • The solubility limit of AlF3 in ScF3 at ∼1340 K is ∼50%. • The phase transition temperature of Sc1−xAlxF3 increases smoothly with x. • The cubic-phase volume CTE varies from negative to positive with increasing x. • The cubic solid solutions elastically stiffen on heating. • Al3+ substitution causes softening at a given temperature.
A digital pixel cell for address event representation image convolution processing
NASA Astrophysics Data System (ADS)
Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivative of activities, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, ... There has also been a proposal for realizing programmable kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital reference implementation against which to compare mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable kernel image convolution processing.
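The pixel's behavior can be sketched in software as follows; this is a functional model of the event-accumulate-and-fire scheme described above, with the reset-on-fire policy and all values chosen as assumptions for illustration.

```python
import numpy as np

def aer_convolution(events, kernel, shape, threshold):
    """Event-driven convolution: each incoming address event stamps the kernel
    onto an accumulator array; any pixel crossing the threshold fires an
    output event and resets (assumed policy)."""
    acc = np.zeros(shape)
    fired = []
    kh, kw = kernel.shape
    for t, (ex, ey) in enumerate(events):
        for dy in range(kh):
            for dx in range(kw):
                x, y = ex + dx - kh // 2, ey + dy - kw // 2
                if 0 <= x < shape[0] and 0 <= y < shape[1]:
                    acc[x, y] += kernel[dy, dx]
                    if acc[x, y] >= threshold:
                        fired.append((t, x, y))
                        acc[x, y] = 0.0
    return fired

kernel = np.ones((3, 3)) / 9.0
spikes = aer_convolution([(5, 5), (5, 6), (6, 5)], kernel, (16, 16), threshold=0.3)
```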
2006-12-01
Convolutional encoder of rate 1/2 (From [10]). Table 3 shows the puncturing patterns used to derive the different code rates. X precedes Y in the order... convolutional code with puncturing configuration (From [10])... Table 4. Mandatory channel coding per modulation (From [10])... a concatenation of a Reed–Solomon outer code and a rate-adjustable convolutional inner code. At the transmitter, data shall first be encoded with
Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal
2004-03-01
Figure 26 Convolutional Encoder Parameters. Figure 27 Puncturing Parameters. As per Table 3, the required code rate is r = 3/4, which requires... to achieve the higher data rates required by the Standard 802.11b was accomplished by using packet binary convolutional coding (PBCC). Essentially... higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation. The data is first encoded with a rate one-half
Design and System Implications of a Family of Wideband HF Data Waveforms
2010-09-01
code rates (i.e., 8/9, 9/10) will be used to attain the highest data rates for surface wave links. Very high puncturing of convolutional codes can... "Communication Links", Edition 1, North Atlantic Treaty Organization, 2009. [14] Yasuda, Y., Kashiki, K., Hirata, Y. "High-Rate Punctured Convolutional Codes"... length 7 convolutional code that has been used for over two decades in 110A. In addition, repetition coding and puncturing was
Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.
Huang, Yan; Wang, Wei; Wang, Liang
2018-04-01
Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often incurs high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieve good performance.
Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1995-01-01
During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) Unit-Memory Convolutional Encoder module (UMCEncd); (2) Hard-decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMC's, such as the UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC Transformation (UMCTrans). The study of UMC's was driven, in part, by the desire to investigate high-rate convolutional codes, which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMC's were found which are good candidates for inner codes. Besides the further development of the simulation, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note has also been included in this final report.
The effects of kinesio taping on the color intensity of superficial skin hematomas: A pilot study.
Vercelli, Stefano; Colombo, Claudio; Tolosa, Francesca; Moriondo, Andrea; Bravini, Elisabetta; Ferriero, Giorgio; Francesco, Sartorio
2017-01-01
To analyze the effects of kinesio taping (KT), applied with three different strains that did or did not induce the formation of skin creases (called convolutions), on the color intensity of post-surgical superficial hematomas. Single-blind paired study. Rehabilitation clinic. A convenience sample of 13 inpatients with post-surgical superficial hematomas. The tape was applied for 24 consecutive hours. Three tails of KT were randomly applied with different degrees of strain: none (SN); light (SL); and full longitudinal stretch (SF). We expected to obtain correct formation of convolutions with SL, some convolutions with SN, and no convolutions with SF. The change in color intensity of hematomas was measured by means of polar CIE L*a*b* coordinates using a validated and standardized digital imaging system. Applying KT to hematomas did not significantly change the color intensity in the central area under the tape (p > 0.05). There was a significant treatment effect (p < 0.05) under the edges of the tape, independently of the formation of convolutions (p > 0.05). The changes observed along the edges of the tape could be related to the formation of a pressure gradient between the KT and the adjacent area, but were not dependent on the formation of skin convolutions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Chinese Herbal Medicine Image Recognition and Retrieval by Convolutional Neural Network
Sun, Xin; Qian, Huinan
2016-01-01
Chinese herbal medicine image recognition and retrieval have great potential for practical applications. Several previous studies have focused on recognition with hand-crafted image features, but there are two limitations in them. Firstly, most of these hand-crafted features are low-level image representations, which are easily affected by noise and background. Secondly, the medicine images in these studies are very clean, without any backgrounds, which makes the methods difficult to use in practical applications. Therefore, designing high-level image representations for recognition and retrieval in real-world medicine images is a great challenge. Inspired by the recent progress of deep learning in computer vision, we realize that deep learning methods may provide robust medicine image representations. In this paper, we propose to use the Convolutional Neural Network (CNN) for Chinese herbal medicine image recognition and retrieval. For the recognition problem, we use the softmax loss to optimize the recognition network; then for the retrieval problem, we fine-tune the recognition network by adding a triplet loss to search for the most similar medicine images. To evaluate our method, we construct a public database of herbal medicine images with cluttered backgrounds, which has in total 5523 images with 95 popular Chinese medicine categories. Experimental results show that our method can achieve an average recognition precision of 71% and an average retrieval precision of 53% over all 95 medicine categories, which is quite promising given the fact that the real-world images contain multiple occluded herb pieces and cluttered backgrounds. Besides, our proposed method achieves state-of-the-art performance, improving on previous studies by a large margin. PMID:27258404
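The retrieval fine-tuning stage relies on a triplet loss; a minimal sketch of its usual form follows (the margin value and squared-distance choice are assumptions, not parameters reported by the paper).

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the anchor toward a same-category embedding and push it away
    from a different-category embedding by at least the margin."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```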
Measurement of cardiac output from dynamic pulmonary circulation time CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yee, Seonghwan, E-mail: Seonghwan.Yee@Beaumont.edu; Scalzetti, Ernest M.
Purpose: To introduce a method of estimating cardiac output from the dynamic pulmonary circulation time CT that is primarily used to determine the optimal time window of CT pulmonary angiography (CTPA). Methods: Dynamic pulmonary circulation time CT series, acquired for eight patients, were retrospectively analyzed. The dynamic CT series was acquired, prior to the main CTPA, in cine mode (1 frame/s) for a single slice at the level of the main pulmonary artery covering the cross sections of the ascending aorta (AA) and descending aorta (DA) during the infusion of iodinated contrast. The time series of contrast changes obtained for DA, which is downstream of AA, was assumed to be related to the time series for AA by convolution with a delay function. The delay time constant in the delay function, representing the average time interval between the cross sections of AA and DA, was determined by least-squares error fitting between the convolved AA time series and the DA time series. The cardiac output was then calculated by dividing the volume of the aortic arch between the cross sections of AA and DA (estimated from the single-slice CT image) by the average time interval, and multiplying the result by a correction factor. Results: The mean cardiac output value for the six patients was 5.11 (l/min) (with a standard deviation of 1.57 l/min), which is in good agreement with literature values; the data for the other two patients were too noisy for processing. Conclusions: The dynamic single-slice pulmonary circulation time CT series can also be used to estimate cardiac output.
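A rough sketch of the estimation pipeline in Python follows. The abstract does not specify the functional form of the delay function, so an exponential transit-time kernel is assumed here purely for illustration; the aortic-arch volume and correction factor are placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def delay_kernel(tau, n, dt=1.0):
    """Assumed exponential delay function with time constant tau (seconds)."""
    t = np.arange(n) * dt
    h = np.exp(-t / tau)
    return h / h.sum()

def fit_transit_time(aa, da, dt=1.0):
    """Least-squares fit of tau so that aa convolved with the kernel matches da."""
    def sse(tau):
        pred = np.convolve(aa, delay_kernel(tau, len(aa), dt))[:len(da)]
        return np.sum((pred - da) ** 2)
    return minimize_scalar(sse, bounds=(0.5, 30.0), method='bounded').x

def cardiac_output_l_per_min(arch_volume_ml, tau_s, correction=1.0):
    # volume between the AA and DA cross sections divided by transit time
    return correction * (arch_volume_ml / tau_s) * 60.0 / 1000.0
```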
NASA Astrophysics Data System (ADS)
Olory Agomma, R.; Vázquez, C.; Cresson, T.; De Guise, J.
2018-02-01
Most algorithms to detect and identify anatomical structures in medical images require either initialization close to the target structure, or knowledge that the structure is present in the image, or training on a homogeneous database (e.g. all full body or all lower limbs). Detecting these structures when there is no guarantee that the structure is present in the image, or when the image database is heterogeneous (mixed configurations), is a challenge for automatic algorithms. In this work we compared two state-of-the-art machine learning techniques in order to determine which one is the most appropriate for predicting target locations based on image patches. Knowing the positions of thirteen landmark points, labelled by an expert in EOS frontal radiographs, we learn the displacement between salient points detected in the image and these thirteen landmarks. The learning step is carried out with two machine learning approaches: Convolutional Neural Network (CNN) and Random Forest (RF). The automatic detection of the thirteen landmark points in a new image is then obtained by averaging the positions of each of these thirteen landmarks estimated from all the salient points in the new image. For CNN and RF respectively, we obtain an average prediction error (mean ± standard deviation) of 29 ± 18 mm and 30 ± 21 mm for the thirteen landmark points, indicating the approximate location of anatomical regions. On the other hand, the learning time is 9 days for CNN versus 80 minutes for RF. We provide a comparison of the results between the two machine learning approaches.
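The aggregation step described above is simple to state in code; a minimal sketch, with array names that are illustrative rather than taken from the paper:

```python
import numpy as np

def estimate_landmark(salient_xy, predicted_disp):
    """Each salient point votes for the landmark via its learned displacement;
    the landmark estimate is the average of the voted positions."""
    # salient_xy and predicted_disp are (n_points, 2) arrays
    return (salient_xy + predicted_disp).mean(axis=0)
```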
NASA Astrophysics Data System (ADS)
Mandel, Kaisey S.; Scolnic, Daniel M.; Shariff, Hikmatali; Foley, Ryan J.; Kirshner, Robert P.
2017-06-01
Conventional Type Ia supernova (SN Ia) cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (M_B versus B − V) slope β_int differs from the host galaxy dust law R_B, this convolution results in a specific curve of mean extinguished absolute magnitude versus apparent color. The derivative of this curve smoothly transitions from β_int in the blue tail to R_B in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope β_app between β_int and R_B. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a data set of SALT2 optical light curve fits of 248 nearby SNe Ia at z < 0.10. The conventional linear fit gives β_app ≈ 3. Our model finds β_int = 2.3 ± 0.3 and a distinct dust law of R_B = 3.8 ± 0.3, consistent with the average for Milky Way dust, while correcting a systematic distance bias of ~0.10 mag in the tails of the apparent color distribution. Finally, we extend our model to examine the SN Ia luminosity-host mass dependence in terms of intrinsic and dust components.
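The shape of that effective curve is easy to reproduce numerically. The sketch below convolves a Gaussian intrinsic color distribution with an exponential dust-reddening distribution, using the slopes quoted in the abstract; the intrinsic scatter and mean reddening values are assumptions for illustration only.

```python
import numpy as np

beta_int, R_B = 2.3, 3.8            # slopes reported in the abstract
sigma_c, tau_E = 0.06, 0.07         # assumed intrinsic scatter and mean reddening

rng = np.random.default_rng(0)
n = 200_000
c_int = rng.normal(0.0, sigma_c, n)         # intrinsic B - V color
E = rng.exponential(tau_E, n)               # host dust reddening E(B - V)
c_app = c_int + E                           # apparent color
M = beta_int * c_int + R_B * E              # extinguished magnitude (zero point dropped)

# mean magnitude per apparent-color bin: the local slope runs from
# beta_int in the blue tail to R_B in the red tail
bins = np.linspace(-0.15, 0.35, 26)
idx = np.digitize(c_app, bins)
curve = [M[idx == i].mean() if np.any(idx == i) else np.nan
         for i in range(1, len(bins))]
```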
Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter
2017-11-01
Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
Frame prediction using recurrent convolutional encoder with residual learning
NASA Astrophysics Data System (ADS)
Yue, Boxuan; Liang, Jun
2018-05-01
Predicting the next frame of a video is difficult but urgently needed in autonomous driving. Conventional methods can only predict abstract trends for the region of interest; the rise of deep learning makes frame prediction possible. In this paper, we propose a novel recurrent convolutional encoder and deconvolutional decoder structure to predict frames. We introduce residual learning in the convolutional encoder structure to solve gradient issues. Residual learning transforms the gradient back-propagation into an identity mapping; it can preserve the full gradient information and overcome the gradient issues in Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Besides, compared with the branches in CNNs and the gated structures in RNNs, residual learning can reduce training time significantly. In the experiments, we use the UCF101 dataset to train our networks, and the predictions are compared with some state-of-the-art methods. The results show that our networks can predict frames quickly and efficiently. Furthermore, our networks are applied to driving video to verify their practicability.
A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system
NASA Astrophysics Data System (ADS)
Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan
2018-01-01
This paper proposes an image segmentation algorithm with fully convolutional networks (FCN) in a binocular imaging system under various circumstances. Image segmentation is addressed by semantic segmentation: the FCN classifies each pixel, thereby achieving semantic segmentation of the image. Different from classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network and scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data were collected to verify this method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.
Forests of Vermont and New Hampshire 2012
Randall S. Morin; Chuck J. Barnett; Brett J. Butler; Susan J. Crocker; Grant M. Domke; Mark H. Hansen; Mark A. Hatfield; Jonathan Horton; Cassandra M. Kurtz; Tonya W. Lister; Patrick D. Miles; Mark D. Nelson; Ronald J. Piva; Sandy Wilmot; Richard H. Widmann; Christopher W. Woodall; Robert. Zaino
2015-01-01
The first full remeasurement of the annual inventory of the forests of Vermont and New Hampshire was completed in 2012 and covers nearly 9.5 million acres of forest land, with an average volume of nearly 2,300 cubic feet per acre. The data in this report are based on visits to 1,100 plots located across Vermont and 1,091 plots located across New Hampshire. Forest land...
Susan J. Crocker; Charles J. Barnett; Brett J. Butler; Mark A. Hatfield; Cassandra M. Kurtz; Tonya W. Lister; Dacia M. Meneguzzo; Patrick D. Miles; Randall S. Morin; Mark D. Nelson; Ronald J. Piva; Rachel Riemann; James E. Smith; Christopher W. Woodall; William. Zipse
2017-01-01
The second full annual inventory of New Jersey's forests reports more than 2.0 million acres of forest land and 77 tree species. Forest land is dominated by oak/hickory forest types in the north and pitch pine forest types in the south. The volume of growing stock on timberland has been rising since 1956 and currently totals 3.3 billion cubic feet. Average annual net...
Cottonwood Plantation Growth Through 20 Years
Roger M. Krinard; Robert L. Johnson
1984-01-01
At age 20 survival of unthinned cottonwood (Populus deltoides Bartr. ex Marsh.) planted on medium-textured soil at spacings of 4 by 9, 8 by 9, 12 by 12, and 16 by 18 feet was 10, 17, 30, and 62 percent, and average diameters were 10.6, 11.8, 12.6, and 13.7 inches, respectively. Depending on spacing and diameter threshold, cubic volume mean annual increment peaked at...
Incidence and impact of damage to East Oklahoma's timber, 1986
Stephen Clarke; Clair Redmond; Dennis May; Dale Starkey
1994-01-01
An average of 57.4 million cubic feet of timber was lost annually to mortality and cull from 1976 to 1986 in east Oklahoma's 4.75 million acres of commercial forest land, resulting in a monetary loss of $7.2 million per year. Hardwoods generally had more damage than softwoods, with upland hardwoods accounting for 63 percent of cull volume loss. Of the ownership...
Method for making nanomaterials
Fan, Hongyou; Wu, Huimeng
2013-06-04
A method of making a nanostructure by preparing a face-centered-cubic-ordered metal nanoparticle film from metal nanoparticles, such as gold and silver nanoparticles, exerting a hydrostatic pressure upon the film at pressures of several gigapascals, followed by applying a non-hydrostatic stress perpendicularly at a pressure greater than approximately 10 GPa to form an array of nanowires with individual nanowires having a relatively uniform length, average diameter and density.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1992-01-01
Work performed during the reporting period is summarized. The construction of robustly good trellis codes for use with sequential decoding was developed; robustly good trellis codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large constraint length, low rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate-1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
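The report's free-distance formula is not reproduced here, but the quantity it computes can be estimated directly; the following brute-force sketch searches short zero-terminated input sequences of a rate-1/n code (shown for the classic (7,5) code, an illustrative choice rather than one of the report's codes).

```python
from itertools import product

def free_distance(gens, K, max_len=12):
    """Brute-force estimate: minimum Hamming weight over nonzero inputs of
    length <= max_len, each flushed back to the all-zero state."""
    best = None
    for L in range(1, max_len + 1):
        for bits in product([0, 1], repeat=L):
            if bits[0] != 1:                      # canonical: start with a 1
                continue
            state, w = 0, 0
            for b in list(bits) + [0] * (K - 1):  # flush to the zero state
                state = ((state << 1) | b) & ((1 << K) - 1)
                for g in gens:
                    w += bin(state & g).count('1') % 2
            best = w if best is None else min(best, w)
    return best

print(free_distance([0o7, 0o5], K=3))             # expect 5 for the (7,5) code
```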
Efficient airport detection using region-based fully convolutional neural networks
NASA Astrophysics Data System (ADS)
Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao
2018-04-01
This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we share the convolutional layers between the region proposal procedure and the airport detection procedure and use graphics processing units (GPUs) to speed up training and testing. For lack of labeled data, we transfer the convolutional layers of the ZF net pretrained on ImageNet to initialize the shared convolutional layers, then retrain the model using the alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes almost in real time with high accuracy, which is much better than traditional methods.
NASA Astrophysics Data System (ADS)
Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong
2018-03-01
Identification of targets on the sea battlefield is a prerequisite for assessing enemy forces in modern naval battle. In this paper, a collaborative identification method based on convolutional neural networks is proposed to identify typical sea-battlefield targets. Different from traditional single-input/single-output identification methods, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that...
A convolution model for computing the far-field directivity of a parametric loudspeaker array.
Shi, Chuang; Kajikawa, Yoshinobu
2015-02-01
This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity and Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model achieves significant improvement in agreement with measured directivity at a negligible computational cost.
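A schematic version of the proposed computation can be written in a few lines: convolve the array's product directivity with a Westervelt-type directivity over angle. Both functional forms below (a uniform line-array factor and one commonly quoted Westervelt-type expression) and all parameter values are assumptions for illustration, not the paper's model.

```python
import numpy as np

theta = np.radians(np.linspace(-60, 60, 481))     # observation angles

# product directivity of a uniform ultrasonic line array (assumed geometry)
k_us, d, N = 2 * np.pi * 40e3 / 343, 0.005, 8     # 40 kHz carrier, 5 mm pitch, 8 elements
psi = 0.5 * k_us * d * np.sin(theta)
num, den = np.sin(N * psi), N * np.sin(psi)
product = np.abs(np.divide(num, den, out=np.ones_like(psi),
                           where=np.abs(den) > 1e-12))

# Westervelt-type directivity (assumed form; alpha is an effective absorption)
alpha = 1.2
westervelt = 1.0 / np.sqrt(1.0 + (k_us * np.tan(theta) ** 2 / (2 * alpha)) ** 2)

D = np.convolve(product, westervelt, mode='same') # convolution model of the far field
D /= D.max()                                      # normalized directivity
```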
Definition of the Spatial Resolution of X-Ray Microanalysis in Thin Foils
NASA Technical Reports Server (NTRS)
Williams, D. B.; Michael, J. R.; Goldstein, J. I.; Romig, A. D., Jr.
1992-01-01
The spatial resolution of X-ray microanalysis in thin foils is defined in terms of the incident electron beam diameter and the average beam broadening. The beam diameter is defined as the full width tenth maximum of a Gaussian intensity distribution. The spatial resolution is calculated by a convolution of the beam diameter and the average beam broadening. This definition of the spatial resolution can be related simply to experimental measurements of composition profiles across interphase interfaces. Monte Carlo calculations using a high-speed parallel supercomputer show good agreement with this definition of the spatial resolution and calculations based on this definition. The agreement is good over a range of specimen thicknesses and atomic number, but is poor when excessive beam tailing distorts the assumed Gaussian electron intensity distributions. Beam tailing occurs in low-Z materials because of fast secondary electrons and in high-Z materials because of plural scattering.
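Because both distributions are taken as Gaussian, the convolution of the beam diameter with the beam broadening reduces to adding widths in quadrature; a minimal sketch of that calculation follows, with example values that are arbitrary.

```python
import numpy as np

def fwtm_from_sigma(sigma):
    """Full width at tenth maximum of a Gaussian of standard deviation sigma."""
    return 2.0 * np.sqrt(2.0 * np.log(10.0)) * sigma

def spatial_resolution(beam_fwtm, broadening):
    """Convolution of two Gaussian widths: quadrature sum."""
    return np.hypot(beam_fwtm, broadening)

print(spatial_resolution(fwtm_from_sigma(0.8), 4.0))   # nm, illustrative numbers
```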
Key parameters governing the densification of cubic-Li7La3Zr2O12 Li+ conductors
NASA Astrophysics Data System (ADS)
Yi, Eongyu; Wang, Weimin; Kieffer, John; Laine, Richard M.
2017-06-01
Cubic-Li7La3Zr2O12 (LLZO) is regarded as one of the most promising solid electrolytes for the construction of inherently safe, next generation all-solid-state Li batteries. Unfortunately, sintering these materials to full density with controlled grain sizes, mechanical and electrochemical properties relies on energy and equipment intensive processes. In this work, we elucidate key parameters dictating LLZO densification by tracing the compositional and structural changes during the processing of calcined and ball-milled Al3+-doped LLZO powders. We find that the powders undergo ion (Li+/H+) exchange during room temperature processing, such that on heating, the protonated LLZO lattice collapses and crystallizes to its constituent oxides, leading to reaction-driven densification at < 1000 °C, prior to sintering of LLZO grains at higher temperatures. It is shown that small particle sizes and protonation cannot be decoupled, and actually aid densification. We conclude that using fully decomposed nanoparticle mixtures, as obtained by liquid-feed flame spray pyrolysis, provides an ideal approach to use high surface and reaction energy to drive densification, resulting in pressureless sintering of Ga3+ doped LLZO thin films (25 μm) at 1130 °C/0.3 h to ideal microstructures (95 ± 1% density, 1.2 ± 0.2 μm average grain size) normally accessible only by pressure-assisted sintering. Such films offer both high ionic conductivity (1.3 ± 0.1 mS cm-1) and record low ionic area specific resistance (2 Ω cm2).
Periodicity analysis of tourist arrivals to Banda Aceh using smoothing SARIMA approach
NASA Astrophysics Data System (ADS)
Miftahuddin, Helida, Desri; Sofyan, Hizir
2017-11-01
Forecasting the number of tourist arrivals entering a region is needed for tourism businesses and for economic and industrial policies, so statistical modeling needs to be conducted. Banda Aceh is the capital of Aceh province, where much economic activity is driven by the services sector, one branch of which is tourism. Therefore, a prediction of the number of tourist arrivals is needed to develop further policies. The identification results indicate that the data on arrivals of foreign tourists to Banda Aceh contain both trend and seasonal components. The number of arrivals is presumably influenced by external factors, such as economics, politics, and the holiday season, which caused structural breaks in the data. Trend patterns are detected by using polynomial regression with quadratic and cubic approaches, while seasonality is detected by periodic polynomial regression with quadratic and cubic approaches. To model data that have seasonal effects, one of the statistical methods that can be used is SARIMA (Seasonal Autoregressive Integrated Moving Average). The results showed that for smoothing, the best method to detect the trend pattern is the cubic polynomial regression approach, with the modified model and a multiplicative periodicity of 12 months; the AIC value obtained was 70.52. The best method to detect the seasonal pattern is the periodic cubic polynomial regression approach, with the modified model and a multiplicative periodicity of 12 months; the AIC value obtained was 73.37. Furthermore, the best model to predict the number of foreign tourist arrivals to Banda Aceh in 2017 to 2018 is SARIMA (0,1,1)(1,1,0) with a MAPE of 26%.
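The selected model can be fitted with standard tools; the sketch below uses statsmodels on a synthetic monthly series standing in for the arrivals data (which the abstract does not publish), with the SARIMA(0,1,1)(1,1,0)12 orders named above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# synthetic monthly arrivals series as a placeholder for the real data
rng = np.random.default_rng(1)
idx = pd.date_range('2009-01-01', periods=96, freq='MS')
arrivals = pd.Series(1000 + 50 * np.sin(np.arange(96) * 2 * np.pi / 12)
                     + rng.normal(0, 20, 96), index=idx)

model = SARIMAX(arrivals, order=(0, 1, 1), seasonal_order=(1, 1, 0, 12))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=24)        # a 2017-2018 horizon as in the study
```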
Zhai, Xiaolong; Jelfs, Beth; Chan, Rosa H. M.; Tin, Chung
2017-01-01
Hand movement classification based on surface electromyography (sEMG) pattern recognition is a promising approach for upper limb neuroprosthetic control. However, maintaining day-to-day performance is challenged by the non-stationary nature of sEMG in real-life operation. In this study, we propose a self-recalibrating classifier that can be automatically updated to maintain a stable performance over time without the need for user retraining. Our classifier is based on a convolutional neural network (CNN) using short latency dimension-reduced sEMG spectrograms as inputs. The pretrained classifier is recalibrated routinely using a corrected version of the prediction results from recent testing sessions. Our proposed system was evaluated with the NinaPro database comprising hand movement data of 40 intact and 11 amputee subjects. Our system was able to achieve ~10.18% (intact, 50 movement types) and ~2.99% (amputee, 10 movement types) increases in classification accuracy averaged over five testing sessions with respect to the unrecalibrated classifier. When compared with a support vector machine (SVM) classifier, our CNN-based system consistently showed higher absolute performance and larger improvement as well as more efficient training. These results suggest that the proposed system can be a useful tool to facilitate long-term adoption of prosthetics for amputees in real-life applications. PMID:28744189
Automated embolic signal detection using Deep Convolutional Neural Network.
Sombune, Praotasna; Phienphanich, Phongphan; Phuechpanpaisal, Sutanya; Muengtaweepongsa, Sombat; Ruamthanthong, Anuchit; Tantibundhit, Charturong
2017-07-01
This work investigated the potential of a Deep Neural Network for the detection of cerebral embolic signals (ES) from transcranial Doppler ultrasound (TCD). The resulting system is intended to couple with TCD devices in diagnosing stroke risk in real time with high accuracy. The Adaptive Gain Control (AGC) approach developed in our previous study is employed to capture suspected ESs in real time. Using spectrograms of the same TCD signal dataset as our previous work as inputs, and the same experimental setup, a Deep Convolutional Neural Network (CNN), which can learn features during training, was investigated for its ability to bypass the traditional handcrafted feature extraction and selection process. Feature vectors extracted from the suspected ESs are then classified as an ES, artifact (AF), or normal (NR) interval. The effectiveness of the developed system was evaluated on 19 subjects undergoing procedures that generate emboli. The CNN-based system achieved on average 83.0% sensitivity, 80.1% specificity, and 81.4% accuracy, with considerably less development time. A growing set of training samples and increasing computational resources should contribute to higher performance. Besides having potential use in various clinical ES monitoring settings, continuation of this promising study will benefit the development of wearable applications by leveraging learnable features to accommodate demographic differences.
NASA Astrophysics Data System (ADS)
Liu, Kaizhan; Ye, Yunming; Li, Xutao; Li, Yan
2018-04-01
In recent years, the Convolutional Neural Network (CNN) has been widely used in the computer vision field and has made great progress in tasks such as object detection and classification. Moreover, combining CNNs, that is, making multiple CNN frameworks work synchronously and share their output information, can yield useful information that none of them can provide singly. Here we introduce a method to estimate object speed in real time by combining two CNNs: YOLOv2 and FlowNet. In every frame, YOLOv2 provides object size, location, and type, while FlowNet provides the optical flow of the whole image. On one hand, object size and location help select the object's region of the optical-flow image, from which the average optical flow of each object is calculated. On the other hand, object type and size help establish the relationship between optical flow and true speed by means of optical theory and prior knowledge. With these two key pieces of information, object speed can be estimated. This method manages to estimate the speed of multiple objects in real time using only an ordinary camera, even while moving, with errors acceptable in most application fields such as driverless vehicles or robot vision.
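The per-object averaging step can be sketched as follows; this is a minimal illustration under assumed inputs, where `metres_per_pixel` stands in for the scale that the paper infers from object type and size:

```python
import numpy as np

def object_speeds(flow, boxes, metres_per_pixel, fps):
    """Average the dense optical flow inside each detection box and
    convert pixel displacement per frame to a ground-speed estimate.

    flow  : (H, W, 2) array of per-pixel (dx, dy) from a flow network
    boxes : list of (x1, y1, x2, y2) integer detections from the detector
    metres_per_pixel : assumed scale from object type/size priors
    """
    speeds = []
    for x1, y1, x2, y2 in boxes:
        patch = flow[y1:y2, x1:x2]                  # flow vectors on the object
        mean_flow = patch.reshape(-1, 2).mean(axis=0)
        pixels_per_frame = np.hypot(*mean_flow)
        speeds.append(pixels_per_frame * metres_per_pixel * fps)   # m/s
    return speeds
```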
Saleh, Khaled; Hossny, Mohammed; Nahavandi, Saeid
2018-06-12
Traffic collisions between kangaroos and motorists are on the rise on Australian roads. According to a recent report, it was estimated that more than 20,000 kangaroo-vehicle collisions occurred in Australia during 2015 alone. In this work, we propose a vehicle-based framework for kangaroo detection in urban and highway traffic environments that could be used for collision-warning systems. Our proposed framework is based on region-based convolutional neural networks (RCNN). Given the scarcity of labeled data of kangaroos in traffic environments, we utilized our state-of-the-art data-generation pipeline to generate 17,000 synthetic depth images of traffic scenes with kangaroo instances annotated in them. We trained our proposed RCNN-based framework on a subset of the generated synthetic depth-image dataset. The proposed framework achieved an average precision (AP) of 92% over all the synthetic depth-image test datasets. We compared our proposed framework against other baseline approaches and outperformed them by more than 37% in AP over all the test datasets. Additionally, we evaluated the generalization performance of the proposed framework on real live data and achieved resilient detection accuracy without any further fine-tuning of our proposed RCNN-based framework.
Pelvic artery calcification detection on CT scans using convolutional neural networks
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Lu, Le; Yao, Jianhua; Bagheri, Mohammadhadi; Summers, Ronald M.
2017-03-01
Artery calcification is observed commonly in elderly patients, especially in patients with chronic kidney disease, and may affect coronary, carotid, and peripheral arteries. Vascular calcification has been associated with many clinical outcomes. Manual identification of calcification in CT scans requires substantial expert interaction, which makes it time-consuming and infeasible for large-scale studies. Many methods have been proposed for coronary artery calcification detection in cardiac CT scans; in these works, coronary artery extraction is commonly required for calcification detection. However, there are few works on abdominal or pelvic artery calcification detection. In this work, we present a method for automatic pelvic artery calcification detection on CT scans. This method uses the recently advanced faster region-based convolutional neural network (R-CNN) to identify artery calcification directly, without the need for artery extraction, since pelvic artery extraction is itself challenging. Our method first generates category-independent region proposals for each slice of the input CT scan using region proposal networks (RPN). Then, each region proposal is jointly classified and refined by a softmax classifier and bounding-box regressor. We applied the detection method to 500 images from 20 patient CT scans for evaluation. The detection system achieved a 77.4% average precision and an 85% sensitivity at 1 false positive per image.
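A minimal sketch of the two-stage pipeline described (region proposals followed by joint classification and box refinement), using torchvision's off-the-shelf Faster R-CNN as a stand-in; in practice the model would be fine-tuned on labeled calcification boxes:

```python
import torch
import torchvision

# Pretrained detector as a placeholder for the paper's calcification network
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Placeholder grayscale CT slice replicated to 3 channels, intensities in [0, 1]
ct_slice = torch.rand(1, 512, 512).repeat(3, 1, 1)
with torch.no_grad():
    detections = model([ct_slice])[0]   # RPN proposals -> classified, refined boxes
print(detections["boxes"].shape, detections["scores"][:5])
```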
High-performance image processing on the desktop
NASA Astrophysics Data System (ADS)
Jordan, Stephen D.
1996-04-01
The suitability of computers to the task of medical image visualization for the purposes of primary diagnosis and treatment planning depends on three factors: speed, image quality, and price. To be widely accepted, the technology must increase the efficiency of the diagnostic and planning processes. This requires processing and displaying medical images of various modalities in real time, with accuracy and clarity, on an affordable system. Our approach to meeting this challenge began with market research to understand customer image processing needs. These needs were translated into system-level requirements, which in turn were used to determine which image processing functions should be implemented in hardware. The result is a computer architecture for 2D image processing that is both high-speed and cost-effective. The architectural solution is based on the high-performance PA-RISC workstation with an HCRX graphics accelerator. The image processing enhancements are incorporated into the image visualization accelerator (IVX), which attaches to the HCRX graphics subsystem. The IVX includes a custom VLSI chip with a programmable convolver, a window/level mapper, and an interpolator supporting nearest-neighbor, bi-linear, and bi-cubic modes. This combination of features enables simultaneous convolution, pan, zoom, rotate, and window/level control of 1k by 1k by 16-bit medical images at 40 frames/second.
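The window/level mapping implemented by such hardware can be sketched in software as follows (a minimal version; the parameter values are illustrative):

```python
import numpy as np

def window_level(image, window, level, out_max=255):
    """Map raw 16-bit intensities to display values via a window/level LUT."""
    lo = level - window / 2.0
    scaled = (image.astype(np.float64) - lo) / float(window)
    return (np.clip(scaled, 0.0, 1.0) * out_max).astype(np.uint8)

img = np.random.randint(0, 4096, (1024, 1024), dtype=np.uint16)  # placeholder image
display = window_level(img, window=400, level=1040)
```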
Kwon, Yea-Hoon; Shin, Sae-Byuk; Kim, Shin-Dug
2018-04-30
The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method for classifying emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through a CNN; therefore, we propose a suitable CNN model for feature extraction by tuning the hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform that considers time and frequency simultaneously. We use the Database for Emotion Analysis Using Physiological Signals (DEAP) open dataset to verify the proposed process, achieving 73.4% accuracy and showing significant performance improvement over the current best-practice models.
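The zero-crossing-rate preprocessing of the GSR channel might look like the following minimal sketch; the frame length is an assumption:

```python
import numpy as np

def zero_crossing_rate(gsr, frame_len=128):
    """Per-frame fraction of sign changes in a mean-removed GSR trace."""
    s = gsr - gsr.mean()
    n_frames = len(s) // frame_len
    frames = s[:n_frames * frame_len].reshape(n_frames, frame_len)
    signs = np.signbit(frames).astype(np.int8)      # cast: bools can't subtract
    return np.abs(np.diff(signs, axis=1)).sum(axis=1) / frame_len

gsr = np.cumsum(np.random.randn(4096))   # synthetic stand-in for a GSR recording
features = zero_crossing_rate(gsr)
```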
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
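A toy demonstration of the frequency-domain identity underlying such methods (not the patented ADMM solver itself): all M filter convolutions follow from elementwise products of FFTs, at O(MN log N) rather than the O(MN²) of direct computation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 512, 8
s = rng.standard_normal(N)
D = rng.standard_normal((M, N))      # M dictionary filters (already zero-padded)

# All M circular convolutions at once via the convolution theorem
S = np.fft.fft(s)
conv_all = np.fft.ifft(np.fft.fft(D, axis=1) * S, axis=1).real

# Direct O(N^2) circular convolution of one filter as a correctness check
k = 3
direct = np.array([(D[k] * np.roll(s[::-1], n + 1)).sum() for n in range(N)])
assert np.allclose(conv_all[k], direct)
```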
Multithreaded implicitly dealiased convolutions
NASA Astrophysics Data System (ADS)
Roberts, Malcolm; Bowman, John C.
2018-03-01
Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
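For contrast, the conventional explicit zero-padding that implicit dealiasing improves on can be sketched in a few lines: to obtain the linear (dealiased) convolution of two length-N sequences via FFTs, both are padded to at least 2N-1 points.

```python
import numpy as np

def padded_convolution(a, b):
    """Linear convolution via FFTs with explicit zero-padding."""
    n = len(a) + len(b) - 1
    m = 1 << (n - 1).bit_length()     # next power of two for FFT efficiency
    return np.fft.ifft(np.fft.fft(a, m) * np.fft.fft(b, m)).real[:n]

a = np.arange(8.0)
b = np.ones(8)
assert np.allclose(padded_convolution(a, b), np.convolve(a, b))
```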
Detecting atrial fibrillation by deep convolutional neural networks.
Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui
2018-02-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we propose a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix inputs suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to the STFT output and the SWT output were developed. Our new method requires neither detection of P or R peaks nor feature design for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performance on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24%, and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87%, and accuracy of 98.63% were achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity, and accuracy, and, therefore, is a valuable tool for AF detection.
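The STFT front end might be sketched as follows, on a synthetic waveform with an assumed 300 Hz sampling rate (the actual preprocessing details are the paper's):

```python
import numpy as np
from scipy import signal

fs = 300                                     # assumed sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)                  # 5 s ECG segment, as in the paper
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # synthetic stand-in

f, tt, Z = signal.stft(ecg, fs=fs, nperseg=128)
spectrogram_2d = np.abs(Z)                   # 2-D matrix fed to the CNN
print(spectrogram_2d.shape)
```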
Hyper-Raman and Raman scattering from the polar modes of PbMg1/3Nb2/3O3.
Hehlen, B; Amouri, A; Al-Zein, A; Khemakhem, H
2014-01-08
Micro-hyper-Raman spectroscopy of a PbMg(1/3)Nb(2/3)O(3) (PMN) single crystal is performed at room temperature. The use of an optical microscope working in backscattering geometry significantly reduces the LO signal, thereby highlighting the weak contributions underneath. We clearly identify the highest-frequency transverse optic mode (TO3) in addition to the previously observed soft TO doublet at low frequency and TO2 at intermediate frequency. TO3 exhibits strong inhomogeneous broadening but perfectly fulfils the hyper-Raman cubic selection rules. The analysis shows that hyper-Raman spectroscopy is sensitive to all the vibrations of the average cubic Pm-3m symmetry group of PMN, the three polar F1u- and the silent F2u-symmetry modes. All these vibrations can be identified in the Raman spectra alongside other vibrational bands likely arising from symmetry breaking in polar nanoregions.
Myette, C.F.
1991-01-01
Numerical-model simulations of ground-water flow in the vicinity of the tailings basin indicate that, if areal recharge were doubled during spring and fall, water levels in wells could average about 4 feet above 1983 levels during these periods. Model results indicate that water levels in the tailings could remain about 5 feet above 1983 levels at the end of the year. Water levels in the tailings at the outlet of the basin could be about 1 foot above 1983 levels during the spring stress period and nearly 1.5 feet above 1983 levels during the fall stress period. Under these hypothetical climatic conditions, ground-water contribution to discharge at the outlet could be about 50 cubic feet per second during spring and about 80 cubic feet per second during fall.
House, L.B.
1995-01-01
The mass of PCB's transported from the lake in streamflow during 1987-88 was calculated to be 110 kilograms annually. The PCB's transport rate decreased 50 percent from 1987 to 1988, for the period April through September. Transport of PCB's was greatest during April and May of each year. The average flux rate of PCB's into the water column from the bottom sediment in the lake was estimated to be 1.2 milligrams per square meter per day. The PCB's load seems to increase at river discharges greater than 212 cubic meters per second. This increase in PCB's load might be caused by resuspension of PCB's-contaminated bottom-sediment deposits. There was little variation in PCB's load at flows less than 170 cubic meters per second. The bottom sediments are a continuing source of PCB's to Little Lake Butte des Morts and the lower Fox River.
Cycle-time equation for the Koller K300 cable yarder operating on steep slopes in the Northeast
Neil K. Huyler; Chris B. LeDoux
1997-01-01
Describes a delay-free cycle-time equation for the Koller K300 skyline yarder operating on steep slopes in the Northeast. Using the equation, the average delay-free cycle time was 5.72 minutes, which means that about 420 cubic feet of material per hour can be produced. The important variables in the equation were slope yarding distance, lateral yarding distance,...
NASA Astrophysics Data System (ADS)
Dzevin, Ievgenij M.; Mekhed, Alexander A.
2017-03-01
Samples of Fe-Al-C alloys of varying composition were synthesized under high pressures and temperatures. According to the X-ray analysis data, only the K-phase, with its usual average lattice parameter a = 0.376 nm, the carbide Fe3C, and cubic diamond reflections were present before and after cooling to liquid-nitrogen temperature.
Non-pulp utilization of above-ground biomass of mixed-species forests of small trees
P. Koch
1982-01-01
This solution proposes to rehabilitate annually, by clear felling, site preparation, and planting, 25,000 acres of level to rolling land averaging about 490 cubic feet per acre of stemwood in small hardwood trees 5 inches in diameter at breast height (dbh) and larger, and of many species, plus an equal volume of above-ground biomass in stembark and tops, and in trees...
Weight-Volume relationships of Aspen and Winter-Cut Black Spruce Pulpwood in Northern Minnesota
David C. Lothner; Richard M. Marden; Edwin Kallio
1974-01-01
Seasonal weight-volume relationships were determined for rough (bark-on) aspen and black spruce 100-inch pulpwood that was delivered within 1 week after cutting in northern Minnesota during 1971-72. For aspen, the weight of wood and bark per cubic foot of wood averaged 56 pounds in the winter and 61 pounds in the summer. This relationship for winter-cut black spruce...
NASA Astrophysics Data System (ADS)
Etro, Federico; Stepanova, Elena
2018-09-01
We provide evidence of a cubic law of art prices that hints at a general pattern for the distribution of artistic talent. The persistence, across heterogeneous markets from historical ones to contemporary art auctions, of a power law in the distribution of the average price per artist suggests the possibility of a universal law for talent distribution. We explore scale-free teacher-student networks to investigate the diffusion of talent over time.
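A power-law (and in particular cubic) relation of this kind is conventionally estimated by log-log regression; the sketch below fits synthetic rank-price data, not the paper's auction data:

```python
import numpy as np

# Toy data following P(rank) ~ rank^(-3) with multiplicative noise
rng = np.random.default_rng(2)
rank = np.arange(1, 501)
price = 1e6 * rank**-3.0 * np.exp(rng.normal(0, 0.2, rank.size))

# A power law is linear in log-log coordinates, so fit by least squares
slope, intercept = np.polyfit(np.log(rank), np.log(price), 1)
print(f"estimated exponent: {slope:.2f}")   # close to -3 for a cubic law
```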
Off-resonance artifacts correction with convolution in k-space (ORACLE).
Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne
2012-06-01
Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts.
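The image-space/k-space duality this relies on can be demonstrated in a few lines (with a toy phase map, not an actual field map): multiplying an image by a spatial phase term is equivalent to circularly convolving its k-space data with the Fourier transform of that term.

```python
import numpy as np

img = np.random.rand(64, 64)
y, x = np.mgrid[0:64, 0:64]
phase = np.exp(1j * 2 * np.pi * (3 * x + 2 * y) / 64)   # hypothetical phase map

k_modulated = np.fft.fft2(img * phase)                   # modulate, then transform

# Circular convolution of the two spectra, normalized by the array size
F, G = np.fft.fft2(img), np.fft.fft2(phase)
k_convolved = np.fft.ifft2(np.fft.fft2(F) * np.fft.fft2(G)) / img.size
assert np.allclose(k_modulated, k_convolved)
```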
Urtnasan, Erdenebayar; Park, Jong-Uk; Joo, Eun-Yeon; Lee, Kyoung-Joung
2018-04-23
In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8,571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F1-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
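A minimal sketch of a six-block 1-D CNN of the kind described; the channel counts, kernel sizes, and classification head are our assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class OSANet(nn.Module):
    """Six conv -> ReLU -> max-pool -> dropout blocks, then a binary head."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (16, 16, 32, 32, 64, 64):        # six convolution layers
            layers += [nn.Conv1d(ch, out_ch, kernel_size=5, padding=2),
                       nn.ReLU(),
                       nn.MaxPool1d(2),
                       nn.Dropout(0.2)]
            ch = out_ch
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                              # x: (batch, 1, samples)
        h = self.features(x).mean(dim=-1)              # global average pool
        return self.classifier(h)

logits = OSANet()(torch.randn(4, 1, 6000))             # four 1-min epochs @ 100 Hz
```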
Convolutional virtual electric field for image segmentation using active contours.
Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden
2014-01-01
Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computational load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF, and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, U-shape concavity convergence, subject contour convergence, and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression while simultaneously preserving weak edges. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.
NASA Astrophysics Data System (ADS)
Chesoh, S.; Lim, A.; Luangthuvapranit, C.
2018-04-01
This study aimed to cluster and quantify wild-caught fingerlings near a thermal power plant. Samples were collected monthly by bongo nets from four upstream sites of the Na Thap tidal river in Thailand from 2008 to 2013. Each caught species was identified and counted, and its density was calculated in terms of individuals per 1,000 cubic meters. A total of 45 aquatic animal fingerling species was commonly trapped, at an average density of 2,652 individuals per 1,000 cubic meters of water volume (1,235-4,570). The results of factor analysis revealed that factor 1 represented the largest group, freshwater fish species; factor 2 represented a medium-sized group of mesohaline species; factor 3 represented several brackish species; and factor 4 represented a few euryhaline species. All four factors reached maximum levels during May to October. The total average number of fish fingerlings caught at the outflow was greater than at the other sampling sites. An impact of heat pollution from power plant effluents was not clearly detected, and overall water quality conformed to the Thailand Surface Water Quality Standards. Coastal tidal periodicity and seasonal runoff phenomena were influential factors. Continuous ecological monitoring is strongly recommended.
NASA Astrophysics Data System (ADS)
Chokprasombat, Komkrich; Pinitsoontorn, Supree; Maensiri, Santi
2016-05-01
The magnetic properties of Fe-Co-Ni ternary alloys can be altered by changing the particle size, elemental composition, and crystalline structure. In this work, Fe50Co50-xNix nanoparticles (x = 10, 20, 40, and 50) were prepared by a novel chemical reduction process. Hydrazine monohydrate was used as a reducing agent under concentrated basic conditions in the presence of poly(vinylpyrrolidone). We found that the nanoparticles were composed of Fe, Co, and Ni with compositions according to the molar ratio of the metal sources. Interestingly, the particles were well crystallized in the as-prepared state without post-annealing at high temperature. Increasing the Ni content resulted in a phase transformation from body-centered cubic (bcc) to face-centered cubic (fcc). For the fcc phase, the average particle size decreased as the Ni content increased; the Fe50Ni50 nanoparticles had the smallest average size with the narrowest size distribution. In addition, the particles exhibited ferromagnetic properties at room temperature with coercivities higher than 300 Oe, and the saturation magnetization decreased with increasing Ni content. These results suggest that the structural and magnetic properties of Fe-Co-Ni alloys can be adjusted by varying the Ni content.
NASA Astrophysics Data System (ADS)
Kuri, G.; Degueldre, C.; Bertsch, J.; Döbeli, M.
2010-06-01
The crystal structure and local atomic arrangements surrounding Zr atoms were determined for a helium-implanted cubic stabilized zirconia (CSZ) using X-ray diffraction (XRD) and extended X-ray absorption fine structure (EXAFS) spectroscopy, respectively, measured at glancing angles. The implanted specimen was prepared at a helium fluence of 2 × 10^16 cm^-2 using He+ beams at two energies (2.54 and 2.74 MeV) passing through an 8.0 μm Al absorber foil. XRD results identified the formation of a new rhombohedral phase in the helium-embedded layer, attributed to internal stress resulting from expansion of the CSZ lattice. Zr K-edge EXAFS data suggested loss of crystallinity in the implanted lattice and disorder of the Zr atomic environment. EXAFS Fourier-transform analysis showed that the average first-shell radius of the Zr-O pair in the implanted sample was slightly larger than that of the CSZ standard. Common general disorder features were explained by rhombohedral-type short-range-ordered clusters. The average structural parameters estimated from the EXAFS data of unimplanted and implanted CSZ are compared and discussed. The potential of EXAFS as a local probe of atomic-scale structural modifications induced by helium implantation in CSZ is demonstrated.
NASA Technical Reports Server (NTRS)
Doland, G. D.
1970-01-01
Convolutional coding, used to upgrade digital data transmission under adverse signal conditions, has been improved by a method which ensures data transitions, permitting bit synchronizer operation at lower signal levels. Method also increases decoding ability by removing ambiguous condition.
Monitoring a Silent Phase Transition in CH3NH3PbI3 Solar Cells via Operando X-ray Diffraction
Schelhas, Laura T.; Christians, Jeffrey A.; Berry, Joseph J.; ...
2016-10-13
The relatively modest temperature of the tetragonal-to-cubic phase transition in CH3NH3PbI3 perovskite is likely to occur during real-world operation of CH3NH3PbI3 solar cells. In this work, we simultaneously monitor the structural phase transition of the active layer along with solar cell performance as a function of the device operating temperature. The tetragonal to cubic phase transition is observed in the working device to occur reversibly at temperatures between 60.5 and 65.4 degrees C. In these operando measurements, no discontinuity in the device performance is observed, indicating electronic behavior that is insensitive to the structural phase transition. Here, this decoupling of device performance from the change in long-range order across the phase transition suggests that the optoelectronic properties are primarily determined by the local structure in CH3NH3PbI3. That is, while the average crystal structure as probed by X-ray diffraction shows a transition from tetragonal to cubic, the local structure generally remains well characterized by uncorrelated, dynamic octahedral rotations that order at elevated temperatures but are unchanged locally.
Brabets, Timothy P.
2001-01-01
Flow data were collected from two adjacent rivers in Yukon-Charley Rivers National Preserve, Alaska (the Nation River, during 1991-2000, and the Kandik River, 1994-2000), and from the Yukon River (1950-2000) at Eagle, Alaska, upstream from the boundary of the preserve. These flow records indicate that most of the runoff from these rivers occurs from May through September and that the average monthly discharge during this period ranges from 1,172 to 2,210 cubic feet per second for the Nation River, from 1,203 to 2,633 cubic feet per second for the Kandik River, and from 112,000 to 224,000 cubic feet per second for the Yukon River. Water-quality data were collected for the Nation River and several of its tributaries from 1991 to 1992 and for the Yukon River at Eagle from 1950 to 1994. Three tributaries to the Nation River (Waterfall Creek, Cathedral Creek, and Hard Luck Creek) have relatively high concentrations of calcium, magnesium, and sulfate. These three watersheds are underlain predominantly by Paleozoic and Precambrian rocks. The Yukon River transports 33,000,000 tons of suspended sediment past Eagle each year. Reflecting the inputs from its major tributaries, the water of the Yukon River at Eagle is dominated by calcium-magnesium bicarbonate.
NASA Astrophysics Data System (ADS)
Hu, Tao; Wang, Zongrong; Ma, Ning; Du, Piyi
2017-12-01
PbZr0.52Ti0.48O3 thin films containing hexagonal and cubic Ag nanoparticles (Ag NPs) of various sizes were prepared using the sol-gel technique. During the aging process, Ag ions were photo-reduced to form hexagonal Ag NPs. These NPs were uniform in size, and their uniformity was maintained in the thin films during the heat-treatment process. Both the total volume and average size of the hexagonal Ag NPs increased with increasing Ag ion concentration from 0.02 to 0.08 mol l^-1. Meanwhile, the remaining Ag ions were reduced to form unstable Ag-Pb alloy particles with Pb ions during the early heating stage. During subsequent heat treatment, these alloys decomposed to form cubic Ag NPs in the thin films. The absorption range of the thin films, quantified as the full width at half maximum in the ultraviolet-visible absorption spectrum, expanded from 6.3 × 10^13 Hz (390-425 nm) to 8.4 × 10^13 Hz (383-429 nm) as the Ag NPs/PZT ratio increased from 0.2 to 0.8. This work provides an effective way to broaden the absorption range and enhance the optical properties of such films.
Structural and magnetic properties of sol-gel derived CaFe2O4 nanoparticles
NASA Astrophysics Data System (ADS)
Das, Arnab Kumar; Govindaraj, Ramanujan; Srinivasan, Ananthakrishnan
2018-04-01
Calcium ferrite nanoparticles with average crystallite size of ∼11 nm have been synthesized by sol-gel method by mixing calcium and ferric nitrates in stoichiometric ratio in the presence of ethylene glycol. As-synthesized nanoparticles were annealed at different temperatures and their structural and magnetic properties have been evaluated. X-ray diffraction studies showed that unlike most ferrites, as-synthesized cubic calcium ferrite showed a slow transformation to orthorhombic structure when annealed above 400 °C. Single phase orthorhombic CaFe2O4 was obtained upon annealing at 1100 °C. Divergence of zero field cooled and field cooled magnetization curves at low temperatures indicated superparamagnetic behavior in cubic calcium ferrite particles. Superparamagnetism persisted in cubic samples annealed up to 500 °C. As-synthesized nanoparticles heat treated at 1100 °C exhibited mixed characteristics of antiferromagnetic and paramagnetic grains with saturation magnetization of 0.4 emu/g whereas nanoparticles calcined at 400 °C exhibited superparamagnetic characteristics with saturation magnetization of 22.92 emu/g. An antiferromagnetic to paramagnetic transition was observed between 170 and 190 K in the sample annealed at 1100 °C, which was further confirmed by Mössbauer studies carried out at different temperatures across the transition.
NASA Astrophysics Data System (ADS)
Saeidfirozeh, Homa; Shafiekhani, Azizollah; Beheshti-Marnani, Amirkhosro; Askari, Mohammad Bagher
2018-06-01
A new compound, Mn0.9Co0.1Al2O4 nanowires, was synthesized by a thermal method. The resulting powder samples were characterized by scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), and X-ray diffraction (XRD). We found that a set of phase transformations occurred during the process. Eventually, five phases, including three spinel phases, corundum (α-Al2O3), and MnO, were formed at 1100 °C. Cubic galaxite nanowires were identified by X-ray analysis as the dominant phase. Moreover, X-ray analysis showed that Mn3O4 and Co3O4 nanoparticles formed in tetragonal and cubic symmetry, respectively. The SEM images revealed that the dominant morphology of the product is cubic nanowires with average diameters in the range 38-43 nm. Furthermore, we observed that temperature strongly influences the nanowire formation process. The electrochemical hydrogen evolution reaction (HER) of the synthesized composite was evaluated, and the HER overpotential was calculated to be about 110 mV with a low Tafel slope of 42 mV dec^-1, comparable with values reported for transition-metal dichalcogenides, with satisfactory durability.
MR-based synthetic CT generation using a deep convolutional neural network method.
Han, Xiao
2017-04-01
Interest has been growing rapidly in the field of radiotherapy in replacing CT with magnetic resonance imaging (MRI), due to the superior soft-tissue contrast offered by MRI and the desire to reduce unnecessary radiation dose. MR-only radiotherapy also simplifies the clinical workflow and avoids uncertainties in aligning MR with CT. Methods, however, are needed to derive CT-equivalent representations, often known as synthetic CT (sCT), from patient MR images for dose calculation and DRR-based patient positioning. Synthetic CT estimation is also important for PET attenuation correction in hybrid PET-MR systems. We propose in this work a novel deep convolutional neural network (DCNN) method for sCT generation and evaluate its performance on a set of brain tumor patient images. The proposed method builds upon recent developments of deep learning and convolutional neural networks in the computer vision literature. The proposed DCNN model has 27 convolutional layers interleaved with pooling and unpooling layers and 35 million free parameters, which can be trained to learn a direct end-to-end mapping from MR images to their corresponding CTs. Training such a large model on our limited data is made possible through the principle of transfer learning and by initializing model weights from a pretrained model. Eighteen brain tumor patients with both CT and T1-weighted MR images are used as experimental data and a sixfold cross-validation study is performed. Each sCT generated is compared against the real CT image of the same patient on a voxel-by-voxel basis. Comparison is also made with respect to an atlas-based approach that involves deformable atlas registration and patch-based atlas fusion. The proposed DCNN method produced a mean absolute error (MAE) below 85 HU for 13 of the 18 test subjects. The overall average MAE was 84.8 ± 17.3 HU for all subjects, which was found to be significantly better than the average MAE of 94.5 ± 17.8 HU for the atlas-based method. The DCNN method also provided significantly better accuracy when evaluated using two other metrics: the mean squared error (188.6 ± 33.7 versus 198.3 ± 33.0) and the Pearson correlation coefficient (0.906 ± 0.03 versus 0.896 ± 0.03). Although training a DCNN model can be slow, training only needs to be done once. Applying a trained model to generate a complete sCT volume for each new patient MR image took only 9 s, which was much faster than the atlas-based approach. A DCNN-based method was developed and shown to be able to produce highly accurate sCT estimations from conventional, single-sequence MR images in near real time. Quantitative results also showed that the proposed method competed favorably with an atlas-based method in terms of both accuracy and computation speed at test time. Further validation on dose-computation accuracy and on a larger patient cohort is warranted. Extensions of the method are also possible to further improve accuracy or to handle multi-sequence MR images.
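The voxel-by-voxel comparison metrics reported above are straightforward to compute; a minimal sketch on toy volumes:

```python
import numpy as np

def sct_metrics(sct, ct):
    """Voxel-by-voxel agreement metrics: MAE (HU), MSE, Pearson r."""
    diff = sct.astype(np.float64) - ct.astype(np.float64)
    mae = np.abs(diff).mean()
    mse = (diff ** 2).mean()
    r = np.corrcoef(sct.ravel(), ct.ravel())[0, 1]
    return mae, mse, r

# Toy volumes standing in for a predicted sCT and the reference CT
ct = np.random.randint(-1000, 2000, (32, 64, 64))
sct = ct + np.random.normal(0, 50, ct.shape)
print(sct_metrics(sct, ct))
```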
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code that improves burst-erasure protection by applying the convolution property to the tTN code and reduces computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.
1992-12-01
The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the US Government. ... v = cncd(2,1,6,G64,u,zeros(1,12)); % convolutional encoding; mm = bm(2,v); % binary to M-ary conversion; clear v u; mm = inter(50,200,mm); % interleaving (50... save result err. B. CNCD.M (CONVOLUTIONAL ENCODER FUNCTION): function [v,vr] = cncd(n,k,m,Gr,u,r) % CONVOLUTIONAL ENCODER % Paul H. Moose % Naval...
Time history solution program, L225 (TEV126). Volume 1: Engineering and usage
NASA Technical Reports Server (NTRS)
Kroll, R. I.; Tornallyay, A.; Clemmons, R. E.
1979-01-01
Volume 1 of a two-volume document is presented. The usage of the convolution program L225 (TEV126) is described. The program calculates the time response of a linear system by convolving the impulse response function with the time-dependent excitation function. The convolution is performed as a multiplication in the frequency domain, and fast Fourier transform techniques are used to transform the product back into the time domain to obtain response time histories. A brief description of the analysis used is presented.
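The frequency-domain procedure the program uses can be sketched as follows, with a toy impulse response and excitation; zero-padding is added to avoid circular wrap-around:

```python
import numpy as np

dt, n = 0.01, 1024
t = np.arange(n) * dt
h = np.exp(-2.0 * t) * np.sin(10.0 * t)     # toy impulse response
f = (t < 1.0).astype(float)                 # toy excitation: 1 s step

# Convolution as multiplication in the frequency domain, then inverse FFT
m = 2 * n                                   # zero-pad to avoid wrap-around
response = np.fft.ifft(np.fft.fft(h, m) * np.fft.fft(f, m)).real[:n] * dt
```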
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m_0 binary memory cells and k_0 (k_0 > m_0) inputs, a state diagram of 2^(k_0) states was required for the transfer function bound. A reduced state diagram of (2^(m_0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
Simulation of ICD-9 to ICD-10-CM Transition for Family Medicine: Simple or Convoluted?
Grief, Samuel N; Patel, Jesal; Kochendorfer, Karl M; Green, Lee A; Lussier, Yves A; Li, Jianrong; Burton, Michael; Boyd, Andrew D
2016-01-01
The objective of this study was to examine the impact of the transition from the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), on family medicine and to identify areas where additional training might be required. Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple transition is defined as 1 ICD-9-CM code mapping to 1 ICD-10-CM code, or 1 ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are nonreciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Of the 1635 diagnosis codes used by family medicine physicians, 70% of the codes were categorized as simple, 27% of the codes were convoluted, and 3% had no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested in these to ensure a successful transition to ICD-10-CM.
Simulation of ICD-9 to ICD-10-CM transition for family medicine: simple or convoluted?
Grief, Samuel N.; Patel, Jesal; Lussier, Yves A.; Li, Jianrong; Burton, Michael; Boyd, Andrew D.
2017-01-01
Objectives The objective of this study was to examine the impact of the transition from the International Classification of Diseases, Version Nine, Clinical Modification (ICD-9-CM) to the International Classification of Diseases, Version Ten, Clinical Modification (ICD-10-CM) on family medicine and to identify areas where additional training might be required. Methods Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple translation is defined as one ICD-9-CM code mapping to one ICD-10-CM code, or one ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are non-reciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Results Of the 1635 diagnosis codes used by the family medicine physicians, 70% of the codes were categorized as simple, 27% of the diagnosis codes were convoluted, and 3% were found to have no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only < 0.1% of the overall diagnosis codes. Conclusions The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested in these to ensure a successful transition to ICD-10-CM. PMID:26769875
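The simple-versus-convoluted distinction can be illustrated with a toy mapping network; the codes below are hypothetical examples, not drawn from the study's data set:

```python
from collections import defaultdict

gem = {                                   # hypothetical ICD-9 -> ICD-10 entries
    "250.00": {"E11.9"},                  # one-to-one        -> simple
    "461.9":  {"J01.90", "J01.91"},       # one-to-many       -> simple
    "719.46": {"M25.561", "M25.562"},
    "719.45": {"M25.561", "M25.562"},     # shares targets    -> convoluted
}

# Invert the mapping to find ICD-10 codes reachable from several ICD-9 codes
targets_to_sources = defaultdict(set)
for icd9, icd10s in gem.items():
    for icd10 in icd10s:
        targets_to_sources[icd10].add(icd9)

# A code is convoluted when its targets are entangled with other source codes
for icd9, icd10s in gem.items():
    entangled = any(len(targets_to_sources[c]) > 1 for c in icd10s)
    print(icd9, "convoluted" if entangled else "simple")
```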
Classification of teeth in cone-beam CT using deep convolutional neural network.
Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi
2017-01-01
Dental records play an important role in forensic identification. To this end, postmortem dental findings and teeth conditions are recorded in a dental chart and compared with those of antemortem records. However, most dentists are inexperienced at recording the dental chart for corpses, and it is a physically and mentally laborious task, especially in large-scale disasters. Our goal is to automate the dental filing process by using dental x-ray images. In this study, we investigated the application of a deep convolutional neural network (DCNN) for classifying tooth types on dental cone-beam computed tomography (CT) images. Regions of interest (ROIs) including single teeth were extracted from CT slices. Fifty-two CT volumes were randomly divided into 42 training and 10 test cases, and the ROIs obtained from the training cases were used for training the DCNN. To examine the sampling effect, random sampling was performed 3 times, and training and testing were repeated. We used the AlexNet network architecture provided in the Caffe framework, which consists of 5 convolution layers, 3 pooling layers, and 2 fully connected layers. To reduce overtraining, we augmented the data by image rotation and intensity transformation. The test ROIs were classified into 7 tooth types by the trained network. The average classification accuracy using the training data augmented by image rotation and intensity transformation was 88.8%. Compared with the result without data augmentation, data augmentation resulted in an approximately 5% improvement in classification accuracy. This indicates that further improvement can be expected by expanding the CT dataset. Unlike conventional methods, the proposed method is advantageous in obtaining high classification accuracy without the need for precise tooth segmentation. The proposed tooth classification method can be useful in the automatic filing of dental charts for forensic identification.
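The data augmentation step (rotation plus intensity transformation) might look like the following minimal sketch; the angle and gain ranges are assumptions:

```python
import numpy as np
from scipy import ndimage

def augment(roi, rng):
    """One augmented copy of a tooth ROI: small rotation + linear intensity change."""
    angle = rng.uniform(-15, 15)                        # assumed rotation range
    rotated = ndimage.rotate(roi, angle, reshape=False, mode="nearest")
    gain, offset = rng.uniform(0.9, 1.1), rng.uniform(-20, 20)
    return gain * rotated + offset

rng = np.random.default_rng(0)
roi = np.random.rand(64, 64) * 1000                     # placeholder CT ROI
augmented = [augment(roi, rng) for _ in range(8)]
```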
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.
A Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.
2018-04-01
Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and joint extraction of this information is one of the most important approaches to hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed that extracts the spectral-spatial information of hyperspectral images effectively. The proposed model not only learns sufficient knowledge from a limited number of samples but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Though CNNs have shown robustness to distortion, they cannot extract features at different scales through the traditional pooling layer, which has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results on a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
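The SPP idea of pooling at several grid sizes to obtain a fixed-length descriptor can be sketched as follows (2-D pooling for brevity; the paper applies it within a 3-D convolutional framework):

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Pool a (batch, channels, H, W) map at several grid sizes and
    concatenate, yielding a fixed-length vector regardless of H and W."""
    pooled = [F.adaptive_max_pool2d(feat, level).flatten(start_dim=1)
              for level in levels]
    return torch.cat(pooled, dim=1)     # (batch, channels * sum(level**2))

feat = torch.randn(2, 64, 13, 13)
vec = spatial_pyramid_pool(feat)        # shape: (2, 64 * (1 + 4 + 16)) = (2, 1344)
```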
Detection of prostate cancer on multiparametric MRI
NASA Astrophysics Data System (ADS)
Seah, Jarrel C. Y.; Tang, Jennifer S. N.; Kitchen, Andy
2017-03-01
In this manuscript, we describe our approach and methods for the ProstateX challenge, which achieved an overall AUC of 0.84 and the runner-up position. We train a deep convolutional neural network to classify lesions marked on multiparametric MRI of the prostate as clinically significant or not. We implement a novel addition to the standard convolutional architecture, termed auto-windowing, which is clinically inspired and designed to overcome some of the difficulties faced in MRI interpretation, where high dynamic ranges and low-contrast edges may cause difficulty for traditional convolutional neural networks trained on high-contrast natural imagery. We demonstrate that this system can be trained end to end and outperforms a similar architecture without such additions. Although a relatively small training set was provided, we use extensive data augmentation to prevent overfitting and transfer learning to improve convergence speed, showing that deep convolutional neural networks can feasibly be trained on small datasets.
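One plausible reading of "auto-windowing" is a learnable window/level stage in front of the convolutional stack; the sketch below is our hypothetical rendering, not the authors' published architecture:

```python
import torch
import torch.nn as nn

class AutoWindow(nn.Module):
    """Hypothetical auto-windowing front end: several learnable window/level
    mappings applied before the CNN (our reading of the description)."""
    def __init__(self, n_windows=3):
        super().__init__()
        self.level = nn.Parameter(torch.rand(n_windows))
        self.width = nn.Parameter(torch.rand(n_windows) * 0.5 + 0.1)

    def forward(self, x):               # x: (batch, 1, H, W), intensities in [0, 1]
        out = [(x - l) / w for l, w in zip(self.level, self.width)]
        return torch.sigmoid(torch.cat(out, dim=1))   # soft clipping per window

windowed = AutoWindow()(torch.rand(2, 1, 128, 128))   # (2, 3, 128, 128)
```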
No-reference image quality assessment based on statistics of convolution feature maps
NASA Astrophysics Data System (ADS)
Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo
2018-04-01
We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the degree of distortion of an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer that operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer effectively describe the type and degree of distortion an image has suffered. Finally, a Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.
Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography
NASA Astrophysics Data System (ADS)
Menke, W. H.
2017-12-01
We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for the sensitivity with respect to density, Lamé parameter, and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolution kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.
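For readers unfamiliar with the criterion, the cross-convolution idea can be stated as follows (our hedged paraphrase of Menke and Levin's formulation; the notation is ours, not the abstract's). For a two-component observed field $(u^{\mathrm{obs}}, v^{\mathrm{obs}})$ and reference field $(u^{\mathrm{ref}}, v^{\mathrm{ref}})$,

$$e(t) = u^{\mathrm{obs}} * v^{\mathrm{ref}} - v^{\mathrm{obs}} * u^{\mathrm{ref}}, \qquad E = \int e^2(t)\, dt .$$

Because both components of each field carry the same source wavelet, it cancels in the difference, which is why the measure eliminates the source from the misfit.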
Ground-water, surface-water, and water-chemistry data, Black Mesa area, northeastern Arizona, 1996
Littin, Gregory R.; Monroe, Stephen A.
1997-01-01
The Black Mesa monitoring program is designed to document long-term effects of ground-water pumping from the N aquifer by industrial and municipal users. The N aquifer is the major source of water in the 5,400-square-mile Black Mesa area, and the ground water occurs under confined and unconfined conditions. Monitoring activities include continuous and periodic measurements of (1) ground-water pumpage from the confined and unconfined parts of the aquifer, (2) ground-water levels in the confined and unconfined areas of the aquifer, (3) surface-water discharge, and (4) chemistry of the ground water and surface water. In 1996, ground-water withdrawals for industrial and municipal use totaled about 7,040 acre-feet, which is less than a 1-percent decrease from 1995. Pumpage from the confined part of the aquifer decreased by about 3 percent to 5,390 acre-feet, and pumpage from the unconfined part of the aquifer increased by about 9 percent to 1,650 acre-feet. Water-level declines in the confined area during 1996 were recorded in 11 of 13 wells, and the median change was a decline of about 2.7 feet as opposed to a decline of 1.8 feet for 1995. Water-level declines in the unconfined area were recorded in 11 of 18 wells, and the median change was a decline of 0.5 foot in 1996 as opposed to a decline of 0.1 foot in 1995. The average low-flow discharge at the Moenkopi streamflow-gaging station was 2.3 cubic feet per second in 1996. Streamflow-discharge measurements also were made at Laguna Creek, Dinnebito Wash, and Polacca Wash during 1996. Average low-flow discharge was 2.3 cubic feet per second at Laguna Creek, 0.4 cubic foot per second at Dinnebito Wash, and 0.2 cubic foot per second at Polacca Wash. Discharge was measured at three springs. Discharge from Moenkopi School Spring decreased by about 2 gallons per minute from the measurement in 1995. Discharge from an unnamed spring near Dennehotso decreased by 1.3 gallons per minute from the measurement made in 1995; however, discharge increased slightly at Burro Spring. Regionally, long-term water-chemistry data for wells and springs have remained stable.
NASA Astrophysics Data System (ADS)
Balachandran, Prasanna V.; Emery, Antoine A.; Gubernatis, James E.; Lookman, Turab; Wolverton, Chris; Zunger, Alex
2018-04-01
We apply machine learning (ML) methods to a database of 390 experimentally reported ABO3 compounds to construct two statistical models that predict possible new perovskite materials and possible new cubic perovskites. The first ML model classified the 390 compounds into 254 perovskites and 136 that are not perovskites with a 90% average cross-validation (CV) accuracy; the second ML model further classified the perovskites into 22 known cubic perovskites and 232 known noncubic perovskites with a 94% average CV accuracy. We find that the most effective chemical descriptors affecting our classification include largely geometric constructs such as the A and B Shannon ionic radii, the tolerance and octahedral factors, the A-O and B-O bond lengths, and the A and B Villars' Mendeleev numbers. We then construct an additional list of 625 ABO3 compounds assembled from charge-conserving combinations of A and B atoms absent from our list of known compounds. Then, using the two ML models constructed on the known compounds, we predict that 235 of the 625 exist in a perovskite structure with a confidence greater than 50% and, among them, that 20 exist in the cubic structure (albeit, the latter with only ~50% confidence). We find that the new perovskites are most likely to occur when the A and B atoms are a lanthanide or actinide, when the A atom is an alkali, alkali earth, or late transition metal atom, or when the B atom is a p-block atom. We also compare the ML findings with the density functional theory calculations and convex hull analyses in the Open Quantum Materials Database (OQMD), which predicts the T = 0 K ground-state stability of all the ABO3 compounds. We find that OQMD predicts 186 of 254 of the perovskites in the experimental database to be thermodynamically stable within 100 meV/atom of the convex hull and predicts 87 of the 235 ML-predicted perovskite compounds to be thermodynamically stable within 100 meV/atom of the convex hull, including 6 of these to be in cubic structures. We suggest these 87 as the most promising candidates for future experimental synthesis of novel perovskites.
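Among the descriptors named above, the tolerance and octahedral factors are simple functions of the Shannon ionic radii. The following minimal Python sketch (not from the paper; the radii are illustrative literature values in angstroms) shows how these two geometric descriptors would be computed for a candidate ABO3 composition:

    import math

    def tolerance_factor(r_a, r_b, r_o=1.40):
        """Goldschmidt tolerance factor t = (r_A + r_O) / (sqrt(2) * (r_B + r_O))."""
        return (r_a + r_o) / (math.sqrt(2) * (r_b + r_o))

    def octahedral_factor(r_b, r_o=1.40):
        """Octahedral factor mu = r_B / r_O."""
        return r_b / r_o

    # Illustrative Shannon radii (approximate, for demonstration only)
    r_ba, r_ti = 1.61, 0.605          # Ba2+ (XII-coordinated), Ti4+ (VI-coordinated)
    t = tolerance_factor(r_ba, r_ti)
    mu = octahedral_factor(r_ti)
    print(f"BaTiO3: t = {t:.3f}, mu = {mu:.3f}")

For BaTiO3 this gives t of roughly 1.06, consistent with the rule of thumb that t near or slightly above 1 favors cubic or tetragonal perovskite structures.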
A review of historical exposures to asbestos among skilled craftsmen (1940-2006).
Williams, Pamela R D; Phelka, Amanda D; Paustenbach, Dennis J
2007-01-01
This article provides a review and synthesis of the published and selected unpublished literature on historical asbestos exposures among skilled craftsmen in various nonshipyard and shipyard settings. The specific crafts evaluated were insulators, pipefitters, boilermakers, masons, welders, sheet-metal workers, millwrights, electricians, carpenters, painters, laborers, maintenance workers, and abatement workers. Over 50 documents were identified and summarized. Sufficient information was available to quantitatively characterize historical asbestos exposures for the most highly exposed workers (insulators), even though data were lacking for some job tasks or time periods. Average airborne fiber concentrations collected for the duration of the task and/or the entire work shift were found to range from about 2 to 10 fibers per cubic centimeter (cm3 or cc) during activities performed by insulators in various nonshipyard settings from the late 1960s and early 1970s. Higher exposure levels were observed for this craft during the 1940s to 1950s, when dust counts were converted from millions of particles per cubic foot (mppcf) to units of fibers per cubic centimeter (fibers/cc) using a 1:6 conversion factor. Similar tasks performed in U.S. shipyards yielded average fiber concentrations about two-fold greater, likely due to inadequate ventilation and confined work environments; however, excessively high exposure levels were reported in some British Naval shipyards due to the spraying of asbestos. Improved industrial hygiene practices initiated in the early to mid-1970s were found to reduce average fiber concentrations for insulator tasks approximately two- to five-fold. For most other crafts, average fiber concentrations were found to typically range from <0.01 to 1 fibers/cc (depending on the task or time period), with higher concentrations observed during the use of powered tools, the mixing or sanding of drywall cement, and the cleanup of asbestos insulation or lagging materials. The available evidence suggests that although many historical measurements exceeded the current OSHA 8-h time-weighted average (TWA) permissible exposure limit (PEL) of 0.1 fibers/cc, average fiber concentrations generally did not exceed historical occupational exposure limits in place at the time, except perhaps during ripout activities or the spraying of asbestos in enclosed spaces or onboard ships. Additionally, reported fiber concentrations may not have represented daily or actual human exposures to asbestos, since few samples were collected beyond specific short-term tasks and workers sometimes wore respiratory protective equipment. The available data were not sufficient to determine whether the airborne fiber concentrations represented serpentine or amphibole asbestos fibers, which would have a pronounced impact on the potential health hazards posed by the asbestos. Despite a number of limitations associated with the available air sampling data, the information should provide guidance for reconstructing asbestos exposures for different crafts in specific occupational settings where asbestos was present during the 1940 to 2006 time period.
Esophagus segmentation in CT via 3D fully convolutional neural network and random walk.
Fechter, Tobias; Adebahr, Sonja; Baltas, Dimos; Ben Ayed, Ismail; Desrosiers, Christian; Dolz, Jose
2017-12-01
Precise delineation of organs at risk is a crucial task in radiotherapy treatment planning for delivering high doses to the tumor while sparing healthy tissues. In recent years, automated segmentation methods have shown an increasingly high performance for the delineation of various anatomical structures. However, this task remains challenging for organs like the esophagus, which have a versatile shape and poor contrast to neighboring tissues. For human experts, segmenting the esophagus from CT images is a time-consuming and error-prone process. To tackle these issues, we propose a random walker approach driven by a 3D fully convolutional neural network (CNN) to automatically segment the esophagus from CT images. First, a soft probability map is generated by the CNN. Then, an active contour model (ACM) is fitted to the CNN soft probability map to get a first estimation of the esophagus location. The outputs of the CNN and ACM are then used in conjunction with a probability model based on CT Hounsfield (HU) values to drive the random walker. Training and evaluation were done on 50 CTs from two different datasets, with clinically used peer-reviewed esophagus contours. Results were assessed regarding spatial overlap and shape similarity. The esophagus contours generated by the proposed algorithm showed a mean Dice coefficient of 0.76 ± 0.11, an average symmetric square distance of 1.36 ± 0.90 mm, and an average Hausdorff distance of 11.68 ± 6.80 mm, compared to the reference contours. These results translate to a very good agreement with reference contours and an increase in accuracy compared to existing methods. Furthermore, when considering the results reported in the literature for the publicly available Synapse dataset, our method outperformed all existing approaches, which suggests that the proposed method represents the current state-of-the-art for automatic esophagus segmentation. We show that a CNN can yield accurate estimations of esophagus location, and that the results of this model can be refined by a random walk step taking pixel intensities and neighborhood relationships into account. One of the main advantages of our network over previous methods is that it performs 3D convolutions, thus fully exploiting the 3D spatial context and performing an efficient volume-wise prediction. The whole segmentation process is fully automatic and yields esophagus delineations in very good agreement with the gold standard, showing that it can compete with previously published methods.
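The refinement step can be sketched with scikit-image's random_walker, which implements the random walk segmentation used here. The sketch below is a simplified stand-in for the paper's pipeline: the CNN probability map itself serves as the guiding image, and the seed thresholds and beta value are my assumptions, not values from the paper.

    import numpy as np
    from skimage.segmentation import random_walker

    def refine_with_random_walker(prob_map, fg_thresh=0.9, bg_thresh=0.1, beta=130):
        """Refine a CNN soft probability map (values in [0, 1]) with a random walk.

        Voxels with very high/low probability become foreground/background seeds;
        the random walker labels the remaining voxels from image structure.
        """
        labels = np.zeros(prob_map.shape, dtype=np.int32)
        labels[prob_map > fg_thresh] = 1      # confident organ seeds
        labels[prob_map < bg_thresh] = 2      # confident background seeds
        # The walker diffuses the seed labels; beta controls edge sensitivity
        seg = random_walker(prob_map, labels, beta=beta, mode='bf')
        return seg == 1

    # A random array stands in for the CNN output in this toy example
    prob = np.random.rand(16, 32, 32)
    mask = refine_with_random_walker(prob)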
Floods of May 30 to June 15, 2008, in the Iowa and Cedar River basins, eastern Iowa
Linhart, Mike S.; Eash, David A.
2010-01-01
As a result of prolonged and intense periods of rainfall in late May and early June, 2008, along with heavier than normal snowpack the previous winter, record flooding occurred in Iowa in the Iowa River and Cedar River Basins. The storms were part of an exceptionally wet period from May 29 through June 12, when an Iowa statewide average of 9.03 inches of rain fell; the normal statewide average for the same period is 2.45 inches. From May 29 to June 13, the 16-day rainfall totals recorded at rain gages in Iowa Falls and Clutier were 14.00 and 13.83 inches, respectively. Within the Iowa River Basin, peak discharges of 51,000 cubic feet per second (flood-probability estimate of 0.2 to 1 percent) at the 05453100 Iowa River at Marengo, Iowa streamflow-gaging station (streamgage) on June 12, and of 39,900 cubic feet per second (flood-probability estimate of 0.2 to 1 percent) at the 05453520 Iowa River below Coralville Dam near Coralville, Iowa streamgage on June 15 are the largest floods on record for those sites. A peak discharge of 41,100 cubic feet per second (flood-probability estimate of 0.2 to 1 percent) on June 15 at the 05454500 Iowa River at Iowa City, Iowa streamgage is the fourth highest on record, but is the largest flood since regulation by the Coralville Dam began in 1958. Within the Cedar River Basin, the May 30 to June 15, 2008, flood is the largest on record at all six streamgages in Iowa located on the mainstem of the Cedar River and at five streamgages located on the major tributaries. Flood-probability estimates for 10 of these 11 streamgages are less than 1 percent. Peak discharges of 112,000 cubic feet per second (flood-probability estimate of 0.2 to 1 percent) at the 05464000 Cedar River at Waterloo, Iowa streamgage on June 11 and of 140,000 cubic feet per second (flood-probability estimate of less than 0.2 percent) at the 05464500 Cedar River at Cedar Rapids, Iowa streamgage on June 13 are the largest floods on record for those sites. Downstream from the confluence of the Iowa and Cedar Rivers, the peak discharge of 188,000 cubic feet per second (flood-probability estimate of less than 0.2 percent) at the 05465500 Iowa River at Wapello, Iowa streamgage on June 14, 2008, is the largest flood on record in the Iowa River and Cedar River Basins since 1903. High-water marks were measured at 88 locations along the Iowa River between State Highway 99 near Oakville and U.S. Highway 69 in Belmond, a distance of 319 river miles. High-water marks were measured at 127 locations along the Cedar River between Fredonia near the mouth (confluence with the Iowa River) and Riverview Drive north of Charles City, a distance of 236 river miles. The high-water marks were used to develop flood profiles for the Iowa and Cedar Rivers.
NASA Astrophysics Data System (ADS)
Liu, Wanjun; Liang, Xuejian; Qu, Haicheng
2017-11-01
Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Both traditional and deep learning-based classification methods have been proposed in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) is proposed in this paper. DVCNN is a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN is a set of 3D patches selected from the HSI which contain joint spectral-spatial information. In the feature extraction process, each patch is transformed into several different 1D vectors by 3D convolution kernels, which are able to extract features from spectral-spatial data. The rest of DVCNN is much the same as a general CNN and processes the 2D matrix constituted by all of the 1D vectors, so that DVCNN can not only extract more accurate and richer features than a CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands is enhanced by the 3D convolution in the spectral-spatial fusion process, and the computation is simplified by the dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results show that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and 19.63% on Pavia University scene compared with a spectral-only CNN. The maximum accuracy improvement of DVCNN over other state-of-the-art HSI classification methods was 13.72%, and the robustness of DVCNN to noise on water-absorption bands was demonstrated.
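To make the dimensionality-varying step concrete, the toy sketch below (the patch shape and the use of SciPy are my assumptions, not the authors' code) shows how a 3D kernel that spans the full spatial extent of a spectral-spatial patch collapses the patch into a 1D feature vector:

    import numpy as np
    from scipy.signal import fftconvolve

    # A spectral-spatial patch: 200 bands over a 7 x 7 spatial neighborhood
    patch = np.random.rand(200, 7, 7)

    # A 3D kernel matching the full spatial extent of the patch, so that
    # 'valid' convolution collapses both spatial axes and leaves a 1D output
    kernel = np.random.randn(7, 7, 7)

    feat = fftconvolve(patch, kernel, mode='valid')   # shape (194, 1, 1)
    vec = feat.ravel()                                # 1D spectral-spatial feature
    print(vec.shape)                                  # (194,)

Several such kernels would each produce one 1D vector, and stacking the vectors yields the 2D matrix that the rest of the network processes.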
Geomorphology and river dynamics of the lower Copper River, Alaska
Brabets, Timothy P.; Conaway, Jeffrey S.
2009-01-01
Located in south-central Alaska, the Copper River drains an area of more than 24,000 square miles. The average annual flow of the river near its mouth is 63,600 cubic feet per second, but is highly variable between winter and summer. In the winter, flow averages approximately 11,700 cubic feet per second, and in the summer, due to snowmelt, rainfall, and glacial melt, flow averages approximately 113,000 cubic feet per second, an order of magnitude higher. About 15 miles upstream of its mouth, the Copper River flows past the face of Childs Glacier and enters a large, broad, delta. The Copper River Highway traverses this flood plain, and in 2008, 11 bridges were located along this section of the highway. The bridges cross several parts of the Copper River and in recent years, the changing course of the river has seriously damaged some of the bridges.Analysis of aerial photography from 1991, 1996, 2002, 2006, and 2007 indicates the eastward migration of a channel of the Copper River that has resulted in damage to the Copper River Highway near Mile 43.5. Migration of another channel in the flood plain has resulted in damage to the approach of Bridge 339. As a verification of channel change, flow measurements were made at bridges along the Copper River Highway in 2005–07. Analysis of the flow measurements indicate that the total flow of the Copper River has shifted from approximately 50 percent passing through the bridges at Mile 27, near the western edge of the flood plain, and 50 percent passing through the bridges at Mile 36–37 to approximately 5 percent passing through the bridges at Mile 27 and 95 percent through the bridges at Mile 36–37 during average flow periods.The U.S. Geological Survey’s Multi-Dimensional Surface-Water Modeling System was used to simulate water-surface elevation and velocity, and to compute bed shear stress at two areas where the Copper River is affecting the Copper River Highway. After calibration, the model was used to examine the effects that betterments, such as guide banks or bridge extensions, would have on flow conditions and to provide sound conceptual information that could help decide if a proposed betterment will work or determine potential problems that need to be addressed for a particular betterment. The ability of the model to simulate these hydraulic conditions was constrained by the accuracy and level of channel geometry detail, which is constantly changing in the lower Copper River.
Spatial averaging of a dissipative particle dynamics model for active suspensions
NASA Astrophysics Data System (ADS)
Panchenko, Alexander; Hinz, Denis F.; Fried, Eliot
2018-03-01
Starting from a fine-scale dissipative particle dynamics (DPD) model of self-motile point particles, we derive meso-scale continuum equations by applying a spatial averaging version of the Irving-Kirkwood-Noll procedure. Since the method does not rely on kinetic theory, the derivation is valid for highly concentrated particle systems. Spatial averaging yields stochastic continuum equations similar to those of Toner and Tu. However, our theory also involves a constitutive equation for the average fluctuation force. According to this equation, both the strength and the probability distribution vary with time and position through the effective mass density. The statistics of the fluctuation force also depend on the fine scale dissipative force equation, the physical temperature, and two additional parameters which characterize fluctuation strengths. Although the self-propulsion force entering our DPD model contains no explicit mechanism for aligning the velocities of neighboring particles, our averaged coarse-scale equations include the commonly encountered cubically nonlinear (internal) body force density.
Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyeong; Kwon, Ohin; Seo, Jin Keun; Baek, Woon Sik
2003-05-01
In magnetic resonance electrical impedance tomography (MREIT) we inject currents through electrodes placed on the surface of a subject and try to reconstruct cross-sectional resistivity (or conductivity) images using internal magnetic flux density as well as boundary voltage measurements. In this paper we present a static resistivity image of a cubic saline phantom (50 x 50 x 50 mm3) containing a cylindrical sausage object with an average resistivity value of 123.7 ohm-cm. Our current MREIT system is based on an experimental 0.3 T MRI scanner and a current injection apparatus. We captured MR phase images of the phantom while injecting currents of 28 mA through two pairs of surface electrodes. We computed current density images from magnetic flux density images that are proportional to the MR phase images. From the current density images and boundary voltage data we reconstructed a cross-sectional resistivity image within a central region of 38.5 x 38.5 mm2 at the middle of the phantom using the J-substitution algorithm. The spatial resolution of the reconstructed image was 64 x 64 and the reconstructed average resistivity of the sausage was 117.7 ohm-cm. Even though the error in the reconstructed average resistivity value was small, the relative L2-error of the reconstructed image was 25.5% due to the noise in the measured MR phase images. We expect improvements in accuracy by utilizing an MRI scanner with higher SNR and by increasing the voxel size at the expense of spatial resolution.
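As a rough illustration of the step from measured magnetic flux density to current density, the sketch below applies Ampère's law, J = (1/mu0) curl(B), on a regular grid. It assumes all three components of B are available, which is a simplification of practical MREIT, and the J-substitution resistivity update itself is not reproduced here.

    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

    def current_density(B, dx):
        """J = curl(B) / mu0 for B of shape (3, nx, ny, nz), grid spacing dx in m.

        np.gradient uses central differences in the interior of the grid.
        """
        Bx, By, Bz = B
        dBx = np.gradient(Bx, dx)   # list of derivatives: [d/dx, d/dy, d/dz]
        dBy = np.gradient(By, dx)
        dBz = np.gradient(Bz, dx)
        Jx = (dBz[1] - dBy[2]) / MU0
        Jy = (dBx[2] - dBz[0]) / MU0
        Jz = (dBy[0] - dBx[1]) / MU0
        return np.stack([Jx, Jy, Jz])

    B = np.random.rand(3, 16, 16, 16) * 1e-8   # synthetic flux density, tesla
    J = current_density(B, dx=1e-3)            # A/m^2 on a 1 mm grid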
Water in the Middle East: A Catalyst for Conflict or Foundation for Cooperation
2003-04-07
of the Jordan River basin and the adjacent aquifers. “While freshwater resources are renewable, in practice they are often finite, unevenly... tributaries, and adjacent aquifers. Though small in comparison to the Nile, Tigris or Euphrates, (the average discharge rates in billions of cubic... and the Yarmouk River), and the non-renewable ground water aquifers; those beneath the West Bank (the Yarkon-Taninim aquifer), and the aquifer of the...
Assessment of the SMAP Passive Soil Moisture Product
NASA Technical Reports Server (NTRS)
Chan, Steven K.; Bindlish, Rajat; O'Neill, Peggy E.; Njoku, Eni; Jackson, Tom; Colliander, Andreas; Chen, Fan; Burgin, Mariko; Dunbar, Scott; Piepmeier, Jeffrey;
2016-01-01
The National Aeronautics and Space Administration (NASA) Soil Moisture Active Passive (SMAP) satellite mission was launched on January 31, 2015. The observatory was developed to provide global mapping of high-resolution soil moisture and freeze-thaw state every two to three days using an L-band (active) radar and an L-band (passive) radiometer. After an irrecoverable hardware failure of the radar on July 7, 2015, the radiometer-only soil moisture product became the only operational Level 2 soil moisture product for SMAP. The product provides soil moisture estimates posted on a 36 kilometer Earth-fixed grid produced using brightness temperature observations from descending passes. Within months after the commissioning of the SMAP radiometer, the product was assessed to have attained preliminary (beta) science quality, and data were released to the public for evaluation in September 2015. The product is available from the NASA Distributed Active Archive Center at the National Snow and Ice Data Center. This paper provides a summary of the Level 2 Passive Soil Moisture Product (L2_SM_P) and its validation against in situ ground measurements collected from different data sources. Initial in situ comparisons conducted between March 31, 2015 and October 26, 2015, at a limited number of core validation sites (CVSs) and several hundred sparse network points, indicate that the V-pol Single Channel Algorithm (SCA-V) currently delivers the best performance among algorithms considered for L2_SM_P, based on several metrics. The accuracy of the soil moisture retrievals averaged over the CVSs was 0.038 cubic meter per cubic meter unbiased root-mean-square difference (ubRMSD), which approaches the SMAP mission requirement of 0.040 cubic meter per cubic meter.
NASA Technical Reports Server (NTRS)
Chuang, Hsiao-Chi; Hsiao, Ta-Chih; Wang, Sheng-Hsiang; Tsay, Si-Chee; Lin, Neng-Huei
2016-01-01
Biomass burning (BB) frequently occurs in Southeast Asia (SEA), which significantly affects air quality and can consequently lead to adverse health effects. The aim of this study was to characterize particulate matter (PM) and black carbon (BC) emitted from BB source regions in SEA and their potential for deposition in the alveolar region of human lungs. A 31-day characterization of PM profiles was conducted at the Doi Ang Khang (DAK) meteorology station in northern Thailand in March 2013. Substantial particle numbers (10147 ± 5800 particles per cubic centimeter) with a geometric mean diameter (GMD) of 114.4 ± 9.2 nm were found at the study site. The hourly-average mass concentration of PM with aerodynamic diameter less than 2.5 microns (PM2.5) was 78.0 ± 34.5 micrograms per cubic meter, whereas the black carbon (BC) mass concentration was 4.4 ± 2.6 micrograms per cubic meter. Notably, high concentrations of nanoparticle surface area (100.5 ± 54.6 square micrometers per cubic centimeter) emitted from biomass burning can be inhaled into the human alveolar region. Significant correlations with fire counts within different ranges around DAK were found for particle number, the surface area concentration of alveolar deposition, and BC. In conclusion, biomass burning is an important PM source in SEA, particularly of nanoparticles, which have a high potency to be inhaled into the lung environment and interact with alveolar cells, leading to adverse respiratory effects. The fire counts within 100 to 150 km showed the highest Pearson's r for particle number and surface area concentration, which suggests that 12 to 24 hr could be a fair time scale for the initial aging of BB aerosols. Importantly, people living in this region could be at higher risk of PM exposure.
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
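For readers unfamiliar with the device being traded off here, the sketch below implements hard-decision maximum-likelihood (Viterbi) decoding of a rate one-half, constraint length five convolutional code in Python. The generator taps are an illustrative pair, not necessarily those of the breadboard MCD:

    K = 5                       # constraint length, as in the MCD study
    G = (0b10011, 0b11101)      # illustrative rate-1/2 generator taps (assumed)
    NSTATES = 1 << (K - 1)

    def parity(x):
        return bin(x).count('1') & 1

    def encode(bits):
        """Shift-register encoder: 2 output bits per input bit."""
        state, out = 0, []
        for b in bits:
            reg = (b << (K - 1)) | state      # K-bit window: new bit + K-1 old bits
            out += [parity(reg & g) for g in G]
            state = reg >> 1                  # shift, dropping the oldest bit
        return out

    def viterbi_decode(received, nbits):
        """Hard-decision ML decoding by dynamic programming over 2^(K-1) states."""
        INF = 10 ** 9
        metric = [0] + [INF] * (NSTATES - 1)  # encoder starts in the all-zero state
        paths = [[] for _ in range(NSTATES)]
        for t in range(nbits):
            r0, r1 = received[2 * t], received[2 * t + 1]
            new_metric = [INF] * NSTATES
            new_paths = [None] * NSTATES
            for s in range(NSTATES):
                if metric[s] >= INF:
                    continue
                for b in (0, 1):
                    reg = (b << (K - 1)) | s
                    m = metric[s] + (parity(reg & G[0]) != r0) + (parity(reg & G[1]) != r1)
                    ns = reg >> 1
                    if m < new_metric[ns]:
                        new_metric[ns], new_paths[ns] = m, paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[min(range(NSTATES), key=metric.__getitem__)]

    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    rx = encode(msg)
    rx[3] ^= 1                                # a single channel bit error
    assert viterbi_decode(rx, len(msg)) == msg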
A unitary convolution approximation for the impact-parameter dependent electronic energy loss
NASA Astrophysics Data System (ADS)
Schiwietz, G.; Grande, P. L.
1999-06-01
In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Lipes, R.; Reed, I. S.; Wu, C.
1980-01-01
A fast algorithm is developed to compute two-dimensional convolutions of an array of d1 x d2 complex number points, where d2 = 2^m and d1 = 2^(m-r+1) for some 1 ≤ r ≤ m. This algorithm requires fewer multiplications and about the same number of additions as the conventional fast Fourier transform method for computing the two-dimensional convolution. It also has the advantage that the operation of transposing the matrix of data can be avoided.
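The conventional FFT route that this algorithm is compared against rests on the 2D cyclic convolution theorem: circular convolution becomes a pointwise product of transforms. A short numpy check of that equivalence (my own illustration, not the paper's code):

    import numpy as np

    def cyclic_conv2d_fft(x, h):
        """2D circular convolution via the convolution theorem:
        conv(x, h) = IFFT2(FFT2(x) * FFT2(h)), both arrays of equal shape."""
        return np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h))

    def cyclic_conv2d_direct(x, h):
        """Direct O(n^4) reference for checking the FFT result."""
        n1, n2 = x.shape
        y = np.zeros((n1, n2), dtype=complex)
        for i in range(n1):
            for j in range(n2):
                for k in range(n1):
                    for l in range(n2):
                        y[i, j] += x[k, l] * h[(i - k) % n1, (j - l) % n2]
        return y

    x = np.random.rand(8, 4) + 1j * np.random.rand(8, 4)   # d1 x d2 complex points
    h = np.random.rand(8, 4) + 1j * np.random.rand(8, 4)
    assert np.allclose(cyclic_conv2d_fft(x, h), cyclic_conv2d_direct(x, h))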
Cascaded K-means convolutional feature learner and its application to face recognition
NASA Astrophysics Data System (ADS)
Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu
2017-09-01
Currently, considerable efforts have been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, and conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightweight feature learner is presented to solve these problems, with application to face recognition, which shares a similar topology with a convolutional neural network. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear features. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and labeled faces in the wild datasets among the comparative methods.
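A minimal sketch of the filter-bank learning layer described above, assuming grayscale images, 7x7 patches, and scikit-learn's KMeans (the filter count and patch counts are placeholders, and the pooling layer is omitted):

    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.signal import convolve2d

    def learn_kmeans_filters(images, n_filters=16, w=7, patches_per_image=100, seed=0):
        """Learn convolution filters as K-means centroids of normalized patches."""
        rng = np.random.default_rng(seed)
        patches = []
        for img in images:
            for _ in range(patches_per_image):
                i = rng.integers(0, img.shape[0] - w)
                j = rng.integers(0, img.shape[1] - w)
                p = img[i:i + w, j:j + w].ravel()
                patches.append((p - p.mean()) / (p.std() + 1e-8))  # contrast-normalize
        centroids = KMeans(n_clusters=n_filters, n_init=10,
                           random_state=seed).fit(np.array(patches)).cluster_centers_
        return centroids.reshape(n_filters, w, w)

    def feature_maps(img, filters):
        """Convolve an image with the learned bank; tanh supplies the nonlinearity."""
        return np.tanh(np.stack([convolve2d(img, f, mode='valid') for f in filters]))

    imgs = [np.random.rand(64, 64) for _ in range(10)]   # stand-in training images
    bank = learn_kmeans_filters(imgs)
    maps = feature_maps(imgs[0], bank)                   # shape (16, 58, 58)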
NASA Astrophysics Data System (ADS)
Wu, Leyuan
2018-01-01
We present a brief review of gravity forward algorithms in the Cartesian coordinate system, including both space-domain and Fourier-domain approaches, after which we introduce a truly general and efficient algorithm, namely the convolution-type Gauss fast Fourier transform (Conv-Gauss-FFT) algorithm, for 2D and 3D modeling of the gravity potential and its derivatives due to sources with arbitrary geometry and arbitrary density distribution which are defined either by discrete or by continuous functions. The Conv-Gauss-FFT algorithm is based on the combined use of a hybrid rectangle-Gaussian grid and the fast Fourier transform (FFT) algorithm. Since the gravity forward problem in the Cartesian coordinate system can be expressed as continuous convolution-type integrals, we first approximate the continuous convolution by a weighted sum of a series of shifted discrete convolutions, and then each shifted discrete convolution, which is essentially a Toeplitz system, is calculated efficiently and accurately by combining circulant embedding with the FFT algorithm. Synthetic and real model tests show that the Conv-Gauss-FFT algorithm can obtain high-precision forward results very efficiently for almost any practical model, and it works especially well for complex 3D models when gravity fields on large 3D regular grids are needed.
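The Toeplitz-via-circulant device at the heart of the Conv-Gauss-FFT step can be shown in one dimension: zero-pad both sequences so that the FFT's circular convolution reproduces the linear (Toeplitz) one. A sketch under those assumptions:

    import numpy as np

    def toeplitz_conv_fft(kernel, signal):
        """Linear (Toeplitz) convolution evaluated by circulant embedding:
        zero-pad both sequences to length >= len(kernel) + len(signal) - 1,
        so the circular convolution computed by the FFT equals the linear one."""
        n = len(kernel) + len(signal) - 1
        nfft = 1 << (n - 1).bit_length()      # next power of two for speed
        K = np.fft.rfft(kernel, nfft)
        S = np.fft.rfft(signal, nfft)
        return np.fft.irfft(K * S, nfft)[:n]

    k = np.random.rand(100)
    s = np.random.rand(1000)
    assert np.allclose(toeplitz_conv_fft(k, s), np.convolve(k, s))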
A convolutional neural network to filter artifacts in spectroscopic MRI.
Gurbani, Saumya S; Schreibmann, Eduard; Maudsley, Andrew A; Cordova, James Scott; Soher, Brian J; Poptani, Harish; Verma, Gaurav; Barker, Peter B; Shim, Hyunsuk; Cooper, Lee A D
2018-03-09
Proton MRSI is a noninvasive modality capable of generating volumetric maps of in vivo tissue metabolism without the need for ionizing radiation or injected contrast agent. Magnetic resonance spectroscopic imaging has been shown to be a viable imaging modality for studying several neuropathologies. However, a key hurdle in the routine clinical adoption of MRSI is the presence of spectral artifacts that can arise from a number of sources, possibly leading to false information. A deep learning model was developed that was capable of identifying and filtering out poor quality spectra. The core of the model used a tiled convolutional neural network that analyzed frequency-domain spectra to detect artifacts. When compared with a panel of MRS experts, our convolutional neural network achieved high sensitivity and specificity with an area under the curve of 0.95. A visualization scheme was implemented to better understand how the convolutional neural network made its judgement on single-voxel or multivoxel MRSI, and the convolutional neural network was embedded into a pipeline capable of producing whole-brain spectroscopic MRI volumes in real time. The fully automated method for assessment of spectral quality provides a valuable tool to support clinical MRSI or spectroscopic MRI studies for use in fields such as adaptive radiation therapy planning.
Baczewski, Andrew David; Vikram, Melapudi; Shanker, Balasubramaniam; ...
2010-08-27
Diffusion, lossy wave, and Klein–Gordon equations find numerous applications in practical problems across a range of diverse disciplines. The temporal dependence of all three Green's functions is characterized by an infinite tail. This implies that the cost complexity of the spatio-temporal convolutions associated with evaluating the potentials scales as O(N_s^2 N_t^2), where N_s and N_t are the number of spatial and temporal degrees of freedom, respectively. In this paper, we discuss two new methods to rapidly evaluate these spatio-temporal convolutions by exploiting their block-Toeplitz nature within the framework of accelerated Cartesian expansions (ACE). The first scheme identifies a convolution relation in time amongst ACE harmonics, and the fast Fourier transform (FFT) is used for efficient evaluation of these convolutions. The second method exploits the rank deficiency of the ACE translation operators with respect to time and develops a recursive numerical compression scheme for the efficient representation and evaluation of temporal convolutions. It is shown that the cost of both methods scales as O(N_s N_t log^2 N_t). Furthermore, several numerical results are presented for the diffusion equation to validate the accuracy and efficacy of the fast algorithms developed here.
Spokane Valley-Rathdrum Prairie aquifer, Washington and Idaho
Drost, B.W.; Seitz, Harold R.
1977-01-01
The Spokane Valley-Rathdrum Prairie aquifer is composed of unconsolidated Quaternary glaciofluvial deposits underlying an area of about 350 square miles. Transmissivities in the aquifer range from about 0.13 million to 11 million feet squared per day and ground-water velocities exceed 60 feet per day in some areas. The water-table gradient ranges from about 2 feet per mile to more than 60 feet per mile, and during a year the water table fluctuates on the order of 5 to 10 feet. For most of the aquifer the water table is between 40 and 400 feet below land surface. The aquifer is recharged and discharged at an average rate of about 1,320 cubic feet per second. Water is presently (1976) pumped from the aquifer at an average rate of about 239 cubic feet per second for domestic, industrial, and agricultural uses. Most of this is discharged to the Spokane River, lost to evapotranspiration, or applied to the land surface with little or no change in quality. However, about 34 cubic feet per second becomes waste water generated by domestic and industrial activities and is returned to the aquifer by percolation from cesspools and drain fields. The quality of water in the aquifer is generally good. Less than one-half of 1 percent of the 3,300 analyses available exceeded the maximum contaminant levels specified in the National Interim Primary (or Proposed Secondary) Drinking Water Regulations (U.S. Environmental Protection Agency, 1975) for constituents which may be hazardous to health. Of the 6,300 analyses for constituents considered detrimental to the esthetic quality of water, about 1.4 percent have yielded values which exceeded the recommended levels. Alternative water sources for the area supplied by the aquifer are the Spokane and Little Spokane Rivers, lakes adjacent to the aquifer, and other aquifers. All of these potential sources are less desirable than the Spokane Valley-Rathdrum Prairie aquifer because of insufficient supplies, poor water quality, and (or) remoteness from the areas of need.
Enhanced line integral convolution with flow feature detection
DOT National Transportation Integrated Search
1995-01-01
Prepared ca. 1995. The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain [Cabral & Leedom '93]. The method produces a flow texture image...
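A minimal, unoptimized LIC sketch (my own illustration of the cited idea): white noise is averaged along short streamlines traced through the vector field, so the output texture streaks along the flow.

    import numpy as np

    def lic(vx, vy, noise, L=10):
        """Minimal line integral convolution: average a white-noise texture along
        short streamlines of the vector field (vx, vy), one pixel step at a time."""
        h, w = noise.shape
        out = np.zeros_like(noise)
        for i in range(h):
            for j in range(w):
                acc, cnt = 0.0, 0
                for sign in (1.0, -1.0):          # integrate forward and backward
                    x, y = float(j), float(i)
                    for _ in range(L):
                        iy, ix = int(round(y)) % h, int(round(x)) % w
                        acc += noise[iy, ix]
                        cnt += 1
                        u, v = vx[iy, ix], vy[iy, ix]
                        norm = np.hypot(u, v) + 1e-8
                        x += sign * u / norm
                        y += sign * v / norm
                out[i, j] = acc / cnt
        return out

    yy, xx = np.mgrid[0:64, 0:64]
    vx, vy = -(yy - 32.0), (xx - 32.0)            # a simple vortex field
    img = lic(vx, vy, np.random.rand(64, 64))     # streaks follow the circulation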
Hydrology and simulation of ground-water flow in the Aguadilla to Rio Camuy area, Puerto Rico
Tucci, Patrick; Martinez, M.I.
1995-01-01
The aquifers of the Aguadilla to Rio Camuy area, in the northwestern part of Puerto Rico, are the least developed of those on the north coast, and relatively little information is available concerning the ground-water system. The present study, which was part of a comprehensive appraisal of the ground-water resources of the North Coast Province, attempts to interpret the hydrology of the area within the constraints of available data. The study area consists of an uplifted rolling plain that is 200 to 400 feet above sea level and a heavily forested, karst upland. The only major streams in the area are the Rio Camuy and the Rio Guajataca. Most water used in the area is obtained from Lago de Guajataca, just south of the study area, and ground-water use is minimal (less than 5 million gallons per day). Sedimentary rocks of Tertiary age, mainly limestone and calcareous clays, comprise the aquifers of the Aguadilla to Rio Camuy area. The rocks generally dip from 4 to 7 degrees to the north, and the total sedimentary rock sequence may be as much as 6,000 feet thick near the Atlantic coast. Baseflows for the Rio Camuy are 58 cubic feet per second near Bayaney and 72 cubic feet per second near Hatillo. The ground-water discharge to the Rio Camuy between these stations is estimated to be 15 cubic feet per second, or 2.6 cubic feet per second per linear mile. The flow of the Rio Guajataca is regulated by the Guajataca Dam at Lago de Guajataca. Ground-water discharge to the Rio Guajataca between the dam and the coast is estimated to be about 17 cubic feet per second, based on the average ground-water discharge per linear mile estimated for the Rio Camuy. Both water-table and artesian aquifers are present in the Aguadilla to Rio Camuy area; however, most ground water occurs within the water-table aquifer, which was the primary focus of this study. The top of the confining unit, below the water-table aquifer, generally is within the unnamed upper member of the Cibao Formation; however, it is within the Los Puertos Formation in the eastern part of the study area. The water-table aquifer primarily is composed of rocks of the Aymamón Limestone and the Los Puertos Formation. The estimated saturated thickness of the water-table aquifer ranges from zero at the southern limit of the aquifer to more than 600 feet south of Isabela. Hydraulic conductivity of the Aymamón Limestone, based on specific-capacity test data for seven wells, ranges from about 1 to about 25 feet per day and averages 7.5 feet per day. Hydraulic conductivity of the Los Puertos Formation, based on specific-capacity test data for four wells, generally was less than 7 feet per day. The average hydraulic-conductivity value for both the Aymamón Limestone and the Los Puertos Formation, based on specific-capacity test data, is estimated to be about 6.0 feet per day. These hydraulic-conductivity values are much less than average values for the water-table aquifer reported for other parts of the North Coast Province. Transmissivity values, based on the average hydraulic-conductivity value for the aquifer derived from specific-capacity tests, range from zero to about 4,000 feet squared per day; however, these values were adjusted upward during model calibration. Ground water generally moves from the highlands in the south toward the sea to the north and west, and locally, to streams.
A major groundwater divide extends from the southeastern corner of the study area to the northwest, and separates flow north and east into the study area from flow to the southwest toward the Rio Culebrinas. Nearly all recharge to the aquifer is from infiltration of rainfall into the karst uplands. Discharge from the aquifer primarily occurs as leakage to streams and to the sea, and to a lesser degree as flow to wells. A two-layer, three-dimensional, steady-state, numerical model was constructed to simulate ground-water flow in the water-table aquifer between Aguadilla and the Rio Camuy area.
Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2012-01-01
Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process frame by frame video information in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed- and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes it ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expandable. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGA have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented, along with a discussion of their differences, pros, and cons. PMID:22518097
The decoding of majority-multiplexed signals by means of dyadic convolution
NASA Astrophysics Data System (ADS)
Losev, V. V.
1980-09-01
The maximum likelihood method often cannot be used for the decoding of majority-multiplexed signals because of the large number of computations required. This paper describes a fast dyadic convolution transform which can be used to reduce the number of computations.
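The fast dyadic convolution referred to here is the XOR-index analogue of the FFT convolution theorem: the Walsh-Hadamard transform turns dyadic convolution into a pointwise product. A sketch (my own illustration of the classical identity):

    import numpy as np

    def fwht(a):
        """Fast Walsh-Hadamard transform (unnormalized); length must be a power of 2."""
        a = a.astype(float).copy()
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                for j in range(i, i + h):
                    a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
            h *= 2
        return a

    def dyadic_convolution(a, b):
        """Dyadic (XOR) convolution c[n] = sum_i a[i] * b[n XOR i], computed fast:
        transform, multiply pointwise, transform back (the WHT is its own inverse
        up to a factor of the length)."""
        n = len(a)
        return fwht(fwht(a) * fwht(b)) / n

    a, b = np.random.rand(8), np.random.rand(8)
    direct = np.array([sum(a[i] * b[n ^ i] for i in range(8)) for n in range(8)])
    assert np.allclose(dyadic_convolution(a, b), direct)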
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.
2014-01-01
This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.
[Application of numerical convolution in in vivo/in vitro correlation research].
Yue, Peng
2009-01-01
This paper introduces the concept and principles of in vivo/in vitro correlation (IVIVC) and of convolution/deconvolution methods, and elucidates in detail a convolution strategy for calculating the in vivo absorption performance of a pharmaceutical product from its pharmacokinetic data in Excel, carrying the results forward to IVIVC research. First, the pharmacokinetic data were fitted with mathematical software to fill in missing points. Second, the parameters of the optimal fitted input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, not only is the application of this method demonstrated in detail, but its simplicity and effectiveness are also shown by comparison with the compartment model method and the deconvolution method. It proves to be a powerful tool for IVIVC research.
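The convolution step described above is easy to reproduce outside Excel. The sketch below (the rate constant and Weibull parameters are arbitrary placeholders, not values from the paper) convolves a Weibull input rate with a unit impulse response to predict a plasma profile:

    import numpy as np

    def weibull_input_rate(t, fmax=1.0, td=2.0, beta=1.5):
        """Cumulative Weibull absorption F(t) = fmax * (1 - exp(-(t/td)**beta));
        the input rate is its time derivative, taken here numerically."""
        F = fmax * (1.0 - np.exp(-(t / td) ** beta))
        return np.gradient(F, t)

    def predicted_concentration(t, impulse_response):
        """Numerical convolution of the input rate with the unit impulse response
        (e.g. from an IV bolus), approximating C(t) = (rate * UIR)(t)."""
        dt = t[1] - t[0]
        rate = weibull_input_rate(t)
        return np.convolve(rate, impulse_response)[: len(t)] * dt

    t = np.linspace(0.0, 24.0, 480)          # hours
    uir = np.exp(-0.3 * t)                   # assumed one-compartment UIR
    C = predicted_concentration(t, uir)      # predicted plasma profile (arb. units)

In the trial-and-error workflow the paper describes, the Weibull parameters would be adjusted until the convolved profile matches the observed plasma data.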
DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.
Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh
2017-09-01
Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant, which prevents them from modeling location-dependent patterns (e.g., centre-bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves the state-of-the-art results.
Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network.
Yoon, Jaehong; Lee, Jungnyun; Whang, Mincheol
2018-01-01
The features of the event-related potential (ERP) are not completely understood, and the illiteracy problem remains unsolved. To this end, the P300 peak has been used as the feature of the ERP in most brain-computer interface applications, but subjects who do not show such a peak are common. The recent development of convolutional neural networks provides a way to analyze the spatial and temporal features of the ERP. Here, we train a convolutional neural network with two convolutional layers whose feature maps represent the spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show a high correlation between the occipital and parietal lobes, whereas illiterate subjects only show a correlation between neural activities from the frontal and central lobes. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We found that the P700 peak may be the key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.
NASA Astrophysics Data System (ADS)
Liu, Miaofeng
2017-07-01
In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most former methods, which require knowing beforehand the locations of corrupted pixels, we propose a 20-depth fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because existing approaches perform poorly on images with severe corruption or on inpainting applied to low-resolution images, we also share parameters in local areas of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and it works especially well when performing super-resolution and image inpainting simultaneously.
Convolutional encoding of self-dual codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1994-01-01
There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w ≡ 0 (mod 4). The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1)-length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24; 12) Code is lowered here to K = 8.
Stability of deep features across CT scanners and field of view using a physical phantom
NASA Astrophysics Data System (ADS)
Paul, Rahul; Shafiq-ul-Hassan, Muhammad; Moros, Eduardo G.; Gillies, Robert J.; Hall, Lawrence O.; Goldgof, Dmitry B.
2018-02-01
Radiomics is the process of analyzing radiological images by extracting quantitative features for the monitoring and diagnosis of various cancers. Analyzing images acquired from different medical centers is confounded by many choices in acquisition and reconstruction parameters and by differences among device manufacturers. Consequently, scanning the same patient or phantom using various acquisition/reconstruction parameters, as well as different scanners, may result in different feature values. To further evaluate this issue, CT images from a physical radiomic phantom were used in this study. Recent studies showed that some quantitative features were dependent on voxel size and that this dependency could be reduced or removed by an appropriate normalization factor. Deep features extracted from a convolutional neural network may also provide additional features for image analysis. Using a transfer learning approach, we obtained deep features from three convolutional neural networks pre-trained on color camera images, and we examined the dependency of the deep features on image pixel size. We found that some deep features were pixel-size dependent, and to remove this dependency we proposed two effective normalization approaches. To analyze the effects of normalization, a threshold was applied, based on the standard deviation and the average distance of the feature values from a best-fit horizontal line across the underlying pixel sizes, before and after normalization. The inter- and intra-scanner dependency of the deep features was also evaluated.
Rank-based pooling for deep convolutional neural networks.
Shi, Zenglin; Ye, Yangdong; Wu, Yunpeng
2016-11-01
Pooling is a key mechanism in deep convolutional neural networks (CNNs) which helps to achieve translation invariance. Numerous studies show, both empirically and theoretically, that pooling consistently boosts the performance of CNNs. The conventional pooling methods operate on activation values. In this work, we instead propose rank-based pooling. It is derived from the observation that the ranking list is invariant under changes of activation values in a pooling region, and thus a rank-based pooling operation may achieve more robust performance. In addition, the sensible use of ranks can avoid the scale problems encountered by value-based methods. The novel pooling mechanism can be regarded as an instance of weighted pooling, where a weighted sum of activations is used to generate the pooling output. It can be realized as rank-based average pooling (RAP), rank-based weighted pooling (RWP) or rank-based stochastic pooling (RSP), according to different weighting strategies. As another major contribution, we present a novel criterion to analyze the discriminant ability of various pooling methods, a question heavily under-researched in the machine learning and computer vision communities. Experimental results on several image benchmarks show that rank-based pooling outperforms the existing pooling methods in classification performance. We further demonstrate better performance on the CIFAR datasets by integrating RSP into Network-in-Network. Copyright © 2016 Elsevier Ltd. All rights reserved.
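A hedged sketch of the simplest variant, rank-based average pooling (RAP), follows: within each pooling region the activations are ranked and the t highest are averaged. The region size and t are illustrative; RWP and RSP replace the uniform average with rank-derived weights or sampling probabilities.

```python
import numpy as np

def rank_based_average_pool(fmap, size=2, t=2):
    """RAP over non-overlapping size x size regions of a 2-D feature map."""
    h, w = fmap.shape
    out = np.empty((h // size, w // size))
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            region = fmap[i:i + size, j:j + size].ravel()
            ranked = np.sort(region)[::-1]        # ranking list, largest first
            out[i // size, j // size] = ranked[:t].mean()  # average the top t
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(rank_based_average_pool(x))  # unchanged by value shifts that preserve ranks
```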
Spectral interpolation - Zero fill or convolution. [image processing]
NASA Technical Reports Server (NTRS)
Forman, M. L.
1977-01-01
Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
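To make the baseline concrete, here is a hedged sketch of zero-fill interpolation (the method the paper compares against); the interpolation factor and test signal are illustrative, and the Nyquist-bin subtlety for even-length inputs is ignored.

```python
import numpy as np

def zero_fill_interpolate(x, factor=4):
    """Pad the spectrum with zeros, inverse-transform at a longer length:
    yields the band-limited interpolant on a grid `factor` times finer."""
    n = len(x)
    X = np.fft.rfft(x)
    pad = (factor * n) // 2 + 1 - len(X)
    Xp = np.concatenate([X, np.zeros(pad)])
    return np.fft.irfft(Xp) * factor       # rescale for the longer inverse FFT

t = np.linspace(0, 1, 32, endpoint=False)
coarse = np.sin(2 * np.pi * 3 * t)
fine = zero_fill_interpolate(coarse)       # 128 samples through the same sinusoid
```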
NASA Technical Reports Server (NTRS)
Mccallister, R. D.; Crawford, J. J.
1981-01-01
It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation, it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.
Langenbucher, Frieder
2003-11-01
Convolution and deconvolution are the classical in-vitro-in-vivo correlation (IVIVC) tools to describe the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro and the weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm in its own right, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
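A hedged sketch of the convolution step follows; the Weibull release profile and monoexponential weighting function are illustrative stand-ins for the experimental in-vitro and in-vivo data the abstract has in mind.

```python
import numpy as np

dt = 0.1                                    # time step, hours
t = np.arange(0, 24, dt)
release = 1 - np.exp(-(t / 4.0) ** 1.2)     # cumulative in-vitro release (Weibull)
input_rate = np.gradient(release, dt)       # input: release rate
weighting = np.exp(-0.3 * t)                # weighting: unit-impulse body response

# Linear-system prediction: response = input convolved with weighting
response = np.convolve(input_rate, weighting)[: len(t)] * dt
```

Deconvolution then runs the same relation in reverse: given the measured response and the weighting function, recover the input rate.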
Maisel, Sascha B; Höfler, Michaela; Müller, Stefan
2012-11-29
Any thermodynamically stable or metastable phase corresponds to a local minimum of a potentially very complicated energy landscape. But however complex the crystal might be, this energy landscape is of parabolic shape near its minima. Roughly speaking, the depth of this energy well with respect to some reference level determines the thermodynamic stability of the system, and the steepness of the parabola near its minimum determines the system's elastic properties. Although changing alloying elements and their concentrations in a given material to enhance certain properties dates back to the Bronze Age, the systematic search for desirable properties in metastable atomic configurations at a fixed stoichiometry is a very recent tool in materials design. Here we demonstrate, using first-principles studies of four binary alloy systems, that the elastic properties of face-centred-cubic intermetallic compounds obey certain rules. We reach two conclusions based on calculations on a huge subset of the face-centred-cubic configuration space. First, the stiffness and the heat of formation are negatively correlated with a nearly constant Spearman correlation for all concentrations. Second, the averaged stiffness of metastable configurations at a fixed concentration decays linearly with their distance to the ground-state line (the phase diagram of an alloy at zero Kelvin). We hope that our methods will help to simplify the quest for new materials with optimal properties from the vast configuration space available.
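For the first conclusion, the reported statistic is a Spearman rank correlation; a hedged sketch of how such a value is computed is below, with made-up stiffness and heat-of-formation values rather than the paper's first-principles data.

```python
from scipy.stats import spearmanr

# Illustrative values only: stiffness (GPa) vs. heat of formation (eV/atom)
stiffness = [210.0, 195.0, 180.0, 240.0, 165.0]
heat_of_formation = [-0.10, -0.08, -0.05, -0.20, -0.03]

rho, p = spearmanr(stiffness, heat_of_formation)
print(rho)   # a negative rho reproduces the sign of the reported correlation
```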
Degradation of blue and red inks by Ag/AgCl photocatalyst under UV light irradiation
NASA Astrophysics Data System (ADS)
Daupor, Hasan; Chenea, Asmat
2017-08-01
In this research, cubic Ag/AgCl photocatalysts with an average particle size of 500 nm were successfully synthesized via a modified precipitation reaction between ZrCl4 and AgNO3. The crystal structure of the product was characterized by X-ray powder diffraction (XRD); the morphology and composition were studied by scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), UV-vis diffuse-reflectance spectroscopy (DRS) and other methods. The optical absorption spectrum exhibited strong absorption in the visible region around 500-600 nm due to surface plasmon resonance (SPR) of metallic silver nanoparticles. SEM micrographs showed that the obtained Ag/AgCl had cubic morphology, with a porous surface appearing as a cubic cage; this porous surface also benefited the photocatalytic reaction. The photocatalytic activity of the product was evaluated by the photodegradation of blue and red ink solutions under UV light irradiation; interestingly, the obtained product could degrade 0.25% and 0.10% of the blue and red ink solutions, respectively, in 7 hours, which was higher than commercial AgCl. These results suggest that the morphology of Ag/AgCl strongly affects its photocatalytic activity. O2-, OH radicals and Cl° atoms are the main reactive species during the photocatalytic reaction.
Synthesis and characterization of mesoporous ZnS with narrow size distribution of small pores
NASA Astrophysics Data System (ADS)
Nistor, L. C.; Mateescu, C. D.; Birjega, R.; Nistor, S. V.
2008-08-01
Pure, nanocrystalline cubic ZnS forming a stable mesoporous structure was synthesized at room temperature by a non-toxic surfactant-assisted liquid-liquid reaction, in the 9.5-10.5 pH range. The appearance of an X-ray diffraction (XRD) peak in the region of very small angles (~2°) reveals the presence of a porous material with a narrow pore size distribution but an irregular arrangement of the pores, a so-called worm-hole or sponge-like material. The analysis of the wide-angle XRD diffractograms shows the building blocks to be ZnS nanocrystals with cubic structure and an average diameter of 2 nm. Transmission electron microscopy (TEM) investigations confirm the XRD results; ZnS crystallites of 2.5 nm with cubic (blende) structure are the building blocks of the pore walls, with pore sizes from 1.9 to 2.5 nm and a broader size distribution for samples with smaller pores. Textural measurements (N2 adsorption-desorption isotherms) confirm the presence of mesoporous ZnS with a narrow range of small pore sizes. The relatively low surface area of around 100 m2/g is attributed to some remaining organic molecules, which fill the smallest pores. Their presence, confirmed by IR spectroscopy, seems to be responsible for the high stability of the resulting mesoporous ZnS as well.
Influence of hot isostatic pressing on ZrO2-CaO dental ceramics properties.
Gionea, Alin; Andronescu, Ecaterina; Voicu, Georgeta; Bleotu, Coralia; Surdu, Vasile-Adrian
2016-08-30
Different hot isostatic pressing (HIP) conditions were used to obtain zirconia ceramics, in order to assess the influence of HIP on phase transformation, compressive strength, Young's modulus and density. First, CaO-stabilized zirconia powder was synthesized through a sol-gel method, using zirconium propoxide, calcium isopropoxide and 2-methoxyethanol as precursors; HIP treatment was then applied to obtain the final dense ceramics. The ceramics were morphologically and structurally characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). Density measurements, compressive strength and Young's modulus tests were also performed in order to evaluate the effect of the HIP treatment. The zirconia powders heat-treated at 500°C for 2 h showed a pure cubic phase with an average particle dimension of about 70 nm. The samples that were hot isostatically pressed presented a mixture of monoclinic-tetragonal or monoclinic-cubic phases, while for pre-sintered samples, cubic zirconia was the single crystalline form. Final dense ceramics were obtained after HIP treatment, with relative density values higher than 94%. The ZrO2-CaO ceramics presented high compressive strength, with values in the range of 500-708.9 MPa, and elastic behavior with Young's modulus between 1739 MPa and 4372 MPa. Finally, the zirconia ceramics were tested for biocompatibility, allowing the normal development of MG63 cells in vitro. Copyright © 2015 Elsevier B.V. All rights reserved.
Anatomy Of The ‘LuSi’ Mud Eruption, East Java
NASA Astrophysics Data System (ADS)
Tingay, M. R.
2009-12-01
Early in the morning of the 29th of May 2006, hot mud started erupting from the ground in the densely populated Porong District of Sidoarjo, East Java. With initial flow rates of ~5,000 cubic meters per day, the mud quickly inundated neighbouring villages. Over two years later, the ‘Lusi’ eruption has increased in strength, expelling over 90 million cubic meters of mud at an average rate of approximately 100,000 cubic meters per day. The mud flow has now covered over 700 hectares of land to depths of over 25 meters, engulfing a dozen villages and displacing approximately 40,000 people. In addition to the inundated areas, other areas are also at risk from subsidence and distant eruptions of gas. However, efforts to stem the mud flow or monitor its evolution are hampered by our overall lack of knowledge of, and consensus on, the subsurface anatomy of the Lusi mud volcanic system. In particular, the largest and most significant uncertainties are the source of the erupted water (shales versus deep carbonates), the fluid flow pathways (purely fractures versus mixed fracture and wellbore) and disputes over the subsurface geology (the nature of the deep carbonates, and the lithology of the rocks between the shales and carbonates). This study presents an overview of the anatomy of the Lusi mud volcanic system, with particular emphasis on these critical uncertainties and their influence on the likely evolution of the disaster.
Acral melanoma detection using a convolutional neural network for dermoscopy images.
Yu, Chanki; Yang, Sejung; Kim, Wonoh; Jung, Jinwoong; Chung, Kee-Yang; Lee, Sang Wook; Oh, Byungho
2018-01-01
Acral melanoma is the most common type of melanoma in Asians, and usually results in a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), all confirmed by histopathological examination, were analyzed in this study. To perform 2-fold cross-validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis, comparing it with the evaluations of a dermatologist and a non-expert. The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, which was higher than the non-expert's evaluation (67.84%, 62.71%) and close to that of the expert (81.08%, 81.64%). Moreover, the convolutional neural network showed area-under-the-curve values of 0.8 and 0.84 and Youden's index values of 0.6795 and 0.6073, scores similar to those of the expert. Although further data analysis is necessary to improve accuracy, convolutional neural networks should be helpful for detecting acral melanoma from dermoscopy images of the hands and feet.
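The reported scores reduce to standard confusion-matrix arithmetic; here is a hedged sketch with made-up counts (the study's per-class counts are not given in the abstract).

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of true positives and true negatives among all cases."""
    return (tp + tn) / (tp + tn + fp + fn)

def youden_index(tp, tn, fp, fn):
    """Youden's J = sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)     # true-positive rate
    specificity = tn / (tn + fp)     # true-negative rate
    return sensitivity + specificity - 1

print(accuracy(150, 152, 35, 25), youden_index(150, 152, 35, 25))
```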
Annunziata, Roberto; Trucco, Emanuele
2016-11-01
Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and is therefore different from recent methods that improve the optimisation itself. Our warm-start strategy uses carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation when used as input to a random forest classifier.
Homogeneity of a Global Multisatellite Soil Moisture Climate Data Record
NASA Technical Reports Server (NTRS)
Su, Chun-Hsu; Ryu, Dongryeol; Dorigo, Wouter; Zwieback, Simon; Gruber, Alexander; Albergel, Clement; Reichle, Rolf H.; Wagner, Wolfgang
2016-01-01
Climate Data Records (CDR) that blend multiple satellite products are invaluable for climate studies, trend analysis and risk assessments. Knowledge of any inhomogeneities in the CDR is therefore critical for making correct inferences. This work proposes a methodology to identify the spatiotemporal extent of the inhomogeneities in a 36-year, global multisatellite soil moisture CDR as the result of changing observing systems. Inhomogeneities are detected at up to 24 percent of the tested pixels with spatial extent varying with satellite changeover times. Nevertheless, the contiguous periods without inhomogeneities at changeover times are generally longer than 10 years. Although the inhomogeneities have measurable impact on the derived trends, these trends are similar to those observed in ground data and land surface reanalysis, with an average error less than 0.003 cubic meters per cubic meter per year. These results strengthen the basis of using the product for long-term studies and demonstrate the necessity of homogeneity testing of multisatellite CDRs in general.
NASA Astrophysics Data System (ADS)
Zhu, Y.; Liu, T.; Zhang, X. Y.; Pan, Y. F.; Wei, X. Y.; Ma, C. L.; Shi, D. N.; Fan, J. Y.
2017-04-01
In this paper, we elucidate the mechanism of the Li co-dopant induced enhancement of ferromagnetism in 2 × 2 × 2 and 3 × 3 × 3 cubic (Zn, Mn)Se using density functional calculations. The doping atoms tend to congregate together according to the ferromagnetic (FM) energy. All configurations are strongly FM due to double exchange (DE) and p-d exchange (PE), both of which are visible in the partial density of states. The hole is uniformly distributed in cubic (Zn, Mn, Li)Se, and it is the sole parameter determining the exchange energy when the impurity atoms stay far from each other. The average exchange energy of these configurations can be treated as a function of the square root of the hole concentration. Fitting the data to a polynomial function shows that DE and PE play roles of similar importance in the exchange energy.
Nanotwin and phase transformation in tetragonal Pb(Fe1/2Nb1/2)1-xTixO3 single crystal
NASA Astrophysics Data System (ADS)
Tu, C.-S.; Tseng, C.-T.; Chien, R. R.; Schmidt, V. Hugo; Hsieh, C.-M.
2008-09-01
This work is a study of phase transformation in (001)-cut Pb(Fe1/2Nb1/2)1-xTixO3 (x = 48%) single crystals by means of dielectric permittivity, domain structure, and in situ x-ray diffraction. A first-order T(TNT)-C(TNT) phase transition was observed at the Curie temperature TC ≅ 518 K upon zero-field heating. T, TNT, and C are the tetragonal, tetragonal-nanotwin, and cubic phases, respectively; T(TNT) and C(TNT) indicate that minor TNT domains reside in the T and C matrices. Nanotwins, which can cause broad diffraction peaks, remain above TC ≅ 518 K and give an average microscopic cubic symmetry under polarizing microscopy. Colossal dielectric permittivity (>10^4) was observed above room temperature, with strong frequency dispersion. This study suggests that nanotwins can play an important role in relaxor ferroelectric crystals when phase transitions take place. The Fe ion is a potential candidate as a B-site dopant for enhancing dielectric permittivity.
Howard, H T; Tyler, G L; Esposito, P B; Anderson, J D; Reasenberg, R D; Shapiro, I I; Fjeldbo, G; Kliore, A J; Levy, G S; Brunn, D L; Dickinson, R; Edelson, R E; Martin, W L; Postal, R B; Seidel, B; Sesplaukis, T T; Shirley, D L; Stelzried, C T; Sweetnam, D N; Wood, G E; Zygielbaum, A I
1974-07-12
Analysis of the radio-tracking data from Mariner 10 yields 6,023,600 +/- 600 for the ratio of the mass of the sun to that of Mercury, in very good agreement with values determined earlier from radar data alone. Occultation measurements yielded values for the radius of Mercury of 2440 +/- 2 and 2438 +/- 2 kilometers at latitudes of 2 degrees N and 68 degrees N, respectively, again in close agreement with the average equatorial radius of 2439 +/- 1 kilometers determined from radar data. The mean density of 5.44 grams per cubic centimeter deduced for Mercury from Mariner 10 data thus virtually coincides with the prior determination. No evidence of either an ionosphere or an atmosphere was found, with the data yielding upper bounds on the electron density of about 1500 and 4000 electrons per cubic centimeter on the dayside and nightside, respectively, and an inferred upper bound on the surface pressure of 10^-8 millibar.
Traveltimes of flood waves on the New River between Hinton and Hawks Nest, West Virginia
Appel, David H.
1983-01-01
The main attraction of the New River Gorge National River [a 51-mile segment of the New River between Hinton and Fayette (an abandoned community), W. Va.] is a combination of scenic wilderness, fishing, cultural resources, and whitewater boating. However, the recreational quality, safety, and use of the river depend in part upon the amount of, and fluctuations in, streamflow, manmade and natural. During 1981 and 1982, the U.S. Geological Survey found that a flood wave travels at an average speed of 6.8 miles per hour when streamflow is 15,000 cubic feet per second and 3.5 miles per hour when streamflow is 2,200 cubic feet per second. Curves have been developed to estimate traveltimes between any two points within the National River jurisdiction. The gaging station at Thurmond, installed as part of this study, can be called by telephone, (304) 465-0493, to determine river stage. The river stage can be converted to streamflow and traveltimes.
NASA Astrophysics Data System (ADS)
Lysenko, Alexander; Volk, Iurii
2018-03-01
We developed a cubic non-linear theory describing the dynamics of the multiharmonic space-charge wave (SCW), with harmonic frequencies smaller than the two-stream instability critical frequency, for different relativistic electron beam (REB) parameters. The self-consistent differential equation system for the multiharmonic SCW harmonic amplitudes was elaborated in a cubic non-linear approximation. This system accounts for plural three-wave parametric resonant interactions between wave harmonics and for the two-stream instability effect. Different REB parameters, such as the input angle with respect to the focusing magnetic field, the average relativistic factor, the difference of the partial relativistic factors, and the plasma frequency of the partial beams, were investigated with regard to their influence on the frequency spectrum width and the multiharmonic SCW saturation levels. We suggest ways in which the multiharmonic SCW frequency spectrum width could be increased, so that such waves can be used in multiharmonic two-stream superheterodyne free-electron lasers, with the main purpose of forming a powerful multiharmonic electromagnetic wave.
Multidomain Skyrmion Lattice State in Cu2OSeO3.
Zhang, S L; Bauer, A; Burn, D M; Milde, P; Neuber, E; Eng, L M; Berger, H; Pfleiderer, C; van der Laan, G; Hesjedal, T
2016-05-11
Magnetic skyrmions in chiral magnets are nanoscale, topologically protected magnetization swirls that are promising candidates for spintronics memory carriers. Therefore, observing and manipulating the skyrmion state on the surface level of the materials are of great importance for future applications. Here, we report a controlled way of creating a multidomain skyrmion state near the surface of a Cu2OSeO3 single crystal, observed by soft resonant elastic X-ray scattering. This technique is an ideal tool to probe the magnetic order at the L3 edge of 3d metal compounds giving an average depth sensitivity of ∼50 nm. The single-domain 6-fold-symmetric skyrmion lattice can be broken up into domains, overcoming the propagation directions imposed by the cubic anisotropy by applying the magnetic field in directions deviating from the major cubic axes. Our findings open the door to a new way to manipulate and engineer the skyrmion state locally on the surface or on the level of individual skyrmions, which will enable applications in the future.
Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.
Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin
2017-08-29
This paper presents an effective image retrieval method that combines high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed as VQ-indexed histograms from the DDBTC bitmap and its maximum and minimum quantizers. In contrast, the high-level CNN features can effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep-learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes achieve superior performance compared to the state-of-the-art methods with either low- or high-level features in terms of retrieval rate. Thus, the method is a strong candidate for various image retrieval related applications.
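A hedged sketch of the two evaluation metrics follows, under the usual definitions (precision and recall averaged over queries at a fixed return-list length L); the counts are illustrative, and the paper's exact averaging conventions may differ.

```python
import numpy as np

def apr_arr(relevant_in_top_L, L, n_relevant_total):
    """APR/ARR over queries, given the relevant hits in each top-L return list."""
    apr = np.mean([r / L for r in relevant_in_top_L])
    arr = np.mean([r / n_relevant_total for r in relevant_in_top_L])
    return apr, arr

print(apr_arr([8, 6, 9], L=10, n_relevant_total=20))
```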
An Interactive Graphics Program for Assistance in Learning Convolution.
ERIC Educational Resources Information Center
Frederick, Dean K.; Waag, Gary L.
1980-01-01
A program has been written for the interactive computer graphics facility at Rensselaer Polytechnic Institute that is designed to assist the user in learning the mathematical technique of convolving two functions. Because convolution can be represented graphically by a sequence of steps involving folding, shifting, multiplying, and integrating, it…
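The graphical steps the program animates map directly onto the discrete convolution sum; a hedged sketch (not the RPI program itself) is below.

```python
import numpy as np

def convolve_by_steps(f, g):
    """Discrete convolution as fold -> shift -> multiply -> sum."""
    n = len(f) + len(g) - 1
    g_folded = g[::-1]                                  # fold
    fz = np.concatenate([np.zeros(len(g) - 1), f, np.zeros(len(g) - 1)])
    out = np.zeros(n)
    for k in range(n):                                  # shift
        out[k] = np.sum(fz[k:k + len(g)] * g_folded)    # multiply, then sum
    return out

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, 0.5])
assert np.allclose(convolve_by_steps(f, g), np.convolve(f, g))
```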
Recharge of valley-fill aquifers in the glaciated northeast from upland runoff
Williams, J.H.; Morrissey, D.J.
1996-01-01
Channeled and unchanneled runoff from till-covered bedrock uplands is a major source of recharge to valley-fill aquifers in the glaciated northeastern United States. Streamflow measurements and model simulation of average steady-state conditions indicate that upland runoff accounted for more recharge to two valley-fill aquifers in moderately high topographic-relief settings than did direct infiltration of precipitation. Recharge from upland runoff to a modeled valley-fill aquifer in an area of lower relief was significant but less than that from direct infiltration of precipitation. The amount of upland runoff available for recharging valley-fill aquifers in the glaciated Northeast ranges from about 1.5 to 2.5 cubic feet per second per square mile of drainage area that borders the aquifer. Stream losses from tributaries that drain the uplands commonly range from 0.3 to 1.5 cubic feet per second per 1,000 feet of wetted channel where the tributaries cross alluvial fans in the main valleys. Recharge of valley-fill aquifers from channeled runoff was estimated from measured losses and average runoff rates and was represented in aquifer models as specified fluxes or simulated by head-dependent fluxes with streamflow routing in the model cells that represent the tributary streams. Unchanneled upland runoff, which includes overland and subsurface flow, recharges the valley-fill aquifers at the contact between the aquifer and uplands near the base of the bordering till-covered hillslopes. Recharge from unchanneled runoff was estimated from average runoff rates and the hillslope area that borders the aquifer and was represented as specified fluxes to model-boundary cells along the valley walls.
Source of water to Lithia Springs in Hillsborough County, Florida
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hickey, J.J.; Coates, M.J.
1993-03-01
The source of water to Lithia Springs, adjacent to the Alafia River in Hillsborough County, Florida, has traditionally been hypothesized to be the Upper Floridan aquifer. As a result, potential impacts from an adjacent public-supply well field have been of interest since the well field began production in July 1988. The discharge from Lithia Springs since March 1984 has averaged about 3,600,000 cubic feet per day. Pumpage from the adjacent well field since July 1988 has averaged about 2,500,000 cubic feet per day. A comparison between mean daily pumpage from the well field and mean daily discharge from the springs showed no apparent association, indicating that the Floridan aquifer may not be the source for the springs. Lithologic data suggested that the Upper Floridan aquifer was confined, with no direct connection to the springs. This confining-unit hypothesis was tested and accepted by pumping two wells close to the springs. The test consisted of pumping both wells for about 13 days at a combined rate that was about 40% of the average daily well-field pumpage. No discernible test-caused effects were observed on the springs or in an adjacent 115-foot-deep well open to carbonate rock. Because of this, it was concluded that the Upper Floridan aquifer was not the source of water to Lithia Springs. Interpretation of available data suggested that the source of water to Lithia Springs was the intermediate aquifer system located within solution-riddled Early Miocene carbonate rocks of the lower Hawthorn Formation, perhaps with an important contribution from the Alafia River.
Hydrographic and sedimentation survey of Kajakai Reservoir, Afghanistan
Perkins, Don C.; Culbertson, James K.
1970-01-01
A hydrographic and sedimentation survey of Band-e Kajakai (Kajakai Reservoir) on the Darya-ye Hirmand (Helmand River) was carried out during the period September through December 1968. Underwater mapping techniques were used to determine the reservoir capacity as of 1968. Sediment range lines were established and monumented to facilitate future sedimentation surveys. Afghan engineers and technicians were trained to carry out future reservoir surveys. Samples were obtained of the reservoir bed and of the river upstream from the reservoir. Virtually no sediments coarser than about 0.063 millimeter were found on the reservoir bed surface. The median diameter of sands being transported into the reservoir ranged from 0.040 to 0.110 millimeter. The average annual rate of sedimentation was 7,800 acre-feet. Assuming an average density of 50 pounds per cubic foot (800 kilograms per cubic meter), the estimated average sediment inflow to the reservoir was about 8,500,000 tons (7,700,000 metric tons) per year. The decrease in capacity at spillway elevation for the period 1953 to 1968 due to sediment deposition was 7.8 percent, or 117,700 acre-feet. Redefinition of several contours above the fill area resulted in an increase in capacity at spillway elevation of 13,600 acre-feet; thus, the net change in capacity was 7.0 percent, or 104,800 acre-feet. Based on current data, an estimated rate of compaction of the deposited sediment, and the assumption of no appreciable change in hydrologic conditions in the drainage area, the leading edge of the principal delta will reach the irrigation outlet in 40-45 years. It is recommended that a resurvey of the sediment range lines be made during the period 1973-75.
Hydrology of the Floridan Aquifer in Northwest Volusia County, Florida
Rutledge, A.T.
1982-01-01
Northwest Volusia County, in east-central Florida, is a 262-square-mile area including the southern part of the Crescent City Ridge and the northern tip of the DeLand Ridge. The hydrogeologic units in the area include the Floridan aquifer, which is made up of parts of the Lake City Limestone, the Avon Park Limestone, and the Ocala Limestone, all of Eocene age; the confining bed, which is composed of clays of Miocene or Pliocene age; and the surficial aquifer, which is made up of Pleistocene and Holocene sands. Ornamental fern growing is a $12 million per year industry in northwest Volusia County. Fern culture requires a large amount of good-quality water for irrigation, and more significantly, a large water withdrawal rate for freeze protection during winter months. The source of most water used is the Floridan aquifer. The large irrigation withdrawals, especially in winter months when spray irrigation is used for freeze protection of ferns, introduce problems such as the potential for saltwater intrusion, the temporary loss of water in domestic wells caused by large potentiometric drawdown, and increased sinkhole activity. The water budget of the surficial layer consists of 55 inches per year rainfall, 39 inches per year evapotranspiration, 13 inches per year runoff, and a net downward leakage of 3 inches per year. Average ground-water irrigational withdrawal is 8.1 million gallons per day, while the peak withdrawal rate is 300 million gallons per day during freeze-protection pumpage. The average irrigation well depth exceeds 300 feet. Transmissivities of the Floridan aquifer range from 4,500 to 160,000 feet squared per day. Highest transmissivities are in the DeLeon Springs area and the lowest are in the east Pierson area. Storage coefficients range from 0.0003 to 0.0013. The water budget of the Floridan aquifer under present conditions of withdrawal consists of 108 cubic feet per second recharge, 2 cubic feet per second horizontal ground-water inflow, 34 cubic feet per second direct discharge, 40 cubic feet per second upward leakage, 22 cubic feet per second horizontal outflow, and 14 cubic feet per second pumpage. The Floridan aquifer contains good-quality water in most of the study area, but also contains brackish water underneath the stressed zones and in the upper zones along the western and southern limits of the area. The altitude of the fresh-saltwater interface varies in the area from 1,500 to 300 feet below sea level. Areal drawdowns in the fern-growing areas of Pierson are 5 feet during growth irrigation periods and 20 to 30 feet during freeze-protection withdrawals. The drawdown in the Pierson area at the end of one intense period of pumpage exceeded 30 feet over a 4.4-square-mile area. A significant amount of the withdrawn water was replaced by leakage during the pumping period. Drawdowns in some pumping wells in northeast Pierson exceed 90 feet during freeze-protection withdrawals. No long-term residual drawdown has occurred. The predominant effect of pumpage on the water budget of the Floridan aquifer has been an increase in recharge. Sinkhole activity has been increased by the temporary increase in load on the aquifer's skeletal structure during intense lowering of the potentiometric surface. There is no evidence of saltwater intrusion, but a monitoring network for future early detection is suggested.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fick, Lambert H.; Merzari, Elia; Hassan, Yassin A.
2017-02-20
Computational analyses of fluid flow through packed pebble bed domains using the Reynolds-averaged Navier-Stokes framework have had limited success in the past. Because of a lack of high-fidelity experimental or computational data, optimization of Reynolds-averaged closure models for these geometries has not been extensively developed. In the present study, direct numerical simulation was employed to develop a high-fidelity database that can be used for optimizing Reynolds-averaged closure models for pebble bed flows. A face-centered cubic domain with periodic boundaries was used. Flow was simulated at a Reynolds number of 9308 and cross-verified by using available quasi-DNS data. During the simulations, low-frequency instability modes were observed that affected the stationary solution. Furthermore, these instabilities were investigated by using the method of proper orthogonal decomposition, and a correlation was found between the time-dependent asymmetry of the averaged velocity profile data and the behavior of the highest-energy eigenmodes.
Real-time correction of tsunami site effect by frequency-dependent tsunami-amplification factor
NASA Astrophysics Data System (ADS)
Tsushima, H.
2017-12-01
For tsunami early warning, I developed a frequency-dependent tsunami-amplification factor and used it to design a recursive digital filter that is applicable for real-time correction of the tsunami site response. In this study, I assumed that a tsunami waveform at an observing point can be modeled by the convolution of source, path and site effects in the time domain. Under this assumption, the spectral ratio between offshore and the nearby coast can be regarded as the site response (i.e. a frequency-dependent amplification factor). If the amplification factor can be prepared before tsunamigenic earthquakes, its temporal convolution with an offshore tsunami waveform provides a tsunami prediction at the coast in real time. In this study, tsunami waveforms calculated by tsunami numerical simulations were used to develop the frequency-dependent tsunami-amplification factor. Firstly, I performed numerical tsunami simulations based on nonlinear shallow-water theory for many tsunamigenic-earthquake scenarios, varying the seismic magnitudes and locations. The resultant tsunami waveforms at offshore and nearby coastal observing points were then used in a spectral-ratio analysis. The average of the resulting spectral ratios over the tsunamigenic-earthquake scenarios is regarded as the frequency-dependent amplification factor. Finally, the estimated amplification factor is used in the design of a recursive digital filter that is applicable in the time domain. The above procedure was applied to Miyako Bay on the Pacific coast of northeastern Japan. The averaged tsunami-height spectral ratio (i.e. amplification factor) between the location at the center of the bay and the outside shows a peak at a wave period of 20 min. A recursive digital filter based on the estimated amplification factor shows good performance in real-time correction of the tsunami-height amplification due to the site effect. This study is supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grant 15K16309.
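A hedged sketch of the amplification-factor construction follows: average the coast/offshore spectral ratios over simulated scenarios, then apply the factor to a new offshore record. For brevity the application is done by frequency-domain multiplication rather than the paper's time-domain recursive filter, and the waveforms are synthetic placeholders of equal length.

```python
import numpy as np

def average_spectral_ratio(coast_runs, offshore_runs):
    """Frequency-dependent site amplification from paired simulation runs."""
    ratios = [np.abs(np.fft.rfft(c)) / (np.abs(np.fft.rfft(o)) + 1e-12)
              for c, o in zip(coast_runs, offshore_runs)]
    return np.mean(ratios, axis=0)

def predict_coast(offshore, amp):
    """Apply the amplification factor to an offshore record."""
    return np.fft.irfft(np.fft.rfft(offshore) * amp, n=len(offshore))

t = np.linspace(0, 3600, 512)
offshore = np.sin(2 * np.pi * t / 1200)          # synthetic 20-min-period wave
amp = average_spectral_ratio([2 * offshore], [offshore])
coast_estimate = predict_coast(offshore, amp)    # roughly doubled, as expected
```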
Norman, Berk; Pedoia, Valentina; Majumdar, Sharmila
2018-03-27
Purpose To analyze how automatic segmentation translates in accuracy and precision to morphology and relaxometry compared with manual segmentation, and how it increases the speed and accuracy of the workflow that uses quantitative magnetic resonance (MR) imaging to study knee degenerative diseases such as osteoarthritis (OA). Materials and Methods This retrospective study involved the analysis of 638 MR imaging volumes from two data cohorts acquired at 3.0 T: (a) T1ρ-weighted spoiled gradient-recalled acquisition in the steady state images and (b) three-dimensional (3D) double-echo steady-state (DESS) images. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. Cartilage and meniscus compartments were manually segmented by skilled technicians and radiologists for comparison. Performance of the automatic segmentation was evaluated on Dice coefficient overlap with the manual segmentation, as well as on the automatic segmentations' ability to quantify, in a longitudinally repeatable way, relaxometry and morphology. Results The models produced strong Dice coefficients, particularly for 3D-DESS images, ranging from 0.770 to 0.878 in the cartilage compartments and reaching 0.809 and 0.753 for the lateral and medial menisci, respectively. The models averaged 5 seconds to generate the automatic segmentations. Average correlations between manual and automatic quantification of T1ρ and T2 values were 0.8233 and 0.8603, respectively, and 0.9349 and 0.9384 for volume and thickness, respectively. Longitudinal precision of the automatic method was comparable with that of the manual one. Conclusion U-Net demonstrates efficacy and precision in quickly generating accurate segmentations that can be used to extract relaxation times and morphologic characterizations for the monitoring and diagnosis of OA. © RSNA, 2018 Online supplemental material is available for this article.
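The headline evaluation number is the Dice coefficient; a hedged sketch of how it is computed from binary masks is below, with toy masks standing in for real segmentations.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap of two binary segmentation masks (1.0 = perfect match)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

manual = np.zeros((8, 8), int); manual[2:6, 2:6] = 1   # toy manual mask
auto = np.zeros((8, 8), int);   auto[3:6, 2:6] = 1     # toy automatic mask
print(dice(manual, auto))                               # ~0.857
```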
An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm
NASA Astrophysics Data System (ADS)
Jacques, Robert; McNutt, Todd
2014-03-01
Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity-compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte Carlo accuracy benchmark, 23 similar accuracy benchmarks, and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing it from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e. < 0.1 g/cm3) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improves upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, reaching the accuracy levels of Monte Carlo based methods with performance of a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
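The core ingredient is the position- and direction-sensitive density filter; a hedged one-dimensional sketch along a single ray is below (the actual HCS filter is multivariate, and the smoothing constant here is illustrative).

```python
import numpy as np

def effective_density(rho, alpha=0.3):
    """First-order recursive filter over density samples along a ray:
    the effective density remembers upstream material, mimicking the
    lagged electron transport that plain C/S density scaling misses."""
    eff = np.empty_like(rho)
    eff[0] = rho[0]
    for i in range(1, len(rho)):
        eff[i] = alpha * rho[i] + (1 - alpha) * eff[i - 1]
    return eff

ray = np.array([1.0, 1.0, 0.3, 0.3, 1.0, 1.0])  # water-lung-water along a ray
print(effective_density(ray))   # smoothed fall-off and re-buildup at interfaces
```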