Science.gov

Sample records for double convolution model

  1. Convolution models for induced electromagnetic responses

    PubMed Central

    Litvak, Vladimir; Jha, Ashwani; Flandin, Guillaume; Friston, Karl

    2013-01-01

    In Kilner et al. [Kilner, J.M., Kiebel, S.J., Friston, K.J., 2005. Applications of random field theory to electrophysiology. Neurosci. Lett. 374, 174–178.] we described a fairly general analysis of induced responses—in electromagnetic brain signals—using the summary statistic approach and statistical parametric mapping. This involves localising induced responses—in peristimulus time and frequency—by testing for effects in time–frequency images that summarise the response of each subject to each trial type. Conventionally, these time–frequency summaries are estimated using post‐hoc averaging of epoched data. However, post‐hoc averaging of this sort fails when the induced responses overlap or when there are multiple response components that have variable timing within each trial (for example stimulus and response components associated with different reaction times). In these situations, it is advantageous to estimate response components using a convolution model of the sort that is standard in the analysis of fMRI time series. In this paper, we describe one such approach, based upon ordinary least squares deconvolution of induced responses to input functions encoding the onset of different components within each trial. There are a number of fundamental advantages to this approach, for example: (i) one can disambiguate induced responses to stimulus onsets and variably timed responses; (ii) one can test for the modulation of induced responses—over peristimulus time and frequency—by parametric experimental factors; and (iii) one can gracefully handle confounds—such as slow drifts in power—by including them in the model. In what follows, we consider optimal forms for convolution models of induced responses, in terms of impulse response basis function sets and illustrate the utility of deconvolution estimators using simulated and real MEG data. PMID:22982359
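
    The ordinary least squares deconvolution described above can be sketched in a few lines. This is a minimal toy version, assuming a finite impulse response (FIR) basis for a single response component; all variable names and values are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 500, 20                                   # time points, FIR basis length
onsets = np.array([40, 55, 200, 215, 360, 380])  # closely spaced, overlapping events

# Design matrix with one column per post-onset lag (FIR deconvolution)
X = np.zeros((T, K))
for t0 in onsets:
    for k in range(K):
        if t0 + k < T:
            X[t0 + k, k] += 1.0

true_resp = np.exp(-np.arange(K) / 5.0)            # latent "induced" response shape
y = X @ true_resp + 0.1 * rng.standard_normal(T)   # overlapping responses plus noise

beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # OLS deconvolution
print(np.max(np.abs(beta - true_resp)))            # small recovery error
```

    Because the design matrix carries one column per post-onset lag, overlapping events are disentangled by the least squares fit rather than by post-hoc epoch averaging.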

  2. Model Convolution: A Computational Approach to Digital Image Interpretation.

    PubMed

    Gardner, Melissa K; Sprague, Brian L; Pearson, Chad G; Cosgrove, Benjamin D; Bicek, Andrew D; Bloom, Kerry; Salmon, E D; Odde, David J

    2010-06-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called "model-convolution," which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory.
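
    The model-convolution idea can be sketched in one dimension: simulate the fluorophore distribution predicted by a model, blur it with the measured point spread function, and add measurement noise, then compare the result directly to the experimental image. The PSF width and noise level below are hypothetical stand-ins for calibrated values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model prediction: fluorophores at sub-resolution positions on a 1-D line
model = np.zeros(200)
model[[60, 64, 140]] = 1.0          # two unresolvable spots plus one isolated spot

# Experimentally measured blur (PSF), here a normalized Gaussian stand-in
x = np.arange(-15, 16)
psf = np.exp(-x**2 / (2.0 * 4.0**2))
psf /= psf.sum()

# Model-convolution: blur the model, then add noise matching the measurement
simulated = np.convolve(model, psf, mode="same")
simulated += 0.01 * rng.standard_normal(simulated.size)

# The spots at 60 and 64 merge into a single blurred peak, as the microscope
# would record them; `simulated` can now be compared to real images directly.
```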

  3. Model Convolution: A Computational Approach to Digital Image Interpretation

    PubMed Central

    Gardner, Melissa K.; Sprague, Brian L.; Pearson, Chad G.; Cosgrove, Benjamin D.; Bicek, Andrew D.; Bloom, Kerry; Salmon, E. D.

    2010-01-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called “model-convolution,” which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory. PMID:20461132

  4. A model of traffic signs recognition with convolutional neural network

    NASA Astrophysics Data System (ADS)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors are challenging for automated traffic sign recognition algorithms. Deep learning has recently provided a new way to solve this kind of problem. A deep network can automatically learn features from a large number of data samples and achieve excellent recognition performance. We therefore approach traffic sign recognition as a general vision problem, with few assumptions specific to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features used to recognize traffic sign images. The model comprises an input layer, three convolutional layers, three subsampling layers, a fully connected layer, and an output layer. To validate the model, experiments were conducted on the public dataset of the China competition of fuzzy image processing. Experimental results show that the proposed model achieves a recognition accuracy of 99.01% on the training dataset and 92% in the preliminary contest, placing fourth.
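
    The alternation of convolutional and subsampling layers that the model describes can be sketched with plain NumPy; the 3x3 filter below is a hypothetical stand-in for a learned kernel:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (single channel), the core CNN operation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def subsample(fmap, s=2):
    """Max-pool subsampling layer: halves each spatial dimension."""
    h, w = fmap.shape
    return fmap[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

img = np.random.default_rng(2).random((32, 32))  # stand-in for a traffic sign image
edge = np.array([[1.0, 0.0, -1.0]] * 3)          # hypothetical learned 3x3 filter

fmap = np.maximum(conv2d(img, edge), 0.0)        # convolution + ReLU
pooled = subsample(fmap)                         # subsampling
print(fmap.shape, pooled.shape)                  # (30, 30) (15, 15)
```

    In the full model this convolution/subsampling unit is stacked three times and followed by a fully connected layer and an output layer.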

  5. Surrogacy theory and models of convoluted organic systems.

    PubMed

    Konopka, Andrzej K

    2007-03-01

    The theory of surrogacy is briefly outlined as one of the conceptual foundations of systems biology that has been developed for the last 30 years in the context of Hertz-Rosen modeling relationship. Conceptual foundations of modeling convoluted (biologically complex) systems are briefly reviewed and discussed in terms of current and future research in systems biology. New as well as older results that pertain to the concepts of modeling relationship, sequence of surrogacies, cascade of representations, complementarity, analogy, metaphor, and epistemic time are presented together with a classification of models in a cascade. Examples of anticipated future applications of surrogacy theory in life sciences are briefly discussed.

  6. A digital model for streamflow routing by convolution methods

    USGS Publications Warehouse

    Doyle, W.H.; Shearman, H.O.; Stiltner, G.J.; Krug, W.O.

    1984-01-01

    A U.S. Geological Survey computer model, CONROUT, for routing streamflow by unit-response convolution flow-routing techniques from an upstream channel location to a downstream channel location has been developed and documented. Calibration and verification of the flow-routing model and its subsequent use for simulation are also documented. Three hypothetical examples and two field applications are presented to illustrate basic flow-routing concepts. Most of the discussion is limited to daily flow routing since, to date, all completed and current studies of this nature involve daily flow routing. However, the model is programmed to accept hourly flow-routing data. (USGS)
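
    Unit-response routing amounts to a discrete convolution of the upstream inflow with the channel's unit response; the response ordinates below are hypothetical, not CONROUT's:

```python
import numpy as np

# Unit response (unit hydrograph) of the reach: ordinates sum to 1, so the
# flow volume is conserved; these values are hypothetical, not CONROUT's
unit_response = np.array([0.1, 0.4, 0.3, 0.15, 0.05])

# Daily upstream inflows
inflow = np.array([0.0, 10.0, 50.0, 30.0, 10.0, 0.0, 0.0])

# Routed downstream flow: discrete convolution of inflow with the unit response
outflow = np.convolve(inflow, unit_response)[: inflow.size]
print(outflow)   # peak attenuated (50 -> 29.5) and delayed by the reach
```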

  7. An effective convolutional neural network model for Chinese sentiment analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Chen, Mengdong; Liu, Lianzhong; Wang, Yadong

    2017-06-01

    Nowadays, microblogging is getting more and more popular. People are increasingly accustomed to expressing their opinions on Twitter, Facebook and Sina Weibo. Sentiment analysis of microblogs has received significant attention in both academia and industry. So far, Chinese microblog analysis still requires substantial further work. In recent years, CNNs have also been used for NLP tasks and have achieved good results. However, these methods ignore the effective use of a large number of existing sentiment resources. For this purpose, we propose a Lexicon-based Sentiment Convolutional Neural Network (LSCNN) model focused on Weibo sentiment analysis, which combines two CNNs, trained individually on sentiment features and word embeddings, at the fully connected hidden layer. The experimental results show that our model outperforms a CNN model using only word embedding features on the microblog sentiment analysis task.

  8. A convolution model of rock bed thermal storage units

    NASA Astrophysics Data System (ADS)

    Sowell, E. F.; Curry, R. L.

    1980-01-01

    A method is presented whereby a packed-bed thermal storage unit is dynamically modeled for bi-directional flow and arbitrary input flow stream temperature variations. The method is based on the principle of calculating the output temperature as the sum of earlier input temperatures, each multiplied by a predetermined 'response factor', i.e., discrete convolution. A computer implementation of the scheme, in the form of a subroutine for a widely used solar simulation program (TRNSYS) is described and numerical results compared with other models. Also, a method for efficient computation of the required response factors is described; this solution is for a triangular input pulse, previously unreported, although the solution method is also applicable for other input functions. This solution requires a single integration of a known function which is easily carried out numerically to the required precision.

  9. Modeling Task fMRI Data via Deep Convolutional Autoencoder.

    PubMed

    Huang, Heng; Hu, Xintao; Zhao, Yu; Makkie, Milad; Dong, Qinglin; Zhao, Shijie; Guo, Lei; Liu, Tianming

    2017-06-15

    Task-based fMRI (tfMRI) has been widely used to study functional brain networks under task performance. Modeling tfMRI data is challenging due to at least two problems: the lack of ground truth for the underlying neural activity and the highly complex intrinsic structure of tfMRI data. To better understand brain networks based on fMRI data, data-driven approaches have been proposed, for instance, Independent Component Analysis (ICA) and Sparse Dictionary Learning (SDL). However, both ICA and SDL only build shallow models, under the strong assumption that the original fMRI signal can be linearly decomposed into time series components with their corresponding spatial maps. As growing evidence shows that human brain function is hierarchically organized, new approaches that can infer and model the hierarchical structure of brain networks are widely called for. Recently, deep convolutional neural networks (CNNs) have drawn much attention, as they have proven to be a powerful method for learning high-level and mid-level abstractions from low-level raw data. Inspired by the power of deep CNNs, in this study, we developed a new neural network structure based on CNNs, called Deep Convolutional Auto-Encoder (DCAE), in order to take advantage of both the data-driven approach and the CNN's hierarchical feature abstraction ability, for the purpose of learning mid-level and high-level features from complex, large-scale tfMRI time series in an unsupervised manner. The DCAE has been applied and tested on the publicly available human connectome project (HCP) tfMRI datasets, and promising results are achieved.

  10. Scene text detection via extremal region based double threshold convolutional network classification

    PubMed Central

    Zhu, Wei; Lou, Jing; Chen, Longtao; Xia, Qingyuan

    2017-01-01

    In this paper, we present a robust text detection approach for natural images based on a region proposal mechanism. A powerful low-level detector named saliency-enhanced MSER, extended from the widely used MSER, is proposed by incorporating saliency detection methods, which ensures a high recall rate. Given a natural image, character candidates are extracted from three channels of a perception-based illumination-invariant color space by the saliency-enhanced MSER algorithm. A discriminative convolutional neural network (CNN) is jointly trained with multi-level information, including pixel-level and character-level information, as the character candidate classifier. Each image patch is classified as strong text, weak text, or non-text by double threshold filtering instead of conventional one-step classification, leveraging confidence scores obtained via the CNN. To further prune non-text regions, we develop a recursive neighborhood search algorithm to track credible texts from the weak text set. Finally, characters are grouped into text lines using heuristic features such as spatial location, size, color, and stroke width. We compare our approach with several state-of-the-art methods, and experiments show that our method achieves competitive performance on the public datasets ICDAR 2011 and ICDAR 2013. PMID:28820891
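
    The double threshold filtering and recursive neighborhood search can be sketched with hypothetical CNN confidence scores (the thresholds, scores, and adjacency below are illustrative only):

```python
# Double-threshold filtering of hypothetical CNN confidence scores:
# strong text (>= t_high), weak text (in between), non-text (< t_low)
t_low, t_high = 0.3, 0.7
scores = {"A": 0.92, "B": 0.55, "C": 0.12, "D": 0.65, "E": 0.40}
neighbors = {"A": ["B"], "B": ["A", "D"], "C": [], "D": ["B", "E"], "E": ["D"]}

strong = {p for p, s in scores.items() if s >= t_high}
weak = {p for p, s in scores.items() if t_low <= s < t_high}

# Recursive neighborhood search: promote weak candidates reachable from
# credible (strong) text regions; isolated weak/non-text patches are pruned
tracked = set(strong)
stack = list(strong)
while stack:
    p = stack.pop()
    for q in neighbors[p]:
        if q in weak and q not in tracked:
            tracked.add(q)
            stack.append(q)

print(sorted(tracked))   # ['A', 'B', 'D', 'E']
```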

  11. Scene text detection via extremal region based double threshold convolutional network classification.

    PubMed

    Zhu, Wei; Lou, Jing; Chen, Longtao; Xia, Qingyuan; Ren, Mingwu

    2017-01-01

    In this paper, we present a robust text detection approach for natural images based on a region proposal mechanism. A powerful low-level detector named saliency-enhanced MSER, extended from the widely used MSER, is proposed by incorporating saliency detection methods, which ensures a high recall rate. Given a natural image, character candidates are extracted from three channels of a perception-based illumination-invariant color space by the saliency-enhanced MSER algorithm. A discriminative convolutional neural network (CNN) is jointly trained with multi-level information, including pixel-level and character-level information, as the character candidate classifier. Each image patch is classified as strong text, weak text, or non-text by double threshold filtering instead of conventional one-step classification, leveraging confidence scores obtained via the CNN. To further prune non-text regions, we develop a recursive neighborhood search algorithm to track credible texts from the weak text set. Finally, characters are grouped into text lines using heuristic features such as spatial location, size, color, and stroke width. We compare our approach with several state-of-the-art methods, and experiments show that our method achieves competitive performance on the public datasets ICDAR 2011 and ICDAR 2013.

  12. A fast double template convolution isocenter evaluation algorithm with subpixel accuracy.

    PubMed

    Winey, Brian; Sharp, Greg; Bussière, Marc

    2011-01-01

    To design a fast Winston Lutz (fWL) algorithm for accurate analysis of radiation isocenter from images without edge detection or center of mass calculations. An algorithm has been developed to implement the Winston Lutz test for mechanical/radiation isocenter agreement using an electronic portal imaging device (EPID). The algorithm detects the position of the radiation shadow of a tungsten ball within a stereotactic cone. The fWL algorithm employs a double convolution to independently find the position of the sphere and cone centers. Subpixel estimation is used to achieve high accuracy. Results of the algorithm were compared to (1) a human observer with template guidance and (2) an edge detection/center of mass (edCOM) algorithm. Testing was performed with high resolution (0.05 mm/px, film) and low resolution (0.78 mm/px, EPID) image sets. Sphere and cone center relative positions were calculated with the fWL algorithm for high resolution test images with an accuracy of 0.002 +/- 0.061 mm compared to 0.042 +/- 0.294 mm for the human observer, and 0.003 +/- 0.038 mm for the edCOM algorithm. The fWL algorithm required 0.01 s per image compared to 5 s for the edCOM algorithm and 20 s for the human observer. For lower resolution images the fWL algorithm localized the centers with an accuracy of 0.083 +/- 0.12 mm compared to 0.03 +/- 0.5514 mm for the edCOM algorithm. A fast (subsecond) subpixel algorithm has been developed that can accurately determine the center locations of the ball and cone in Winston Lutz test images without edge detection or COM calculations.
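
    The subpixel estimation step can be sketched in one dimension: correlate the profile with a matched template, then fit a parabola through the three samples around the correlation peak. The actual fWL algorithm applies two such template convolutions, for the ball and the cone, to 2-D images; everything below is an illustrative stand-in:

```python
import numpy as np

def subpixel_peak(corr):
    """Three-point parabolic fit around the argmax for subpixel peak location."""
    i = int(np.argmax(corr))
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

# Hypothetical 1-D "ball shadow" whose true centre falls between pixels
x = np.arange(200.0)
true_centre = 97.8
profile = np.exp(-((x - true_centre) / 5.0) ** 2)

# Template convolution, then parabolic subpixel estimation at the peak
template = np.exp(-(np.arange(-30, 31.0) / 5.0) ** 2)
corr = np.correlate(profile, template, mode="same")
print(round(subpixel_peak(corr), 2))   # → 97.8, despite the integer pixel grid
```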

  13. A fast double template convolution isocenter evaluation algorithm with subpixel accuracy

    SciTech Connect

    Winey, Brian; Sharp, Greg; Bussiere, Marc

    2011-01-15

    Purpose: To design a fast Winston Lutz (fWL) algorithm for accurate analysis of radiation isocenter from images without edge detection or center of mass calculations. Methods: An algorithm has been developed to implement the Winston Lutz test for mechanical/radiation isocenter agreement using an electronic portal imaging device (EPID). The algorithm detects the position of the radiation shadow of a tungsten ball within a stereotactic cone. The fWL algorithm employs a double convolution to independently find the position of the sphere and cone centers. Subpixel estimation is used to achieve high accuracy. Results of the algorithm were compared to (1) a human observer with template guidance and (2) an edge detection/center of mass (edCOM) algorithm. Testing was performed with high resolution (0.05 mm/px, film) and low resolution (0.78 mm/px, EPID) image sets. Results: Sphere and cone center relative positions were calculated with the fWL algorithm for high resolution test images with an accuracy of 0.002 ± 0.061 mm compared to 0.042 ± 0.294 mm for the human observer, and 0.003 ± 0.038 mm for the edCOM algorithm. The fWL algorithm required 0.01 s per image compared to 5 s for the edCOM algorithm and 20 s for the human observer. For lower resolution images the fWL algorithm localized the centers with an accuracy of 0.083 ± 0.12 mm compared to 0.03 ± 0.5514 mm for the edCOM algorithm. Conclusions: A fast (subsecond) subpixel algorithm has been developed that can accurately determine the center locations of the ball and cone in Winston Lutz test images without edge detection or COM calculations.

  14. A fuzzy convolution model for radiobiologically optimized radiotherapy margins.

    PubMed

    Mzenda, Bongile; Hosseini-Ashrafi, Mir; Gegov, Alex; Brown, David J

    2010-06-07

    In this study we investigate the use of a new knowledge-based fuzzy logic technique to derive radiotherapy margins based on radiotherapy uncertainties and their radiobiological effects. The main radiotherapy uncertainties considered and used to build the model were delineation, set-up and organ motion-induced errors. The radiobiological effects of these combined errors, in terms of prostate tumour control probability and rectal normal tissue complication probability, were used to formulate the rule base and membership functions for a Sugeno type fuzzy system linking the error effect to the treatment margin. The defuzzified output was optimized by convolving it with a Gaussian convolution kernel to give a uniformly varying transfer function which was used to calculate the required treatment margins. The margin derived using the fuzzy technique shows good agreement with current prostate margins based on the commonly used margin formulation proposed by van Herk et al (2000 Int. J. Radiat. Oncol. Biol. Phys. 47 1121-35), and varies nonlinearly above combined errors of 5 mm standard deviation. The derived margin is on average 0.5 mm larger than currently used margins in the region of small treatment uncertainties, where margin reduction would be applicable. The new margin was applied in an intensity modulated radiotherapy prostate treatment planning example where margin reduction and a dose escalation regime were implemented, and by inducing equivalent treatment uncertainties, the resulting target and organs at risk doses were found to compare well to results obtained using currently recommended margins.

  15. Forecasting natural aquifer discharge using a numerical model and convolution.

    PubMed

    Boggs, Kevin G; Johnson, Gary S; Van Kirk, Rob; Fairley, Jerry P

    2014-01-01

    If the nature of groundwater sources and sinks can be determined or predicted, the data can be used to forecast natural aquifer discharge. We present a procedure to forecast the relative contribution of individual aquifer sources and sinks to natural aquifer discharge. Using these individual aquifer recharge components, along with observed aquifer heads for each January, we generate a 1-year, monthly spring discharge forecast for the upcoming year with an existing numerical model and convolution. The results indicate that a forecast of natural aquifer discharge can be developed using only the dominant aquifer recharge sources combined with the effects of aquifer heads (initial conditions) at the time the forecast is generated. We also estimate how our forecast will perform in the future using a jackknife procedure, which indicates that the future performance of the forecast is good (Nash-Sutcliffe efficiency of 0.81). We develop a forecast and demonstrate important features of the procedure by presenting an application to the Eastern Snake Plain Aquifer in southern Idaho.

  16. Designing the optimal convolution kernel for modeling the motion blur

    NASA Astrophysics Data System (ADS)

    Jelinek, Jan

    2011-06-01

    Motion blur acts on an image like a two dimensional low pass filter, whose spatial frequency characteristic depends both on the trajectory of the relative motion between the scene and the camera and on the velocity vector variation along it. When motion during exposure is permitted, the conventional, static notions of both the image exposure and the scene-to-image mapping become unsuitable and must be revised to accommodate the image formation dynamics. This paper develops an exact image formation model for arbitrary object-camera relative motion with arbitrary velocity profiles. Moreover, for any motion the camera may operate in either continuous or flutter shutter exposure mode. Its result is a convolution kernel, which is optimally designed for both the given motion and sensor array geometry, and hence permits the most accurate computational undoing of the blurring effects for the given camera, as required in forensic and high-security applications. The theory has been implemented and a few examples are shown in the paper.

  17. A new model of the distal convoluted tubule

    PubMed Central

    Ko, Benjamin; Mistry, Abinash C.; Hanson, Lauren; Mallick, Rickta; Cooke, Leslie L.; Hack, Bradley K.; Cunningham, Patrick

    2012-01-01

    The Na+-Cl− cotransporter (NCC) in the distal convoluted tubule (DCT) of the kidney is a key determinant of Na+ balance. Disturbances in NCC function are characterized by disordered volume and blood pressure regulation. However, many details concerning the mechanisms of NCC regulation remain controversial or undefined. This is partially due to the lack of a mammalian cell model of the DCT that is amenable to functional assessment of NCC activity. Previously reported investigations of NCC regulation in mammalian cells have either not attempted measurements of NCC function or have required perturbation of the critical with-no-lysine kinase (WNK)/STE20/SPS-1-related proline/alanine-rich kinase regulatory pathway before functional assessment. Here, we present a new mammalian model of the DCT, the mouse DCT15 (mDCT15) cell line. These cells display native NCC function as measured by thiazide-sensitive, Cl−-dependent 22Na+ uptake and allow for the separate assessment of NCC surface expression and activity. Knockdown by short interfering RNA confirmed that this function was dependent on NCC protein. Similar to the mammalian DCT, these cells express many of the known regulators of NCC and display significant baseline activity and dimerization of NCC. As described in previous models, NCC activity is inhibited by appropriate concentrations of thiazides, and phorbol esters strongly suppress function. Importantly, they display release of WNK4 inhibition of NCC by small hairpin RNA knockdown. We feel that this new model represents a critical tool for the study of NCC physiology. The work that can be accomplished in such a system represents a significant step forward toward unraveling the complex regulation of NCC. PMID:22718890

  18. A staggered-grid convolutional differentiator for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions will influence the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative and comparing the windows with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th order staggered-grid CD operator can achieve the same accuracy of a 16th order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagations.
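
    The construction of a tapered staggered-grid CD operator can be sketched as follows. The weights come from the inverse Fourier transform of the band-limited i*k spectrum; a raised-cosine (Hanning-type) taper truncates the stencil, and, as a practical touch assumed here rather than taken from the paper, the weights are normalized so the operator is exact for linear functions:

```python
import numpy as np

h, M = 0.1, 8                         # grid spacing, one-sided stencil length
m = np.arange(M)

# Staggered-grid CD weights: inverse Fourier transform of the band-limited
# i*k spectrum, sampled at the staggered offsets (m + 1/2) * h
c = (-1.0) ** m / (np.pi * h * (m + 0.5) ** 2)

# Raised-cosine taper to truncate the infinite stencil, then a normalization
# (an assumption here) making the operator exact for linear functions
c *= np.cos(np.pi * (m + 0.5) / (2 * M)) ** 2
c /= 2.0 * h * np.sum(c * (m + 0.5))

# Differentiate f(x) = sin(x); the derivative lives on the midpoint grid
x = np.arange(0.0, 2.0 * np.pi, h)
f = np.sin(x)
mid = x[:-1] + h / 2.0
df = np.zeros(mid.size)
for j in range(M - 1, mid.size - M + 1):
    for k in range(M):
        df[j] += c[k] * (f[j + 1 + k] - f[j - k])

interior = slice(M, mid.size - M)
print(np.max(np.abs(df[interior] - np.cos(mid[interior]))))  # far below 1e-3
```

    A short tapered CD stencil like this reproduces the derivative of a smooth signal far more accurately than a plain finite-difference stencil of the same length, which is the efficiency argument made above.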

  19. Plasma Spectroscopy of the Double Post-Hole Convolute on Sandia's Z-Machine*

    NASA Astrophysics Data System (ADS)

    Gomez, Matthew; Gilgenbach, Ron; Cuneo, Mike; Lopez, Mike; Rochau, Greg; McBride, Ryan; Bailey, Jim; Lake, Pat; Maron, Yitzhak

    2010-11-01

    In large-scale pulsed power systems, post-hole convolutes combine current from several magnetically insulated transmission lines just before the load. Current losses in the convolute and the final feed gap on the Z-Machine have been measured in some cases to be as high as 10-20%. The goal of these experiments is to characterize plasma conditions in the convolute in an attempt to correlate the plasma formation with current losses. Preliminary data show a sharp onset of strong continuum emission and a number of spectral-line absorption features. LiF was deposited onto convolute components as a localized dopant to confirm the origin of these emissions. Experimental results as well as simulated spectra from PrismSpect will be presented. *MRG sponsored by SSGF through NNSA. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the US DOE's NNSA under contract DE-AC04-94AL85000.

  20. A convolution model for computing the far-field directivity of a parametric loudspeaker array.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2015-02-01

    This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby the steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity and the Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. Computed directivity of a PLA using the proposed convolution model achieves significant improvement in agreement with measured directivity at a negligible computational cost.
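
    The proposed computation can be sketched as a discrete convolution of the phased-array product directivity with the directivity of the demodulated sound. The array parameters below are hypothetical, and a Lorentzian profile stands in for the Westervelt directivity:

```python
import numpy as np

theta = np.linspace(-90.0, 90.0, 361)    # far-field angle in degrees

# Product directivity of an 8-element array steered to +20 degrees
# (half-wavelength element spacing; values are illustrative only)
N, spacing, steer = 8, 0.5, 20.0
psi = np.pi * spacing * (np.sin(np.radians(theta)) - np.sin(np.radians(steer)))
den = N * np.sin(psi)
product = np.abs(np.where(np.abs(den) < 1e-9, 1.0,
                          np.sin(N * psi) / np.where(den == 0.0, 1.0, den)))

# Lorentzian stand-in for the Westervelt directivity (width chosen arbitrarily)
westervelt = 1.0 / (1.0 + (theta / 5.0) ** 2)

# Convolution model: combined far-field directivity of the PLA
combined = np.convolve(product, westervelt, mode="same")
combined /= combined.max()
print(theta[np.argmax(combined)])        # main lobe stays steered near +20 degrees
```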

  21. Gamma convolution models for self-diffusion coefficient distributions in PGSE NMR.

    PubMed

    Röding, Magnus; Williamson, Nathan H; Nydén, Magnus

    2015-12-01

    We introduce a closed-form signal attenuation model for pulsed-field gradient spin echo (PGSE) NMR based on self-diffusion coefficient distributions that are convolutions of n gamma distributions, n⩾1. Gamma convolutions provide a general class of uni-modal distributions that includes the gamma distribution as a special case for n=1 and the lognormal distribution among others as limit cases when n approaches infinity. We demonstrate the usefulness of the gamma convolution model by simulations and experimental data from samples of poly(vinyl alcohol) and polystyrene, showing that this model provides goodness of fit superior to both the gamma and lognormal distributions and comparable to the common inverse Laplace transform.
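
    The closed form rests on the fact that the Laplace transform of a convolution of gamma distributions is the product of the individual gamma Laplace transforms, so the attenuation is E(b) = prod_i (1 + b*theta_i)^(-k_i). A quick Monte Carlo check, with hypothetical shape/scale parameters, confirms this:

```python
import numpy as np

rng = np.random.default_rng(3)

# Self-diffusion coefficients distributed as a convolution of n = 2 gamma
# distributions (shape k_i, scale theta_i); parameter values are hypothetical
shapes = np.array([2.0, 5.0])
scales = np.array([1e-10, 4e-10])        # m^2/s

b = np.logspace(8, 10.5, 20)             # PGSE b-values, s/m^2

# Closed-form attenuation: product of the individual gamma Laplace transforms
E_closed = np.prod((1.0 + np.outer(b, scales)) ** -shapes, axis=1)

# Monte Carlo check: draw D as a sum of independent gamma variates
D = sum(rng.gamma(k, th, 200000) for k, th in zip(shapes, scales))
E_mc = np.exp(-np.outer(b, D)).mean(axis=1)

print(np.max(np.abs(E_closed - E_mc)))   # the two curves agree closely
```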

  22. Discretization of continuous convolution operators for accurate modeling of wave propagation in digital holography.

    PubMed

    Chacko, Nikhil; Liebling, Michael; Blu, Thierry

    2013-10-01

    Discretization of continuous (analog) convolution operators by direct sampling of the convolution kernel and use of fast Fourier transforms is highly efficient. However, it assumes the input and output signals are band-limited, a condition rarely met in practice, where signals have finite support or abrupt edges and sampling is nonideal. Here, we propose to approximate signals in analog, shift-invariant function spaces, which do not need to be band-limited, resulting in discrete coefficients for which we derive discrete convolution kernels that accurately model the analog convolution operator while taking into account nonideal sampling devices (such as finite fill-factor cameras). This approach retains the efficiency of direct sampling but not its limiting assumption. We propose fast forward and inverse algorithms that handle finite-length, periodic, and mirror-symmetric signals with rational sampling rates. We provide explicit convolution kernels for computing coherent wave propagation in the context of digital holography. When compared to band-limited methods in simulations, our method leads to fewer reconstruction artifacts when signals have sharp edges or when using nonideal sampling devices.

  23. The Brain's Representations May Be Compatible With Convolution-Based Memory Models.

    PubMed

    Kato, Kenichi; Caplan, Jeremy B

    2017-02-13

    Convolution is a mathematical operation used in vector-models of memory that have been successful in explaining a broad range of behaviour, including memory for associations between pairs of items, an important primitive of memory upon which a broad range of everyday memory behaviour depends. However, convolution models have trouble with naturalistic item representations, which are highly auto-correlated (as one finds, e.g., with photographs), and this has cast doubt on their neural plausibility. Consequently, modellers working with convolution have used item representations composed of randomly drawn values, but introducing such noise-like representations raises the question of how those random values might relate to actual item properties. We propose that a compromise solution to this problem may already exist. It has also long been known that the brain tends to reduce auto-correlations in its inputs. For example, centre-surround cells in the retina approximate a Difference-of-Gaussians (DoG) transform. This enhances edges, but also turns natural images into images that are closer to being statistically like white noise. We show that DoG-transformed images, although not optimal compared to noise-like representations, survive the convolution model better than naturalistic images. This is a proof-of-principle that the pervasive tendency of the brain to reduce auto-correlations may result in representations of information that are already adequately compatible with convolution, supporting the neural plausibility of convolution-based association-memory.
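
    The whitening effect of a centre-surround transform can be sketched in one dimension: a DoG filter strips the low-frequency content that dominates auto-correlated "naturalistic" signals. All scales below are hypothetical:

```python
import numpy as np

def gaussian_kernel(sigma, radius=12):
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def lag1_autocorr(v):
    v = v - v.mean()
    return float(np.dot(v[:-1], v[1:]) / np.dot(v, v))

rng = np.random.default_rng(4)

# "Naturalistic" 1-D signal: heavily auto-correlated smoothed noise
raw = np.convolve(rng.standard_normal(4096), gaussian_kernel(3.0), mode="same")

# Difference-of-Gaussians: narrow centre minus wider surround, as in the retina
dog = gaussian_kernel(1.0) - gaussian_kernel(3.0)
filtered = np.convolve(raw, dog, mode="same")

print(lag1_autocorr(raw), lag1_autocorr(filtered))  # DoG output is less auto-correlated
```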

  4. Digital Tomosynthesis System Geometry Analysis Using Convolution-Based Blur-and-Add (BAA) Model.

    PubMed

    Wu, Meng; Yoon, Sungwon; Solomon, Edward G; Star-Lack, Josh; Pelc, Norbert; Fahrig, Rebecca

    2016-01-01

    Digital tomosynthesis is a three-dimensional imaging technique with a lower radiation dose than computed tomography (CT). Due to the missing data in tomosynthesis systems, out-of-plane structures in the depth direction cannot be completely removed by the reconstruction algorithms. In this work, we analyzed the impulse responses of common tomosynthesis systems on a plane-to-plane basis and proposed a fast and accurate convolution-based blur-and-add (BAA) model to simulate the backprojected images. In addition, the analysis formalism describing the impulse response of out-of-plane structures can be generalized to both rotating and parallel gantries. We implemented a ray tracing forward projection and backprojection (ray-based model) algorithm and the convolution-based BAA model to simulate the shift-and-add (backproject) tomosynthesis reconstructions. The convolution-based BAA model with proper geometry distortion correction provides reasonably accurate estimates of the tomosynthesis reconstruction. A numerical comparison indicates that the simulated images using the two models differ by less than 6% in terms of the root-mean-squared error. This convolution-based BAA model can be used in efficient system geometry analysis, reconstruction algorithm design, out-of-plane artifacts suppression, and CT-tomosynthesis registration.
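
The blur-and-add idea can be illustrated with a deliberately crude 1-D-per-plane toy (not the authors' calibrated geometric model): the simulated slice at depth z sums every object plane after blurring it by an amount that grows with its out-of-plane distance. The box-blur kernel and the blur-per-plane constant are illustrative assumptions.

```python
def box_blur(row, half_width):
    """Blur a 1-D row with a normalised box kernel of the given half-width."""
    if half_width == 0:
        return row[:]
    n, w = len(row), 2 * half_width + 1
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        out.append(sum(row[lo:hi]) / w)
    return out

def blur_and_add(obj_planes, z, blur_per_plane=2):
    """Toy tomosynthesis slice at plane z: each object plane is blurred in
    proportion to its distance from z, then all planes are summed."""
    n = len(obj_planes[0])
    recon = [0.0] * n
    for p, plane in enumerate(obj_planes):
        blurred = box_blur(plane, abs(z - p) * blur_per_plane)
        recon = [a + b for a, b in zip(recon, blurred)]
    return recon

# Object: a single impulse located on plane 2 of a 5-plane phantom.
planes = [[0.0] * 41 for _ in range(5)]
planes[2][20] = 1.0

in_plane = blur_and_add(planes, z=2)   # impulse stays sharp
off_plane = blur_and_add(planes, z=0)  # impulse is smeared, not removed
print(max(in_plane), max(off_plane))
```

Note that the out-of-plane impulse is attenuated but never eliminated, which is exactly the residual-artifact behaviour the abstract attributes to the missing data in tomosynthesis.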

  5. Transient electromagnetic modeling of the ZR accelerator water convolute and stack.

    SciTech Connect

    Lehr, Jane Marie; Elizondo-Decanini, Juan Manuel; Turner, C. David; Coats, Rebecca Sue; Bohnhoff, William J.; Pointon, Timothy David; Pasik, Michael Francis; Johnson, William Arthur; Savage, Mark Edward

    2005-06-01

    The ZR accelerator is a refurbishment of Sandia National Laboratories Z accelerator [1]. The ZR accelerator components were designed using electrostatic and circuit modeling tools. Transient electromagnetic modeling has played a complementary role in the analysis of ZR components [2]. In this paper we describe a 3D transient electromagnetic analysis of the ZR water convolute and stack using edge-based finite element techniques.

  6. Convolution modeling of two-domain, nonlinear water-level responses in karst aquifers (Invited)

    NASA Astrophysics Data System (ADS)

    Long, A. J.

    2009-12-01

    Convolution modeling is a useful method for simulating the hydraulic response of water levels to sinking streamflow or precipitation infiltration at the macro scale. This approach is particularly useful in karst aquifers, where the complex geometry of the conduit and pore network is not well characterized but can be represented approximately by a parametric impulse-response function (IRF) with very few parameters. For many applications, one-dimensional convolution models can be as effective as complex two- or three-dimensional models for analyzing water-level responses to recharge. Moreover, convolution models are well suited for identifying and characterizing the distinct domains of quick flow and slow flow (e.g., conduit flow and diffuse flow). Two superposed lognormal functions were used in the IRF to approximate the impulses of the two flow domains. Nonlinear response characteristics of the flow domains were assessed by observing temporal changes in the IRFs. Precipitation infiltration was simulated by filtering the daily rainfall record with a backward-in-time exponential function that weights each day’s rainfall with the rainfall of previous days and thus accounts for the effects of soil moisture on aquifer infiltration. The model was applied to the Edwards aquifer in Texas and the Madison aquifer in South Dakota. Simulations of both aquifers showed similar characteristics, including a separation on the order of years between the quick-flow and slow-flow IRF peaks and temporal changes in the IRF shapes when water levels increased and empty pore spaces became saturated.
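
The pipeline described above (exponential antecedent-rainfall filter, then convolution with a two-lognormal IRF) can be sketched as follows. All parameter values (decay constant, lognormal means/widths, storm schedule) are illustrative placeholders, not the calibrated values for either aquifer.

```python
import math

def antecedent_filter(rain, tau=15.0):
    """Backward-in-time exponential weighting: each day's effective
    infiltration is today's rain plus decayed memory of previous days."""
    out, memory = [], 0.0
    decay = math.exp(-1.0 / tau)
    for r in rain:
        memory = r + decay * memory
        out.append(memory)
    return out

def lognormal_irf(n, mu, sigma, scale):
    """Discrete lognormal impulse-response function sampled daily."""
    irf = []
    for k in range(n):
        t = k + 0.5  # midpoint of day k, avoids log(0)
        irf.append(scale / (t * sigma * math.sqrt(2 * math.pi))
                   * math.exp(-(math.log(t) - mu) ** 2 / (2 * sigma ** 2)))
    return irf

def convolve_causal(x, h):
    """Causal discrete convolution, truncated to the input length."""
    return [sum(h[k] * x[t - k] for k in range(min(t + 1, len(h))))
            for t in range(len(x))]

# Two-domain IRF: quick flow peaking within days, slow flow much later.
n = 600
irf = [q + s for q, s in zip(lognormal_irf(n, mu=1.5, sigma=0.6, scale=1.0),
                             lognormal_irf(n, mu=5.5, sigma=0.4, scale=3.0))]

rain = [10.0 if d % 120 == 0 else 0.0 for d in range(n)]  # episodic storms
level = convolve_causal(antecedent_filter(rain), irf)
print(max(level))
```

The two superposed lognormals give the IRF an early peak (conduit-like quick flow) and a long tail (diffuse slow flow), so a single storm produces both a prompt rise and a delayed, persistent water-level response.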

  7. Full Waveform Modeling of Transient Electromagnetic Response Based on Temporal Interpolation and Convolution Method

    NASA Astrophysics Data System (ADS)

    Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang

    2017-07-01

    Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using fast digital-filter-based methods. Then, we introduce a function named hamlogsinc in the framework of discrete signal processing theory to reconstruct the basis function and to perform the convolution with the positive half of the waveform. Finally, a superposition procedure is used to account for the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method achieves the same accuracy in a shorter computing time.
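
The final superposition step (accounting for previous bipolar half-cycles) amounts to an alternating sum of shifted copies of the basis response. The sketch below uses a decaying exponential as a stand-in basis response; the true basis response in the paper comes from the digital-filter computation, so the function, half-period, and truncation depth here are all assumptions for illustration.

```python
import math

def basis_response(t, tau=1.0):
    """Stand-in basis response to a single positive half-cycle."""
    return math.exp(-t / tau) if t >= 0.0 else 0.0

def bipolar_response(t, half_period=2.0, n_previous=50):
    """Superpose the current half-cycle with previous, sign-alternating
    half-cycles; the sum converges because the basis response decays."""
    return sum((-1) ** n * basis_response(t + n * half_period)
               for n in range(n_previous + 1))

single = basis_response(0.5)
full = bipolar_response(0.5)
print(single, full)
```

Because earlier half-cycles have opposite polarity, the steady bipolar response is smaller than the single-shot response, which is why ignoring the repetition biases the modeled TEM decay.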

  8. Vehicle detection based on visual saliency and deep sparse convolution hierarchical model

    NASA Astrophysics Data System (ADS)

    Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long

    2016-07-01

    Traditional vehicle detection algorithms use traverse-search based vehicle candidate generation and hand-crafted feature based classifier training for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct detection rate and a 0.78% false detection rate on the existing datasets and the real road pictures captured by our group, outperforming existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multi-scale features, which have broad application prospects for target recognition in the field of intelligent vehicles.

  9. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings as a key element of city structures for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of buildings' roof models such as flat, gable, hip, and pyramid hip roof models, based on deep structures for hierarchical learning of features extracted from both LiDAR data and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that help the convolutional neural network localize the boundary of each individual roof. A CNN is a feed-forward neural network based on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly by using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach for detecting and extracting the pattern of buildings' roofs automatically, considering the complementary nature of height and RGB information.

  10. The Gaussian streaming model and convolution Lagrangian effective field theory

    DOE PAGES

    Vlah, Zvonimir; Castorina, Emanuele; White, Martin

    2016-12-05

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  11. The Gaussian streaming model and convolution Lagrangian effective field theory

    SciTech Connect

    Vlah, Zvonimir; Castorina, Emanuele; White, Martin

    2016-12-05

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  12. The Gaussian streaming model and convolution Lagrangian effective field theory

    NASA Astrophysics Data System (ADS)

    Vlah, Zvonimir; Castorina, Emanuele; White, Martin

    2016-12-01

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  13. Implementation of FFT convolution and multigrid superposition models in the FOCUS RTP system

    NASA Astrophysics Data System (ADS)

    Miften, Moyed; Wiesmeyer, Mark; Monthofer, Suzanne; Krippner, Ken

    2000-04-01

    In radiotherapy treatment planning, convolution/superposition algorithms currently represent the best practical approach for accurate photon dose calculation in heterogeneous tissues. In this work, the implementation, accuracy and performance of the FFT convolution (FFTC) and multigrid superposition (MGS) algorithms are presented. The FFTC and MGS models use the same `TERMA' calculation and are commissioned using the same parameters. Both models use the same spectra, incorporate the same off-axis softening and base incident lateral fluence on the same measurements. In addition, corrections are explicitly applied to the polyenergetic and parallel kernel approximations, and electron contamination is modelled. Spectra generated by Monte Carlo (MC) modelling of treatment heads are used. Calculations using the MC spectra were in excellent agreement with measurements for many linear accelerator types. To speed up the calculations, a number of calculation techniques were implemented, including separate primary and scatter dose calculation, the FFT technique which assumes kernel invariance for the convolution calculation and a multigrid (MG) acceleration technique for the superposition calculation. Timing results show that the FFTC model is faster than MGS by a factor of 4 and 8 for small and large field sizes, respectively. Comparisons with measured data and BEAM MC results for a wide range of clinical beam setups show that (a) FFTC and MGS doses match measurements to better than 2% or 2 mm in homogeneous media; (b) MGS is more accurate than FFTC in lung phantoms where MGS doses are within 3% or 3 mm of BEAM results and (c) FFTC overestimates the dose in lung by a maximum of 9% compared to BEAM.
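
A stripped-down 1-D sketch of the invariant-kernel convolution that underlies the FFTC model, with the primary/scatter separation mentioned above. The exponential kernels, field size, and mixing weights are toy assumptions, not commissioned beam data; the point is only the structure dose = TERMA convolved with an invariant kernel, which is what permits the FFT evaluation (MGS, by contrast, lets the kernel vary spatially).

```python
import math

def convolve(x, h):
    """Direct evaluation of the invariant-kernel convolution; the FFTC
    model evaluates exactly this sum via FFTs in O(n log n)."""
    r = len(h) // 2
    return [sum(h[j] * x[i + j - r]
                for j in range(len(h)) if 0 <= i + j - r < len(x))
            for i in range(len(x))]

def exp_kernel(radius, mfp):
    """Toy energy-deposition kernel: exponential fall-off with distance."""
    k = [math.exp(-abs(x) / mfp) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

# 1-D TERMA profile for a small open field (illustrative numbers only).
terma = [1.0 if 40 <= i < 60 else 0.0 for i in range(100)]

# Separate primary (short-range) and scatter (long-range) dose components.
primary = convolve(terma, exp_kernel(radius=3, mfp=0.5))
scatter = convolve(terma, exp_kernel(radius=30, mfp=10.0))
dose = [0.8 * p + 0.2 * s for p, s in zip(primary, scatter)]
print(max(dose))
```

Splitting primary and scatter lets each component use a kernel matched to its range, which is one of the speed-up techniques the abstract lists.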

  14. Models For Diffracting Aperture Identification : A Comparison Between Ideal And Convolutional Observations

    NASA Astrophysics Data System (ADS)

    Crosta, Giovanni

    1983-09-01

    We consider a number of inverse diffraction problems where different models are compared. Ideal measurements yield Cauchy data, to which corresponds a unique solution. If a convolutional observation map is chosen, uniqueness can no longer be ensured. We also briefly examine a non-linear, non-invertible observation map, which describes a quadratic detector. In all of these cases we discuss the link between aperture identification and optimal control theory, which leads to regularised functional minimisation. This task can be performed by a discrete gradient algorithm, of which we give the flow chart.

  15. Comparing multilevel and multiscale convolution models for small area aggregated health data.

    PubMed

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2017-08-01

    In spatial epidemiology, data are often arrayed hierarchically. The classification of individuals into smaller units, which in turn are grouped into larger units, can induce contextual effects. On the other hand, a scaling effect can occur due to the aggregation of data from smaller units into larger units. In this paper, we propose a shared multilevel model to address the contextual effects. In addition, we consider a shared multiscale model to adjust for both scale and contextual effects simultaneously. We also study convolution and independent multiscale models, which are special cases of shared multilevel and shared multiscale models, respectively. We compare the performance of the models by applying them to real and simulated data sets. We found that the shared multiscale model was the best model across a range of simulated and real scenarios as measured by the deviance information criterion (DIC) and the Watanabe Akaike information criterion (WAIC). Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    PubMed

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  17. Dispersion-convolution model for simulating peaks in a flow injection system.

    PubMed

    Pai, Su-Cheng; Lai, Yee-Hwong; Chiao, Ling-Yun; Yu, Tiing

    2007-01-12

    A dispersion-convolution model is proposed for simulating peak shapes in a single-line flow injection system. It is based on the assumption that an injected sample plug is expanded due to a "bulk" dispersion mechanism along the length coordinate, and that after traveling over a distance or a period of time, the sample zone will develop into a Gaussian-like distribution. This spatial pattern is further transformed to a temporal coordinate by a convolution process, and finally a temporal peak image is generated. The feasibility of the proposed model has been examined by experiments with various coil lengths, sample sizes and pumping rates. An empirical dispersion coefficient (D*) can be estimated by using the observed peak position, height and area (tp*, h* and At*) from a recorder. An empirical temporal shift (Φ*) can be further approximated by Φ* = D*/u², which becomes an important parameter in the restoration of experimental peaks. Also, the dispersion coefficient can be expressed as a second-order polynomial function of the pumping rate Q, for which D*(Q) = δ0 + δ1·Q + δ2·Q². The optimal dispersion occurs at a pumping rate of Qopt = √(δ0/δ2). This explains the interesting "Nike-swoosh" relationship between the peak height and pumping rate. The excellent coherence of theoretical and experimental peak shapes confirms that the temporal distortion effect is the dominant cause of the peak asymmetry in flow injection analysis.
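
The stated optimum Qopt = sqrt(delta0/delta2) is the stationary point of the dispersion per unit flow, D*(Q)/Q (setting d/dQ of delta0/Q + delta1 + delta2·Q to zero); whether that is exactly the authors' optimality criterion is our reading. A small numerical check with hypothetical coefficients:

```python
import math

# Hypothetical polynomial coefficients for D*(Q) = d0 + d1*Q + d2*Q^2.
d0, d1, d2 = 0.8, 0.05, 0.002

def dispersion(Q):
    return d0 + d1 * Q + d2 * Q ** 2

# Analytic optimum of the per-unit-flow dispersion D*(Q)/Q:
#   d/dQ (d0/Q + d1 + d2*Q) = 0  =>  Q_opt = sqrt(d0/d2)
q_opt = math.sqrt(d0 / d2)

# Numerical check: brute-force minimum over a grid of pumping rates.
grid = [0.1 * k for k in range(1, 1000)]
q_best = min(grid, key=lambda q: dispersion(q) / q)
print(q_opt, q_best)
```

With these coefficients the analytic optimum is Q = 20, and the grid search lands on the same value, confirming the closed form.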

  18. Assessing the Firing Properties of the Electrically Stimulated Auditory Nerve Using a Convolution Model.

    PubMed

    Strahl, Stefan B; Ramekers, Dyan; Nagelkerke, Marjolijn M B; Schwarz, Konrad E; Spitzer, Philipp; Klis, Sjaak F L; Grolman, Wilko; Versnel, Huib

    2016-01-01

    The electrically evoked compound action potential (eCAP) is a routinely performed measure of the auditory nerve in cochlear implant users. Using a convolution model of the eCAP, additional information about the neural firing properties can be obtained, which may provide relevant information about the health of the auditory nerve. In this study, guinea pigs with various degrees of nerve degeneration were used to directly relate firing properties to nerve histology. The same convolution model was applied on human eCAPs to examine similarities and ultimately to examine its clinical applicability. For most eCAPs, the estimated nerve firing probability was bimodal and could be parameterised by two Gaussian distributions with an average latency difference of 0.4 ms. The ratio of the scaling factors of the late and early component increased with neural degeneration in the guinea pig. This ratio decreased with stimulation intensity in humans. The latency of the early component decreased with neural degeneration in the guinea pig. Indirectly, this was observed in humans as well, assuming that the cochlear base exhibits more neural degeneration than the apex. Differences between guinea pigs and humans were observed, among other parameters, in the width of the early component: very robust in guinea pig, and dependent on stimulation intensity and cochlear region in humans. We conclude that the deconvolution of the eCAP is a valuable addition to existing analyses, in particular as it reveals two separate firing components in the auditory nerve.

  19. Revision of the theory of tracer transport and the convolution model of dynamic contrast enhanced magnetic resonance imaging

    PubMed Central

    Bammer, Roland; Stollberger, Rudolf

    2012-01-01

    Counterexamples are used to motivate the revision of the established theory of tracer transport. Then dynamic contrast enhanced magnetic resonance imaging in particular is conceptualized in terms of a fully distributed convection–diffusion model from which a widely used convolution model is derived using, alternatively, compartmental discretizations or semigroup theory. On this basis, applications and limitations of the convolution model are identified. For instance, it is proved that perfusion and tissue exchange states cannot be identified on the basis of a single convolution equation alone. Yet under certain assumptions, particularly that flux is purely convective at the boundary of a tissue region, physiological parameters such as mean transit time, effective volume fraction, and volumetric flow rate per unit tissue volume can be deduced from the kernel. PMID:17429633

  20. A convolutional code-based sequence analysis model and its application.

    PubMed

    Liu, Xiao; Geng, Xiaoli

    2013-04-16

    A new approach for encoding DNA sequences as input for DNA sequence analysis is proposed using the error correction coding theory of communication engineering. The encoder was designed as a convolutional code model whose generator matrix is designed based on the degeneracy of codons, with a codon treated in the model as an informational unit. The utility of the proposed model was demonstrated through the analysis of twelve prokaryote and nine eukaryote DNA sequences having different GC contents. Distinct differences in code distances were observed near the initiation and termination sites in the open reading frame, which provided a well-regulated characterization of the DNA sequences. Clearly distinguished period-3 features appeared in the coding regions, and the characteristic average code distances of the analyzed sequences were approximately proportional to their GC contents, particularly in the selected prokaryotic organisms, suggesting potential utility as an additional taxonomic characteristic for studying the relationships of living organisms.
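
For readers unfamiliar with convolutional codes, a generic rate-1/2 binary encoder in shift-register form is sketched below. The generator taps are the textbook (7, 5) octal pair, not the codon-degeneracy-based generator matrix the paper designs; this only illustrates the kind of encoder the model builds on.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder with constraint length k.
    Each input bit is shifted into a k-bit register and yields two
    output bits: the parities of the register masked by g1 and g2."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") % 2)  # parity of g1 taps
        out.append(bin(state & g2).count("1") % 2)  # parity of g2 taps
    return out

message = [1, 0, 1, 1, 0, 0]
codeword = conv_encode(message)
print(codeword)
```

Because the output depends on the current bit and the register history, nearby symbols become correlated, which is what gives code-distance profiles their sensitivity to local sequence structure.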

  1. SU-E-T-08: A Convolution Model for Head Scatter Fluence in the Intensity Modulated Field

    SciTech Connect

    Chen, M; Mo, X; Chen, Y; Parnell, D; Key, S; Olivera, G; Galmarini, W; Lu, W

    2014-06-01

    Purpose: To efficiently calculate the head scatter fluence for an arbitrary intensity-modulated field with any source distribution using the source occlusion model. Method: The source occlusion model with focal and extra focal radiation (Jaffray et al, 1993) can be used to account for LINAC head scatter. In the model, the fluence map of any field shape at any point can be calculated via integration of the source distribution within the visible range, as confined by each segment, using the detector eye's view. A 2D integration would be required for each segment and each fluence plane point, which is time-consuming, as an intensity-modulated field contains typically tens to hundreds of segments. In this work, we prove that the superposition of the segmental integrations is equivalent to a simple convolution regardless of what the source distribution is. In fact, for each point, the detector eye's view of the field shape can be represented as a function with the origin defined at the point's pinhole reflection through the center of the collimator plane. We were thus able to reduce hundreds of source plane integrations to one convolution. We calculated the fluence map for various 3D and IMRT beams and various extra-focal source distributions using both the segmental integration approach and the convolution approach and compared the computation time and fluence map results of both approaches. Results: The fluence maps calculated using the convolution approach were the same as those calculated using the segmental approach, except for rounding errors (<0.1%). While it took considerably longer to calculate all segmental integrations, the fluence map calculation using the convolution approach took only ∼1/3 of the time for typical IMRT fields with ∼100 segments. Conclusions: The convolution approach for head scatter fluence calculation is fast and accurate and can be used to enhance the online process.
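
The claimed equivalence is just linearity: summing per-segment integrals of the source equals one convolution of the source with the union aperture indicator. A 1-D numerical check (the source profile, grid, and segment offsets are hypothetical, and the geometric magnification of the real pinhole mapping is ignored for simplicity):

```python
def fluence_by_segments(source, segments):
    """Per-segment integration: for each fluence-plane point x, integrate
    the source over the part visible through each open segment."""
    n = len(source)
    fluence = [0.0] * n
    for x in range(n):
        for lo, hi in segments:  # segment = open interval on the offset axis
            for u in range(n):
                if lo <= x - u <= hi:
                    fluence[x] += source[u]
    return fluence

def fluence_by_convolution(source, segments):
    """Equivalent single convolution of the source with the union
    aperture indicator (valid by linearity, for any source shape)."""
    n = len(source)
    aperture = [1.0 if any(lo <= d <= hi for lo, hi in segments) else 0.0
                for d in range(-(n - 1), n)]
    return [sum(source[u] * aperture[(x - u) + (n - 1)] for u in range(n))
            for x in range(n)]

# Hypothetical 1-D extra-focal source and a 3-segment modulated field.
source = [0.2, 0.5, 1.0, 0.5, 0.2, 0.1, 0.0, 0.0, 0.1, 0.2]
segments = [(-3, -2), (0, 1), (4, 6)]  # disjoint open leaf gaps (offsets)

f_seg = fluence_by_segments(source, segments)
f_conv = fluence_by_convolution(source, segments)
print(f_seg)
```

Both routes give the same fluence, but the convolution route does one pass over the source regardless of how many segments the field has.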

  2. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    SciTech Connect

    Jacques, Robert; Wong, John; Taylor, Russell; McNutt, Todd

    2011-01-15

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation.

  3. Convolution of degrees of coherence.

    PubMed

    Korotkova, Olga; Mei, Zhangrong

    2015-07-01

    The conditions under which the convolution of two degrees of coherence represents a novel legitimate degree of coherence are established for wide-sense statistically stationary Schell-model beam-like optical fields. Several examples are given to illustrate how convolution can be used to generate a far field that is a modulated version of another. Practically, the convolutions of the degrees of coherence can be achieved by programming liquid crystal spatial light modulators.

  4. Renormalization plus convolution method for atomic-scale modeling of electrical and thermal transport in nanowires.

    PubMed

    Wang, Chumin; Salazar, Fernando; Sánchez, Vicenta

    2008-12-01

    Based on the Kubo-Greenwood formula, the transport of electrons and phonons in nanowires is studied by means of a real-space renormalization plus convolution method. This method has the advantages of being efficient, introducing no additional approximations, and being capable of analyzing nanowires over a wide range of lengths, even with defects. The Born and tight-binding models are used to investigate the lattice thermal and electrical conductivities, respectively. The results show a quantized electrical dc conductance, which is attenuated when an oscillating electric field is applied. Effects of single and multiple planar defects, such as a quasi-periodic modulation, on the conductance of nanowires are also investigated. For the low temperature region, the lattice thermal conductance reveals a power-law temperature dependence, in agreement with experimental data.

  5. Automatic construction of statistical shape models using deformable simplex meshes with vector field convolution energy.

    PubMed

    Wang, Jinke; Shi, Changfa

    2017-04-24

    In the active shape model framework, principal component analysis (PCA) based statistical shape models (SSMs) are widely employed to incorporate high-level a priori shape knowledge of the structure to be segmented to achieve robustness. A crucial component of building SSMs is to establish shape correspondence between all training shapes, which is a very challenging task, especially in three dimensions. We propose a novel mesh-to-volume registration based shape correspondence establishment method to improve the accuracy and reduce the computational cost. Specifically, we present a greedy algorithm based deformable simplex mesh that uses vector field convolution as the external energy. Furthermore, we develop an automatic shape initialization method by using a Gaussian mixture model based registration algorithm, to derive an initial shape that has high overlap with the object of interest, such that the deformable models can then evolve more locally. We apply the proposed deformable surface model to the application of femur statistical shape model construction to illustrate its accuracy and efficiency. Extensive experiments on ten femur CT scans show that the quality of the constructed femur shape models via the proposed method is much better than that of the classical spherical harmonics (SPHARM) method. Moreover, the proposed method achieves much higher computational efficiency than the SPHARM method. The experimental results suggest that our method can be employed for effective statistical shape model construction.

  6. Compressed convolution

    NASA Astrophysics Data System (ADS)

    Elsner, Franz; Wandelt, Benjamin D.

    2014-01-01

    We introduce the concept of compressed convolution, a technique to convolve a given data set with a large number of non-orthogonal kernels. In typical applications our technique drastically reduces the effective number of computations. The new method is applicable to convolutions with symmetric and asymmetric kernels and can be easily controlled for an optimal trade-off between speed and accuracy. It is based on linear compression of the collection of kernels into a small number of coefficients in an optimal eigenbasis. The final result can then be decompressed in constant time for each desired convolved output. The method is fully general and suitable for a wide variety of problems. We give explicit examples in the context of simulation challenges for upcoming multi-kilo-detector cosmic microwave background (CMB) missions. For a CMB experiment with detectors with similar beam properties, we demonstrate that the algorithm can decrease the costs of beam convolution by two to three orders of magnitude with negligible loss of accuracy. Likewise, it has the potential to allow the reduction of disk space required to store signal simulations by a similar amount. Applications in other areas of astrophysics and beyond are optimal searches for a large number of templates in noisy data, e.g. from a parametrized family of gravitational wave templates; or calculating convolutions with highly overcomplete wavelet dictionaries, e.g. in methods designed to uncover sparse signal representations.
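
The mechanics of compressed convolution can be shown with a deliberately tiny example. To avoid reproducing the eigenbasis computation, we assume the kernel collection is exactly a set of known linear combinations of two basis kernels; in the paper the basis and coefficients would instead come from an optimal eigendecomposition of the kernel collection, so everything below is illustrative.

```python
def convolve(x, h):
    """Full linear convolution of two sequences."""
    out = [0.0] * (len(x) + len(h) - 1)
    for i, xv in enumerate(x):
        for j, hv in enumerate(h):
            out[i + j] += xv * hv
    return out

# Assume four kernels are known linear combinations of two basis kernels.
basis = [[1.0, 2.0, 1.0], [1.0, 0.0, -1.0]]
coeffs = [(0.7, 0.3), (0.2, -0.5), (1.0, 1.0), (0.0, 2.0)]

data = [0.5, 1.0, -1.0, 2.0, 0.0, 1.5]

# Compressed path: convolve once per basis kernel, then mix the results.
basis_conv = [convolve(data, b) for b in basis]
compressed = [[a * u + c * v for u, v in zip(*basis_conv)]
              for a, c in coeffs]

# Direct path: build each kernel explicitly and convolve separately.
direct = [convolve(data, [a * b0 + c * b1 for b0, b1 in zip(*basis)])
          for a, c in coeffs]
print(compressed[0][:3])
```

With N kernels spanned by m basis kernels, the compressed path costs m convolutions plus cheap linear mixing instead of N convolutions, which is the source of the order-of-magnitude savings quoted for the CMB beam application.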

  7. Dose convolution filter: Incorporating spatial dose information into tissue response modeling

    SciTech Connect

    Huang Yimei; Joiner, Michael; Zhao Bo; Liao Yixiang; Burmeister, Jay

    2010-03-15

Purpose: A model is introduced to integrate biological factors such as cell migration and bystander effects into physical dose distributions, and to incorporate spatial dose information in plan analysis and optimization. Methods: The model consists of a dose convolution filter (DCF) with a single parameter σ. Tissue response is calculated by an existing NTCP model with the DCF-applied dose distribution as input. The authors determined σ of rat spinal cord from published data. The authors also simulated the GRID technique, in which an open field is collimated into many pencil beams. Results: After applying the DCF, the NTCP model successfully fits the rat spinal cord data with a predicted value of σ = 2.6 ± 0.5 mm, consistent with the 2 mm migration distance of remyelinating cells. Moreover, it enables the appropriate prediction of a high relative seriality for spinal cord. The model also predicts the sparing of normal tissues by the GRID technique when the size of each pencil beam becomes comparable to σ. Conclusions: The DCF model incorporates spatial dose information and offers an improved way to estimate tissue response from complex radiotherapy dose distributions. It does not alter the prediction of tissue response in large homogeneous fields, but successfully predicts increased tissue tolerance in small or highly nonuniform fields.
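A minimal numerical sketch of the DCF idea (the profile, beam geometry, and dose level are assumptions for illustration, not the authors' implementation): convolving a GRID-like 1-D dose profile with a Gaussian of width σ flattens the peaks once the beam width is comparable to σ, which is how the model predicts normal-tissue sparing.

```python
import numpy as np

dx = 0.1                                  # grid spacing [mm]
x = np.arange(0, 100, dx)                 # 100 mm profile
sigma = 2.6                               # fitted filter width [mm]

# GRID-like field: 2 mm pencil beams separated by 2 mm gaps, 60 Gy peaks.
dose = ((x // 2).astype(int) % 2 == 0).astype(float) * 60.0

# Normalized Gaussian dose convolution filter.
k = np.arange(-5 * sigma, 5 * sigma + dx, dx)
g = np.exp(-0.5 * (k / sigma) ** 2)
g /= g.sum()
effective_dose = np.convolve(dose, g, mode="same")

# With beam width comparable to sigma, the 60 Gy peaks are smeared down
# toward the ~30 Gy mean, i.e., the model predicts tissue sparing.
peak = effective_dose[len(x) // 2 - 100: len(x) // 2 + 100].max()
```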

  8. The Luminous Convolution Model-The light side of dark matter

    NASA Astrophysics Data System (ADS)

    Cisneros, Sophia; Oblath, Noah; Formaggio, Joe; Goedecke, George; Chester, David; Ott, Richard; Ashley, Aaron; Rodriguez, Adrianna

    2014-03-01

We present a heuristic model for predicting the rotation curves of spiral galaxies. The Luminous Convolution Model (LCM) utilizes Lorentz-type transformations of very small changes in the photons' frequencies from curved space-times to construct a dynamic mass model of galaxies. These frequency changes are derived using the exact solution to the exterior Kerr wave equation, as opposed to a linearized treatment. The LCM Lorentz-type transformations map between the emitter and receiver rotating galactic frames, and then to the associated flat frames in each galaxy where the photons are emitted and received. This treatment necessarily rests upon estimates of the luminous matter in both the emitter and receiver galaxies. The LCM is tested on a sample of 22 randomly chosen galaxies, represented in 33 different data sets. LCM fits are compared to the Navarro, Frenk & White (NFW) dark matter model and to the Modified Newtonian Dynamics (MOND) model when possible. The high degree of sensitivity of the LCM to the assumed luminous mass-to-light ratio (M/L) of the given galaxy is demonstrated. We demonstrate that the LCM successfully predicts the observed rotation curves across a wide range of spiral galaxies. This work was conducted through the generous support of the MIT Dr. Martin Luther King Jr. Fellowship program.

  9. A parametric texture model based on deep convolutional features closely matches texture appearance for humans.

    PubMed

    Wallis, Thomas S A; Funke, Christina M; Ecker, Alexander S; Gatys, Leon A; Wichmann, Felix A; Bethge, Matthias

    2017-10-01

Our visual environment is full of texture: "stuff" like cloth, bark, or gravel, as distinct from "things" like dresses, trees, or paths. Humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (the convolutional neural network [CNN] model), which uses the features encoded by a deep CNN (VGG-19), with two other models: the venerable Portilla and Simoncelli model, and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.

  10. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network

    PubMed Central

    Li, Na; Yang, Yongjia

    2016-01-01

Humans can easily classify different kinds of objects, whereas this remains quite difficult for computers. Object classification is a challenging problem that has been receiving extensive interest, with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed. The convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. However, most deep learning methods, including the CNN, ignore the human visual information-processing mechanism at work when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we put forward a new classification method that combines a visual attention model with a CNN. First, we use the visual attention model to simulate the human visual selection mechanism. Second, we use the CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. This classification method has clear advantages from a biological standpoint. Experimental results demonstrate that our method improves classification performance significantly. PMID:27803711

  11. Spline functions in convolutional modeling of verapamil bioavailability and bioequivalence. I: conceptual and numerical issues.

    PubMed

    Popović, J

    2006-01-01

A cubic spline function for describing the verapamil concentration profile, resulting from the verapamil absorption input to be evaluated, has been used. With this method, the knots are taken to be the data points, which has the advantage of being computationally less complex. Because of its inherently low algorithmic errors, the spline method is less distorted and more suitable for further data analysis than others. The method has been evaluated using simulated verapamil delayed release tablet concentration data containing various degrees of random noise. The accuracy of the method was determined by how well the estimates of input rate and extent represented the true values. It was found that the accuracy of the method was of the same order of magnitude as the noise level of the data. Spline functions in convolutional modeling of verapamil formulation bioavailability and bioequivalence, as shown in the numerical simulation investigation, are very powerful additional tools for assessing the quality of new verapamil formulations in order to ensure that they are of the same quality as already registered formulations of the drug. The development of such models provides the possibility to avoid additional or larger bioequivalence and/or clinical trials and to thus help shorten the investigation time and registration period.
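The descriptive step can be sketched with SciPy's `CubicSpline`, placing knots at the data points as the abstract describes. The time and concentration values below are synthetic illustrations, not verapamil data; the spline's exact derivative and integral are what feed into deconvolution-based estimates of absorption input rate and extent.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic concentration-time data (assumed values for illustration).
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0])   # h
c = np.array([0.0, 18., 35., 52., 48., 40., 26., 15., 5.0])     # ng/mL

spline = CubicSpline(t, c)          # knots are the data points themselves

# Smooth resampling, exact integral (AUC), and derivative from the
# piecewise-cubic representation.
tt = np.linspace(0, 12, 241)
c_fit = spline(tt)
auc = spline.integrate(0, 12)       # analytic integral of the spline
dcdt = spline(tt, 1)                # first derivative, used in deconvolution
```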

  12. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer

    PubMed Central

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-01-01

This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and a convolutional perfectly matched layer (CPML). In the process of strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; the conclusion was that this dispersion was positively related to the real stretch and was little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively amplified the dispersion laws through wave field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for CPML from the established criteria. Based on its physical nature, the CPML method of simply warping space-time was predicted to be a promising approach to achieve ideal absorption, although it was still difficult to entirely remove the dispersion. PMID:27585538

  13. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer

    NASA Astrophysics Data System (ADS)

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-09-01

This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and a convolutional perfectly matched layer (CPML). In the process of strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; the conclusion was that this dispersion was positively related to the real stretch and was little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively amplified the dispersion laws through wave field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for CPML from the established criteria. Based on its physical nature, the CPML method of simply warping space-time was predicted to be a promising approach to achieve ideal absorption, although it was still difficult to entirely remove the dispersion.

  14. Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation.

    PubMed

    Wang, Shuo; Zhou, Mu; Liu, Zaiyi; Liu, Zhenyu; Gu, Dongsheng; Zang, Yali; Dong, Di; Gevaert, Olivier; Tian, Jie

    2017-08-01

Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make robust nodule segmentation difficult. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Network (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighboring voxels can vary according to their spatial locations. We describe this phenomenon with a novel central pooling layer that retains much of the information at the center of the voxel patch, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling strategy to facilitate model training, in which training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset, including 893 nodules, and on an independent dataset with 74 nodules from Guangdong General Hospital (GDGH). We show that CF-CNN achieves superior segmentation performance, with average Dice scores of 82.15% and 80.02% for the two datasets, respectively. Moreover, we compared our results with the inter-radiologist consistency on the LIDC dataset, showing a difference in average Dice score of only 1.98%. Copyright © 2017. Published by Elsevier B.V.

  15. Embedded Analytical Solutions Improve Accuracy in Convolution-Based Particle Tracking Models using Python

    NASA Astrophysics Data System (ADS)

    Starn, J. J.

    2013-12-01

    Particle tracking often is used to generate particle-age distributions that are used as impulse-response functions in convolution. A typical application is to produce groundwater solute breakthrough curves (BTC) at endpoint receptors such as pumping wells or streams. The commonly used semi-analytical particle-tracking algorithm based on the assumption of linear velocity gradients between opposing cell faces is computationally very fast when used in combination with finite-difference models. However, large gradients near pumping wells in regional-scale groundwater-flow models often are not well represented because of cell-size limitations. This leads to inaccurate velocity fields, especially at weak sinks. Accurate analytical solutions for velocity near a pumping well are available, and various boundary conditions can be imposed using image-well theory. Python can be used to embed these solutions into existing semi-analytical particle-tracking codes, thereby maintaining the integrity and quality-assurance of the existing code. Python (and associated scientific computational packages NumPy, SciPy, and Matplotlib) is an effective tool because of its wide ranging capability. Python text processing allows complex and database-like manipulation of model input and output files, including binary and HDF5 files. High-level functions in the language include ODE solvers to solve first-order particle-location ODEs, Gaussian kernel density estimation to compute smooth particle-age distributions, and convolution. The highly vectorized nature of NumPy arrays and functions minimizes the need for computationally expensive loops. A modular Python code base has been developed to compute BTCs using embedded analytical solutions at pumping wells based on an existing well-documented finite-difference groundwater-flow simulation code (MODFLOW) and a semi-analytical particle-tracking code (MODPATH). 
The Python code base is tested by comparing BTCs with highly discretized synthetic steady
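The convolution step at the heart of this workflow can be sketched in NumPy alone: particle travel times are smoothed into an age distribution with a Gaussian kernel density estimate, which then serves as the impulse-response function in a convolution with a solute input history to produce a breakthrough curve (BTC). The particle ages, bandwidth rule, and step input below are illustrative assumptions, not the MODFLOW/MODPATH-based workflow itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic particle travel times standing in for a tracking run [yr].
ages = rng.lognormal(mean=2.5, sigma=0.5, size=2000)

dt = 0.5
t = np.arange(0.0, 100.0, dt)                            # [yr]

# Gaussian KDE of particle ages (Silverman-type bandwidth), vectorized.
h = 1.06 * ages.std() * ages.size ** -0.2
age_pdf = np.exp(-0.5 * ((t[:, None] - ages[None, :]) / h) ** 2).sum(axis=1)
age_pdf /= age_pdf.sum() * dt                            # impulse response

# Step input: source turned on at t = 0 with unit concentration.
c_in = np.ones_like(t)

# Convolution of input history with the age distribution gives the BTC.
btc = np.convolve(c_in, age_pdf)[: len(t)] * dt
```

For a step input with a nonnegative impulse response, the BTC is simply the cumulative age distribution, rising monotonically toward 1.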

  16. Age-distribution estimation for karst groundwater: Issues of parameterization and complexity in inverse modeling by convolution

    NASA Astrophysics Data System (ADS)

    Long, Andrew J.; Putnam, Larry D.

    2009-10-01

Convolution modeling is useful for investigating the temporal distribution of groundwater age based on environmental tracers. The framework of a quasi-transient convolution model that is applicable to two-domain flow in karst aquifers is presented. The model was designed to provide an acceptable level of statistical confidence in parameter estimates when only chlorofluorocarbon (CFC) and tritium (3H) data are available. We show how inverse modeling and uncertainty assessment can be used to constrain model parameterization to a level warranted by available data while allowing major aspects of the flow system to be examined. As an example, the model was applied to water from a pumped well open to the Madison aquifer in central USA with input functions of CFC-11, CFC-12, CFC-113, and 3H, and was calibrated to several samples collected during a 16-year period. A bimodal age distribution was modeled to represent quick and slow flow less than 50 years old. The effects of pumping and hydraulic head on the relative volumetric fractions of these domains were found to be influential factors for transient flow. Quick flow and slow flow were estimated to be distributed mainly within the age ranges of 0-2 and 26-41 years, respectively. The fraction of long-term flow (>50 years) was estimated but was not dateable. The different tracers had different degrees of influence on parameter estimation and uncertainty assessments, where 3H was the most critical, and CFC-113 was least influential.

  17. Age-distribution estimation for karst groundwater: Issues of parameterization and complexity in inverse modeling by convolution

    USGS Publications Warehouse

    Long, Andrew J.; Putnam, L.D.

    2009-01-01

    Convolution modeling is useful for investigating the temporal distribution of groundwater age based on environmental tracers. The framework of a quasi-transient convolution model that is applicable to two-domain flow in karst aquifers is presented. The model was designed to provide an acceptable level of statistical confidence in parameter estimates when only chlorofluorocarbon (CFC) and tritium (3H) data are available. We show how inverse modeling and uncertainty assessment can be used to constrain model parameterization to a level warranted by available data while allowing major aspects of the flow system to be examined. As an example, the model was applied to water from a pumped well open to the Madison aquifer in central USA with input functions of CFC-11, CFC-12, CFC-113, and 3H, and was calibrated to several samples collected during a 16-year period. A bimodal age distribution was modeled to represent quick and slow flow less than 50 years old. The effects of pumping and hydraulic head on the relative volumetric fractions of these domains were found to be influential factors for transient flow. Quick flow and slow flow were estimated to be distributed mainly within the age ranges of 0-2 and 26-41 years, respectively. The fraction of long-term flow (>50 years) was estimated but was not dateable. The different tracers had different degrees of influence on parameter estimation and uncertainty assessments, where 3H was the most critical, and CFC-113 was least influential.

  18. A distribution-free convolution model for background correction of oligonucleotide microarray data

    PubMed Central

    Chen, Zhongxue; McGee, Monnie; Liu, Qingzhong; Kong, Megan; Deng, Youping; Scheuermann, Richard H

    2009-01-01

Introduction: Affymetrix GeneChip® high-density oligonucleotide arrays are widely used in biological and medical research because of production reproducibility, which facilitates the comparison of results between experiment runs. In order to obtain high-level classification and cluster analysis that can be trusted, it is important to perform various pre-processing steps on the probe-level data to control for variability in sample processing and array hybridization. Many proposed preprocessing methods are parametric, in that they assume that the background noise generated by microarray data is a random sample from a statistical distribution, typically a normal distribution. The quality of the final results depends on the validity of such assumptions. Results: We propose a Distribution Free Convolution Model (DFCM) to circumvent observed deficiencies in meeting and validating distribution assumptions of parametric methods. Knowledge of array structure and the biological function of the probes indicate that the intensities of mismatched (MM) probes that correspond to the smallest perfect match (PM) intensities can be used to estimate the background noise. Specifically, we obtain the smallest q2 percent of the MM intensities that are associated with the lowest q1 percent PM intensities, and use these intensities to estimate background. Conclusion: Using the Affymetrix Latin Square spike-in experiments, we show that the background noise generated by microarray experiments typically is not well modeled by a single overall normal distribution. We further show that the signal is not exponentially distributed, as is also commonly assumed. Therefore, DFCM has better sensitivity and specificity, as measured by ROC curves and area under the curve (AUC) than MAS 5.0, RMA, RMA with no background correction (RMA-noBG), GCRMA, PLIER, and dChip (MBEI) for preprocessing of Affymetrix microarray data. These results hold for two spike-in data sets and one real data set that were
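The DFCM background-estimation step described above can be sketched on synthetic probe intensities. The q1 and q2 values and the data-generating distributions below are assumptions for illustration, not the paper's settings; the point is that no distributional form is assumed for the background itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic probe intensities: non-normal background plus sparse signal.
n = 10000
background = rng.gamma(shape=4.0, scale=20.0, size=n)     # skewed noise
signal = np.where(rng.random(n) < 0.3,
                  rng.exponential(300.0, size=n), 0.0)
pm = background + signal                                  # perfect match
mm = background * rng.uniform(0.8, 1.2, size=n)           # mismatch probes

q1, q2 = 10.0, 50.0                                       # percent (assumed)

# Smallest q2% of the MM intensities paired with the lowest q1% of PM
# intensities; their mean serves as a distribution-free background estimate.
low_pm = pm <= np.percentile(pm, q1)
mm_sub = mm[low_pm]
mm_small = mm_sub[mm_sub <= np.percentile(mm_sub, q2)]
bg_hat = mm_small.mean()
```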

  19. Steady-state modeling of current loss in a post-hole convolute driven by high power magnetically insulated transmission lines

    NASA Astrophysics Data System (ADS)

    Madrid, E. A.; Rose, D. V.; Welch, D. R.; Clark, R. E.; Mostrom, C. B.; Stygar, W. A.; Cuneo, M. E.; Gomez, M. R.; Hughes, T. P.; Pointon, T. D.; Seidel, D. B.

    2013-12-01

    Quasiequilibrium power flow in two radial magnetically insulated transmission lines (MITLs) coupled to a vacuum post-hole convolute is studied at 50TW-200TW using three-dimensional particle-in-cell simulations. The key physical dimensions in the model are based on the ZR accelerator [D. H. McDaniel, et al., Proceedings of 5th International Conference on Dense Z-Pinches, edited by J. Davis (AIP, New York, 2002), p. 23]. The voltages assumed for this study result in electron emission from all cathode surfaces. Electrons emitted from the MITL cathodes upstream of the convolute cause a portion of the MITL current to be carried by an electron sheath. Under the simplifying assumptions made by the simulations, it is found that the transition from the two MITLs to the convolute results in the loss of most of the sheath current to anode structures. The loss is quantified as a function of radius and correlated with Poynting vector stream lines which would be followed by individual electrons. For a fixed MITL-convolute geometry, the current loss, defined to be the difference between the total (i.e. anode) current in the system upstream of the convolute and the current delivered to the load, increases with both operating voltage and load impedance. It is also found that in the absence of ion emission, the convolute is efficient when the load impedance is much less than the impedance of the two parallel MITLs. The effects of space-charge-limited (SCL) ion emission from anode surfaces are considered for several specific cases. Ion emission from anode surfaces in the convolute is found to increase the current loss by a factor of 2-3. When SCL ion emission is allowed from anode surfaces in the MITLs upstream of the convolute, substantially higher current losses are obtained. Note that the results reported here are valid given the spatial resolution used for the simulations.

  20. A gamma-distribution convolution model of (99m)Tc-MIBI thyroid time-activity curves.

    PubMed

    Wesolowski, Carl A; Wanasundara, Surajith N; Wesolowski, Michal J; Erbas, Belkis; Babyn, Paul S

    2016-12-01

The convolution approach to thyroid time-activity curve (TAC) data fitting with a gamma distribution convolution (GDC) TAC model following bolus intravenous injection is presented and applied to (99m)Tc-MIBI data. The GDC model is a convolution of two gamma distribution functions that simultaneously models the distribution and washout kinetics of the radiotracer. The GDC model was fitted to thyroid region of interest (ROI) TAC data from 1 min per frame (99m)Tc-MIBI image series over 90 min; GDC models were generated for three patients having left and right thyroid lobe and total thyroid ROIs, and were contrasted with washout-only models, i.e., less complete models. GDC model accuracy was tested using 10 Monte Carlo simulations for each clinical ROI. The nine clinical GDC models, fitted by least squares weighted for counting error, exhibited fit errors (corrected for six parameters) ranging from 0.998% to 1.82%. The range of all thyroid mean residence times (MRTs) was 212 to 699 min, which from noise-injected simulations of each case had an average coefficient of variation of 0.7% and a statistically insignificant accuracy error of 0.5% (p = 0.5, two-sample paired t test). The slowest MRT value (699 min) was from a single thyroid lobe with a tissue-diagnosed parathyroid adenoma, also seen on scanning as retained marker. The two total thyroid ROIs without substantial pathology had MRT values of 278 and 350 min, overlapping a published (99m)Tc-MIBI thyroid MRT value. One combined and four unrelated washout-only models were tested and exhibited R-squared values for MRT against the GDC, i.e., a more complete concentration model, ranging from 0.0183 to 0.9395. The GDC models had a small enough TAC noise-image misregistration (0.8%) that they have a plausible use as simulations of thyroid activity for querying the performance of other models, such as washout models, under altered ROI size, noise, administered dose, and image framing rates. Indeed, of the four washout
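The GDC curve itself is easy to reproduce numerically. The sketch below, with assumed shape and scale parameters rather than the fitted patient values, convolves two gamma densities and recovers the mean residence time of the convolution, which equals the sum of the two component means.

```python
import numpy as np
from math import lgamma

def gamma_pdf(t, shape, scale):
    """Gamma density on an array, computed via logs to avoid overflow."""
    tt = np.maximum(t, 1e-300)
    return np.exp((shape - 1) * np.log(tt) - tt / scale
                  - lgamma(shape) - shape * np.log(scale))

dt = 0.5                                    # min
t = np.arange(0.0, 4000.0, dt)
f1 = gamma_pdf(t, shape=2.0, scale=5.0)     # distribution kinetics (mean 10)
f2 = gamma_pdf(t, shape=3.0, scale=100.0)   # washout kinetics (mean 300)

# Model TAC: convolution of the two gamma densities.
tac = np.convolve(f1, f2)[: len(t)] * dt

# Mean residence time of the convolution = 10 + 300 = 310 min.
mrt = (t * tac).sum() / tac.sum()
```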

  1. Closed-form solution of the convolution integral in the magnetic resonance dispersion model for quantitative assessment of angiogenesis.

    PubMed

    Turco, S; Janssen, A J E M; Lavini, C; de la Rosette, J J; Wijkstra, H; Mischi, M

    2014-01-01

Prostate cancer (PCa) diagnosis and treatment are still limited by the lack of reliable imaging methods for cancer localization. Based on the fundamental role played by angiogenesis in cancer growth and development, several dynamic contrast-enhanced (DCE) imaging methods have been developed to probe tumor angiogenic vasculature. In DCE magnetic resonance imaging (MRI), pharmacokinetic modeling allows the estimation of quantitative parameters related to the physiology underlying tumor angiogenesis. In particular, novel magnetic resonance dispersion imaging (MRDI) enables quantitative assessment of the microvascular architecture and leakage by describing the intravascular dispersion kinetics of an extravascular contrast agent with a dispersion model. According to this model, the tissue contrast concentration at each voxel is given by the convolution of the intravascular concentration, described as a Brownian motion process according to the convection-dispersion equation, with the interstitium impulse response, represented by a mono-exponential decay describing the contrast leakage into the extravascular space. In this work, an improved formulation of the MRDI method is obtained by providing an analytical solution for the convolution integral present in the dispersion model. The performance of the proposed method was evaluated by means of dedicated simulations in terms of estimation accuracy, precision, and computation time. Moreover, a preliminary clinical validation was carried out in five patients with proven PCa. The proposed method reduces computation time by about 40% without any significant change in estimation accuracy, precision, or clinical performance.

  2. Correction Approach for Delta Function Convolution Model Fitting of Fluorescence Decay Data in the Case of a Monoexponential Reference Fluorophore.

    PubMed

    Talbot, Clifford B; Lagarto, João; Warren, Sean; Neil, Mark A A; French, Paul M W; Dunsby, Chris

    2015-09-01

A correction is proposed to the delta function convolution method (DFCM) for fitting a multiexponential decay model to time-resolved fluorescence decay data using a monoexponential reference fluorophore. A theoretical analysis of the discretised DFCM multiexponential decay function shows the presence of an extra exponential decay term with the same lifetime as the reference fluorophore, which we denote the residual reference component. This extra decay component arises as a result of the discretised convolution of one of the two terms in the modified model function required by the DFCM. The effect of the residual reference component becomes more pronounced when the fluorescence lifetime of the reference is longer than all of the individual components of the specimen under inspection, and when the temporal sampling interval is not negligible compared to the quantity (1/τR - 1/τ)^(-1), where τR and τ are the fluorescence lifetimes of the reference and the specimen, respectively. It is shown that the unwanted residual reference component results in systematic errors when fitting simulated data, and that these errors are not present when the proposed correction is applied. The correction is also verified using real data obtained from experiment.

  3. Experimental validation of a convolution- based ultrasound image formation model using a planar arrangement of micrometer-scale scatterers.

    PubMed

    Gyöngy, Miklós; Makra, Ákos

    2015-06-01

The shift-invariant convolution model of ultrasound is widely used in the literature, for instance to generate fast simulations of ultrasound images. However, comparison of the resulting simulations with experiments is either qualitative or based on aggregate descriptors such as envelope statistics or spectral components. In the current work, a planar arrangement of 49-μm polystyrene microspheres was imaged using macrophotography and a 4.7-MHz ultrasound linear array. The macrophotograph allowed estimation of the scattering function (SF) necessary for simulations. Using the coefficient of determination R² between real and simulated ultrasound images, different estimates of the SF and point spread function (PSF) were tested. All estimates of the SF performed similarly, whereas the best estimate of the PSF was obtained by Hanning-windowing the deconvolution of the real ultrasound image with the SF: this yielded R² = 0.43 for the raw simulated image and R² = 0.65 for the envelope-detected ultrasound image. R² was highly dependent on microsphere concentration, with values of up to 0.99 for regions with scatterers. The results validate the use of the shift-invariant convolution model for the realistic simulation of ultrasound images. However, care must be taken in experiments to reduce the relative effects of other sources of scattering, such as multiple reflections, either by increasing the concentration of imaged scatterers or by more careful experimental design.
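The model being validated here states that the RF image is the 2-D convolution of a scattering function with the system PSF. A minimal sketch, with an assumed separable PSF and random sparse scatterers (nothing below reproduces the paper's experimental estimates):

```python
import numpy as np

rng = np.random.default_rng(3)

H, W = 128, 128

# Sparse random scatterers standing in for the scattering function (SF).
sf = (rng.random((H, W)) < 0.01) * rng.standard_normal((H, W))

# Assumed separable PSF: modulated Gaussian axially, Gaussian laterally.
ax = np.arange(-16, 17)
axial = np.exp(-0.5 * (ax / 4.0) ** 2) * np.cos(2 * np.pi * ax / 6.0)
lateral = np.exp(-0.5 * (ax / 6.0) ** 2)
psf = np.outer(axial, lateral)

# Shift-invariant model: RF image = SF (*) PSF, done here via FFT
# (circular boundaries, which is fine for a sketch).
rf = np.real(np.fft.ifft2(np.fft.fft2(sf) * np.fft.fft2(psf, s=(H, W))))

# Crude envelope stand-in for demodulation of the RF image.
envelope = np.abs(rf)
```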

  4. A comparison study between MLP and convolutional neural network models for character recognition

    NASA Astrophysics Data System (ADS)

    Ben Driss, S.; Soua, M.; Kachouri, R.; Akil, M.

    2017-05-01

Optical Character Recognition (OCR) systems are designed to operate on text contained in scanned documents and images. They include text detection and character recognition, in which characters are described and then classified. In the classification step, characters are identified according to their features or template descriptions, and a given classifier is employed to identify them. In this context, we previously proposed the unified character descriptor (UCD) to represent characters based on their features, with matching employed for classification. This recognition scheme achieves good OCR accuracy on homogeneous scanned documents; however, it cannot discriminate characters with high font variation and distortion. To improve recognition, classifiers based on neural networks can be used. The multilayer perceptron (MLP) ensures high recognition accuracy given robust training. Moreover, the convolutional neural network (CNN) is nowadays gaining a lot of popularity for its high performance. However, both the CNN and the MLP may suffer from the large amount of computation in the training phase. In this paper, we establish a comparison between the MLP and the CNN. We provide the MLP with the UCD descriptor and an appropriate network configuration. For the CNN, we employ the convolutional network designed for handwritten and machine-printed character recognition (LeNet-5) and adapt it to support 62 classes, including both digits and letters. In addition, GPU parallelization is studied to speed up both the MLP and CNN classifiers. Based on our experiments, we demonstrate that the real-time CNN used is twice as effective as the MLP when classifying characters.

  5. Harmonic domain modelling of three phase thyristor-controlled reactors by means of switching vectors and discrete convolutions

    SciTech Connect

    Rico, J.J.; Acha, E.; Miller, T.J.E.

    1996-07-01

    The main objective of this paper is to report on a newly developed three-phase Thyristor Controlled Reactor (TCR) model based on the use of harmonic switching vectors and discrete convolutions. This model is amenable to direct frequency domain operations and provides a fast and reliable means for assessing 6- and 12-pulse TCR plant performance at harmonic frequencies. It avoids alternating between time domain and frequency domain representations, as well as the use of FFTs. In this approach, each single-phase unit of the TCR is modelled as a voltage-dependent harmonic Norton equivalent in which all the harmonics and cross-couplings between harmonics are explicitly represented. The model is suitable for direct incorporation into the harmonic domain frame of reference, where all the busbars, phases, harmonics and cross-couplings between harmonics are combined for a unified iterative solution through a Newton-Raphson technique exhibiting quadratic convergence.
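    The switching-vector idea rests on a standard identity: multiplying two periodic waveforms in the time domain is a discrete (circular) convolution of their harmonic coefficient vectors. A minimal numerical check, with hypothetical waveforms standing in for the switching function and current:

    ```python
    import numpy as np

    n = 64
    t = np.arange(n)
    s = (np.cos(2 * np.pi * t / n) > 0.3).astype(float)   # switching function
    i = np.sin(2 * np.pi * t / n)                          # fundamental current

    S = np.fft.fft(s) / n                                  # harmonic vectors
    I = np.fft.fft(i) / n

    # Harmonic vector of v(t) = s(t)*i(t) via discrete circular convolution
    # of the two harmonic vectors.
    V = np.array([sum(S[m] * I[(k - m) % n] for m in range(n))
                  for k in range(n)])

    # It matches the harmonics of the time-domain product exactly.
    assert np.allclose(V, np.fft.fft(s * i) / n)
    ```

    The paper's model works with such convolutions directly in the harmonic domain, so no forward/inverse FFT round-trips are needed during the iterative solution.
    
    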

  6. Convolution of Two Series

    ERIC Educational Resources Information Center

    Umar, A.; Yusau, B.; Ghandi, B. M.

    2007-01-01

    In this note, we introduce and discuss convolutions of two series. The idea is simple, can be introduced in higher secondary school classes, and has the potential of providing a good background for the well-known convolution of functions.
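    The convolution (Cauchy product) of two sequences can be written out directly; a minimal sketch:

    ```python
    # c[n] = sum_{k=0}^{n} a[k] * b[n-k] — the Cauchy product of two
    # (finite) sequences, i.e. the coefficients of the product of the
    # polynomials they represent.
    def convolve(a, b):
        c = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                c[i + j] += ai * bj
        return c

    # Example: (1 + 2x + x^2)(1 + x) = 1 + 3x + 3x^2 + x^3
    print(convolve([1, 2, 1], [1, 1]))  # [1, 3, 3, 1]
    ```
    
    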

  7. Convolution in Convolution for Network in Network.

    PubMed

    Pang, Yanwei; Sun, Manli; Jiang, Xiaoheng; Li, Xuelong

    2017-03-16

    Network in network (NiN) is an effective instance and an important extension of the deep convolutional neural network, consisting of alternating convolutional layers and pooling layers. Instead of using a linear filter for convolution, NiN utilizes a shallow multilayer perceptron (MLP), a nonlinear function, to replace the linear filter. Because of the power of the MLP and of 1 × 1 convolution in the spatial domain, NiN has a stronger feature-representation ability and hence yields better recognition performance. However, the MLP itself consists of fully connected layers that give rise to a large number of parameters. In this paper, we propose to replace the dense shallow MLP with a sparse shallow MLP. One or more layers of the sparse shallow MLP are sparsely connected in the channel dimension or the channel-spatial domain. The proposed method is implemented by applying unshared convolution across the channel dimension and shared convolution across the spatial dimension in some computational layers, and is called convolution in convolution (CiC). Experimental results on the CIFAR10 data set, the augmented CIFAR10 data set, and the CIFAR100 data set demonstrate the effectiveness of the proposed CiC method.
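    The 1 × 1 convolution at the heart of NiN is equivalent to applying one and the same fully connected layer at every spatial position, which the following sketch (hypothetical shapes) verifies numerically:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A 1x1 convolution mixes channels at each pixel independently — it is
    # a per-pixel dense layer, the building block of NiN's MLP micro-network.
    C_in, C_out, H, W = 3, 5, 4, 4
    x = rng.normal(size=(C_in, H, W))
    w = rng.normal(size=(C_out, C_in))   # 1x1 kernel: one weight per channel pair

    # As a convolution: apply w at every spatial position.
    y_conv = np.einsum("oc,chw->ohw", w, x)

    # As an MLP: flatten the pixels, apply the dense layer, reshape back.
    y_mlp = (w @ x.reshape(C_in, -1)).reshape(C_out, H, W)

    assert np.allclose(y_conv, y_mlp)
    ```

    CiC's sparsification replaces the dense channel-mixing matrix above with a sparsely connected one; that detail is omitted here.
    
    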

  8. Distal Convoluted Tubule

    PubMed Central

    Ellison, David H.

    2014-01-01

    The distal convoluted tubule is the nephron segment that lies immediately downstream of the macula densa. Although short in length, the distal convoluted tubule plays a critical role in sodium, potassium, and divalent cation homeostasis. Recent genetic and physiologic studies have greatly expanded our understanding of how the distal convoluted tubule regulates these processes at the molecular level. This article provides an update on the distal convoluted tubule, highlighting concepts and pathophysiology relevant to clinical practice. PMID:24855283

  9. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to
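    The reoptimization loop the abstract describes can be sketched in one dimension. Everything below is a hypothetical stand-in: an error-function penumbra for the beam profile, a Gaussian for the detector response function, and a grid search in place of the TPS optimizer:

    ```python
    import numpy as np
    from math import erf

    x = np.linspace(-30, 30, 601)                  # off-axis position, mm
    verf = np.vectorize(erf)

    def profile(sigma):
        """Idealized field edges: error-function penumbra of width sigma."""
        return 0.5 * (verf((x + 10) / sigma) - verf((x - 10) / sigma))

    def chamber(width=3.0):
        """Detector response function: normalized Gaussian (volume averaging)."""
        k = np.exp(-0.5 * (x / width) ** 2)
        return k / k.sum()

    # "Measured" profile: the true profile blurred by the chamber volume.
    measured = np.convolve(profile(2.0), chamber(), mode="same")

    # Reoptimize: tune the model penumbra so that the *convolved* calculated
    # profile matches the measurement; this recovers the true penumbra
    # rather than a volume-averaged, broadened one.
    sigmas = np.arange(0.5, 5.01, 0.05)
    errors = [np.sum((np.convolve(profile(s), chamber(), mode="same")
                      - measured) ** 2) for s in sigmas]
    best = sigmas[int(np.argmin(errors))]          # close to the true 2.0
    ```

    Because the calculated and measured profiles are subject to the identical blurring, the fit converges on the underlying profile parameters, mirroring the paper's argument.
    
    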

  10. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems.

    PubMed

    Barraclough, Brendan; Li, Jonathan G; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-21

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to

  11. A convolution model for obtaining the response of an ionization chamber in static non standard fields

    SciTech Connect

    Gonzalez-Castano, D. M.; Gonzalez, L. Brualla; Gago-Arias, M. A.; Pardo-Montero, J.; Gomez, F.; Luna-Vega, V.; Sanchez, M.; Lobato, R.

    2012-01-15

    Purpose: This work presents an alternative methodology for obtaining correction factors for ionization chamber (IC) dosimetry of small fields and composite fields such as IMRT. The method is based on the convolution/superposition (C/S) of an IC response function (RF) with the dose distribution in a plane that includes the chamber position, and is an alternative to the full Monte Carlo (MC) approach that has been used previously by many authors for the same objective. Methods: The readout of an IC at a point inside a phantom irradiated by a certain beam can be obtained as the convolution of the dose spatial distribution caused by the beam with the two-dimensional RF of the IC. The proposed methodology has been applied successfully to predict the response of a PTW 30013 IC when measuring different nonreference fields, namely: output factors of 6 MV small fields, beam profiles of cobalt-60 narrow fields and 6 MV radiosurgery segments. The two-dimensional RF of a PTW 30013 IC was obtained by MC simulation of the absorbed dose to cavity air when the IC was scanned by a 0.6 × 0.6 mm² cross-section parallel pencil beam at low depth in a water phantom. For each of the cases studied, the results of the direct IC measurement were compared with the corresponding values obtained by the C/S method. Results: For all of the cases studied, the agreement between the direct IC measurement and the calculated IC response was excellent (better than 1.5%). Conclusions: This method could be implemented in TPSs in order to calculate dosimetric correction factors when an experimental IMRT treatment verification with an in-phantom ionization chamber is performed. The mis-response of the IC due to the nonreference conditions could be quickly corrected by this method rather than by employing MC-derived correction factors. 
This method can be considered an alternative to the plan-class associated correction factors proposed recently by an IAEA working group on nonstandard field dosimetry.

  12. General logarithmic image processing convolution.

    PubMed

    Palomares, Jose M; González, Jesús; Ros, Eduardo; Prieto, Alberto

    2006-11-01

    The logarithmic image processing (LIP) model is a robust mathematical framework which, among other benefits, behaves invariantly to illumination changes. This paper presents, for the first time, two general formulations of the 2-D convolution of separable kernels under the LIP paradigm. Although both formulations are mathematically equivalent, one of them has been designed to avoid the operations that are computationally expensive on current computers. This fast LIP convolution method therefore yields significant speedups and is better suited to real-time processing. Experimental results supporting these statements are presented in Section V.
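    The LIP operations have standard closed forms, and one classical route to a fast LIP convolution is the isomorphism that maps LIP arithmetic onto ordinary arithmetic, so the whole convolution is a single classical convolution between two transforms. This is a generic sketch of that equivalence, not necessarily the paper's exact fast formulation:

    ```python
    import numpy as np

    M = 256.0  # gray-scale range of the LIP model

    # Standard LIP operations: addition and scalar multiplication.
    def lip_add(a, b):
        return a + b - a * b / M

    def lip_scalar(lmbda, a):
        return M - M * (1 - a / M) ** lmbda

    # The isomorphism phi turns LIP sums into ordinary sums:
    # phi(a ⊕ b) = phi(a) + phi(b), phi(λ ⊗ a) = λ·phi(a).
    def phi(a):
        return -M * np.log(1 - a / M)

    def phi_inv(t):
        return M * (1 - np.exp(-t / M))

    def lip_convolve(f, k):
        """LIP convolution computed as phi_inv(classical_conv(phi(f), k))."""
        return phi_inv(np.convolve(phi(f), k, mode="same"))

    # Sanity check on one sample: 2 ⊗ a equals a ⊕ a.
    a = 100.0
    assert np.isclose(lip_scalar(2, a), lip_add(a, a))
    ```

    Convolving with a unit-impulse kernel, for instance, returns the image unchanged, exactly as in the classical case.
    
    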

  13. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of its deep space satellites and probes (e.g., Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes; finding good convolutional codes therefore requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14 convolutional code.
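    For illustration, here is a rate-1/2 encoder for the standard memory-2 code with octal generators (7, 5): far smaller than the memory-14 code found by the search, but structurally identical (the output is the modulo-2 convolution of the input with each generator):

    ```python
    # Rate-1/2 binary convolutional encoder, generators (7, 5) octal,
    # constraint length 3. Each input bit produces two output bits.
    G = (0b111, 0b101)  # generator polynomials

    def encode(bits):
        state = 0
        out = []
        for b in bits:
            reg = (b << 2) | state          # current bit + two memory bits
            for g in G:
                out.append(bin(reg & g).count("1") % 2)  # parity tap
            state = reg >> 1
        return out

    print(encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
    ```

    An exhaustive code search enumerates candidate generator pairs and keeps those maximizing the free distance; for memory 14 that ensemble is enormous, which is why algorithmic efficiency mattered.
    
    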

  14. Application of Convolution Perfectly Matched Layer in MRTD scattering model for non-spherical aerosol particles and its performance analysis

    NASA Astrophysics Data System (ADS)

    Hu, Shuai; Gao, Taichang; Li, Hao; Yang, Bo; Jiang, Zidong; Liu, Lei; Chen, Ming

    2017-10-01

    The performance of the absorbing boundary condition (ABC) is an important factor influencing the simulation accuracy of the MRTD (Multi-Resolution Time-Domain) scattering model for non-spherical aerosol particles. To address this, the Convolution Perfectly Matched Layer (CPML), an excellent ABC in the FDTD scheme, is generalized and applied to the MRTD scattering model developed by our team. In this model, the time domain is discretized by an exponential differential scheme, and the discretization of the space domain is implemented by the Galerkin principle. To evaluate the performance of CPML, its simulation results are compared with those of BPML (Berenger's Perfectly Matched Layer) and ADE-PML (Perfectly Matched Layer with Auxiliary Differential Equation) for spherical and non-spherical particles, and their simulation errors are analyzed as well. The simulation results show that, for scattering phase matrices, the performance of CPML is better than that of BPML; the computational accuracy of CPML is comparable to that of ADE-PML on the whole, but at scattering angles where phase matrix elements fluctuate sharply, the performance of CPML is slightly better than that of ADE-PML. After the orientation averaging process, the differences among the results of different ABCs are reduced to some extent. It can also be found that ABCs have a much weaker influence on integral scattering parameters (such as extinction and absorption efficiencies) than on scattering phase matrices; this can be explained by the error averaging in the numerical volume integration.

  15. Constructing Parton Convolution in Effective Field Theory

    SciTech Connect

    Chen, Jiunn-Wei; Ji, Xiangdong

    2001-10-08

    Parton convolution models have been used extensively in describing the sea quarks in the nucleon and explaining quark distributions in nuclei (the EMC effect). From the effective field theory point of view, we construct the parton convolution formalism which has been the underlying concept of all convolution models. We explain the significance of the scheme and scale dependence of auxiliary quantities such as the pion distributions in a nucleon. As an application, we calculate the complete leading nonanalytic chiral contribution to the isovector component of the nucleon sea.

  16. A time scaling approach to develop an in vitro-in vivo correlation (IVIVC) model using a convolution-based technique.

    PubMed

    Costello, Cian; Rossenu, Stefaan; Vermeulen, An; Cleton, Adriaan; Dunne, Adrian

    2011-10-01

    In vitro-in vivo correlation (IVIVC) models prove very useful during drug formulation development, the setting of dissolution specifications, and bio-waiver applications following post-approval changes. A convolution-based population approach for developing an IVIVC has recently been proposed as an alternative to traditional deconvolution-based methods, which pose some statistical concerns. Our aim in this study was to combine a time-scaling approach with a convolution-based technique to develop an IVIVC model for a drug whose in vitro and in vivo time scales differ considerably. The in vitro and in vivo data were longitudinal in nature, with considerable between-subject variation in the in vivo data. The model was successfully developed and fitted to the data using the NONMEM package. Model utility was assessed by comparing model-predicted plasma concentration-time profiles with the observed in vivo profiles. This comparison met the validation criteria for both internal and external predictability as set out by the regulatory authorities. This study demonstrates that a time-scaling approach may prove useful when attempting to develop an IVIVC for data with the aforementioned properties. It also demonstrates that the convolution-based population approach is quite versatile, and that it is capable of producing an IVIVC model despite a large difference between the in vitro and in vivo time scales.
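    The convolution step with time scaling can be sketched as follows; all rate constants and the scale factor are made-up illustration values, not the study's:

    ```python
    import numpy as np

    # Convolution-based IVIVC sketch: predicted plasma concentration is the
    # convolution of the (time-scaled) in vivo input rate with a unit
    # impulse response.
    t = np.arange(0.0, 48.0, 0.25)                 # time, h
    dt = t[1] - t[0]

    k_vitro = 2.0                                  # in vitro dissolution rate, 1/h
    scale = 8.0                                    # in vivo time runs 8x slower
    dissolved = 1.0 - np.exp(-(k_vitro / scale) * t)   # time-scaled fraction
    input_rate = np.gradient(dissolved, dt)        # in vivo input rate

    ke = 0.2                                       # elimination rate, 1/h
    uir = np.exp(-ke * t)                          # unit impulse response

    conc = np.convolve(input_rate, uir)[: t.size] * dt  # predicted profile
    ```

    Fitting would then adjust the scaling and rate parameters so that the predicted profile matches observed plasma data, which is what the NONMEM model does at the population level.
    
    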

  17. Real-time hybrid simulation of a complex bridge model with MR dampers using the convolution integral method

    NASA Astrophysics Data System (ADS)

    Jiang, Zhaoshuo; Jig Kim, Sung; Plude, Shelley; Christenson, Richard

    2013-10-01

    Magneto-rheological (MR) fluid dampers can be used to reduce traffic-induced vibration in highway bridges and protect critical structural components from fatigue. Experimental verification is needed to confirm the applicability of MR dampers for this purpose. Real-time hybrid simulation (RTHS), where the MR dampers are physically tested and dynamically linked to a numerical model of the highway bridge and truck traffic, provides an efficient and effective means to experimentally examine the efficacy of MR dampers for fatigue protection of highway bridges. In this paper a complex highway bridge model with 263 178 degrees-of-freedom under truck loading is tested using the proposed convolution integral (CI) method of RTHS for a semiactive structural control strategy employing two large-scale 200 kN MR dampers. The formulation of RTHS using the CI method is first presented, followed by details of the various components in the RTHS and a description of the implementation of the CI method for this particular test. The experimental results confirm the practicability of the CI method for conducting RTHS of complex systems.

  18. The influence of noise exposure on the parameters of a convolution model of the compound action potential.

    PubMed

    Chertoff, M E; Lichtenhan, J T; Tourtillott, B M; Esau, K S

    2008-10-01

    The influence of noise exposure on the parameters of a convolution model of the compound action potential (CAP) was examined. CAPs were recorded in normal-hearing gerbils and in gerbils exposed to a 117 dB SPL 8 kHz band of noise for various durations. The CAPs were fitted with an analytic CAP to obtain the parameters representing the number of nerve fibers (N), the probability density function [P(t)] from a population of nerve fibers, and the single-unit waveform [U(t)]. The results showed that the analytic CAP fitted the physiologic CAPs well with correlations of approximately 0.90. A subsequent analysis using hierarchical linear modeling quantified the change in the parameters as a function of both signal level and hearing threshold. The results showed that noise exposure caused some of the parameter-level functions to simply shift along the signal level axis in proportion to the amount of hearing loss, whereas others shifted along the signal level axis and steepened. Significant changes occurred in the U(t) parameters, but they were not related to hearing threshold. These results suggest that noise exposure alters the physiology underlying the CAP, some of which can be explained by a simple lack of gain, whereas others may not.
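    The convolution model of the CAP can be sketched with hypothetical component shapes (a Gaussian latency density for P(t) and a damped sinusoid for U(t); all parameter values are illustrative only):

    ```python
    import numpy as np

    # Analytic CAP: CAP(t) = N * [P(t) conv U(t)], with N the number of
    # contributing fibers, P(t) the latency probability density of the
    # population, and U(t) a single-unit waveform.
    t = np.arange(0.0, 10.0, 0.01)        # time, ms
    dt = t[1] - t[0]

    N = 1000.0                             # number of fibers
    mu, sd = 2.0, 0.4                      # latency density parameters, ms
    P = np.exp(-0.5 * ((t - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    f, tau = 1.0, 0.5                      # unit waveform: kHz, decay in ms
    U = np.sin(2 * np.pi * f * t) * np.exp(-t / tau)

    cap = N * np.convolve(P, U)[: t.size] * dt
    ```

    Fitting such an analytic CAP to recorded waveforms is how the study extracts N, P(t), and U(t), and then tracks how noise exposure shifts each of them.
    
    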

  19. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  20. Radio Model-free Noise Reduction of Radio Transmissions with Convolutional Autoencoders

    DTIC Science & Technology

    2016-09-01

    arbitrary forms of signal contamination via unsupervised machine learning. In real-world situations, electromagnetic transmissions are affected by the...detrimental effects on the signal. We identify a machine learning method for removing noise contamination of signals without advance knowledge of the...signal model, noise model, or exact propagation model. We accomplish this by using unsupervised machine learning implemented with regularized

  1. Conductivity depth imaging of Airborne Electromagnetic data with double pulse transmitting current based on model fusion

    NASA Astrophysics Data System (ADS)

    Li, Jing; Dou, Mei; Lu, Yiming; Peng, Cong; Yu, Zining; Zhu, Kaiguang

    2017-01-01

    Airborne electromagnetic (AEM) systems have traditionally been used in mineral exploration. Typically the system transmits a single-pulse waveform to detect conductive anomalies, and conductivity-depth imaging (CDI) of the data is applied to identify conductive targets. Here, a CDI algorithm with a double-pulse transmitting current, based on model fusion, is developed. The double pulse is made up of a half-sine pulse of high power and a trapezoid pulse of low power, and the resulting CDI recovers more shallow information than traditional CDI with a single pulse. The electromagnetic response to the double-pulse transmitting current is calculated by linear convolution based on forward modeling. The CDI results for the half-sine and trapezoid pulses are obtained by a look-up table method, and the two results are fused to form a double-pulse conductivity-depth image, making it possible to obtain accurate conductivity and depth. Tests on synthetic data demonstrate that the CDI algorithm with a double-pulse transmitting current based on model fusion maps a wider range of conductivities and reflects the overall geological conductivity changes better than CDI with a single-pulse transmitting current.

  2. NONSTATIONARY SPATIAL MODELING OF ENVIRONMENTAL DATA USING A PROCESS CONVOLUTION APPROACH

    EPA Science Inventory

    Traditional approaches to modeling spatial processes involve the specification of the covariance structure of the field. Although such methods are straightforward to understand and effective in some situations, there are often problems in incorporating non-stationarity and in ma...
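    The process-convolution construction referred to above can be sketched in one dimension: white noise at a grid of knots is smoothed by a kernel, and letting the kernel width vary with location yields nonstationarity without specifying a covariance function directly (all scales here are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    s = np.linspace(0.0, 1.0, 200)                  # field locations
    u = np.linspace(0.0, 1.0, 50)                   # latent-process knots
    x = rng.normal(size=u.size)                     # white-noise latent process

    width = 0.02 + 0.15 * s                         # kernel widens with location
    K = np.exp(-0.5 * ((s[:, None] - u[None, :]) / width[:, None]) ** 2)
    K /= K.sum(axis=1, keepdims=True)               # row-normalized smoother
    z = K @ x                                       # the convolved field

    # The field is rougher where the kernel is narrow (left half) than
    # where it is wide (right half).
    rough_left = np.mean(np.diff(z[:100]) ** 2)
    rough_right = np.mean(np.diff(z[100:]) ** 2)
    ```

    The spatially varying kernel is the mechanism by which the process-convolution approach sidesteps the stationarity assumption of traditional covariance-based models.
    
    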


  4. Hypertrophy in the Distal Convoluted Tubule of an 11β-Hydroxysteroid Dehydrogenase Type 2 Knockout Model.

    PubMed

    Hunter, Robert W; Ivy, Jessica R; Flatman, Peter W; Kenyon, Christopher J; Craigie, Eilidh; Mullins, Linda J; Bailey, Matthew A; Mullins, John J

    2015-07-01

    Na+ transport in the renal distal convoluted tubule (DCT) by the thiazide-sensitive NaCl cotransporter (NCC) is a major determinant of total body Na+ and BP. NCC-mediated transport is stimulated by aldosterone, the dominant regulator of chronic Na+ homeostasis, but the mechanism is controversial. Transport may also be affected by epithelial remodeling, which occurs in the DCT in response to chronic perturbations in electrolyte homeostasis. Hsd11b2−/− mice, which lack the enzyme 11β-hydroxysteroid dehydrogenase type 2 (11βHSD2) and thus exhibit the syndrome of apparent mineralocorticoid excess, provided an ideal model in which to investigate the potential for DCT hypertrophy to contribute to Na+ retention in a hypertensive condition. The DCTs of Hsd11b2−/− mice exhibited hypertrophy and hyperplasia and the kidneys expressed higher levels of total and phosphorylated NCC compared with those of wild-type mice. However, the striking structural and molecular phenotypes were not associated with an increase in the natriuretic effect of thiazide. In wild-type mice, Hsd11b2 mRNA was detected in some tubule segments expressing Slc12a3, but 11βHSD2 and NCC did not colocalize at the protein level. Thus, the phosphorylation status of NCC may not necessarily equate to its activity in vivo, and the structural remodeling of the DCT in the knockout mouse may not be a direct consequence of aberrant corticosteroid signaling in DCT cells. These observations suggest that the conventional concept of mineralocorticoid signaling in the DCT should be revised to recognize the complexity of NCC regulation by corticosteroids.

  5. Hypertrophy in the Distal Convoluted Tubule of an 11β-Hydroxysteroid Dehydrogenase Type 2 Knockout Model

    PubMed Central

    Ivy, Jessica R.; Flatman, Peter W.; Kenyon, Christopher J.; Craigie, Eilidh; Mullins, Linda J.; Bailey, Matthew A.; Mullins, John J.

    2015-01-01

    Na+ transport in the renal distal convoluted tubule (DCT) by the thiazide-sensitive NaCl cotransporter (NCC) is a major determinant of total body Na+ and BP. NCC-mediated transport is stimulated by aldosterone, the dominant regulator of chronic Na+ homeostasis, but the mechanism is controversial. Transport may also be affected by epithelial remodeling, which occurs in the DCT in response to chronic perturbations in electrolyte homeostasis. Hsd11b2−/− mice, which lack the enzyme 11β-hydroxysteroid dehydrogenase type 2 (11βHSD2) and thus exhibit the syndrome of apparent mineralocorticoid excess, provided an ideal model in which to investigate the potential for DCT hypertrophy to contribute to Na+ retention in a hypertensive condition. The DCTs of Hsd11b2−/− mice exhibited hypertrophy and hyperplasia and the kidneys expressed higher levels of total and phosphorylated NCC compared with those of wild-type mice. However, the striking structural and molecular phenotypes were not associated with an increase in the natriuretic effect of thiazide. In wild-type mice, Hsd11b2 mRNA was detected in some tubule segments expressing Slc12a3, but 11βHSD2 and NCC did not colocalize at the protein level. Thus, the phosphorylation status of NCC may not necessarily equate to its activity in vivo, and the structural remodeling of the DCT in the knockout mouse may not be a direct consequence of aberrant corticosteroid signaling in DCT cells. These observations suggest that the conventional concept of mineralocorticoid signaling in the DCT should be revised to recognize the complexity of NCC regulation by corticosteroids. PMID:25349206

  6. Blind Source Deconvolution Based on Frequency Domain Convolution Model Under Highly Reverberant Environments

    NASA Astrophysics Data System (ADS)

    Koya, Takeshi; Ishibashi, Takaaki; Shiratsuchi, Hiroshi; Gotanda, Hiromu

    To solve the blind source deconvolution problem, a convolved mixing process in the time domain is often transformed into an instantaneous mixing model in the frequency domain. However, this model is only an approximation and thus does not work effectively under highly reverberant environments. By dividing the impulse response properly, Servière has precisely transformed the time-domain convolved mixture into a frequency-domain convolved mixture and has proposed a new FDICA approach available under high reverberation. In that approach, however, the permutation and scaling problems remain unresolved, as does the distortion due to whitening. In the present paper, an improved approach without these problems is proposed and is confirmed to be valid in a highly reverberant real environment.

  7. On models of double porosity poroelastic media

    NASA Astrophysics Data System (ADS)

    Boutin, Claude; Royer, Pascale

    2015-12-01

    This paper focuses on the modelling of fluid-filled poroelastic double porosity media under quasi-static and dynamic regimes. The double porosity model is derived from a two-scale homogenization procedure, by considering a medium locally characterized by blocks of poroelastic Biot microporous matrix and a surrounding system of fluid-filled macropores or fractures. The derived double porosity description is a two-pressure field poroelastic model with memory and viscoelastic effects. These effects result from the `time-dependent' interaction between the pressure fields in the two pore networks. It is shown that this homogenized double porosity behaviour arises when the characteristic time of consolidation in the microporous domain is of the same order of magnitude as the macroscopic characteristic time of transient regime. Conversely, single porosity behaviours occur when both timescales are clearly distinct. Moreover, it is established that the phenomenological approaches that postulate the coexistence of two pressure fields in `instantaneous' interaction only describe media with two pore networks separated by an interface flow barrier. Hence, they fail at predicting and reproducing the behaviour of usual double porosity media. Finally, the results are illustrated for the case of stratified media.

  8. SU-E-T-328: The Volume Effect Correction of Probe-Type Dosimetric Detectors Derived From the Convolution Model

    SciTech Connect

    Looe, HK; Poppe, B; Harder, D

    2014-06-01

    Purpose: To derive and introduce a new correction factor kV, the "volume effect correction factor", that accounts not only for the dose averaging over the detector's sensitive volume but also for secondary electron generation and transport, including the disturbance of the field of secondary electrons within the detector. Materials and Methods: Mathematical convolutions and Fourier's convolution theorem were used. Monte Carlo simulations of photon pencil beams were performed using EGSnrc. Detector constructions were adapted from manufacturers' information. Results: For the calculation of kV, three basic convolution kernels have to be taken into account: the dose deposition kernel KD(x) (fluence to dose), the photon fluence response kernel KM(x) (photon fluence to detector signal), and the "dose response kernel" K(x) (dose to detector signal). K(x) is calculated from FT[K(x)] = [1/√(2π)] FT[KM(x)]/FT[KD(x)], whereby kV can be calculated for arbitrary photon beam profiles and the area-normalized K(x). Conclusions: In order to take into account the dimensions of dosimetric detectors in narrow photon beams, the "volume effect correction factor" kV has been introduced into the fundamental equation of probe-type dosimetry, and the convolution method has proven suitable for deriving its numerical values. For narrow photon beams, whose width is comparable to the secondary electron ranges, kV can reach very high values, but it can be shown that the signals of small diamond detectors represent well the absorbed dose to water averaged over the detector volume.
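    The kernel relation can be checked numerically with Gaussian stand-ins for the kernels (plain DFT convention, ignoring the constant factor of the Fourier convention; the widths are illustrative only):

    ```python
    import numpy as np

    # K_D conv K = K_M implies FT[K] = FT[K_M] / FT[K_D], so the dose
    # response kernel K is recoverable by Fourier division.
    n = 128
    x = np.arange(n) - n // 2

    def gauss(sig):
        g = np.exp(-0.5 * (x / sig) ** 2)
        return g / g.sum()                      # area-normalized kernel

    KD = gauss(2.0)                             # dose deposition kernel
    KM = gauss(3.0)                             # photon fluence response kernel

    # Fourier division, shifted back so the kernel is centered.
    K = np.fft.fftshift(np.real(np.fft.ifft(np.fft.fft(KM) / np.fft.fft(KD))))

    # For Gaussians the result is again Gaussian, sigma = sqrt(3^2 - 2^2).
    expected = gauss(np.sqrt(5.0))
    ```

    The division is numerically safe here because KM is broader than KD; in general the denominator spectrum must not vanish where the numerator is non-negligible.
    
    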

  9. Efficient convolutional sparse coding

    DOEpatents

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M.sup.3N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
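    The core of the speed-up described above is that the main linear system diagonalizes under the DFT. The sketch below shows only the single-filter case with synthetic data (the patented algorithm handles M dictionary filters with a more involved frequency-domain solver): the ADMM x-update is computed entirely in the Fourier domain and verified against the normal equations.

```python
import numpy as np

# Frequency-domain linear solve at the heart of FFT-based convolutional
# sparse coding (single-filter case for clarity).  We solve
#   argmin_x 0.5*||d (*) x - s||^2 + 0.5*rho*||x - z||^2,
# where (*) is circular convolution; the normal equations diagonalize
# under the DFT.  Signal, filter, and z below are synthetic assumptions.
rng = np.random.default_rng(0)
N = 256
d = np.zeros(N); d[:8] = rng.standard_normal(8)   # short filter, zero-padded
s = rng.standard_normal(N)                        # signal to be coded
z = rng.standard_normal(N)                        # ADMM auxiliary variable
rho = 1.0

Dhat = np.fft.fft(d)
xhat = (np.conj(Dhat) * np.fft.fft(s) + rho * np.fft.fft(z)) \
       / (np.abs(Dhat) ** 2 + rho)
x = np.real(np.fft.ifft(xhat))

# Verify the normal equations (D^T D + rho I) x = D^T s + rho z,
# with D acting by circular convolution with d.
conv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
corr = lambda a, b: np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))
lhs = corr(d, conv(d, x)) + rho * x
rhs = corr(d, s) + rho * z
```

Each FFT costs O(N log N), whereas forming and solving the same system in the spatial domain would cost O(N^3), which is the source of the complexity reduction quoted above.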

  10. Fast optical signals in the sensorimotor cortex: General Linear Convolution Model applied to multiple source-detector distance-based data.

    PubMed

    Chiarelli, Antonio Maria; Romani, Gian Luca; Merla, Arcangelo

    2014-01-15

    In this study, we applied the General Linear Convolution Model to detect fast optical signals (FOS) in the somatosensory cortex, and to study their dependence on the source-detector separation distance (2.0 to 3.5 cm) and the irradiated light wavelength (690 and 830 nm). We modeled the impulse response function as a rectangular function lasting 30 ms, with a variable time delay with respect to the stimulus onset. The model was tested in a cohort of 20 healthy volunteers who underwent supra-motor threshold electrical stimulation of the median nerve. The fitted impulse response function placed the maximal response at 70 ms to 110 ms after stimulus onset, in agreement with classical somatosensory-evoked potentials in the literature and with previous optical imaging studies based on grand-average processing. Phase signals at the longer wavelength identified FOS at all source-detector separation distances except the shortest one. Intensity signals detected FOS only at the greatest distance, i.e., for the largest channel depth. No activation was detected with the shorter-wavelength light. Correlational analysis between the phase and intensity of FOS further indicated diffusive rather than optical absorption changes associated with neuronal activity in the activated cortical volume. Our study demonstrates the reliability of our method based on the General Linear Convolution Model for the detection of fast cortical activation through FOS. © 2013 Elsevier Inc. All rights reserved.
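    A minimal sketch of this kind of model, with invented sampling rate, onsets, and simulated data rather than the study's recordings: stimulus onset vectors are convolved with a 30 ms boxcar impulse response at a grid of candidate delays (spaced here by the boxcar width so the regressors stay orthogonal), and the resulting design matrix is fitted by ordinary least squares.

```python
import numpy as np

# Sketch of a General Linear Convolution Model: a boxcar impulse response
# convolved with stimulus onsets at several candidate delays, fitted by
# OLS.  All numbers below are illustrative assumptions, not study data.
fs = 1000                             # 1 kHz sampling (1 ms resolution)
T = 10 * fs                           # 10 s recording
onsets = np.arange(500, T, 1000)      # one stimulus per second

delays_ms = range(0, 210, 30)         # candidate delays after onset
columns = []
for delay in delays_ms:
    stick = np.zeros(T)
    stick[onsets + delay] = 1.0       # delayed onset vector
    boxcar = np.ones(30)              # 30 ms rectangular impulse response
    columns.append(np.convolve(stick, boxcar)[:T])
X = np.column_stack(columns + [np.ones(T)])   # plus a constant regressor

# Simulate a response ~90 ms after onset and recover it by OLS.
rng = np.random.default_rng(1)
y = 2.0 * X[:, 3] + 0.1 * rng.standard_normal(T)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
best_delay = delays_ms[int(np.argmax(beta[:-1]))]
```

The delay whose regressor receives the largest coefficient plays the role of the response latency estimated in the study.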

  11. A double pendulum model of tennis strokes

    NASA Astrophysics Data System (ADS)

    Cross, Rod

    2011-05-01

    The physics of swinging a tennis racquet is examined by modeling the forearm and the racquet as a double pendulum. We consider differences between a forehand and a serve, and show how they differ from the swing of a bat and a golf club. It is also shown that the swing speed of a racquet, like that of a bat or a club, depends primarily on its moment of inertia rather than on its mass.
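    The moment-of-inertia point can be made with a one-line energy argument: a constant couple tau acting through a swing angle theta does work tau*theta, so the final angular speed is sqrt(2*tau*theta/I) independent of mass. The numbers below are invented for illustration, not taken from the paper.

```python
import math

# Toy illustration: swing speed is governed by moment of inertia, not mass.
# Work-energy theorem for a constant torque tau over swing angle theta:
#   (1/2) I w^2 = tau * theta   =>   w = sqrt(2 * tau * theta / I)
tau = 30.0                 # N*m, constant couple applied by the arm (invented)
theta = math.pi / 2        # rad, a quarter-turn swing

def swing_speed(I):
    """Angular speed at the end of the swing for moment of inertia I."""
    return math.sqrt(2.0 * tau * theta / I)

# Two implements of identical mass but different mass distribution:
I_racquet_like = 0.05      # kg*m^2, mass concentrated near the hand
I_bat_like = 0.20          # kg*m^2, mass farther from the swing axis

w1 = swing_speed(I_racquet_like)
w2 = swing_speed(I_bat_like)
# Same mass, four times the moment of inertia -> half the angular speed.
```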

  12. Influence of convolution filtering on coronary plaque attenuation values: observations in an ex vivo model of multislice computed tomography coronary angiography.

    PubMed

    Cademartiri, Filippo; La Grutta, Ludovico; Runza, Giuseppe; Palumbo, Alessandro; Maffei, Erica; Mollet, Nico R A; Bartolotta, Tommaso Vincenzo; Somers, Pamela; Knaapen, Michiel; Verheye, Stefan; Midiri, Massimo; Hamers, Ronald; Bruining, Nico

    2007-07-01

    Attenuation variability (measured in Hounsfield Units, HU) of human coronary plaques using multislice computed tomography (MSCT) was evaluated in an ex vivo model with increasing convolution kernels. MSCT was performed in seven ex vivo left coronary arteries sunk into oil following the instillation of saline (1/infinity) and a 1/50 solution of contrast material (400 mgI/ml iomeprol). Scan parameters were: slices/collimation, 16/0.75 mm; rotation time, 375 ms. Four convolution kernels were used: b30f-smooth, b36f-medium smooth, b46f-medium and b60f-sharp. An experienced radiologist scored for the presence of plaques and measured the attenuation in lumen, calcified and noncalcified plaques and the surrounding oil. The results were compared by the ANOVA test and correlated with Pearson's test. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. The mean attenuation values were significantly different between the four filters (p < 0.0001) in each structure with both solutions. After clustering for the filter, all of the noncalcified plaque values (20.8 +/- 39.1, 14.2 +/- 35.8, 14.0 +/- 32.0, 3.2 +/- 32.4 HU with saline; 74.7 +/- 66.6, 68.2 +/- 63.3, 66.3 +/- 66.5, 48.5 +/- 60.0 HU in contrast solution) were significantly different, with the exception of the pair b36f-b46f, for which a moderate-high correlation was generally found. Improved SNRs and CNRs were achieved by b30f and b46f. The use of different convolution filters significantly modified the attenuation values, while sharper filtering increased the calcified plaque attenuation and reduced the noncalcified plaque attenuation.

  13. Interpolation by two-dimensional cubic convolution

    NASA Astrophysics Data System (ADS)

    Shi, Jiazheng; Reichenbach, Stephen E.

    2003-08-01

    This paper presents results of image interpolation with an improved method for two-dimensional cubic convolution. Convolution with a piecewise cubic is one of the most popular methods for image reconstruction, but the traditional approach uses a separable two-dimensional convolution kernel that is based on a one-dimensional derivation. The traditional, separable method is sub-optimal for the usual case of non-separable images. The improved method in this paper implements the most general non-separable, two-dimensional, piecewise-cubic interpolator with constraints for symmetry, continuity, and smoothness. The improved method of two-dimensional cubic convolution has three parameters that can be tuned to yield maximal fidelity for specific scene ensembles characterized by autocorrelation or power-spectrum. This paper illustrates examples for several scene models (a circular disk of parametric size, a square pulse with parametric rotation, and a Markov random field with parametric spatial detail) and actual images -- presenting the optimal parameters and the resulting fidelity for each model. In these examples, improved two-dimensional cubic convolution is superior to several other popular small-kernel interpolation methods.
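    For reference, the separable baseline that the paper improves on builds on the classic one-dimensional cubic convolution kernel (Keys, a = -0.5). The sketch below implements that 1-D interpolator; the paper's contribution, the non-separable 2-D kernel with tunable parameters, is not captured by this simple version.

```python
import numpy as np

# One-dimensional cubic convolution interpolation with the Keys kernel
# (a = -0.5).  This is the separable building block, shown as a baseline;
# it is not the paper's non-separable 2-D interpolator.
def keys_kernel(t, a=-0.5):
    t = np.abs(t)
    out = np.zeros_like(t)
    m1 = t <= 1
    m2 = (t > 1) & (t < 2)
    out[m1] = (a + 2) * t[m1] ** 3 - (a + 3) * t[m1] ** 2 + 1
    out[m2] = a * t[m2] ** 3 - 5 * a * t[m2] ** 2 + 8 * a * t[m2] - 4 * a
    return out

def interpolate(samples, x):
    """Interpolate uniformly spaced samples at fractional position x."""
    i = int(np.floor(x))
    nodes = np.arange(i - 1, i + 3)              # 4-sample support
    idx = np.clip(nodes, 0, len(samples) - 1)    # replicate edge samples
    w = keys_kernel(x - nodes.astype(float))
    return float(np.dot(samples[idx], w))
```

At integer positions the kernel weights reduce to (0, 1, 0, 0), so the original samples are reproduced exactly, and linear ramps are interpolated without error.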

  14. Correction for scatter and septal penetration using convolution subtraction methods and model-based compensation in 123I brain SPECT imaging-a Monte Carlo study.

    PubMed

    Larsson, Anne; Ljungberg, Michael; Mo, Susanna Jakobson; Riklund, Katrine; Johansson, Lennart

    2006-11-21

    Scatter and septal penetration deteriorate contrast and quantitative accuracy in single photon emission computed tomography (SPECT). In this study four different correction techniques for scatter and septal penetration are evaluated for 123I brain SPECT. One of the methods is a form of model-based compensation which uses the effective source scatter estimation (ESSE) for modelling scatter, and collimator-detector response (CDR) including both geometric and penetration components. The other methods, which operate on the 2D projection images, are convolution scatter subtraction (CSS) and two versions of transmission dependent convolution subtraction (TDCS), one of them proposed by us. This method uses CSS for correction for septal penetration, with a separate kernel, and TDCS for scatter correction. The corrections are evaluated for a dopamine transporter (DAT) study and a study of the regional cerebral blood flow (rCBF), performed with 123I. The images are produced using a recently developed Monte Carlo collimator routine added to the program SIMIND which can include interactions in the collimator. The results show that the method included in the iterative reconstruction is preferable to the other methods and that the new TDCS version gives better results compared with the other 2D methods.
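    In schematic form, convolution scatter subtraction estimates the scatter component by convolving the measured projection with a broad kernel and scaling by a scatter fraction, then subtracting the estimate. The 1-D sketch below uses an invented Gaussian kernel width and scatter fraction; the paper's methods fit such parameters (and add a transmission dependence, for TDCS) rather than assuming them.

```python
import numpy as np

# Schematic 1-D convolution scatter subtraction (CSS).  Kernel width and
# scatter fraction are illustrative assumptions, not fitted values.
def css_correct(projection, sigma=15.0, k=0.35):
    x = np.arange(-60, 61, dtype=float)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                      # unit-area scatter kernel
    scatter = k * np.convolve(projection, kernel, mode="same")
    return projection - scatter, scatter

proj = np.zeros(200)
proj[90:110] = 100.0                            # idealized primary profile
corrected, scatter = css_correct(proj)
```

Because the kernel has unit area, the subtracted counts equal the scatter fraction times the measured counts, which is the intended behaviour of the correction.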

  16. Molecular graph convolutions: moving beyond fingerprints

    PubMed Central

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-01-01

    Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503

  17. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
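    The message-passing step at the heart of a graph convolution can be written in a few lines of dense linear algebra. The sketch below uses random weights and features on a three-atom chain, not a trained model, and a deliberately simplified update compared with the paper's architecture: each atom is updated from its own features plus a transform of its bonded neighbours' features.

```python
import numpy as np

# Minimal dense graph-convolution step on a three-atom chain.  The
# adjacency matrix encodes bonds; weights and features are random
# placeholders, not a trained model.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # bond adjacency
H = rng.standard_normal((3, 4))          # per-atom input features
W_self = rng.standard_normal((4, 5))     # transform of the atom itself
W_neigh = rng.standard_normal((4, 5))    # transform of summed neighbours

H_next = np.maximum(0.0, H @ W_self + A @ H @ W_neigh)  # ReLU update
```

Because the update uses only the adjacency structure, renumbering the atoms simply permutes the output rows, which is the invariance that makes graph convolutions attractive for molecules.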

  19. Double multiple streamtube model with recent improvements

    SciTech Connect

    Paraschivoiu, I.; Delclaux, F.

    1983-05-01

    The objective of the present paper is to show the new capabilities of the double multiple streamtube (DMS) model for predicting the aerodynamic loads and performance of the Darrieus vertical-axis turbine. The original DMS model has been improved (DMSV model) by considering the variation in the upwind and downwind induced velocities as a function of the azimuthal angle for each streamtube. A comparison is made of the rotor performance for several blade geometries (parabola, catenary, troposkien, and Sandia shape). A new formulation is given for an approximate troposkien shape by considering the effect of the gravitational field. The effects of three NACA symmetrical profiles, 0012, 0015 and 0018, on the aerodynamic performance of the turbine are shown. Finally, a semiempirical dynamic-stall model has been incorporated and a better approximation obtained for modeling the local aerodynamic forces and performance for a Darrieus rotor.

  1. Understanding deep convolutional networks

    PubMed Central

    Mallat, Stéphane

    2016-01-01

    Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries and sparse separations. Applications are discussed. PMID:26953183

  2. Convolution-deconvolution in DIGES

    SciTech Connect

    Philippacopoulos, A.J.; Simos, N.

    1995-05-01

    Convolution and deconvolution operations are an important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention, since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities.

  3. Influence of convolution filtering on coronary plaque attenuation values: observations in an ex vivo model of multislice computed tomography coronary angiography

    PubMed Central

    La Grutta, Ludovico; Runza, Giuseppe; Palumbo, Alessandro; Maffei, Erica; Mollet, Nico RA; Bartolotta, Tommaso Vincenzo; Somers, Pamela; Knaapen, Michiel; Verheye, Stefan; Midiri, Massimo; Hamers, Ronald; Bruining, Nico

    2007-01-01

    Attenuation variability (measured in Hounsfield Units, HU) of human coronary plaques using multislice computed tomography (MSCT) was evaluated in an ex vivo model with increasing convolution kernels. MSCT was performed in seven ex vivo left coronary arteries sunk into oil following the instillation of saline (1/∞) and a 1/50 solution of contrast material (400 mgI/ml iomeprol). Scan parameters were: slices/collimation, 16/0.75 mm; rotation time, 375 ms. Four convolution kernels were used: b30f-smooth, b36f-medium smooth, b46f-medium and b60f-sharp. An experienced radiologist scored for the presence of plaques and measured the attenuation in lumen, calcified and noncalcified plaques and the surrounding oil. The results were compared by the ANOVA test and correlated with Pearson’s test. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. The mean attenuation values were significantly different between the four filters (p < 0.0001) in each structure with both solutions. After clustering for the filter, all of the noncalcified plaque values (20.8 ± 39.1, 14.2 ± 35.8, 14.0 ± 32.0, 3.2 ± 32.4 HU with saline; 74.7 ± 66.6, 68.2 ± 63.3, 66.3 ± 66.5, 48.5 ± 60.0 HU in contrast solution) were significantly different, with the exception of the pair b36f–b46f, for which a moderate-high correlation was generally found. Improved SNRs and CNRs were achieved by b30f and b46f. The use of different convolution filters significantly modified the attenuation values, while sharper filtering increased the calcified plaque attenuation and reduced the noncalcified plaque attenuation. PMID:17245583

  4. Modeling interconnect corners under double patterning misalignment

    NASA Astrophysics Data System (ADS)

    Hyun, Daijoon; Shin, Youngsoo

    2016-03-01

    Interconnect corners should accurately reflect the effect of misalignment in the LELE double patterning process. Misalignment is usually considered separately from interconnect structure variations; this incurs too much pessimism and fails to reflect the large increase in total capacitance for asymmetric interconnect structures. We model interconnect corners by taking account of misalignment in conjunction with interconnect structure variations; we also characterize the misalignment effect more accurately by handling the metal pitch on each side of a target metal independently.

  5. Double diffusivity model under stochastic forcing

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Amit K.; Aifantis, Elias C.

    2017-05-01

    The "double diffusivity" model was proposed in the late 1970s, and reworked in the early 1980s, as a continuum counterpart to existing discrete models of diffusion corresponding to high diffusivity paths, such as grain boundaries and dislocation lines. It was later rejuvenated in the 1990s to interpret experimental results on diffusion in polycrystalline and nanocrystalline specimens where grain boundaries and triple grain boundary junctions act as high diffusivity paths. Technically, the model pans out as a system of coupled Fick-type diffusion equations to represent "regular" and "high" diffusivity paths with "source terms" accounting for the mass exchange between the two paths. The model remit was extended by analogy to describe flow in porous media with double porosity, as well as to model heat conduction in media with two nonequilibrium local temperature baths, e.g., ion and electron baths. Uncoupling of the two partial differential equations leads to a higher-ordered diffusion equation, solutions of which could be obtained in terms of classical diffusion equation solutions. Similar equations could also be derived within an "internal length" gradient (ILG) mechanics formulation applied to diffusion problems, i.e., by introducing nonlocal effects, together with inertia and viscosity, in a mechanics based formulation of diffusion theory. While being remarkably successful in studies related to various aspects of transport in inhomogeneous media with deterministic microstructures and nanostructures, its implications in the presence of stochasticity have not yet been considered. This issue becomes particularly important in the case of diffusion in nanopolycrystals whose deterministic ILG-based theoretical calculations predict a relaxation time that is only about one-tenth of the actual experimentally verified time scale. This article provides the "missing link" in this estimation by adding a vital element in the ILG structure, that of stochasticity, that takes into

  6. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
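    As a concrete instance of the fundamentals discussed above, here is the textbook rate-1/2 convolutional encoder with constraint length 3 and generator polynomials (7, 5) in octal; it is a generic illustration, not a code taken from the report itself.

```python
# Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5)
# octal -- the standard textbook example, shown as a generic illustration.
def conv_encode(bits, g1=0b111, g2=0b101):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # shift the new bit in
        out.append(bin(state & g1).count("1") % 2)  # parity against g1
        out.append(bin(state & g2).count("1") % 2)  # parity against g2
    return out

encoded = conv_encode([1, 0, 1, 1])
```

Each input bit produces two output bits, giving the rate-1/2 redundancy that a Viterbi decoder later exploits.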

  7. Evaluating the double Poisson generalized linear model.

    PubMed

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data.
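    The normalizing-constant issue can be made concrete. The sketch below evaluates Efron's unnormalized double Poisson pmf in log space and approximates the constant by brute-force truncated summation; this is a simple stand-in for, not a reproduction of, the approximation method proposed in the paper.

```python
import math

# Unnormalized double Poisson pmf (Efron, 1986), evaluated in log space
# for numerical stability, plus a brute-force normalizing constant.
def dp_log_unnorm(y, mu, theta):
    if y == 0:
        log_base = 0.0
    else:
        # log of  e^{-y} y^y / y!  *  (e*mu/y)^{theta*y}
        log_base = (-y + y * math.log(y) - math.lgamma(y + 1)
                    + theta * y * (1.0 + math.log(mu) - math.log(y)))
    return 0.5 * math.log(theta) - theta * mu + log_base

def dp_norm_const(mu, theta, y_max=400):
    """Approximate c(mu, theta) by truncated summation of the pmf."""
    total = sum(math.exp(dp_log_unnorm(y, mu, theta)) for y in range(y_max + 1))
    return 1.0 / total
```

For theta = 1 the double Poisson reduces to the ordinary Poisson, so the constant is exactly 1; theta < 1 (over-dispersion) gives a constant slightly below 1.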

  8. Two-dimensional cubic convolution.

    PubMed

    Reichenbach, Stephen E; Geng, Frank

    2003-01-01

    The paper develops two-dimensional (2D), nonseparable, piecewise cubic convolution (PCC) for image interpolation. Traditionally, PCC has been implemented based on a one-dimensional (1D) derivation with a separable generalization to two dimensions. However, typical scenes and imaging systems are not separable, so the traditional approach is suboptimal. We develop a closed-form derivation for a two-parameter, 2D PCC kernel with support [-2,2] x [-2,2] that is constrained for continuity, smoothness, symmetry, and flat-field response. Our analyses, using several image models, including Markov random fields, demonstrate that the 2D PCC yields small improvements in interpolation fidelity over the traditional, separable approach. The constraints on the derivation can be relaxed to provide greater flexibility and performance.

  9. Towards dropout training for convolutional neural networks.

    PubMed

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
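    The multinomial view of max-pooling dropout is easy to verify directly. In the sketch below (arbitrary activations and retain probability, not the paper's networks), the i-th largest unit in a pooling region wins exactly when all larger units are dropped and it survives; probabilistic weighted pooling is the expectation of that multinomial, and a Monte Carlo run of explicit dropout-then-max agrees with it.

```python
import numpy as np

# Max-pooling dropout in one pooling region: each unit is retained with
# probability p and the max of the survivors is taken (0 if all drop).
# Region values and p are arbitrary illustrative numbers.
rng = np.random.default_rng(0)
region = np.array([0.2, 0.5, 0.9, 1.4])   # activations, sorted ascending
p, q = 0.7, 0.3                            # retain / drop probabilities
n = len(region)

# Exact multinomial: unit i (ascending order) is the output iff all
# larger units are dropped and unit i itself survives.
probs = np.array([p * q ** (n - 1 - i) for i in range(n)])
expected = float(np.dot(probs, region))    # probabilistic weighted pooling

# Monte Carlo check via explicit dropout followed by max-pooling.
samples = []
for _ in range(20000):
    mask = rng.random(n) < p
    samples.append(region[mask].max() if mask.any() else 0.0)
mc = float(np.mean(samples))
```

The residual probability mass q**n corresponds to the all-dropped case, whose pooled output is 0.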

  10. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  12. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  13. Dealiased convolutions for pseudospectral simulations

    NASA Astrophysics Data System (ADS)

    Roberts, Malcolm; Bowman, John C.

    2011-12-01

    Efficient algorithms have recently been developed for calculating dealiased linear convolution sums without the expense of conventional zero-padding or phase-shift techniques. For one-dimensional in-place convolutions, the memory requirements are identical with the zero-padding technique, with the important distinction that the additional work memory need not be contiguous with the input data. This decoupling of data and work arrays dramatically reduces the memory and computation time required to evaluate higher-dimensional in-place convolutions. The memory savings is achieved by computing the in-place Fourier transform of the data in blocks, rather than all at once. The technique also allows one to dealias the n-ary convolutions that arise on Fourier transforming cubic and higher powers. Implicitly dealiased convolutions can be built on top of state-of-the-art adaptive fast Fourier transform libraries like FFTW. Vectorized multidimensional implementations for the complex and centered Hermitian (pseudospectral) cases have already been implemented in the open-source software FFTW++. With the advent of this library, writing a high-performance dealiased pseudospectral code for solving nonlinear partial differential equations has now become a relatively straightforward exercise. New theoretical estimates of computational complexity and memory use are provided, including corrected timing results for 3D pruned convolutions and further consideration of higher-order convolutions.
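    For contrast with the implicit method, the conventional explicitly zero-padded dealiased convolution takes only a few lines: padding two length-N inputs to at least 2N-1 before the FFTs removes the wrap-around (aliasing) of circular convolution. This baseline is what the memory savings described above are measured against.

```python
import numpy as np

# Conventional explicit zero-padding: to compute the linear convolution
# of two length-N sequences by FFT without aliasing, pad to >= 2N-1.
def dealiased_convolution(f, g):
    n = len(f) + len(g) - 1
    m = 1 << (n - 1).bit_length()          # next power of two >= 2N-1
    F = np.fft.fft(f, m)
    G = np.fft.fft(g, m)
    return np.real(np.fft.ifft(F * G))[:n]

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)
fast = dealiased_convolution(f, g)
direct = np.convolve(f, g)                 # O(N^2) reference
```

The implicit approach of the paper reaches the same result while avoiding a single contiguous doubled-size work array, which is where its memory advantage in higher dimensions comes from.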

  14. Determinate-state convolutional codes

    NASA Technical Reports Server (NTRS)

    Collins, O.; Hizlan, M.

    1991-01-01

    A determinate state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity and the free distances of these new codes are analyzed, and extensive simulation results are provided for their performance at the low signal-to-noise ratios where a real communication system would operate. Concise, practical examples are provided.

  15. Convoluted accommodation structures in folded rocks

    NASA Astrophysics Data System (ADS)

    Dodwell, T. J.; Hunt, G. W.

    2012-10-01

    A simplified variational model for the formation of convoluted accommodation structures, as seen in the hinge zones of larger-scale geological folds, is presented. The model encapsulates some important and intriguing nonlinear features, notably: infinite critical loads, formation of plastic hinges, and buckling on different length-scales. An inextensible elastic beam is forced by uniform overburden pressure and axial load into a V-shaped geometry dictated by formation of a plastic hinge. Using variational methods developed by Dodwell et al., upon which this paper leans heavily, energy minimisation leads to representation as a fourth-order nonlinear differential equation with free boundary conditions. Equilibrium solutions are found using numerical shooting techniques. Under the Maxwell stability criterion, it is recognised that global energy minimisers can exist with convoluted physical shapes. For such solutions, parallels can be drawn with some of the accommodation structures seen in exposed escarpments of real geological folds.

  16. Double Higgs boson production in the models with isotriplets

    SciTech Connect

    Godunov, S. I. Vysotsky, M. I. Zhemchugov, E. V.

    2015-12-15

    The enhancement of double Higgs boson production in extensions of the Standard Model with extra isotriplets is studied. It is found that in the see-saw type II model, decays of the new heavy Higgs can contribute to the double Higgs production cross section as much as the Standard Model channels. In the Georgi–Machacek model the cross section can be much larger, since the custodial symmetry is preserved and the strongest limitation on the triplet parameters is removed.

  17. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.

  18. Generalized Valon Model for Double Parton Distributions

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof

    2016-06-01

    We show how the double parton distributions may be obtained consistently from the many-body light-cone wave functions. We illustrate the method on the example of the pion with two Fock components. The procedure, by construction, satisfies the Gaunt-Stirling sum rules. The resulting single parton distributions of valence quarks and gluons are consistent with a phenomenological parametrization at a low scale.

  19. 21. INTERIOR, DOUBLE STAIRWAY LEADING TO MODEL HALL, DETAIL OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    21. INTERIOR, DOUBLE STAIRWAY LEADING TO MODEL HALL, DETAIL OF ONE FLIGHT (5 x 7 negative; 8 x 10 print) - Patent Office Building, Bounded by Seventh, Ninth, F & G Streets, Northwest, Washington, District of Columbia, DC

  20. Radial Structure Scaffolds Convolution Patterns of Developing Cerebral Cortex

    PubMed Central

    Razavi, Mir Jalil; Zhang, Tuo; Chen, Hanbo; Li, Yujie; Platt, Simon; Zhao, Yu; Guo, Lei; Hu, Xiaoping; Wang, Xianqiao; Liu, Tianming

    2017-01-01

    Commonly-preserved radial convolution is a prominent characteristic of the mammalian cerebral cortex. Endeavors from multiple disciplines have been devoted for decades to exploring the causes of this enigmatic structure. However, the underlying mechanisms that lead to consistent cortical convolution patterns still remain poorly understood. In this work, inspired by prior studies, we propose and evaluate a plausible theory that radial convolution during the early development of the brain is sculpted by radial structures consisting of radial glial cells (RGCs) and maturing axons. Specifically, the regionally heterogeneous development and distribution of RGCs controlled by Trnp1 regulate the convex and concave convolution patterns (gyri and sulci) in the radial direction, while the interplay of RGCs' effects on convolution and axons regulates the convex (gyral) convolution patterns. This theory is assessed against observations and measurements in the literature from multiple disciplines, such as neurobiology, genetics, and biomechanics, at multiple scales. It is further validated by multimodal imaging data analysis and computational simulations in this study. We offer a versatile and descriptive study model that can provide reasonable explanations of observations, experiments, and simulations of the characteristic mammalian cortical folding. PMID:28860983

  2. Finite difference modeling of ultrasonic propagation (coda waves) in digital porous cores with un-split convolutional PML and rotated staggered grid

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Fu, Li-Yun; Zhang, Luxin; Wei, Wei; Guan, Xizhu

    2014-05-01

    Ultrasonic wave propagation in heterogeneous porous cores under laboratory conditions is an extremely complex process involving strong scattering by microscale heterogeneous structures. The resulting coda waves, used as an index of scattering attenuation, are recorded as continuous waveforms in the tail portion of wavetrains. Because of contamination by reflections from the side ends and reverberations between the sample surfaces, it is difficult to extract pure coda waves from ultrasonic measurements for estimating the P- and S-coda attenuation quality factors. Comparisons of numerical and experimental ultrasonic wave propagation in heterogeneous porous cores can give important insight into the effect of boundary reflections on the P- and S-codas in laboratory experiments. This poses three major challenges for numerical modeling techniques: the creation of a digital core model that maps heterogeneous rock properties in detail, simulation with a controllable and accurate absorbing boundary, and overcoming the numerical dispersion that results from high-frequency propagation and strong material heterogeneity. A rotated staggered-grid finite-difference method for Biot's poroelastic equations is presented with an unsplit convolutional perfectly matched layer (CPML) absorbing boundary to simulate poroelastic wave propagation in isotropic, fluid-saturated porous media. The contamination of coda waves by boundary reflections is controlled through the CPML absorbing coefficients for the comparison between numerical and experimental ultrasonic waveforms. Numerical examples with a digital porous core demonstrate that boundary reflections contaminate coda waves seriously, yielding much larger coda quality factors and thus underestimating scattering attenuation.

  3. Frequency domain convolution for SCANSAR

    NASA Astrophysics Data System (ADS)

    Cantraine, Guy; Dendal, Didier

    1994-12-01

    Starting from basic signal expressions, the rigorous formulation of frequency domain convolution is demonstrated, in general and impulse terms, including antenna patterns and squint angle. The major differences with conventional algorithms are discussed and the theoretical concepts clarified. In a second part, the philosophy of advanced SAR algorithms is compared with that of a SCANSAR observation (several subswaths). It is proved that a general impulse response can always be written as the product of three factors, i.e., a phasor, an antenna coefficient, and a migration expression, and that the details of antenna effects can be ignored in the usual SAR system, but not the range migration (the situation is reversed in a SCANSAR reconstruction scheme). Next, some possible inverse filter kernels (the matched filter, the true inverse filter, ...) for general SAR or SCANSAR mode reconstructions are compared. By adopting a noise-corrupted model of the data, we obtain the corresponding Wiener filter, whose major interest is to avoid any risk of divergence. Afterwards, the term `a class of filters' is introduced and summarized by a parametric formulation. Lastly, the homogeneity of the reconstruction with a noncyclic fast Fourier transform deconvolution is studied by comparing peak responses according to the burst location. The more homogeneous sensitivity of the Wiener filter, with a steeper fall when the target begins to move outside the antenna pattern, is confirmed. A linear optimal merging of adjacent looks (in azimuth) minimizing the rms noise is also presented, as well as considerations about squint ambiguity.
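
    The divergence-free property of the Wiener filter is easy to see in one dimension: the true inverse filter 1/H(f) blows up wherever H(f) is small, whereas the Wiener kernel H*(f)/(|H(f)|^2 + 1/SNR) stays bounded. A minimal NumPy sketch of that behaviour follows (illustrative only; it is not the SCANSAR processing chain of the paper, and the impulse response and SNR are invented):

```python
import numpy as np

# One-dimensional Wiener deconvolution: H*/(|H|^2 + 1/SNR) regularises the
# true inverse filter 1/H so it cannot diverge where H is small.
def wiener_deconvolve(y, h, snr=100.0):
    n = len(y)
    H = np.fft.fft(h, n)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.ifft(np.fft.fft(y) * W).real

x = np.zeros(64); x[20] = 1.0                 # point target at sample 20
h = np.exp(-0.5 * (np.arange(64) - 4) ** 2)   # hypothetical blurring impulse response
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real  # blurred observation
x_hat = wiener_deconvolve(y, h)
assert np.argmax(x_hat) == 20  # deconvolution restores the peak to the target location
```

    With SNR taken to infinity, the kernel reduces to the true inverse filter and the divergence risk returns.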

  4. Convolution and validation of in vitro-in vivo correlation of water-insoluble sustained-release drug (domperidone) by first-order pharmacokinetic one-compartmental model fitting equation.

    PubMed

    Bose, Anirbandeep; Wui, Wong Tin

    2013-09-01

    This experimental study presents a brief and comprehensive perspective on the methods of developing a Level A in vitro-in vivo correlation (IVIVC) for extended-release oral dosage forms of the water-insoluble drug domperidone. The study also evaluates the validity and predictability of the in vitro-in vivo correlation using the convolution technique with a one-compartmental first-order kinetic equation. An IVIVC can serve as a surrogate for in vivo bioavailability studies in the documentation of bioequivalence, as mandated by regulatory authorities. The in vitro drug release studies for the different formulations (fast, moderate, slow) were conducted in different dissolution media. The f2 metric (similarity factor) was used to analyze the dissolution data and determine the most discriminating dissolution method. The in vivo pharmacokinetic parameters of all the formulations were determined using liquid chromatography-mass spectrometry (LC/MS) methods. The absorption rate constant and the percentage of drug absorbed at different time intervals were calculated using data convolution. The in vitro release-in vivo absorption correlation was found to be linear; the model was developed using percent drug absorbed versus percent drug dissolved for the three formulations. Internal and external validation was performed to validate the IVIVC. Predicted domperidone concentrations were obtained by the convolution technique using a first-order one-compartmental fitting equation. Prediction errors estimated for Cmax and AUC(0-infinity) were found to be within the limit.
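
    The convolution step itself is discrete superposition: each increment of drug released in vitro is weighted by the unit impulse response of a one-compartment, first-order model and summed. The sketch below uses entirely hypothetical values for the elimination rate, F/Vd, and the release profile; it illustrates the technique, not the paper's fitted model.

```python
import numpy as np

# Predict plasma concentrations by convolving an in vitro release input with
# the unit impulse response of a one-compartment, first-order model:
# C(t_n) = sum_j dA(t_j) * (F/Vd) * exp(-ke * (t_n - t_j))
ke, F_over_Vd = 0.1, 1.0           # hypothetical elimination rate (1/h) and F/Vd (1/L)
t = np.arange(0, 24.0, 1.0)        # hourly sampling grid
released = 1.0 - np.exp(-0.3 * t)  # hypothetical cumulative fraction released in vitro
dose_increments = np.diff(released, prepend=0.0)

impulse = F_over_Vd * np.exp(-ke * t)               # unit impulse response
C = np.convolve(dose_increments, impulse)[:len(t)]  # superposition (discrete convolution)
```

    The predicted profile rises while release outpaces elimination and then decays at the elimination rate, which is the shape the validation compares against observed concentrations.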

  5. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    PubMed

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2017-04-27

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
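
    The field-of-view arithmetic behind atrous convolution is easy to verify in one dimension: a k-tap filter applied at dilation rate r covers k + (k-1)(r-1) input samples while still using only k parameters. A minimal NumPy sketch (1-D, 'valid' boundary handling, for illustration only):

```python
import numpy as np

# Atrous (dilated) convolution in 1-D: inserting r-1 holes between filter taps
# enlarges the field of view from k to k + (k-1)(r-1) samples with no extra
# parameters.
def atrous_conv1d(x, w, rate):
    k = len(w)
    span = k + (k - 1) * (rate - 1)        # effective receptive field
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(10.0)
w = np.array([1.0, 1.0, 1.0])
# rate 1 is ordinary convolution; rate 2 sums every second sample over a span of 5
assert np.allclose(atrous_conv1d(x, w, rate=1), [3, 6, 9, 12, 15, 18, 21, 24])
assert np.allclose(atrous_conv1d(x, w, rate=2), [6, 9, 12, 15, 18, 21])
```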

  6. Experimental study of current loss and plasma formation in the Z machine post-hole convolute

    NASA Astrophysics Data System (ADS)

    Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.

    2017-01-01

    The Z pulsed-power generator at Sandia National Laboratories drives high-energy-density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H2O, H2, and hydrocarbons. Plasma densities increase from 1 × 10^16 cm^-3 (level of detectability) just before peak current to over 1 × 10^17 cm^-3 at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35-50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.

  7. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.

  8. A Unimodal Model for Double Observer Distance Sampling Surveys

    PubMed Central

    Becker, Earl F.; Christ, Aaron M.

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the two observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied. PMID:26317984
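
    The key property of the proposed detection function, a single apex regardless of parameter values, can be illustrated with a two-piece (split) normal: one mode at mu with separate left and right spreads. The parameter values below are hypothetical; this is a sketch of the functional form, not the authors' fitted model.

```python
import numpy as np

# Two-piece (split) normal detection function: one apex at mu, with separate
# spreads s1 (left of the apex) and s2 (right of the apex). Covariates could
# shift mu while the function keeps a single apex, which is what the point
# independence assumption requires.
def two_piece_normal(x, mu=0.3, s1=0.15, s2=0.45):
    s = np.where(x < mu, s1, s2)
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

d = np.linspace(0, 1.5, 151)        # perpendicular distances (arbitrary units)
p = two_piece_normal(d)
assert np.argmax(p) == 30           # unique apex at d = mu = 0.3
```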

  9. Inhibitor Discovery by Convolution ABPP.

    PubMed

    Chandrasekar, Balakumaran; Hong, Tram Ngoc; van der Hoorn, Renier A L

    2017-01-01

    Activity-based protein profiling (ABPP) has emerged as a powerful proteomic approach to study active proteins in their native environment by using chemical probes that label active-site residues in proteins. Traditionally, ABPP is classified as either comparative or competitive ABPP. In this protocol, we describe a simple method called convolution ABPP, which combines the benefits of competitive and comparative ABPP. Convolution ABPP allows one to detect whether a reduced signal observed during comparative ABPP could be due to the presence of inhibitors. In convolution ABPP, the proteomes are analyzed by comparing labeling intensities in two mixed proteomes that were labeled either before or after mixing. A reduction of labeling in the mix-and-label sample compared to the label-and-mix sample indicates the presence of an inhibitor excess in one of the proteomes. This method is broadly applicable for detecting inhibitors in any proteome containing protein activities of interest. As a proof of concept, we applied convolution ABPP to secreted proteomes from Pseudomonas syringae-infected Nicotiana benthamiana leaves to reveal the presence of a beta-galactosidase inhibitor.

  10. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computational load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression with simultaneous weak-edge preservation. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.
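
    The real-time claim rests on the convolution theorem: the external force field is a convolution of the edge map with a fixed vector kernel, so it can be evaluated with a handful of FFTs. The sketch below uses a simple Coulomb-like kernel -r/|r|^3 purely for illustration; it is not the modified-distance kernel of the CONVEF model.

```python
import numpy as np

# VEF/CONVEF-style external force via FFT convolution: convolve the edge map
# with a vector kernel K(r) = -r/|r|^n (here n = 3, a Coulomb-like field).
def field_via_fft(edge_map, n=3):
    h, w = edge_map.shape
    y, x = np.mgrid[-h // 2:h // 2, -w // 2:w // 2].astype(float)
    r = np.sqrt(x ** 2 + y ** 2)
    r[r == 0] = 1.0                      # avoid division by zero at the origin
    kx, ky = -x / r ** n, -y / r ** n    # kernel components pointing toward the origin
    E = np.fft.fft2(edge_map)
    conv = lambda k: np.real(np.fft.ifft2(E * np.fft.fft2(np.fft.ifftshift(k))))
    return conv(kx), conv(ky)

edge = np.zeros((32, 32)); edge[16, 16] = 1.0  # a single edge point
fx, fy = field_via_fft(edge)
# the force left of the edge point should point toward it (positive x direction)
assert fx[16, 10] > 0
```

    Because the field is a single convolution, its cost is a few n log n transforms, which is what makes real-time operation feasible compared with the iterative diffusion used by GVF.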

  11. Zebrafish tracking using convolutional neural networks

    PubMed Central

    XU, Zhiping; Cheng, Xi En

    2017-01-01

    Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories of zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable. PMID:28211462

  13. Brain and art: illustrations of the cerebral convolutions. A review.

    PubMed

    Lazić, D; Marinković, S; Tomić, I; Mitrović, D; Starčević, A; Milić, I; Grujičić, M; Marković, B

    2014-08-01

    The aesthetics and functional significance of the cerebral cortical relief gave us the idea to find out how often the convolutions are presented in fine art, in which techniques, and with what conceptual meaning and pathophysiological aspects. We examined 27,614 art works created by 2,856 authors and presented in the art literature and in Google Images searches. The cerebral gyri were shown in 0.85% of the art works, created by 2.35% of the authors. The concept of the brain was first mentioned in ancient Egypt some 3,700 years ago. The first artistic drawing of the convolutions was made by Leonardo da Vinci, and the first colour picture by an unknown Italian author. Rembrandt van Rijn was the first to paint the gyri. Dozens of modern authors, who are professional artists, medical experts or designers, have presented the cerebral convolutions in drawings, paintings, digital works or sculptures, with various aesthetic, symbolic and metaphorical connotations. Some artistic compositions and natural forms show a gyral pattern. The convolutions, whose cortical layers enable the cognitive functions, can be affected by various disorders. Some artists suffered from those disorders, and some presented them in their artworks. The cerebral convolutions or gyri, thanks to their extensive cortical mantle, are the specific morphological basis for the human mind, but also structures with their own aesthetics. Contemporary authors relatively often depict or model the cerebral convolutions, either from the aesthetic or the conceptual aspect. In this way, they make a connection between neuroscience and fine art.

  14. A review of molecular modelling of electric double layer capacitors.

    PubMed

    Burt, Ryan; Birkett, Greg; Zhao, X S

    2014-04-14

    Electric double-layer capacitors are a family of electrochemical energy storage devices that offer a number of advantages, such as high power density and long cyclability. In recent years, research and development of electric double-layer capacitor technology has been growing rapidly, in response to the increasing demand for energy storage devices from emerging industries, such as hybrid and electric vehicles, renewable energy, and smart grid management. The past few years have witnessed a number of significant research breakthroughs in terms of novel electrodes, new electrolytes, and fabrication of devices, thanks to the discovery of innovative materials (e.g. graphene, carbide-derived carbon, and templated carbon) and the availability of advanced experimental and computational tools. However, some experimental observations could not be clearly understood and interpreted due to limitations of traditional theories, some of which were developed more than one hundred years ago. This has led to significant research efforts in computational simulation and modelling, aimed at developing new theories, or improving the existing ones to help interpret experimental results. This review article provides a summary of research progress in molecular modelling of the physical phenomena taking place in electric double-layer capacitors. An introduction to electric double-layer capacitors and their applications, alongside a brief description of electric double layer theories, is presented first. Second, molecular modelling of ion behaviours of various electrolytes interacting with electrodes under different conditions is reviewed. Finally, key conclusions and outlooks are given. Simulations comparing electric double-layer structure at planar and porous electrode surfaces under equilibrium conditions have revealed significant structural differences between the two electrode types, and porous electrodes have been shown to store charge more efficiently. Accurate electrolyte and

  15. Models of Intracavity Frequency Doubled Lasers

    DTIC Science & Technology

    1990-01-01

    I thank George Gray, Gautam Vemuri, John Thompson and Larry Fabiny for their patient instruction on details of lasers and optics. The stability analysis for the multimode Baer equations was begun by P. Mandel and X.-G. Wu (Wu and Mandel, 1985; 1987); their analysis focuses on the two-mode case. The differential equations model the time dependence of the single intensity I1 and the associated gain G1 (Baer, 1986).

  16. Helium in double-detonation models of type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Boyle, Aoife; Sim, Stuart A.; Hachinger, Stephan; Kerzendorf, Wolfgang

    2017-03-01

    The double-detonation explosion model has been considered a candidate for explaining astrophysical transients with a wide range of luminosities. In this model, a carbon-oxygen white dwarf star explodes following detonation of a surface layer of helium. One potential signature of this explosion mechanism is the presence of unburned helium in the outer ejecta, left over from the surface helium layer. In this paper we present simple approximations to estimate the optical depths of important He I lines in the ejecta of double-detonation models. We use these approximations to compute synthetic spectra, including the He I lines, for double-detonation models obtained from hydrodynamical explosion simulations. Specifically, we focus on photospheric-phase predictions for the near-infrared 10 830 Å and 2 μm lines of He I. We first consider a double-detonation model with a luminosity corresponding roughly to normal SNe Ia. This model has a post-explosion unburned He mass of 0.03 M⊙ and our calculations suggest that the 2 μm feature is expected to be very weak but that the 10 830 Å feature may have modest opacity in the outer ejecta. Consequently, we suggest that a moderate-to-weak He I 10 830 Å feature may be expected to form in double-detonation explosions at epochs around maximum light. However, the high velocities of unburned helium predicted by the model (~19 000 km s^-1) mean that the He I 10 830 Å feature may be confused or blended with the C I 10 690 Å line forming at lower velocities. We also present calculations for the He I 10 830 Å and 2 μm lines for a lower-mass (low-luminosity) double-detonation model, which has a post-explosion He mass of 0.077 M⊙. In this case, both He I features we consider are strong and can provide a clear observational signature of the double-detonation mechanism.

  17. A Simple Double-Source Model for Interference of Capillaries

    ERIC Educational Resources Information Center

    Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua

    2012-01-01

    A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An…

  18. A large deformation viscoelastic model for double-network hydrogels

    NASA Astrophysics Data System (ADS)

    Mao, Yunwei; Lin, Shaoting; Zhao, Xuanhe; Anand, Lallit

    2017-03-01

    We present a large deformation viscoelasticity model for recently synthesized double network hydrogels which consist of a covalently-crosslinked polyacrylamide network with long chains, and an ionically-crosslinked alginate network with short chains. Such double-network gels are highly stretchable and at the same time tough, because when stretched the crosslinks in the ionically-crosslinked alginate network rupture, which results in distributed internal microdamage that dissipates a substantial amount of energy, while the configurational entropy of the covalently-crosslinked polyacrylamide network allows the gel to return to its original configuration after deformation. In addition to the large hysteresis during loading and unloading, these double network hydrogels also exhibit a substantial rate-sensitive response during loading, but exhibit almost no rate-sensitivity during unloading. These features of large hysteresis and asymmetric rate-sensitivity are quite different from the response of conventional hydrogels. We limit our attention to modeling the complex viscoelastic response of such hydrogels under isothermal conditions. Our model is restricted in the sense that we consider only conditions under which one might neglect any diffusion of the water in the hydrogel, as might occur when the gel has a uniform initial value of the concentration of water, and the mobility of the water molecules in the gel is low relative to the time scale of the mechanical deformation. We also do not attempt to model the final fracture of such double-network hydrogels.

  19. A Simple Double-Source Model for Interference of Capillaries

    ERIC Educational Resources Information Center

    Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua

    2012-01-01

    A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An…

  20. Multilabel Image Annotation Based on Double-Layer PLSA Model

    PubMed Central

    Zhang, Jing; Li, Da; Hu, Weiwei; Chen, Zhihua; Yuan, Yubo

    2014-01-01

Due to the semantic gap between visual features and semantic concepts, automatic image annotation has recently become a difficult issue in computer vision. We propose a new multilabel image annotation method based on double-layer probabilistic latent semantic analysis (PLSA) in this paper. The new double-layer PLSA model is constructed to bridge the low-level visual features and high-level semantic concepts of images for effective image understanding. The low-level features of images are represented as visual words by the Bag-of-Words model; latent semantic topics are obtained by the first-layer PLSA for the visual and texture aspects, respectively. Furthermore, we adopt a second-layer PLSA to fuse the visual and texture latent semantic topics into a top-layer latent semantic topic. Through the double-layer PLSA, the relationships between visual features and semantic concepts of images are established, and we can predict the labels of new images from their low-level features. Experimental results demonstrate that our automatic image annotation model based on double-layer PLSA achieves promising labeling performance and outperforms previous methods on the standard Corel dataset. PMID:24999490

  1. Discrete singular convolution mapping methods for solving singular boundary value and boundary layer problems

    NASA Astrophysics Data System (ADS)

    Pindza, Edson; Maré, Eben

    2017-03-01

A modified discrete singular convolution method is proposed. The method is based on single (SE) and double (DE) exponential transformations to speed up the convergence of existing methods. Numerical computations are performed on a wide variety of singular boundary value and singularly perturbed problems in one and two dimensions. The results obtained from discrete singular convolution methods based on single and double exponential transformations are compared with each other and with existing methods. Numerical results confirm that these methods are considerably efficient and accurate in solving singular and regular problems. Moreover, the method can be applied to a wide class of nonlinear partial differential equations.
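The double exponential (DE) transformation mentioned in this record is the same one used in tanh-sinh quadrature, where the substitution x = tanh((π/2) sinh t) makes the trapezoidal rule converge extremely fast. A minimal sketch under standard formulas (an illustration, not the authors' method):

```python
import math

def de_quadrature(f, n=60, h=0.1):
    """Approximate the integral of f over (-1, 1) using the double
    exponential (tanh-sinh) transformation x = tanh((pi/2) * sinh(t)),
    then applying the trapezoidal rule in t with step h."""
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        x = math.tanh(0.5 * math.pi * math.sinh(t))
        # weight: dx/dt = (pi/2) cosh(t) / cosh^2((pi/2) sinh(t))
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(0.5 * math.pi * math.sinh(t)) ** 2
        total += w * f(x)
    return h * total
```

Because the transformed integrand decays double-exponentially in t, even this plain trapezoidal sum reaches near machine precision for smooth integrands.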

  2. Compressed imaging by sparse random convolution.

    PubMed

    Marcos, Diego; Lasser, Theo; López, Antonio; Bourquard, Aurélien

    2016-01-25

The theory of compressed sensing (CS) shows that signals can be acquired at sub-Nyquist rates if they are sufficiently sparse or compressible. Since many images bear this property, several acquisition models have been proposed for optical CS. An interesting approach is random convolution (RC). In contrast with single-pixel CS approaches, RC allows for the parallel capture of visual information on a sensor array as in conventional imaging approaches. Unfortunately, the RC strategy is difficult to implement as-is in practical settings due to important contrast-to-noise-ratio (CNR) limitations. In this paper, we introduce a modified RC model circumventing such difficulties by considering measurement matrices involving sparse non-negative entries. We then implement this model based on a slightly modified microscopy setup using incoherent light. Our experiments demonstrate the suitability of this approach for dealing with distinct CS scenarios, including 1-bit CS.
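The measurement model described - circular convolution with a sparse, non-negative random filter followed by subsampling - can be sketched in a few lines. All names, densities, and sampling choices below are illustrative assumptions, not the paper's actual setup:

```python
import random

def circular_convolve(x, h):
    """Direct circular convolution: y[i] = sum_j h[j] * x[(i - j) mod n]."""
    n = len(x)
    return [sum(h[j] * x[(i - j) % n] for j in range(n)) for i in range(n)]

def cs_measure(x, density=0.1, m=None, seed=0):
    """Toy compressed-sensing measurement: convolve the signal with a
    sparse non-negative random filter, then keep m equally spaced samples."""
    rng = random.Random(seed)
    n = len(x)
    m = m or n // 4
    # sparse non-negative filter: most taps zero, the rest uniform in (0, 1)
    h = [rng.random() if rng.random() < density else 0.0 for _ in range(n)]
    y = circular_convolve(x, h)
    step = n // m
    return [y[i] for i in range(0, step * m, step)]
```

Convolving a unit impulse returns the filter itself, which is a quick sanity check on the measurement operator.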

  3. The Convolution Method in Neutrino Physics Searches

    SciTech Connect

    Tsakstara, V.; Kosmas, T. S.; Chasioti, V. C.; Divari, P. C.; Sinatkas, J.

    2007-12-26

    We concentrate on the convolution method used in nuclear and astro-nuclear physics studies and, in particular, in the investigation of the nuclear response of various neutrino detection targets to the energy-spectra of specific neutrino sources. Since the reaction cross sections of the neutrinos with nuclear detectors employed in experiments are extremely small, very fine and fast convolution techniques are required. Furthermore, sophisticated de-convolution methods are also needed whenever a comparison between calculated unfolded cross sections and existing convoluted results is necessary.

  4. Mobile Stride Length Estimation with Deep Convolutional Neural Networks.

    PubMed

    Hannink, Julius; Kautz, Thomas; Pasluosta, Cristian; Barth, Jens; Schulein, Samuel; Gassmann, Karl-Gunter; Klucken, Jochen; Eskofier, Bjoern

    2017-03-09

Accurate estimation of spatial gait characteristics is critical to assess motor impairments resulting from neurological or musculoskeletal disease. Currently, however, methodological constraints limit clinical applicability of state-of-the-art double integration approaches to gait patterns with a clear zero-velocity phase. We describe a novel approach to stride length estimation that uses deep convolutional neural networks to map stride-specific inertial sensor data to the resulting stride length. The model is trained on a publicly available and clinically relevant benchmark dataset consisting of 1220 strides from 101 geriatric patients. Evaluation is done in a 10-fold cross validation and for three different stride definitions. Even though best results are achieved with strides defined from mid-stance to mid-stance, with average accuracy and precision of 0.01 ± 5.37 cm, performance does not strongly depend on stride definition. The achieved precision outperforms state-of-the-art methods evaluated on the same benchmark dataset by 3.0 cm (36%). Because it does not depend on a particular stride definition, the proposed method is not subject to the methodological constraints that limit applicability of state-of-the-art double integration methods. Furthermore, it was possible to improve precision on the benchmark dataset. With more precise mobile stride length estimation, new insights into the progression of neurological disease or early indications might be gained. Diseases previously uncharted in terms of mobile gait analysis can now be investigated by re-training and applying the proposed method.

  5. Double scaling in tensor models with a quartic interaction

    NASA Astrophysics Data System (ADS)

    Dartois, Stéphane; Gurau, Razvan; Rivasseau, Vincent

    2013-09-01

In this paper we identify and analyze in detail the subleading contributions in the 1/N expansion of random tensors, in the simple case of a quartically interacting model. The leading order of this 1/N expansion is made of graphs, called melons, which are dual to particular triangulations of the D-dimensional sphere, closely related to the "stacked" triangulations. For D < 6 the subleading behavior is governed by a larger family of graphs, hereafter called cherry trees, which are also dual to the D-dimensional sphere. They can be resummed explicitly through a double scaling limit. In sharp contrast with random matrix models, this double scaling limit is stable. Apart from its unexpected upper critical dimension 6, it displays a singularity at fixed distance from the origin and is clearly the first step in a richer set of yet to be discovered multi-scaling limits.

  6. New statistical lattice model with double honeycomb symmetry

    NASA Astrophysics Data System (ADS)

    Naji, S.; Belhaj, A.; Labrim, H.; Bhihi, M.; Benyoussef, A.; El Kenz, A.

    2014-04-01

Inspired by the connection between Lie symmetries and two-dimensional materials, we propose a new statistical lattice model based on a double hexagonal structure appearing in the G2 symmetry. We first construct an Ising-1/2 model, with spin values σ = ±1, exhibiting such a symmetry. The corresponding ground state shows ferromagnetic, antiferromagnetic, partial ferrimagnetic and topological ferrimagnetic phases depending on the exchange couplings. Then, we examine the phase diagrams and the magnetization using the mean field approximation (MFA). Among other results, it is suggested that the present model sits between systems involving the triangular and the single hexagonal lattice geometries.
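The mean field approximation used in this record reduces, in its simplest form, to solving a self-consistency equation for the magnetization. A minimal sketch for a generic spin-1/2 ferromagnet (an illustration only; this is not the G2 double-hexagon Hamiltonian of the paper):

```python
import math

def mfa_magnetization(T, J=1.0, z=3, tol=1e-12):
    """Solve the mean-field self-consistency equation m = tanh(z * J * m / T)
    by fixed-point iteration. z is the coordination number (z = 3 would be
    a single honeycomb lattice); the mean-field critical point is T_c = z * J."""
    m = 1.0  # start from full polarization
    for _ in range(10000):
        m_new = math.tanh(z * J * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m_new
```

Below T_c the iteration converges to a finite magnetization; above T_c it collapses to zero, reproducing the mean-field phase transition.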

  7. Double transitions in the fully frustrated XY model

    NASA Astrophysics Data System (ADS)

    Jeon, Gun Sang; Park, Sung Yong; Choi, M. Y.

    1997-06-01

    The fully frustrated XY model is studied via the position-space renormalization group approach. The model is mapped into two coupled XY models, for which the scaling equations are derived. By integrating directly the scaling equations, we observe that there exists a narrow temperature range in which both the vortex and coupling charge fugacities grow large, suggesting double transitions in the system. While the transition at lower temperature is identified to be of the Kosterlitz-Thouless type, the higher-temperature one appears not to be of the Ising universality class.

  8. Simplified Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.

    1986-01-01

    Some complicated intermediate steps shortened or eliminated. Decoding of convolutional error-correcting digital codes simplified by new errortrellis syndrome technique. In new technique, syndrome vector not computed. Instead, advantage taken of newly-derived mathematical identities simplify decision tree, folding it back on itself into form called "error trellis." This trellis graph of all path solutions of syndrome equations. Each path through trellis corresponds to specific set of decisions as to received digits. Existing decoding algorithms combined with new mathematical identities reduce number of combinations of errors considered and enable computation of correction vector directly from data and check bits as received.

  9. Reliable estimation of prediction errors for QSAR models under model uncertainty using double cross-validation.

    PubMed

    Baumann, Désirée; Baumann, Knut

    2014-01-01

Generally, QSAR modelling requires both model selection and validation since there is no a priori knowledge about the optimal QSAR model. Prediction errors (PE) are frequently used to select and to assess the models under study. Reliable estimation of prediction errors is challenging - especially under model uncertainty - and requires independent test objects. These test objects must not be involved in either model building or model selection. Double cross-validation, sometimes also termed nested cross-validation, offers an attractive possibility to generate test data and to select QSAR models, since it uses the data very efficiently. Nevertheless, there is controversy in the literature with respect to the reliability of double cross-validation under model uncertainty. Moreover, systematic studies investigating the adequate parameterization of double cross-validation are still missing. Here, the cross-validation design in the inner loop and the influence of the test set size in the outer loop are systematically studied for regression models in combination with variable selection. Simulated and real data are analysed with double cross-validation to identify important factors for the resulting model quality. For the simulated data, a bias-variance decomposition is provided. The prediction errors of QSAR/QSPR regression models in combination with variable selection depend to a large degree on the parameterization of double cross-validation. While the parameters for the inner loop of double cross-validation mainly influence the bias and variance of the resulting models, the parameters for the outer loop mainly influence the variability of the resulting prediction error estimate. Double cross-validation reliably and unbiasedly estimates prediction errors under model uncertainty for regression models. Compared to a single test set, double cross-validation provided a more realistic picture of model quality and should be preferred over a single test set.
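The inner/outer structure of double (nested) cross-validation can be sketched as pure index bookkeeping: the outer loop holds out a test fold that never touches model or variable selection, and each outer training set is re-split for the inner selection loop. A minimal sketch (function name and fold counts are illustrative):

```python
import random

def double_cv_splits(n, outer_k=5, inner_k=4, seed=0):
    """Generate index splits for double (nested) cross-validation.
    Yields (train, test, inner_folds): the outer test fold is reserved for
    error estimation only; inner_folds partition the outer training set
    for model/variable selection."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    outer = [idx[i::outer_k] for i in range(outer_k)]
    for i in range(outer_k):
        test = outer[i]
        train = [j for fold in outer[:i] + outer[i + 1:] for j in fold]
        inner = [train[r::inner_k] for r in range(inner_k)]
        yield train, test, inner
```

Every object appears in exactly one outer test fold, so the outer-loop prediction errors are computed on data that played no role in selection - the property the abstract emphasizes.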

  10. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    SciTech Connect

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.

    2014-12-08

Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitation of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R.D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that

  11. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    DOE PAGES

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; ...

    2014-12-08

Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitation of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R.D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed

  12. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    NASA Astrophysics Data System (ADS)

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.

    2014-12-01

Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator's vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator's vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator's magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitation of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed efficient
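For a lumped inductor model like the one used in these records, translating a voltage measurement past an intervening inductance amounts to subtracting the inductive drop L dI/dt. A schematic numerical sketch with entirely hypothetical values (not the Z accelerator's actual circuit parameters):

```python
import math

def translate_voltage(v_stack, i_load, dt, inductance):
    """Translate a measured upstream voltage to a downstream location by
    subtracting the inductive drop L * dI/dt of the intervening (assumed
    lossless, lumped) inductance. dI/dt is estimated by central differences,
    so the two endpoint samples are dropped."""
    didt = [(i_load[k + 1] - i_load[k - 1]) / (2 * dt)
            for k in range(1, len(i_load) - 1)]
    return [v - inductance * d for v, d in zip(v_stack[1:-1], didt)]
```

With a sinusoidal current and zero upstream voltage, the translated trace is simply -L times the cosine derivative, which makes the routine easy to verify analytically.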

  13. Stacked Convolutional Denoising Auto-Encoders for Feature Representation.

    PubMed

    Du, Bo; Xiong, Wei; Wu, Jia; Zhang, Lefei; Zhang, Liangpei; Tao, Dacheng

    2016-03-16

    Deep networks have achieved excellent performance in learning representation from visual data. However, the supervised deep models like convolutional neural network require large quantities of labeled data, which are very expensive to obtain. To solve this problem, this paper proposes an unsupervised deep network, called the stacked convolutional denoising auto-encoders, which can map images to hierarchical representations without any label information. The network, optimized by layer-wise training, is constructed by stacking layers of denoising auto-encoders in a convolutional way. In each layer, high dimensional feature maps are generated by convolving features of the lower layer with kernels learned by a denoising auto-encoder. The auto-encoder is trained on patches extracted from feature maps in the lower layer to learn robust feature detectors. To better train the large network, a layer-wise whitening technique is introduced into the model. Before each convolutional layer, a whitening layer is embedded to sphere the input data. By layers of mapping, raw images are transformed into high-level feature representations which would boost the performance of the subsequent support vector machine classifier. The proposed algorithm is evaluated by extensive experimentations and demonstrates superior classification performance to state-of-the-art unsupervised networks.

  14. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  15. Cantilever tilt causing amplitude related convolution in dynamic mode atomic force microscopy.

    PubMed

    Wang, Chunmei; Sun, Jielin; Itoh, Hiroshi; Shen, Dianhong; Hu, Jun

    2011-01-01

It is well known that the topography in atomic force microscopy (AFM) is a convolution of the tip's shape and the sample's geometry. The classical convolution model was established in contact mode assuming a static probe, but it is no longer valid in dynamic mode AFM. It is still not well understood whether or how the vibration of the probe in dynamic mode affects the convolution. Such ignorance complicates the interpretation of the topography. Here we propose a convolution model for dynamic mode by taking into account the typical design of the cantilever tilt in AFMs, which leads to a different convolution from that in contact mode. Our model indicates that the cantilever tilt results in a dynamic convolution affected by the absolute value of the amplitude, especially in the case that the corresponding contact convolution has sharp edges beyond a certain angle. The effect was experimentally demonstrated by a perpendicular SiO(2)/Si super-lattice structure. Our model is useful for quantitative characterizations in dynamic mode, especially in probe characterization and critical dimension measurements.
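The classical contact-mode tip-sample convolution mentioned here is commonly formalized as a morphological dilation: the recorded image at each point is the highest contact the tip can make with the surface. A minimal 1-D sketch of that standard model (illustrative only; it does not include the paper's dynamic-mode or tilt effects):

```python
def dilate(surface, tip):
    """Classical contact-mode convolution: the recorded topography is the
    morphological dilation of the surface by the (reflected) tip shape.
    tip[j] is the tip height *below* the apex, with the apex at the center
    (tip[len(tip)//2] == 0)."""
    half = len(tip) // 2
    n = len(surface)
    image = []
    for i in range(n):
        image.append(max(
            surface[i + j] - tip[half + j]
            for j in range(-half, half + 1)
            if 0 <= i + j < n
        ))
    return image
```

A single spike gets broadened into (an inverted copy of) the tip shape, which is exactly the tip-convolution artifact the abstract refers to.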

  16. Deep learning for steganalysis via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis: learning features automatically via deep learning models. We propose a customized convolutional neural network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.

  17. Double porosity modeling in elastic wave propagation for reservoir characterization

    SciTech Connect

    Berryman, J. G., LLNL

    1998-06-01

Phenomenological equations for the poroelastic behavior of a double porosity medium have been formulated and the coefficients in these linear equations identified. The generalization from a single porosity model increases the number of independent coefficients from three to six for an isotropic applied stress. In a quasistatic analysis, the physical interpretations are based upon considerations of extremes in both spatial and temporal scales. The limit of very short times is the one most relevant for wave propagation, and in this case both matrix porosity and fractures behave in an undrained fashion. For the very long times more relevant for reservoir drawdown, the double porosity medium behaves as an equivalent single porosity medium. At the macroscopic spatial level, the pertinent parameters (such as the total compressibility) may be determined by appropriate field tests. At the mesoscopic scale, pertinent parameters of the rock matrix can be determined directly through laboratory measurements on core, and the compressibility can be measured for a single fracture. We show explicitly how to generalize the quasistatic results to incorporate wave propagation effects and how effects that are usually attributed to squirt flow under partially saturated conditions can be explained alternatively in terms of the double-porosity model. The result is therefore a theory that generalizes, but is completely consistent with, Biot's theory of poroelasticity and is valid for analysis of elastic wave data from highly fractured reservoirs.

  18. Object tracking with double-dictionary appearance model

    NASA Astrophysics Data System (ADS)

    Lv, Li; Fan, Tanghuai; Sun, Zhen; Wang, Jun; Xu, Lizhong

    2016-08-01

    Dictionary learning has previously been applied to target tracking across images in video sequences. However, most trackers that use dictionary learning neglect to make optimal use of the representation coefficients to locate the target. This increases the possibility of losing the target in the presence of similar objects, or in case occlusion or rotation occurs. We propose an effective object-tracking method based on a double-dictionary appearance model under a particle filter framework. We employ a double dictionary by training template features to represent the target. This representation not only exploits the relationship between the candidate and target but also represents the target more accurately with minimal residual. We also introduce a simple and effective strategy to update the template to reduce the influence of occlusion, rotation, and drift. Experiments on challenging sequences showed that the proposed algorithm performs favorably against the state-of-the-art methods in terms of several comparative metrics.

  19. Modified cubic convolution resampling for Landsat

    NASA Technical Reports Server (NTRS)

    Prakash, A.; Mckee, B.

    1985-01-01

An overview is given of the Landsat Thematic Mapper resampling technique, including a modification of the well-known cubic convolution interpolator used to provide geometric correction for TM data. A post-launch study has shown that the modified cubic convolution interpolator can selectively enhance or suppress frequency bands in the output image. This selectivity is demonstrated on TM Band 3 imagery.
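The well-known cubic convolution interpolator referred to here is usually written with the piecewise-cubic Keys kernel. A minimal 1-D sketch of the standard kernel with the common choice a = -0.5 (this is the textbook interpolator, not the modified Landsat version described in the record):

```python
def cubic_kernel(x, a=-0.5):
    """Keys cubic convolution kernel on support [-2, 2]; a = -0.5 is the
    standard parameter choice."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def resample(samples, t, a=-0.5):
    """Interpolate uniformly spaced samples at fractional position t using
    the four nearest neighbors."""
    i = int(t)
    total = 0.0
    for m in range(i - 1, i + 3):
        if 0 <= m < len(samples):
            total += samples[m] * cubic_kernel(t - m, a)
    return total
```

The kernel interpolates exactly at sample points (value 1 at 0, value 0 at the integers), and for interior points it reproduces linear ramps exactly.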

  20. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  1. Two potential quark models for double heavy baryons

    SciTech Connect

    Puchkov, A. M.; Kozhedub, A. V.

    2016-01-22

Baryons containing two heavy quarks (QQ′q) are treated in the Born-Oppenheimer approximation. Two non-relativistic potential models are proposed, in which the Schrödinger equation admits a separation of variables in prolate and oblate spheroidal coordinates, respectively. In the first model, the potential is the sum of the Coulomb potentials of the two heavy quarks, separated from each other by a distance R, and a linear confinement potential. In the second model, the center distance parameter R is assumed to be purely imaginary. In this case, the potential is defined by a two-sheeted mapping with singularities concentrated on a circle rather than at separate points. Thus, in the first model the diquark appears as a segment, and in the second as a circle. In this paper we calculate the mass spectrum of double heavy baryons in both models and compare it with previous results.

  2. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
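The free distance quantities discussed in this record are properties of the encoder's output sequences. A minimal sketch of a rate-1/2 feedforward convolutional encoder, using the textbook (7,5) octal generators rather than any encoder from the paper:

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 feedforward convolutional encoder with constraint length k.
    Defaults are the textbook (7,5) octal generators, whose free distance
    is 5. The shift register starts and is left in the all-zero state."""
    state = 0
    out = []
    for b in bits:
        reg = (b << (k - 1)) | state  # newest bit in the leftmost stage
        out.append(bin(reg & g1).count("1") % 2)  # parity under generator 1
        out.append(bin(reg & g2).count("1") % 2)  # parity under generator 2
        state = reg >> 1
    return out
```

For this code the input 1 0 0 traces the minimal-weight path through the trellis, so the Hamming weight of its codeword equals the free distance.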

  3. A double exponential model for biochemical oxygen demand.

    PubMed

    Mason, Ian G; McLachlan, Robert I; Gérard, Daniel T

    2006-01-01

    Biochemical oxygen demand (BOD) exertion patterns in anaerobically treated farm dairy wastewater were investigated at laboratory scale. Oxygen uptake was typically characterised by a period of rapid oxygen exertion, a transitional "shoulder" phase and a period of slower activity. A multi-species model, involving rapidly degradable and slowly degradable material, was developed, leading to a double exponential model of BOD exertion of the form BOD(t) = BOD′u1(1 − e^(−k1·t)) + BOD′u2(1 − e^(−k2·t)), where t is time, BOD′u1 and BOD′u2 are apparent ultimate BOD (BODu) values, and k1 and k2 are rate constants. The model provided an improved description of BOD exertion patterns in anaerobically treated farm dairy wastewater in comparison to a conventional single exponential model, with rapidly degradable rate constant values (k1) ranging from 2.74 to 17.36 d⁻¹, whilst slowly degradable rate constant values (k2) averaged 0.25 d⁻¹ (range 0.20-0.29). Rapidly and slowly degradable apparent BODu estimates ranged from 20 to 140 g/m³ and 225 to 500 g/m³, respectively, giving total BODu levels of 265-620 g/m³. The mean square error in the curve-fitting procedure ranged between 20 and 60 g²/m⁶, with values on average 70% lower (range 31-91%) than those obtained for the single exponential model. When applied to existing data for a range of other wastewaters, the double exponential model demonstrated a superior fit to the conventional single exponential model and provided a marginally better fit than a mixed-order model. It is proposed that the presence of rapidly degradable material may be indicated by the value of the first rate constant (k1) and the time to 95% saturation of the first exponential function. Further model development is required to describe observed transitional and lag phases.
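    Fitting a two-pool model of this kind is a routine nonlinear least-squares problem. The sketch below uses `scipy.optimize.curve_fit` on synthetic data whose parameters are made-up values loosely in the reported ranges; it is an illustration of the model form, not the authors' fitting procedure or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exponential_bod(t, bod_u1, k1, bod_u2, k2):
    """Double exponential BOD model: sum of two first-order exertion terms."""
    return bod_u1 * (1 - np.exp(-k1 * t)) + bod_u2 * (1 - np.exp(-k2 * t))

# Hypothetical "observed" BOD curve (g/m^3 versus days); the true parameters
# below are invented for illustration, not taken from the study.
t = np.linspace(0.1, 20.0, 50)
rng = np.random.default_rng(0)
obs = double_exponential_bod(t, 100.0, 5.0, 400.0, 0.25) + rng.normal(0.0, 5.0, t.size)

popt, _ = curve_fit(double_exponential_bod, t, obs,
                    p0=[50, 2, 300, 0.1], maxfev=10000)
print(popt)  # fitted [BOD_u1, k1, BOD_u2, k2]
```

    Comparing the mean square error of this fit against a single exponential fit on the same data reproduces the kind of model comparison reported in the abstract.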

  4. An effective mesoscopic model of double-stranded DNA.

    PubMed

    Jeon, Jae-Hyung; Sung, Wokyung

    2014-01-01

    Watson and Crick's epochal presentation of the double helix structure in 1953 has paved the way to intense exploration of DNA's vital functions in cells. Also, recent advances in single-molecule techniques have made it possible to probe the structures and mechanics of constrained DNA at length scales ranging from nanometers to microns. There have been a number of atomistic-scale quantum chemical calculations and molecular-level simulations, but they are too computationally demanding or analytically unfeasible to describe DNA conformation and mechanics at mesoscopic levels. At micron scales, on the other hand, the wormlike chain model has been very instrumental in describing DNA mechanics analytically, but it lacks certain molecular details that are essential for describing hybridization, nano-scale confinement, and local denaturation. To fill this fundamental gap, we present a workable and predictive mesoscopic model of double-stranded DNA in which the nucleotide beads constitute the basic degrees of freedom. With the inter-strand stacking given by an interaction between diagonally opposed monomers, the model explains with analytical simplicity the helix formation and produces a generalized wormlike chain model with the concomitant large bending modulus given in terms of the helical structure and stiffness. It also explains how the helical conformation undergoes the overstretch transition to a ladder-like conformation at a force plateau, in agreement with experiment.

  5. Dosimetric comparison of Acuros XB deterministic radiation transport method with Monte Carlo and model-based convolution methods in heterogeneous media

    PubMed Central

    Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas

    2011-01-01

    Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in the bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC were within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in the bone region were up to 1.1, 6.4, and 1.6%; in the lung region they were up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to give better dose predictions than AAA and CCC at tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes at 6 MV and in the lung region for most field sizes at both energies. The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm2 fields (over 26% passed) and in the bone region for 5 × 5 and 10
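    The 2%/2 mm gamma-index criterion used in this comparison can be sketched in one dimension. The following illustrative Python (not the study's 3D implementation) evaluates a global gamma index for hypothetical depth-dose profiles; the profile shapes, names, and tolerances here are assumptions for demonstration.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dist_tol=2.0):
    """Global 1D gamma index: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over the evaluated profile."""
    d_max = d_ref.max()
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        dose_term = (d_eval - dr) / (dose_tol * d_max)   # dose difference, % of max
        dist_term = (x_eval - xr) / dist_tol             # distance to agreement, mm
        gammas.append(np.sqrt(dose_term ** 2 + dist_term ** 2).min())
    return np.array(gammas)

# Hypothetical depth-dose profiles: the evaluated curve is the reference
# shifted by 0.5 mm, which should pass a 2%/2 mm criterion everywhere.
x = np.linspace(0.0, 100.0, 501)      # depth (mm)
ref = np.exp(-x / 60.0)               # toy exponential falloff
ev = np.exp(-(x - 0.5) / 60.0)
g = gamma_1d(x, ref, x, ev)
print((g <= 1.0).mean())              # gamma pass rate
```

    A voxel passes when its gamma value is at most 1; the pass rates quoted in the abstract are the 3D analogue of the fraction printed here.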

  6. Rationale-Augmented Convolutional Neural Networks for Text Classification

    PubMed Central

    Zhang, Ye; Marshall, Iain; Wallace, Byron C.

    2016-01-01

    We present a new Convolutional Neural Network (CNN) model for text classification that jointly exploits labels on documents and their constituent sentences. Specifically, we consider scenarios in which annotators explicitly mark sentences (or snippets) that support their overall document categorization, i.e., they provide rationales. Our model exploits such supervision via a hierarchical approach in which each document is represented by a linear combination of the vector representations of its component sentences. We propose a sentence-level convolutional model that estimates the probability that a given sentence is a rationale, and we then scale the contribution of each sentence to the aggregate document representation in proportion to these estimates. Experiments on five classification datasets that have document labels and associated rationales demonstrate that our approach consistently outperforms strong baselines. Moreover, our model naturally provides explanations for its predictions. PMID:28191551

  7. Generalized double-gradient model of flapping oscillations: Oblique waves

    NASA Astrophysics Data System (ADS)

    Korovinskiy, D. B.; Kiehas, S. A.

    2016-09-01

    The double-gradient model of flapping oscillations is generalized to oblique plane waves propagating in the equatorial plane. It is found that purely longitudinal propagation (ky = 0) is prohibited, while transversal (kx = 0) or nearly transversal waves should possess a maximum frequency, diminishing with the reduction of the |ky/kx| ratio. It turns out that the sausage mode may propagate only in a narrow range of directions, |ky/kx| ≫ 1. A simple analytical expression for the dispersion relation of the kink mode, valid over most of the wave-number range, |ky/kx| < 9, is derived.

  8. Investigating GPDs in the framework of the double distribution model

    NASA Astrophysics Data System (ADS)

    Nazari, F.; Mirjalili, A.

    2016-06-01

    In this paper, we construct the generalized parton distribution (GPD) in terms of the kinematical variables x, ξ, and t, using the double distribution model. By employing these functions, we can extract quantities that make it possible to gain a three-dimensional insight into the nucleon structure at the parton level. The main purpose of GPDs is to combine and generalize the concepts of ordinary parton distributions and form factors. They also provide an exclusive framework to describe the nucleons in terms of quarks and gluons. Here, we first calculate, in the double distribution model, the GPD based on the usual parton distributions arising from the GRV and CTEQ phenomenological models. Obtaining the quark and gluon angular momenta from the GPD, we are able to calculate scattering observables related to spin asymmetries of the produced quarkonium; these quantities are represented by AN and ALS. We also calculate the Pauli and Dirac form factors in deeply virtual Compton scattering. Finally, in order to compare our results with the existing experimental data, we use the difference of the polarized cross-sections for an initial longitudinal leptonic beam and unpolarized target particles (ΔσLU). In all cases, our results are in good agreement with the available experimental data.

  9. Analytical threshold voltage modeling of ion-implanted strained-Si double-material double-gate (DMDG) MOSFETs

    NASA Astrophysics Data System (ADS)

    Goel, Ekta; Singh, Balraj; Kumar, Sanjay; Singh, Kunal; Jit, Satyabrata

    2017-04-01

    Two dimensional threshold voltage model of ion-implanted strained-Si double-material double-gate MOSFETs has been done based on the solution of two dimensional Poisson's equation in the channel region using the parabolic approximation method. Novelty of the proposed device structure lies in the amalgamation of the advantages of both the strained-Si channel and double-material double-gate structure with a vertical Gaussian-like doping profile. The effects of different device parameters (such as device channel length, gate length ratios, germanium mole fraction) and doping parameters (such as projected range, straggle parameter) on threshold voltage of the proposed structure have been investigated. It is observed that the subthreshold performance of the device can be improved by simply controlling the doping parameters while maintaining other device parameters constant. The modeling results show a good agreement with the numerical simulation data obtained by using ATLAS™, a 2D device simulator from SILVACO.

  11. Three-Triplet Model with Double SU(3) Symmetry

    DOE R&D Accomplishments Database

    Han, M. Y.; Nambu, Y.

    1965-01-01

    With a view to avoiding some of the kinematical and dynamical difficulties involved in the single-triplet quark model, a model for the low-lying baryons and mesons based on three triplets with integral charges is proposed, somewhat similar to the two-triplet model introduced earlier by one of us (Y. N.). It is shown that in a U(3) scheme of triplets with integral charges, one is naturally led to three triplets located symmetrically about the origin of the I3-Y diagram under the constraint that the Nishijima-Gell-Mann relation remains intact. A double SU(3) symmetry scheme is proposed in which the large mass splittings between different representations are ascribed to one of the SU(3) groups, while the other SU(3) is the usual one governing the mass splittings within a representation of the first SU(3).

  12. Modeling Flow in Porous Media with Double Porosity/Permeability.

    NASA Astrophysics Data System (ADS)

    Seyed Joodat, S. H.; Nakshatrala, K. B.; Ballarini, R.

    2016-12-01

    Although several continuum models are available to study the flow of fluids in porous media with two pore-networks [1], they lack a firm theoretical basis. In this poster presentation, we will present a mathematical model with a firm thermodynamic basis and a robust computational framework for studying flow in porous media that exhibit double porosity/permeability. The mathematical model will be derived by appealing to the maximization of the rate of dissipation hypothesis, which ensures that the model is in accord with the second law of thermodynamics. We will also present important properties that the solutions under the model satisfy, along with an analytical solution procedure based on the Green's function method. On the computational front, a stabilized mixed finite element formulation will be derived based on the variational multi-scale formalism. The equal-order interpolation, which is computationally the most convenient, is stable under this formulation. The performance of this formulation will be demonstrated using patch tests, a numerical convergence study, and representative problems. It will be shown that the pressure and velocity profiles under the double porosity/permeability model are qualitatively and quantitatively different from the corresponding ones under the classical Darcy equations. Finally, it will be illustrated that the surface pore-structure alone is not sufficient to characterize the flow through a complex porous medium, which makes a case for using advanced characterization tools like micro-CT. References [1] G. I. Barenblatt, I. P. Zheltov, and I. N. Kochina, "Basic concepts in the theory of seepage of homogeneous liquids in fissured rocks [strata]," Journal of Applied Mathematics and Mechanics, vol. 24, pp. 1286-1303, 1960.

  13. The Double Counting Problem in Neighborhood Scale Air Quality Modeling

    NASA Astrophysics Data System (ADS)

    Du, S.; Hughes, V.; Woodhouse, L.; Servin, A.

    2004-12-01

    Air quality varies considerably within megacities. In certain neighborhoods, concentrations of toxic air contaminants (TACs) can be appreciably higher than in other neighborhoods of the same city. These pockets of high concentrations are associated with both transport of TACs from other areas and local emissions. In order to assess the health risks imposed by TACs at the neighborhood scale and to develop abatement strategies, neighborhood-scale air quality modeling is needed. In 1999, the California Air Resources Board (ARB) established the Neighborhood Assessment Program (NAP), a program designed to develop assessment tools for evaluating and understanding air quality in California communities. As part of the Neighborhood Assessment Program, ARB is conducting research on neighborhood-scale modeling methodologies. Two criteria are suggested for selecting a neighborhood-scale air quality modeling system that can be used to assess concentrations of TACs: scientific soundness and balanced computational requirements. The latter criterion ensures that as many interested parties as possible can participate in the process of air quality modeling, so that they have a better understanding of air quality issues and can make the best use of air quality modeling results in their neighborhoods. Based on these two selection criteria, a hybrid approach is recommended. This hybrid approach combines a regional-scale air quality model, to assess the contributions from sources that are not located within the neighborhood of interest, with a microscale model, to assess the impact of the local sources that are within the neighborhood. However, the computational-balance criterion dictates that all sources (both within and outside the neighborhood of interest) must be included in the regional-scale modeling. A potential problem, referred to as double counting, arises because some local sources are included in both the regional-scale and

  14. A Double Scattering Analytical Model For Elastic Recoil Detection Analysis

    SciTech Connect

    Barradas, N. P.; Lorenz, K.; Alves, E.; Darakchieva, V.

    2011-06-01

    We present an analytical model for calculation of double scattering in elastic recoil detection measurements. Only events involving the beam particle and the recoil are considered, i.e. 1) an ion scatters off a target element and then produces a recoil, and 2) an ion produces a recoil which then scatters off a target element. Events involving intermediate recoils are not considered, i.e. when the primary ion produces a recoil which then produces a second recoil. If the recoil element is also present in the stopping foil, recoil events in the stopping foil are also calculated. We included the model in the standard code for IBA data analysis NDF, and applied it to the measurement of hydrogen in Si.

  15. The analysis of VERITAS muon images using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Feng, Qi; Lin, Tony T. Y.; VERITAS Collaboration

    2017-06-01

    Imaging atmospheric Cherenkov telescopes (IACTs) are sensitive to rare gamma-ray photons buried in the background of charged cosmic-ray (CR) particles, whose flux is several orders of magnitude greater. The ability to separate gamma rays from CR particles is important, as it is directly related to the sensitivity of the instrument. This gamma-ray/CR-particle classification problem in IACT data analysis can be treated with rapidly advancing machine learning algorithms, which have the potential to outperform traditional box-cut methods on image parameters. We present preliminary results of a precise classification of a small set of muon events using a convolutional neural network model with the raw images as input features. We also show the possibility of using the convolutional neural network model for regression problems, such as the radius and brightness measurement of muon events, which can be used to calibrate the throughput efficiency of IACTs.

  16. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    PubMed

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library.

  17. NRZ Data Asymmetry Corrector and Convolutional Encoder

    NASA Technical Reports Server (NTRS)

    Pfiffner, H. J.

    1983-01-01

    Circuit compensates for timing, amplitude and symmetry perturbations. Data asymmetry corrector and convolutional encoder regenerate data and clock signals in spite of signal variations such as data or clock asymmetry, phase errors, and amplitude variations, then encode data for transmission.

  18. Parallel architectures for computing cyclic convolutions

    NASA Technical Reports Server (NTRS)

    Yeh, C.-S.; Reed, I. S.; Truong, T. K.

    1983-01-01

    In the paper two parallel architectural structures are developed to compute one-dimensional cyclic convolutions. The first structure is based on the Chinese remainder theorem and Kung's pipelined array. The second structure is a direct mapping from the mathematical definition of a cyclic convolution to a computational architecture. To compute a d-point cyclic convolution the first structure needs d/2 inner product cells, while the second structure and Kung's linear array require d cells. However, to compute a cyclic convolution, the second structure requires less time than both the first structure and Kung's linear array. Another application of the second structure is to multiply a Toeplitz matrix by a vector. A table is listed to compare these two structures and Kung's linear array. Both structures are simple and regular and are therefore suitable for VLSI implementation.
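    The mathematical definition that the second structure maps from is the d-point cyclic convolution, y[n] = Σ_k a[k]·b[(n−k) mod d]. A minimal NumPy sketch (illustrative only, unrelated to the VLSI arrays themselves) computes it directly from this definition and, equivalently, via the DFT:

```python
import numpy as np

def cyclic_convolution_direct(a, b):
    """d-point cyclic convolution straight from the definition:
    y[n] = sum_k a[k] * b[(n - k) mod d]."""
    d = len(a)
    return np.array([sum(a[k] * b[(n - k) % d] for k in range(d)) for n in range(d)])

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([5.0, 6.0, 7.0, 8.0])

direct = cyclic_convolution_direct(a, b)
via_dft = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))  # convolution theorem

print(direct)  # → [66. 68. 66. 60.]
```

    The same definition, with b the first column of a Toeplitz-circulant matrix, gives the matrix-vector product mentioned as the second structure's other application.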

  20. Utilization of low-redundancy convolutional codes

    NASA Technical Reports Server (NTRS)

    Cain, J. B.

    1973-01-01

    This paper suggests guidelines for the utilization of low-redundancy convolutional codes with emphasis on providing a quick look capability (no decoding) and a moderate amount of coding gain. The performance and implementation complexity of threshold, Viterbi, and sequential decoding when used with low-redundancy, systematic, convolutional codes is discussed. An extensive list of optimum, short constraint length codes is found for use with Viterbi decoding, and several good, long constraint length codes are found for use with sequential decoding.

  1. A note on cubic convolution interpolation.

    PubMed

    Meijering, Erik; Unser, Michael

    2003-01-01

    We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.
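    The cubic convolution schemes referred to here are conventionally implemented with Keys' piecewise-cubic kernel (free parameter a = −1/2). The sketch below is a generic illustration, not the authors' code; `interpolate` assumes uniformly spaced samples with two neighbours available on each side of the query point.

```python
import numpy as np

def keys_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel; a = -1/2 gives third-order accuracy."""
    s = abs(s)
    if s <= 1.0:
        return (a + 2.0) * s**3 - (a + 3.0) * s**2 + 1.0
    if s < 2.0:
        return a * s**3 - 5.0 * a * s**2 + 8.0 * a * s - 4.0 * a
    return 0.0

def interpolate(samples, x):
    """Interpolate uniformly spaced samples at x (two neighbours per side)."""
    i = int(np.floor(x))
    return sum(samples[i + m] * keys_kernel(x - (i + m)) for m in range(-1, 3))

f = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]   # samples of x^2 at the integers
print(interpolate(f, 2.5))             # → 6.25 (exact for quadratics when a = -1/2)
```

    With a = −1/2 the scheme reproduces quadratic polynomials exactly, which is the property behind its third-order accuracy.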

  2. Convolution of large 3D images on GPU and its decomposition

    NASA Astrophysics Data System (ADS)

    Karas, Pavel; Svoboda, David

    2011-12-01

    In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
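    The convolution-theorem approach that the paper accelerates can be sketched on CPU with NumPy (a stand-in for the CUDA implementation; the function name and sizes are illustrative): zero-pad both arrays to the full linear-convolution size, multiply their spectra, and invert.

```python
import numpy as np

def fft_convolve3d(image, kernel):
    """Linear 3D convolution via the convolution theorem: zero-pad both
    arrays to the full output size, multiply spectra, invert."""
    shape = [s1 + s2 - 1 for s1, s2 in zip(image.shape, kernel.shape)]
    spectrum = np.fft.rfftn(image, shape) * np.fft.rfftn(kernel, shape)
    return np.fft.irfftn(spectrum, shape)

rng = np.random.default_rng(1)
img = rng.random((8, 8, 8))
ker = rng.random((3, 3, 3))

out = fft_convolve3d(img, ker)
print(out.shape)  # → (10, 10, 10)
```

    For images too large for GPU memory, the decimation-in-frequency decomposition the paper describes splits this transform into independent sub-problems; the sketch above shows only the undecomposed baseline.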

  3. A hybrid double-observer sightability model for aerial surveys

    USGS Publications Warehouse

    Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine

    2013-01-01

    Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.

  4. Double resonance in the infinite-range quantum Ising model.

    PubMed

    Han, Sung-Guk; Um, Jaegon; Kim, Beom Jun

    2012-08-01

    We study quantum resonance behavior of the infinite-range kinetic Ising model at zero temperature. Numerical integration of the time-dependent Schrödinger equation in the presence of an external magnetic field in the z direction is performed at various transverse field strengths g. It is revealed that two resonance peaks occur when the energy gap matches the external driving frequency at two distinct values of g, one below and the other above the quantum phase transition. From the similar observations already made in classical systems with phase transitions, we propose that the double resonance peaks should be a generic feature of continuous transitions, for both quantum and classical many-body systems.

  5. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.

  6. Statistical Downscaling using Super Resolution Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Vandal, T.; Ganguly, S.; Ganguly, A. R.; Kodra, E.

    2016-12-01

    We present a novel approach to statistical downscaling using image super-resolution and convolutional neural networks. Image super-resolution (SR), a widely researched topic in the machine learning community, aims to increase the resolution of low resolution images, similar to the goal of downscaling Global Circulation Models (GCMs). With SR we are able to capture and generalize spatial patterns in the climate by representing each climate state as an "image". In particular, we show the applicability of Super Resolution Convolutional Neural Networks (SRCNN) to downscaling daily precipitation in the United States. SRCNN is a state-of-the-art single image SR method and has the advantage of utilizing multiple input variables, known as channels. We apply SRCNN to downscaling precipitation by using low resolution precipitation and high resolution elevation as inputs and compare to bias correction spatial disaggregation (BCSD).

  8. Deep Convolutional Neural Network for Inverse Problems in Imaging

    NASA Astrophysics Data System (ADS)

    Jin, Kyong Hwan; McCann, Michael T.; Froustey, Emmanuel; Unser, Michael

    2017-09-01

    In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by point-wise non-linearity) when the normal operator (H*H, the adjoint of H times H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel-beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 × 512 image on GPU.

  9. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    PubMed

    Serang, Oliver

    2015-08-01

Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk^2 to nk log(k), and has potential application to the all-pairs shortest paths problem.
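
    The numerical trick can be sketched as follows: since a p-norm approaches the maximum as p grows, the max-convolution can be estimated with one ordinary (FFT-friendly) convolution of the element-wise p-th powers, followed by a p-th root. The parameter choice below is illustrative; the published method additionally normalizes to control numerical range.

```python
import numpy as np

def numerical_max_convolution(x, y, p=64.0):
    """Estimate z[k] = max_m x[m] * y[k - m] for nonnegative x, y.
    conv(x**p, y**p)[k] is the p-th power of the p-norm of the sequence
    being maximised, so its p-th root approaches the true max as p grows."""
    z_p = np.convolve(x ** p, y ** p)  # computable in O(k log k) via FFT
    return z_p ** (1.0 / p)

def exact_max_convolution(x, y):
    """Brute-force O(k^2) reference for comparison."""
    n = len(x) + len(y) - 1
    return np.array([max(x[m] * y[k - m]
                         for m in range(len(x)) if 0 <= k - m < len(y))
                     for k in range(n)])
```

    The estimate always overshoots by at most a factor k**(1/p), which shrinks toward 1 as p increases.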

  10. On the growth and form of cortical convolutions

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.

    2016-06-01

The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. Altogether, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.

  11. Event Discrimination using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Menon, Hareesh; Hughes, Richard; Daling, Alec; Winer, Brian

    2017-01-01

    Convolutional Neural Networks (CNNs) are computational models that have been shown to be effective at classifying different types of images. We present a method to use CNNs to distinguish events involving the production of a top quark pair and a Higgs boson from events involving the production of a top quark pair and several quark and gluon jets. To do this, we generate and simulate data using MADGRAPH and DELPHES for a general purpose LHC detector at 13 TeV. We produce images using a particle flow algorithm by binning the particles geometrically based on their position in the detector and weighting the bins by the energy of each particle within each bin, and by defining channels based on particle types (charged track, neutral hadronic, neutral EM, lepton, heavy flavor). Our classification results are competitive with standard machine learning techniques. We have also looked into the classification of the substructure of the events, in a process known as scene labeling. In this context, we look for the presence of boosted objects (such as top quarks) with substructure encompassed within single jets. Preliminary results on substructure classification will be presented.

  12. Do Convolutional Neural Networks Learn Class Hierarchy?

    PubMed

    Alsallakh, Bilal; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2017-08-29

Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation to CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the later layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  13. Medical image fusion using the convolution of Meridian distributions.

    PubMed

    Agrawal, Mayank; Tsakalides, Panagiotis; Achim, Alin

    2010-01-01

    The aim of this paper is to introduce a novel non-Gaussian statistical model-based approach for medical image fusion based on the Meridian distribution. The paper also includes a new approach to estimate the parameters of generalized Cauchy distribution. The input images are first decomposed using the Dual-Tree Complex Wavelet Transform (DT-CWT) with the subband coefficients modelled as Meridian random variables. Then, the convolution of Meridian distributions is applied as a probabilistic prior to model the fused coefficients, and the weights used to combine the source images are optimised via Maximum Likelihood (ML) estimation. The superior performance of the proposed method is demonstrated using medical images.

  14. Uncertainty estimation by convolution using spatial statistics.

    PubMed

    Sanchez-Brea, Luis Miguel; Bernabeu, Eusebio

    2006-10-01

Kriging has proven to be a useful tool in image processing since it behaves, under regular sampling, as a convolution. Convolution kernels obtained with kriging allow noise filtering and include the effects of the random fluctuations of the experimental data and the resolution of the measuring devices. The uncertainty at each location of the image can also be determined using kriging. However, this procedure is slow since, currently, only matrix methods are available. In this work, we compare the way kriging performs the uncertainty estimation with the standard statistical technique for magnitudes without spatial dependence. As a result, we propose a much faster technique, based on the variogram, to determine the uncertainty using a convolutional procedure. We check the validity of this approach by applying it to one-dimensional images obtained in diffractometry and two-dimensional images obtained by shadow moiré.

  15. Astronomical Image Subtraction by Cross-Convolution

    NASA Astrophysics Data System (ADS)

    Yuan, Fang; Akerlof, Carl W.

    2008-04-01

    In recent years, there has been a proliferation of wide-field sky surveys to search for a variety of transient objects. Using relatively short focal lengths, the optics of these systems produce undersampled stellar images often marred by a variety of aberrations. As participants in such activities, we have developed a new algorithm for image subtraction that no longer requires high-quality reference images for comparison. The computational efficiency is comparable with similar procedures currently in use. The general technique is cross-convolution: two convolution kernels are generated to make a test image and a reference image separately transform to match as closely as possible. In analogy to the optimization technique for generating smoothing splines, the inclusion of an rms width penalty term constrains the diffusion of stellar images. In addition, by evaluating the convolution kernels on uniformly spaced subimages across the total area, these routines can accommodate point-spread functions that vary considerably across the focal plane.
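
    The symmetry underlying cross-convolution can be sketched in 1-D: if the test and reference images are the same scene blurred by different PSFs, then convolving each image with the other's kernel makes the two results identical. The real algorithm solves a least-squares problem for the two kernels rather than assuming known PSFs, so this is only the exact-match limit.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random(64)                           # the "true" sky
psf_ref = np.array([0.25, 0.5, 0.25])            # reference-image PSF
psf_test = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # test-image PSF

ref = np.convolve(scene, psf_ref)
test = np.convolve(scene, psf_test)

# cross-convolution: blur each image with the *other* image's kernel;
# by commutativity both sides equal scene * psf_ref * psf_test
lhs = np.convolve(ref, psf_test)
rhs = np.convolve(test, psf_ref)
assert np.allclose(lhs, rhs)
```

    The difference image lhs - rhs is then zero except where a transient has changed the scene between the two exposures.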

  16. Cyclic Cocycles on Twisted Convolution Algebras

    NASA Astrophysics Data System (ADS)

    Angel, Eitan

    2013-01-01

    We give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. For proper étale groupoids, Tu and Xu (Adv Math 207(2):455-483, 2006) provide a map between the periodic cyclic cohomology of a gerbe-twisted convolution algebra and twisted cohomology groups which is similar to the construction of Mathai and Stevenson (Adv Math 200(2):303-335, 2006). When the groupoid is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial techniques to construct a simplicial curvature 3-form representing the class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial curvature 3-form to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.

  17. Image reconstruction by parametric cubic convolution

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Schowengerdt, R. A.

    1983-01-01

Cubic convolution, which has been discussed by Rifman and McKinnon (1974), was originally developed for the reconstruction of Landsat digital images. In the present investigation, the reconstruction properties of the one-parameter family of cubic convolution interpolation functions are considered and the image degradation associated with reasonable choices of this parameter is analyzed. With the aid of an analysis in the frequency domain it is demonstrated that in an image-independent sense there is an optimal value for this parameter. The optimal value is not the standard value commonly referenced in the literature. It is also demonstrated that in an image-dependent sense, cubic convolution can be adapted to any class of images characterized by a common energy spectrum.
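
    The one-parameter kernel family can be written down directly; a is the free parameter, with a = -1 the older standard choice and a = -0.5 often cited as the image-independent optimum. This sketch only checks the basic interpolation properties (unit value at the origin, zeros at the other sample points, partition of unity).

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """One-parameter cubic convolution interpolation kernel.
    h(0) = 1, h(n) = 0 at all other integers, and support is |x| < 2."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m1 = x <= 1
    m2 = (x > 1) & (x < 2)
    out[m1] = (a + 2) * x[m1]**3 - (a + 3) * x[m1]**2 + 1
    out[m2] = a * (x[m2]**3 - 5 * x[m2]**2 + 8 * x[m2] - 4)
    return out
```

    To interpolate a sampled signal at fractional position x, one takes the weighted sum of the four nearest samples with weights cubic_kernel(x - n).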

  18. Double piezoelectric energy harvesting cell: modeling and experimental verification

    NASA Astrophysics Data System (ADS)

    Wang, Xianfeng; Shi, Zhifei

    2017-06-01

In this paper, a novel energy transducer named double piezoelectric energy harvesting cell (DPEHC) consisting of two flex-compressive piezoelectric energy harvesting cells (F-C PEHCs) is proposed. Initially, two F-C PEHCs, a kind of cymbal-type energy transducer, were assembled sharing the same end simply so that the device could be placed stably. However, during an open-circuit voltage test, additional energy harvesting performance of the DPEHC prototype appeared. Taking the interaction between the two F-C PEHCs into account, a mechanical model for analyzing the DPEHC is established. The electric output of the DPEHC under harmonic excitation is obtained theoretically and verified experimentally, and good agreement is found. In addition, as an inverse problem, the method for identifying the key mechanical parameters of the DPEHC is recommended. Finally, the additional energy harvesting performance of the DPEHC is quantitatively discussed. Numerical results show that the additional energy harvesting performance of the DPEHC is correlated with the key mechanical parameters of the DPEHC. For the present DPEHC prototype, the energy harvesting addition is over 400% compared with two independent F-C PEHCs under the same load condition.

  19. Applying the Post-Modern Double ABC-X Model to Family Food Insecurity

    ERIC Educational Resources Information Center

    Hutson, Samantha; Anderson, Melinda; Swafford, Melinda

    2015-01-01

    This paper develops the argument that using the Double ABC-X model in family and consumer sciences (FCS) curricula is a way to educate nutrition and dietetics students regarding a family's perceptions of food insecurity. The Double ABC-X model incorporates ecological theory as a basis to explain family stress and the resulting adjustment and…
  1. Colonoscopic polyp detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Park, Sun Young; Sargent, Dusty

    2016-03-01

Computer aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician's interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domain-specific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general application and require a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors when compared to the eigenimage model. The CNN features are used as input to the CRF classifier as before.
We report

  2. Multihop optical network with convolutional coding

    NASA Astrophysics Data System (ADS)

    Chien, Sufong; Takahashi, Kenzo; Prasad Majumder, Satya

    2002-01-01

We evaluate the bit-error-rate (BER) performance of a multihop optical ShuffleNet with and without convolutional coding. Computed results show that coding yields considerable improvement in network performance, in terms of an increased number of traversable hops for a given transmitter power at a given BER. For a rate-1/2 convolutional code with constraint length K = 9 at BER = 10^-9, the hop gains are found to be 20 hops for hot-potato routing and 7 hops for single-buffer routing at a transmitter power of 0 dBm. The hop gain can be further increased by increasing the transmitter power.
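
    A rate-1/2 convolutional encoder is straightforward to sketch. For brevity this toy uses constraint length K = 3 with the common generators 7 and 5 (octal), not the paper's K = 9 code; each input bit produces two output bits from modulo-2 sums over a shift register.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Toy rate-1/2 convolutional encoder, constraint length K = 3.
    g1, g2 are the generator taps (7 and 5 in octal notation)."""
    state = [0] * (len(g1) - 1)   # shift-register memory
    out = []
    for b in bits:
        reg = [b] + state          # newest bit first
        out.append(sum(c * r for c, r in zip(g1, reg)) % 2)
        out.append(sum(c * r for c, r in zip(g2, reg)) % 2)
        state = reg[:-1]           # shift
    return out
```

    The redundancy added here (two coded bits per data bit) is what a Viterbi decoder exploits at the receiver to correct channel errors.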

  3. Fast convolution algorithms for SAR processing

    NASA Astrophysics Data System (ADS)

Dall, Jørgen

    Most high resolution SAR processors apply the Fast Fourier Transform (FFT) to implement convolution by a matched filter impulse response. However, a lower computational complexity is attainable with other algorithms which accordingly have the potential of offering faster and/or simpler processors. Thirteen different fast transform and convolution algorithms are presented, and their characteristics are compared with the fundamental requirements imposed on the algorithms by various SAR processing schemes. The most promising algorithm is based on a Fermat Number Transform (FNT). SAR-580 and SEASAT SAR images have been successfully processed with the FNT, and in this connection the range curvature correction, noise properties and processing speed are discussed.

  4. FPT Algorithm for Two-Dimensional Cyclic Convolutions

    NASA Technical Reports Server (NTRS)

    Truong, Trieu-Kie; Shao, Howard M.; Pei, D. Y.; Reed, Irving S.

    1987-01-01

    Fast-polynomial-transform (FPT) algorithm computes two-dimensional cyclic convolution of two-dimensional arrays of complex numbers. New algorithm uses cyclic polynomial convolutions of same length. Algorithm regular, modular, and expandable.
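
    The operation the FPT computes, 2-D cyclic convolution, can be cross-checked against a straightforward FFT implementation. Note the FPT's advantage is exact integer arithmetic, which this floating-point sketch does not share.

```python
import numpy as np

def cyclic_conv2d(a, b):
    """2-D cyclic (circular) convolution via the 2-D FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def cyclic_conv2d_direct(a, b):
    """Direct O(n^2 m^2) definition, for verification."""
    n, m = a.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for k in range(n):
                for l in range(m):
                    out[i, j] += a[k, l] * b[(i - k) % n, (j - l) % m]
    return out
```

    Both implementations wrap indices modulo the array dimensions, which is what distinguishes cyclic from linear convolution.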

  5. Mechanisms of circumferential gyral convolution in primate brains.

    PubMed

    Zhang, Tuo; Razavi, Mir Jalil; Chen, Hanbo; Li, Yujie; Li, Xiao; Li, Longchuan; Guo, Lei; Hu, Xiaoping; Liu, Tianming; Wang, Xianqiao

    2017-06-01

    Mammalian cerebral cortices are characterized by elaborate convolutions. Radial convolutions exhibit homology across primate species and generally are easily identified in individuals of the same species. In contrast, circumferential convolutions vary across species as well as individuals of the same species. However, systematic study of circumferential convolution patterns is lacking. To address this issue, we utilized structural MRI (sMRI) and diffusion MRI (dMRI) data from primate brains. We quantified cortical thickness and circumferential convolutions on gyral banks in relation to axonal pathways and density along the gray matter/white matter boundaries. Based on these observations, we performed a series of computational simulations. Results demonstrated that the interplay of heterogeneous cortex growth and mechanical forces along axons plays a vital role in the regulation of circumferential convolutions. In contrast, gyral geometry controls the complexity of circumferential convolutions. These findings offer insight into the mystery of circumferential convolutions in primate brains.

  6. Continuous speech recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong

    2015-07-01

Convolutional Neural Networks (CNNs), which showed success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have been proven successful in many speech recognition tasks, CNNs can reduce NN model sizes significantly while achieving even better recognition accuracies. Experiments on the standard TIMIT speech corpus showed that CNNs outperformed DNNs in terms of accuracy, even with smaller model sizes.

  7. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to those codes. These concepts are then used to demonstrate, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.

  8. Number-Theoretic Functions via Convolution Rings.

    ERIC Educational Resources Information Center

    Berberian, S. K.

    1992-01-01

    Demonstrates the number theory property that the number of divisors of an integer n times the number of positive integers k, less than or equal to and relatively prime to n, equals the sum of the divisors of n using theory developed about multiplicative functions, the units of a convolution ring, and the Mobius Function. (MDH)
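
    The identity in question, that the divisor-count function Dirichlet-convolved with Euler's totient gives the sum-of-divisors function (τ * φ = σ), is easy to verify computationally with a direct implementation of the convolution ring product; the helper names below are ours.

```python
from math import gcd

def dirichlet(f, g, n):
    """Product in the convolution ring: (f * g)(n) = sum_{d | n} f(d) g(n/d)."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def tau(n):    # number of divisors of n
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def phi(n):    # count of 1 <= k <= n relatively prime to n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def sigma(n):  # sum of divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)
```

    The identity follows from τ = 1 * 1 and φ * 1 = Id, so τ * φ = 1 * Id = σ in the ring.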

  9. Convolutions and Their Applications in Information Science.

    ERIC Educational Resources Information Center

    Rousseau, Ronald

    1998-01-01

    Presents definitions of convolutions, mathematical operations between sequences or between functions, and gives examples of their use in information science. In particular they can be used to explain the decline in the use of older literature (obsolescence) or the influence of publication delays on the aging of scientific literature. (Author/LRW)

  10. VLSI Unit for Two-Dimensional Convolutions

    NASA Technical Reports Server (NTRS)

    Liu, K. Y.

    1983-01-01

    Universal logic structure allows same VLSI chip to be used for variety of computational functions required for two dimensional convolutions. Fast polynomial transform technique is extended into tree computational structure composed of two units: fast polynomial transform (FPT) unit and Chinese remainder theorem (CRT) computational unit.

  11. Star integrals, convolutions and simplices

    NASA Astrophysics Data System (ADS)

    Nandan, Dhritiman; Paulos, Miguel F.; Spradlin, Marcus; Volovich, Anastasia

    2013-05-01

We explore single and multi-loop conformal integrals, such as the ones appearing in dual conformal theories in flat space. Using Mellin amplitudes, a large class of higher loop integrals can be written as simple integro-differential operators on star integrals: one-loop n-gon integrals in n dimensions. These are known to be given by volumes of hyperbolic simplices. We explicitly compute the five-dimensional pentagon integral in full generality using Schläfli's formula. Then, as a first step to understanding higher loops, we use spline technology to construct explicitly the 6d hexagon and 8d octagon integrals in two-dimensional kinematics. The fully massive hexagon and octagon integrals are then related to the double box and triple box integrals respectively. We comment on the classes of functions needed to express these integrals in general kinematics, involving elliptic functions and beyond.

  12. Coronary artery calcification (CAC) classification with deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Xiuming; Wang, Shice; Deng, Yufeng; Chen, Kuan

    2017-03-01

Coronary artery calcification (CAC) is a typical marker of coronary artery disease, which is one of the biggest causes of mortality in the U.S. This study evaluates the feasibility of using a deep convolutional neural network (DCNN) to automatically detect CAC in X-ray images. 1768 posteroanterior (PA) view chest X-ray images from Sichuan Province People's Hospital, China were collected retrospectively. Each image is associated with a corresponding diagnostic report written by a trained radiologist (907 normal, 861 diagnosed with CAC). One-quarter of the images were randomly selected as test samples; the rest were used as training samples. DCNN models consisting of 2, 4, 6, and 8 convolutional layers were designed using blocks of pre-designed CNN layers. Each block was implemented in Theano with Graphics Processing Units (GPU). Human-in-the-loop learning was also performed on a subset of 165 images with framed arteries by trained physicians. The results from the DCNN models were compared to the diagnostic reports. The average diagnostic accuracies for models with 2, 4, 6, and 8 layers were 0.85, 0.87, 0.88, and 0.89 respectively. The areas under the curve (AUC) were 0.92, 0.95, 0.95, and 0.96. As the model grows deeper, the AUC and diagnostic accuracies did not show statistically significant changes. The results of this study indicate that DCNN models have promising potential in the field of intelligent medical image diagnosis practice.

  13. A fast convolution-based methodology to simulate 2-D/3-D cardiac ultrasound images.

    PubMed

    Gao, Hang; Choi, Hon Fai; Claus, Piet; Boonen, Steven; Jaecques, Siegfried; Van Lenthe, G Harry; Van der Perre, Georges; Lauriks, Walter; D'hooge, Jan

    2009-02-01

    This paper describes a fast convolution-based methodology for simulating ultrasound images in a 2-D/3-D sector format as typically used in cardiac ultrasound. The conventional convolution model is based on the assumption of a space-invariant point spread function (PSF) and typically results in linear images. These characteristics are not representative for cardiac data sets. The spatial impulse response method (IRM) has excellent accuracy in the linear domain; however, calculation time can become an issue when scatterer numbers become significant and when 3-D volumetric data sets need to be computed. As a solution to these problems, the current manuscript proposes a new convolution-based methodology in which the data sets are produced by reducing the conventional 2-D/3-D convolution model to multiple 1-D convolutions (one for each image line). As an example, simulated 2-D/3-D phantom images are presented along with their gray scale histogram statistics. In addition, the computation time is recorded and contrasted to a commonly used implementation of IRM (Field II). It is shown that COLE can produce anatomically plausible images with local Rayleigh statistics but at improved calculation time (1200 times faster than the reference method).
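
    The reduction from one 2-D/3-D convolution to multiple 1-D convolutions (one per image line) can be sketched as follows. This is an illustrative simplification: the actual method additionally handles the sector geometry and spatially varying point spread function of cardiac ultrasound, and the function name is ours.

```python
import numpy as np

def linewise_rf_image(scatterers, axial_psf):
    """Simulate an RF image line by line: each image line is the 1-D
    convolution of that line's scatterer amplitudes with an axial PSF,
    instead of one full 2-D convolution over the whole field."""
    return np.stack([np.convolve(line, axial_psf, mode="same")
                     for line in scatterers])
```

    Because each line is independent, the cost scales with the number of lines times a 1-D convolution, which is what makes the approach fast for 3-D volumes.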

  14. Convolution theorems: partitioning the space of integral transforms

    NASA Astrophysics Data System (ADS)

    Lindsey, Alan R.; Suter, Bruce W.

    1999-03-01

Investigating a number of different integral transforms uncovers distinct patterns in the type of translation convolution theorems afforded by each. It is shown that transforms based on separable kernels (aka Fourier, Laplace and their relatives) have a form of the convolution theorem providing for a transform domain product of the convolved functions. However, transforms based on kernels not separable in the function and transform variables mandate a convolution theorem of a different type; namely, in the transform domain the convolution becomes another convolution: one function with the transform of the other.
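
    The separable-kernel case is easy to verify numerically: for the discrete Fourier transform, the transform of a circular convolution equals the point-wise product of the transforms.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
x, y = rng.random(n), rng.random(n)

# transform-domain product form of the convolution theorem
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

# direct circular convolution from the definition
direct = np.array([sum(x[m] * y[(k - m) % n] for m in range(n))
                   for k in range(n)])
assert np.allclose(via_fft, direct)
```

    For non-separable kernels no such product form exists, which is what forces the second, convolution-valued form of the theorem described above.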

  15. Effectiveness of Convolutional Code in Multipath Underwater Acoustic Channel

    NASA Astrophysics Data System (ADS)

    Park, Jihyun; Seo, Chulwon; Park, Kyu-Chil; Yoon, Jong Rak

    2013-07-01

Forward error correction (FEC) is achieved by adding redundancy to the transmitted information. Convolutional coding with Viterbi decoding is a typical FEC technique for channels corrupted by additive white Gaussian noise, but the effectiveness of convolutional coding is questionable in a multipath frequency-selective fading channel. In this paper, we examine how convolutional coding performs in an underwater multipath channel. Bit error rates (BER) with and without a rate-1/2 convolutional code are analyzed as a function of channel bandwidth, which parameterizes frequency selectivity. It is found that convolutional coding performs well in the non-selective channel and is also effective in the selective channel.

  16. Convolutional neural network architectures for predicting DNA–protein binding

    PubMed Central

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608

  17. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm.

    PubMed

    Abbasi, Mahdi; Naghsh-Nilchi, Ahmad-Reza

    2012-06-20

    Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of the human organs such as lungs, heart, brain and breast. Each practical EIT reconstruction algorithm should be efficient enough in terms of convergence rate, and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in D-bar framework. At the first step, synthetic and experimental data were used to compute an intermediate object named scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via sinc-convolution algorithm to find the square root of the conductivity for each pixel of image. For the purpose of comparison, multigrid and NOSER algorithms were implemented under a similar setting. Quality of reconstructions of synthetic models was tested against GREIT approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Evaluation of synthetic reconstructions shows that the quality of sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape-deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the least degree of relative errors and the most degree of truth were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by sinc-convolution algorithm. Parametric evaluation demonstrates the efficiency of sinc-convolution to reconstruct accurate conductivity images from experimental data. 
Excellent results in phantom and clinical reconstructions using sinc-convolution support the parametric assessment results.

  18. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm

    PubMed Central

    2012-01-01

    Background: Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of human organs such as the lungs, heart, brain and breast. A practical EIT reconstruction algorithm must be efficient in terms of both convergence rate and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in the D-bar framework. Methods: In the first step, synthetic and experimental data were used to compute an intermediate object called the scattering transform. Next, this object was used in a two-dimensional integral equation, which was precisely and rapidly solved via the sinc-convolution algorithm to find the square root of the conductivity at each pixel of the image. For comparison, the multigrid and NOSER algorithms were implemented under a similar setting. The quality of reconstructions of synthetic models was tested against GREIT-approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Results: Evaluation of the synthetic reconstructions shows that the quality of sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the lowest relative errors and the highest fidelity were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by the sinc-convolution algorithm. Conclusions: Parametric evaluation demonstrates the efficiency of sinc-convolution in reconstructing accurate conductivity images from experimental data. Excellent results in phantom and clinical reconstructions using sinc-convolution support the parametric assessment results.
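    For orientation, the D-bar framework referenced here feeds the scattering transform t(k) into a D-bar (or equivalent integral) equation whose solution at k = 0 gives the square root of the conductivity. In the standard notation of the D-bar literature (the symbols below are not spelled out in the record itself):

\[
\bar\partial_k\, \mu(z,k) \;=\; \frac{t(k)}{4\pi\bar{k}}\, e_{-z}(k)\, \overline{\mu(z,k)},
\qquad
e_{-z}(k) \;=\; \exp\!\bigl(-i(kz+\bar{k}\bar{z})\bigr),
\]

or, in the integral form that sinc-convolution methods discretize,

\[
\mu(z,k) \;=\; 1 \;+\; \frac{1}{4\pi^2}\int_{\mathbb{R}^2}
\frac{t(k')}{(k-k')\,\bar{k}'}\, e_{-z}(k')\, \overline{\mu(z,k')}\; dk_1'\, dk_2',
\qquad
\sqrt{\sigma(z)} \;=\; \mu(z,0).
\]

    Solving this equation on a grid of pixels z, as the abstract describes, yields the conductivity image directly from μ(z, 0).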

  19. Deep Convolutional Neural Networks for large-scale speech tasks.

    PubMed

    Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana

    2015-04-01

    Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations that exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech than Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks; specifically, we focus on how many convolutional layers are needed, how many hidden units are appropriate, and which pooling strategy works best. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results on these 3 tasks. Copyright © 2014 Elsevier Ltd. All rights reserved.
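    The two CNN building blocks the abstract weighs for acoustic modeling, convolutional layers and pooling, can be sketched in a few lines of NumPy on a spectrogram-like input. Array shapes here are illustrative, not taken from the paper:

```python
import numpy as np

def conv2d_valid(x, k):
    """Valid-mode 2D convolution (really cross-correlation, as in CNNs)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, p):
    """Non-overlapping p x p max pooling (truncates ragged edges)."""
    H, W = x.shape
    H2, W2 = H // p, W // p
    return x[:H2 * p, :W2 * p].reshape(H2, p, W2, p).max(axis=(1, 3))

rng = np.random.default_rng(0)
spec = rng.standard_normal((40, 100))    # 40 mel bands x 100 frames (hypothetical)
kernel = rng.standard_normal((9, 9))     # one learnable filter
feat = np.maximum(conv2d_valid(spec, kernel), 0.0)   # convolution + ReLU
pooled = max_pool(feat, 3)               # pooling reduces spectral variation
print(pooled.shape)                      # → (10, 30)
```

    Pooling over the frequency axis is what gives the invariance to small spectral shifts (e.g. speaker differences) that the authors exploit.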

  20. Semi-analytical model for quasi-double-layer surface electrode ion traps

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Chen, Shuming; Wang, Yaohua

    2016-11-01

    To realize scalable quantum processors, the surface-electrode ion trap is an effective approach, with single-layer, double-layer, and quasi-double-layer variants. To calculate critical trap parameters such as the trap center and trap depth, finite element method (FEM) simulation has been widely used; however, it is time consuming, and it cannot directly exhibit the relationship between the electrode geometry and these parameters. To eliminate these problems, House and Madsen et al. provided analytic models for single-layer and double-layer traps, respectively. In this paper, we propose a semi-analytical model for quasi-double-layer traps. This model can be applied to calculate the critical trap parameters above during the trap design process. With it, we can quickly and precisely find the optimum electrode geometry in various cases.
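    As an illustration of the analytic approach by House that the abstract builds on, the potential above a rectangular electrode held at voltage V in an otherwise grounded plane has a closed form (the gapless-plane approximation). The function below is a sketch of that single-electrode formula, not the authors' semi-analytical model for quasi-double-layer traps:

```python
import numpy as np

def rect_electrode_phi(x, y, z, x1, x2, y1, y2, V=1.0):
    """Potential at (x, y, z) above a rectangular electrode [x1,x2]x[y1,y2]
    held at voltage V in an otherwise grounded plane z = 0 (gapless
    approximation, cf. House's analytic model). Proportional to the solid
    angle the electrode subtends at the field point."""
    def term(xe, ye):
        return np.arctan((xe - x) * (ye - y) /
                         (z * np.sqrt((xe - x)**2 + (ye - y)**2 + z**2)))
    return V / (2 * np.pi) * (term(x2, y2) - term(x1, y2)
                              - term(x2, y1) + term(x1, y1))
```

    Summing such terms over all electrodes gives the full trap potential, from which trap center and depth follow by minimization; close above the electrode the potential approaches V, and it decays with height, as expected.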

  1. A convolutional neural network neutrino event classifier

    NASA Astrophysics Data System (ADS)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  2. A Construction of MDS Quantum Convolutional Codes

    NASA Astrophysics Data System (ADS)

    Zhang, Guanghui; Chen, Bocong; Li, Liangchen

    2015-09-01

    In this paper, two new families of MDS quantum convolutional codes are constructed. The first one can be regarded as a generalization of [36, Theorem 6.5], in the sense that we do not assume that q ≡ 1 (mod 4). More specifically, we obtain two classes of MDS quantum convolutional codes with parameters: (i) [(q^2+1, q^2-4i+3, 1; 2, 2i+2)]_q, where q ≥ 5 is an odd prime power and 2 ≤ i ≤ (q-1)/2; (ii) , where q is an odd prime power of the form q = 10m+3 or 10m+7 (m ≥ 2), and 2 ≤ i ≤ 2m-1.

  3. Performance of convolutionally coded unbalanced QPSK systems

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1980-01-01

    An evaluation is presented of the performance of three representative convolutionally coded unbalanced quadri-phase-shift-keying (UQPSK) systems in the presence of noisy carrier reference and crosstalk. The use of a coded UQPSK system for transmitting two telemetry data streams with different rates and different powers has been proposed for the Venus Orbiting Imaging Radar mission. Analytical expressions for bit error rates in the presence of a noisy carrier phase reference are derived for three representative cases: (1) I and Q channels are coded independently; (2) I channel is coded, Q channel is uncoded; and (3) I and Q channels are coded by a common rate-1/2 code. For rate-1/2 convolutional codes, QPSK modulation can be used to reduce the bandwidth requirement.
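    A common textbook way to model a noisy carrier reference, which may differ from this paper's exact expressions, is to average the conditional bit error rate over a Tikhonov-distributed phase error whose concentration is set by the carrier loop SNR. A numerical sketch for the uncoded BPSK case:

```python
import numpy as np
from math import erfc, sqrt, pi

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

def ber_noisy_carrier(ebno_db, loop_snr_db):
    """Average BPSK bit error rate with a Tikhonov-distributed carrier phase
    error (a standard model, not the paper's exact analysis). The loop SNR
    rho controls the phase-error spread; rho -> inf recovers the ideal BER."""
    gamma = 10 ** (ebno_db / 10)
    rho = 10 ** (loop_snr_db / 10)
    phi = np.linspace(-pi, pi, 20001)
    pdf = np.exp(rho * (np.cos(phi) - 1.0))   # unnormalized Tikhonov density
    pdf /= np.trapz(pdf, phi)                 # normalize numerically
    # Conditional BER given phase error phi: Q(sqrt(2 Eb/N0) * cos(phi))
    cond = np.array([Q(sqrt(2 * gamma) * np.cos(p)) for p in phi])
    return np.trapz(cond * pdf, phi)
```

    At high loop SNR the result collapses to the ideal Q(sqrt(2 Eb/N0)); as the loop SNR drops, phase jitter dominates and the error rate floor rises, which is the qualitative behavior such analyses quantify.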

  4. Digital Correlation By Optical Convolution/Correlation

    NASA Astrophysics Data System (ADS)

    Trimble, Joel; Casasent, David; Psaltis, Demetri; Caimi, Frank; Carlotto, Mark; Neft, Deborah

    1980-12-01

    Attention is given to various methods by which the achievable accuracy and the dynamic range requirements of an optical computer can be enhanced. A new time-position-coding acousto-optic technique for optical residue arithmetic processing is presented, with an experimental demonstration. Major attention is given to the implementation of a correlator operating on digital or decimal encoded signals. Using a convolution description of multiplication, we realize such a correlator by optical convolution in one dimension and optical correlation in the other dimension of an optical system. A coherent matched spatial filter system operating on digitally encoded signals, a noncoherent processor operating on complex-valued digital-encoded data, and a real-time multi-channel acousto-optic system for such operations are described, with experimental verifications.
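    The "convolution description of multiplication" the abstract relies on is the observation that the digit sequence of a product is the carry-corrected convolution of the factors' digit sequences, which is exactly the operation an optical convolver performs. A sketch:

```python
import numpy as np

def multiply_by_convolution(a, b):
    """Multiply two non-negative integers by convolving their digit
    sequences, then propagating carries -- the principle behind digital
    multiplication via optical convolution."""
    da = [int(c) for c in str(a)][::-1]   # least-significant digit first
    db = [int(c) for c in str(b)][::-1]
    mixed = np.convolve(da, db)           # mixed-radix digit products
    result, carry = [], 0
    for m in mixed:
        carry, digit = divmod(int(m) + carry, 10)
        result.append(digit)
    while carry:                          # flush any remaining carry
        carry, digit = divmod(carry, 10)
        result.append(digit)
    return int(''.join(str(d) for d in result[::-1]))
```

    The optical system produces the "mixed" (uncarried) digit products in parallel; only the carry propagation remains for electronics.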

  5. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; ...

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  6. A convolutional neural network neutrino event classifier

    SciTech Connect

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  8. Benchmarks for double Higgs production in the singlet-extended standard model at the LHC

    NASA Astrophysics Data System (ADS)

    Lewis, Ian; Sullivan, Matthew

    2017-08-01

    The simplest extension of the standard model is to add a gauge singlet scalar, S: the singlet-extended standard model. In the absence of a Z2 symmetry S → -S, and if the new scalar is sufficiently heavy, this model can lead to resonant double Higgs production, significantly increasing the production rate over the standard model prediction. While searches for this signal are being performed, it is important to have benchmark points and models with which to compare the experimental results. In this paper we determine these benchmarks by maximizing the double Higgs production rate at the LHC in the singlet-extended standard model. We find that, within current constraints, the branching ratio of the new scalar into two standard model-like Higgs bosons can be upwards of 0.76, and the double Higgs rate can be increased to upwards of 30 times the standard model prediction.

  9. [Application of numerical convolution in in vivo/in vitro correlation research].

    PubMed

    Yue, Peng

    2009-01-01

    This paper introduces the concept and principle of in vivo/in vitro correlation (IVIVC) and convolution/deconvolution methods, and elucidates in detail a convolution strategy and method for calculating the in vivo absorption performance of pharmaceutics from their pharmacokinetic data in Excel, then applies the results to IVIVC research. First, the pharmacokinetic data were fitted with mathematical software to fill in missing points. Second, the parameters of the optimal fitted input function were determined by trial and error according to the convolution principle in Excel, under the hypothesis that all input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. The examples not only demonstrate the application of this method in detail but also prove its simplicity and effectiveness by comparison with the compartment model method and the deconvolution method, showing it to be a powerful tool for IVIVC research.
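    The convolution step described above can be sketched numerically: an assumed Weibull-shaped in vivo input rate is convolved with a unit impulse response to predict the plasma concentration profile. All parameter values below are hypothetical, and NumPy stands in for the Excel worksheet:

```python
import numpy as np

t = np.arange(0, 24, 0.25)          # hours
dt = t[1] - t[0]

# Hypothetical Weibull cumulative input (fraction absorbed in vivo)
Td, beta = 4.0, 1.2
F = 1 - np.exp(-(t / Td) ** beta)
input_rate = np.gradient(F, dt)     # absorption rate (derivative of F)

# Hypothetical one-compartment unit impulse response, elimination k = 0.3/h
uir = np.exp(-0.3 * t)

# Predicted concentration: discrete convolution of input rate with the UIR
conc = np.convolve(input_rate, uir)[:len(t)] * dt
```

    In the IVIVC workflow, the Weibull parameters are adjusted until `conc` matches the observed pharmacokinetic profile, and the fitted input function F is then correlated with in vitro dissolution.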

  10. Quantum convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng

    2014-12-01

    In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.

  11. A 3D Model of Double-Helical DNA Showing Variable Chemical Details

    ERIC Educational Resources Information Center

    Cady, Susan G.

    2005-01-01

    Since the first DNA model was created approximately 50 years ago using molecular models, students and teachers have been building simplified DNA models from various practical materials. A 3D double-helical DNA model, made by placing beads on a wire and stringing beads through holes in plastic canvas, is described. Suggestions are given to enhance…

  13. Multichannel Convolutional Neural Network for Biological Relation Extraction

    PubMed Central

    Quan, Chanqin; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical work was restricted to traditional machine learning techniques, but these methods are susceptible to the issues of “vocabulary gap” and data sparseness, and their feature extraction is hard to automate. To address the aforementioned issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model makes the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering can be obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall f-score of 70.2% on the DDIExtraction 2013 challenge dataset, compared to 67.0% for a standard linear SVM based system. For the PPI task, we evaluated our system on the AIMed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in f-score. PMID:28053977
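    The core MCCNN operation, sliding one filter across several word-embedding channels and max-pooling over time, can be sketched as follows (three channels rather than five, and all shapes illustrative; this is not the authors' code):

```python
import numpy as np

def conv1d_text(channels, filt):
    """Multichannel 1D convolution over a sentence. `channels` is a list of
    (seq_len, emb_dim) embedding matrices -- e.g. different word-embedding
    versions -- and `filt` has shape (n_channels, width, emb_dim). Returns
    one activation per window position."""
    n_ch, width, dim = filt.shape
    seq_len = channels[0].shape[0]
    out = np.zeros(seq_len - width + 1)
    for pos in range(len(out)):
        for c in range(n_ch):
            out[pos] += np.sum(channels[c][pos:pos + width] * filt[c])
    return out

rng = np.random.default_rng(0)
sent = [rng.standard_normal((12, 50)) for _ in range(3)]   # 3 embedding channels
filt = rng.standard_normal((3, 3, 50))                     # one trigram filter
feature = np.max(conv1d_text(sent, filt))                  # max-over-time pooling
```

    A real model learns many such filters; the max-pooled activations are concatenated and fed to a classifier, removing the need for hand-crafted features.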

  14. Classification of Histology Sections via Multispectral Convolutional Sparse Coding.

    PubMed

    Zhou, Yin; Chang, Hang; Barner, Kenneth; Spellman, Paul; Parvin, Bahram

    2014-06-01

    Image-based classification of histology sections plays an important role in predicting clinical outcomes. However, this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]).
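    The CSC formulation referred to here is conventionally written as the following objective (standard notation from the CSC literature, assumed rather than reproduced from the paper): given a signal x, K filters d_k, and sparse feature maps z_k,

\[
\min_{\{d_k\},\{z_k\}}\;\; \frac{1}{2}\Bigl\| x - \sum_{k=1}^{K} d_k * z_k \Bigr\|_2^2
\;+\; \lambda \sum_{k=1}^{K} \| z_k \|_1
\qquad \text{s.t.} \quad \|d_k\|_2^2 \le 1,
\]

    where * denotes convolution and λ trades reconstruction fidelity against sparsity. Learning one such filter bank per spectral band is what yields the biological-component-specific features described above.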

  15. Multiple deep convolutional neural networks averaging for face alignment

    NASA Astrophysics Data System (ADS)

    Zhang, Shaohua; Yang, Hua; Yin, Zhouping

    2015-05-01

    Face alignment is critical for face recognition, and deep learning-based methods show promise for solving such issues, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shapes. However, most existing deep learning-based approaches are complicated and quite time-consuming during training. We propose a compact face alignment method that trains quickly without decreasing accuracy. Rectified linear units are employed, which allow networks to converge approximately five times faster than with tanh neurons. A deep convolutional neural network (DCNN) with eight learnable layers, based on local response normalization and a padding convolutional layer (PCL), is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while showing that only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built from the DCNNs and the model combination scheme. Extensive experiments validate the effectiveness of our method and demonstrate accuracy comparable with state-of-the-art methods on the BioID, labeled face parts in the wild, and Helen datasets.

  16. Long decoding runs for Galileo's convolutional codes

    NASA Technical Reports Server (NTRS)

    Lahmeyer, C. R.; Cheung, K.-M.

    1988-01-01

    Decoding results are described for long decoding runs of Galileo's convolutional codes. A 1 k-bit/sec hardware Viterbi decoder is used for the (15, 1/4) convolutional code, and a software Viterbi decoder is used for the (7, 1/2) convolutional code. The output data of these long runs are stored in data files using a data compression format which can reduce file size by a factor of 100 to 1 typically. These data files can be used to replicate the long, time-consuming runs exactly and are useful to anyone who wants to analyze the burst statistics of the Viterbi decoders. The 1 k-bit/sec hardware Viterbi decoder was developed in order to demonstrate the correctness of certain algorithmic concepts for decoding Galileo's experimental (15, 1/4) code, and for the long-constraint-length codes in general. The hardware decoder can be used both to search for good codes and to measure accurately the performance of known codes.
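    The record does not specify the compression format, but decoder-output error records of this kind are mostly zeros punctuated by error bursts, so a simple run-length scheme already shows how a 100:1 reduction is plausible. A hypothetical sketch:

```python
def compress_error_stream(errors):
    """Run-length encode a 0/1 error stream as (gap_length, burst_length)
    pairs. Viterbi-decoder error records are overwhelmingly zeros, so the
    pair list is far shorter than the raw stream."""
    runs, i, n = [], 0, len(errors)
    while i < n:
        gap = 0
        while i < n and errors[i] == 0:
            gap += 1
            i += 1
        burst = 0
        while i < n and errors[i] == 1:
            burst += 1
            i += 1
        runs.append((gap, burst))
    return runs

def decompress(runs):
    """Exactly reconstruct the original stream from the run list."""
    out = []
    for gap, burst in runs:
        out.extend([0] * gap + [1] * burst)
    return out
```

    Because decompression is exact, the stored runs let anyone replay the long decoding runs and analyze burst statistics without rerunning the decoder, as the abstract describes.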

  17. Blind separation of convolutive sEMG mixtures based on independent vector analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaomei; Guo, Yina; Tian, Wenyan

    2015-12-01

    An independent vector analysis (IVA) method based on a variable-step gradient algorithm is proposed in this paper. In accordance with the physiological properties of sEMG, the IVA model is applied to the frequency-domain separation of convolutive sEMG mixtures to extract motor unit action potential information from the sEMG signals. The decomposition capability of the proposed method is compared to that of independent component analysis (ICA), and experimental results show that the variable-step gradient IVA method outperforms ICA in blind separation of convolutive sEMG mixtures.

  18. Computational modeling of electrophotonics nanomaterials: Tunneling in double quantum dots

    SciTech Connect

    Vlahovic, Branislav; Filikhin, Igor

    2014-10-06

    Single-electron localization and tunneling in double quantum dots (DQDs) and rings (DQRs), and in particular the localized-delocalized states and their spectral distributions, are considered in dependence on the geometry of the DQDs (DQRs). The effect of violating the symmetry of the DQD geometry on the tunneling is studied in detail. The cases of regular and chaotic geometries are considered. It is shown that a small violation of symmetry drastically affects localization of the electron and that anti-crossing of the levels is the mechanism of tunneling between the localized and delocalized states in DQRs.
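    The anti-crossing mechanism mentioned at the end can be illustrated with the generic two-level Hamiltonian: as the detuning between the two dots is swept through resonance, the level splitting never falls below twice the tunnel coupling. A small sketch (the coupling value is arbitrary):

```python
import numpy as np

def levels(detuning, t):
    """Eigenvalues (ascending) of the two-level Hamiltonian
    [[+d/2, t], [t, -d/2]]: the splitting is sqrt(d^2 + 4 t^2), so the gap
    never closes while the tunnel coupling t is nonzero (anti-crossing)."""
    H = np.array([[detuning / 2, t],
                  [t, -detuning / 2]])
    return np.linalg.eigvalsh(H)

# Sweep the detuning: the minimum splitting, 2|t|, occurs at zero detuning
ts = 0.05
gaps = [np.diff(levels(d, ts))[0] for d in np.linspace(-1, 1, 201)]
print(min(gaps))   # ≈ 0.1, i.e. 2 * t
```

    At the anti-crossing the eigenstates are equal-weight superpositions of the two dots, which is precisely the window in which an initially localized electron tunnels; a symmetry violation shifts the detuning away from this point and suppresses the tunneling, consistent with the abstract.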

  19. Computational analysis of current-loss mechanisms in a post-hole convolute driven by magnetically insulated transmission lines

    NASA Astrophysics Data System (ADS)

    Rose, D. V.; Madrid, E. A.; Welch, D. R.; Clark, R. E.; Mostrom, C. B.; Stygar, W. A.; Cuneo, M. E.

    2015-03-01

    Numerical simulations of a vacuum post-hole convolute driven by magnetically insulated vacuum transmission lines (MITLs) are used to study current losses due to charged particle emission from the MITL-convolute-system electrodes. This work builds on the results of a previous study [E. A. Madrid et al. Phys. Rev. ST Accel. Beams 16, 120401 (2013), 10.1103/PhysRevSTAB.16.120401] and adds realistic power pulses, Ohmic heating of anode surfaces, and a model for the formation and evolution of cathode plasmas. The simulations suggest that modestly larger anode-cathode gaps in the MITLs upstream of the convolute result in significantly less current loss. In addition, longer pulse durations lead to somewhat greater current loss due to cathode-plasma expansion. These results can be applied to the design of future MITL-convolute systems for high-current pulsed-power systems.

  20. Computational analysis of current-loss mechanisms in a post-hole convolute driven by magnetically insulated transmission lines

    DOE PAGES

    Rose, D. V.; Madrid, E. A.; Welch, D. R.; ...

    2015-03-04

    Numerical simulations of a vacuum post-hole convolute driven by magnetically insulated vacuum transmission lines (MITLs) are used to study current losses due to charged particle emission from the MITL-convolute-system electrodes. This work builds on the results of a previous study [E.A. Madrid et al. Phys. Rev. ST Accel. Beams 16, 120401 (2013)] and adds realistic power pulses, Ohmic heating of anode surfaces, and a model for the formation and evolution of cathode plasmas. The simulations suggest that modestly larger anode-cathode gaps in the MITLs upstream of the convolute result in significantly less current loss. In addition, longer pulse durations lead to somewhat greater current loss due to cathode-plasma expansion. These results can be applied to the design of future MITL-convolute systems for high-current pulsed-power systems.

  1. Sensor-Based Gait Parameter Extraction With Deep Convolutional Neural Networks.

    PubMed

    Hannink, Julius; Kautz, Thomas; Pasluosta, Cristian F; Gasmann, Karl-Gunter; Klucken, Jochen; Eskofier, Bjoern M

    2017-01-01

    Measurement of stride-related, biomechanical parameters is the common rationale for objective gait impairment scoring. State-of-the-art double-integration approaches to extracting these parameters from inertial sensor data are, however, limited in their clinical applicability due to the underlying assumptions. To overcome this, we present a method to translate the abstract information provided by wearable sensors into context-related expert features based on deep convolutional neural networks. For mobile gait analysis, this enables integration-free and data-driven extraction of a set of eight spatio-temporal stride parameters. To this end, two modeling approaches are compared: a combined network estimating all parameters of interest, and an ensemble approach that spawns less complex networks for each parameter individually. The ensemble approach outperforms the combined modeling in the current application. On a clinically relevant and publicly available benchmark dataset, we estimate stride length, width and medio-lateral change in foot angle up to -0.15 ± 6.09 cm, -0.09 ± 4.22 cm and 0.13 ± 3.78° respectively. Stride, swing and stance time as well as heel and toe contact times are estimated up to ±0.07, ±0.05, ±0.07, ±0.07 and ±0.12 s respectively. This is comparable to, and in parts outperforms or defines, the state of the art. Our results further indicate that the proposed change in methodology could substitute for assumption-driven double-integration methods and enable mobile assessment of spatio-temporal stride parameters in clinically critical situations as, for example, in the case of spastic gait impairments.
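    The fragility of the double-integration baseline can be illustrated with a toy calculation: a small constant accelerometer bias, integrated twice, produces a position error that grows quadratically with time (all numbers below are hypothetical, not from the paper):

```python
import numpy as np

fs = 100.0                       # Hz, an assumed inertial sampling rate
t = np.arange(0, 2.0, 1 / fs)    # one 2-second window
bias = 0.05                      # m/s^2, a small constant accelerometer bias

accel_error = np.full_like(t, bias)
vel_error = np.cumsum(accel_error) / fs    # first integration: grows linearly
pos_error = np.cumsum(vel_error) / fs      # second integration: grows as t^2
print(pos_error[-1])   # ≈ 0.5 * bias * t^2 = 0.1 m after 2 s
```

    Double-integration pipelines therefore rely on assumptions such as zero-velocity updates at each stance phase to reset this drift; the learned regression sketched in the abstract sidesteps the integration entirely.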

  2. A Theory of Cramer-Rao Bounds for Constrained Parametric Models

    DTIC Science & Technology

    2010-01-01

    are then presented in the communications context for the convolutive mixture model and the calibrated array model.

  3. Classifications of Multispectral Colorectal Cancer Tissues Using Convolution Neural Network

    PubMed Central

    Haj-Hassan, Hawraa; Chaddad, Ahmad; Harkouss, Youssef; Desrosiers, Christian; Toews, Matthew; Tanougast, Camel

    2017-01-01

    Background: Colorectal cancer (CRC) is the third most common cancer among men and women. Its diagnosis in early stages, typically done through the analysis of colon biopsy images, can greatly improve the chances of a successful treatment. This paper proposes to use convolution neural networks (CNNs) to predict three tissue types related to the progression of CRC: benign hyperplasia (BH), intraepithelial neoplasia (IN), and carcinoma (Ca). Methods: Multispectral biopsy images of thirty CRC patients were retrospectively analyzed. Images of tissue samples were divided into three groups, based on their type (10 BH, 10 IN, and 10 Ca). An active contour model was used to segment image regions containing pathological tissues. Tissue samples were classified using a CNN containing convolution, max-pooling, and fully-connected layers. Available tissue samples were split into a training set, for learning the CNN parameters, and a test set, for evaluating its performance. Results: An accuracy of 99.17% was obtained from segmented image regions, outperforming existing approaches based on traditional feature extraction and classification techniques. Conclusions: Experimental results demonstrate the effectiveness of CNNs for the classification of CRC tissue types, in particular when using presegmented regions of interest. PMID:28400990

  4. Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval.

    PubMed

    Wei, Xiu-Shen; Luo, Jian-Hao; Wu, Jianxin; Zhou, Zhi-Hua

    2017-03-27

    Deep convolutional neural network models pretrained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the purely unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone the unsupervised retrieval task. We propose the Selective Convolutional Descriptor Aggregation (SCDA) method. SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and reduced in dimensionality to a short feature vector using the best practices we found. SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained datasets confirm the effectiveness of SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval datasets, SCDA achieves retrieval results comparable with state-of-the-art general image retrieval approaches.
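    The selection-and-aggregation step can be sketched as follows, under the simplifying assumption (a hedged reading of the abstract, not the authors' exact procedure) that spatial positions whose channel-summed activation exceeds the mean form the object mask, and that the selected descriptors are average- and max-pooled:

```python
import numpy as np

def scda_aggregate(fmap):
    """Sketch of selective descriptor aggregation on a conv feature map of
    shape (H, W, C): threshold the channel-summed activation map at its mean
    to get an unsupervised object mask, keep the descriptors at selected
    positions, and concatenate their average- and max-pooled vectors."""
    H, W, C = fmap.shape
    act = fmap.sum(axis=2)          # aggregation map, one value per position
    mask = act > act.mean()         # positions likely on the main object
    selected = fmap[mask]           # (n_selected, C) deep descriptors
    return np.concatenate([selected.mean(axis=0), selected.max(axis=0)])
```

    The resulting 2C-dimensional vector is what would then be dimensionality-reduced and compared across images for retrieval; no labels or bounding boxes enter at any point.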

  5. Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Geng, S.; Ren, G.; Ogihara, M.

    2017-05-01

    Convolutional neural networks (CNNs) have been successfully applied on both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision making or the abstracting process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on musical genre classification task. The network was composed of three convolutional layers, and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformation, more information about the network and the original music could be obtained from the analysis of the generated features since these features indicate how the network 'understands' the music.

  6. Convolutional Neural Network Based Fault Detection for Rotating Machinery

    NASA Astrophysics Data System (ADS)

    Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie

    2016-09-01

    Vibration analysis is a well-established technique for condition monitoring of rotating machines, as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually engineered features, such as the ball-pass frequencies of the raceways, RMS, kurtosis, and crest factor, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but healthy bearings and rotor imbalance are also included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach, which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.
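
    The manually engineered features the abstract mentions are easy to compute directly. A minimal sketch (the toy signal and the injected impulse are illustrative, not data from the study):

```python
import math

def rms(x):
    """Root-mean-square of a vibration signal."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def kurtosis(x):
    """Sample kurtosis (normalised fourth moment); ~3 for a Gaussian signal."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / (var ** 2)

def crest_factor(x):
    """Peak amplitude divided by RMS; rises with impulsive bearing faults."""
    return max(abs(v) for v in x) / rms(x)

# Toy signal: a sinusoid with one injected impulse, mimicking a bearing defect.
signal = [math.sin(0.1 * i) for i in range(1000)]
signal[500] = 5.0  # impulsive spike
print(rms(signal), kurtosis(signal), crest_factor(signal))
```

    The impulse raises kurtosis and crest factor well above their values for a clean sinusoid, which is exactly why these statistics are used as fault indicators.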

  7. Classifications of Multispectral Colorectal Cancer Tissues Using Convolution Neural Network.

    PubMed

    Haj-Hassan, Hawraa; Chaddad, Ahmad; Harkouss, Youssef; Desrosiers, Christian; Toews, Matthew; Tanougast, Camel

    2017-01-01

    Colorectal cancer (CRC) is the third most common cancer among men and women. Its diagnosis in early stages, typically done through the analysis of colon biopsy images, can greatly improve the chances of a successful treatment. This paper proposes to use convolution neural networks (CNNs) to predict three tissue types related to the progression of CRC: benign hyperplasia (BH), intraepithelial neoplasia (IN), and carcinoma (Ca). Multispectral biopsy images of thirty CRC patients were retrospectively analyzed. Images of tissue samples were divided into three groups, based on their type (10 BH, 10 IN, and 10 Ca). An active contour model was used to segment image regions containing pathological tissues. Tissue samples were classified using a CNN containing convolution, max-pooling, and fully-connected layers. Available tissue samples were split into a training set, for learning the CNN parameters, and a test set, for evaluating its performance. An accuracy of 99.17% was obtained from segmented image regions, outperforming existing approaches based on traditional feature extraction and classification techniques. Experimental results demonstrate the effectiveness of CNNs for the classification of CRC tissue types, in particular when using presegmented regions of interest.

  8. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic properties of a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
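
    The restricted rate-1/n class mentioned above is easy to illustrate with a feed-forward shift-register encoder. A hedged sketch using the classic constraint-length-3 (7,5) code (the tap and bit-order conventions below are one common choice, not taken from the article):

```python
def conv_encode(bits, generators=(0b111, 0b101), k=3):
    """Encode a bit list with a rate-1/n feed-forward convolutional code.

    `generators` holds the polynomial taps (here the classic
    constraint-length-3 (7,5) code); each input bit yields n coded bits.
    """
    state = 0
    out = []
    for b in bits + [0] * (k - 1):          # flush the register with zeros
        state = ((state << 1) | b) & ((1 << k) - 1)
        for g in generators:
            out.append(bin(state & g).count("1") % 2)  # parity of tapped bits
    return out

print(conv_encode([1, 0, 1, 1]))
```

    An (n,k) code generalises this by shifting k input bits per step through a bank of registers, which is where the polynomial-matrix machinery of the article becomes necessary.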

  9. On improvements of Double Beta Decay using FQTDA Model

    NASA Astrophysics Data System (ADS)

    de Oliveira, L.; Samana, A. R.; Krmpotic, F.; Mariano, A. E.; Barbero, C. A.

    2015-07-01

    The Quasiparticle Tamm-Dancoff Approximation (QTDA) is applied to describe nuclear double beta decay with two neutrinos. Several serious inconveniences found in the Quasiparticle Random Phase Approximation (QRPA) are not present in the QTDA, such as the ambiguity in treating the intermediate states, the further approximations necessary for evaluating the nuclear matrix elements (NMEs), and the extreme sensitivity of the NMEs to the ratio between the pn and pp + nn pairings. Some years ago, the decay 48Ca → 48Ti was discussed within the particle-hole limit of the QTDA. We found some mismatches in the numerical calculations when the full QTDA was implemented, and a new calculation in the particle-hole limit of the QTDA is required to guarantee the fidelity of the approximation.

  10. Modeling and simulation of a double auction artificial financial market

    NASA Astrophysics Data System (ADS)

    Raberto, Marco; Cincotti, Silvano

    2005-09-01

    We present a double-auction artificial financial market populated by heterogeneous agents who trade one risky asset in exchange for cash. Agents issue random orders subject to budget constraints. The limit prices of orders may depend on past market volatility. Limit orders are stored in the book whereas market orders give immediate birth to transactions. We show that fat tails and volatility clustering are recovered by means of very simple assumptions. We also investigate two important stylized facts of the limit order book, i.e., the distribution of waiting times between two consecutive transactions and the instantaneous price impact function. We show both theoretically and through simulations that if the order waiting times are exponentially distributed, then the trading waiting times are also exponentially distributed.
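
    The closing claim reflects a thinning property of Poisson processes: if order arrivals have exponential waiting times and each order independently triggers a transaction, the transaction waiting times are again exponential. A small simulation sketch (rates and probabilities are hypothetical, not the paper's calibration):

```python
import random

random.seed(42)

def simulate_waiting_times(n_orders=200000, rate=1.0, p_market=0.3):
    """Thin a Poisson order flow: each order is a market order (and hence an
    immediate transaction) with probability p_market. If order waiting times
    are exponential(rate), transaction waiting times should be
    exponential(rate * p_market)."""
    t, last_trade, waits = 0.0, 0.0, []
    for _ in range(n_orders):
        t += random.expovariate(rate)
        if random.random() < p_market:
            waits.append(t - last_trade)
            last_trade = t
    return waits

waits = simulate_waiting_times()
mean = sum(waits) / len(waits)
# For an exponential law the mean equals the standard deviation.
var = sum((w - mean) ** 2 for w in waits) / len(waits)
print(mean, var ** 0.5)  # both close to 1 / (rate * p_market)
```

    The equality of mean and standard deviation is the quick diagnostic for an exponential distribution used here.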

  11. Coupled cluster Green function: Model involving single and double excitations

    SciTech Connect

    Bhaskaran-Nair, Kiran; Kowalski, Karol; Shelton, William A.

    2016-04-14

    In this paper we report on the parallel implementation of the coupled-cluster (CC) Green function formulation (GF-CC) employing single and double excitations in the cluster operator (GF-CCSD). A detailed description of the underlying algorithm is provided, including the structure of the ionization-potential- and electron-affinity-type intermediate tensors, which make it possible to formulate the GF-CC approach in a computationally feasible form. Several examples, including calculations of ionization potentials and electron affinities for benchmark systems, which are juxtaposed against the experimental values, provide an illustration of the accuracies attainable in GF-CCSD simulations. We also discuss the structure of the CCSD self-energies and approximations that are geared to reduce the computational cost while maintaining the pole structure of the full GF-CCSD approach.

  12. Deep convolutional neural network for prostate MR segmentation

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, Lizhi; Fei, Baowei

    2017-03-01

    Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage based on prostate MR images and the corresponding ground truths, and learns to make inference for pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3%+/-3.2% as compared to the manual segmentation. Experimental results show that our deep CNN model could yield satisfactory segmentation of the prostate.
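
    The Dice similarity coefficient reported above has a simple definition for binary masks. A minimal sketch (the toy masks are illustrative only):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks
    (flattened lists of 0/1): 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

# Toy 1-D "masks": the prediction overlaps the ground truth on 3 voxels.
pred  = [0, 1, 1, 1, 1, 0, 0, 0]
truth = [0, 0, 1, 1, 1, 1, 0, 0]
print(dice_coefficient(pred, truth))  # 2*3 / (4+4) = 0.75
```

    A coefficient of 1.0 means perfect overlap with the manual segmentation; 0.853 as reported means strong but imperfect agreement.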

  13. Learning to Generate Chairs, Tables and Cars with Convolutional Networks.

    PubMed

    Dosovitskiy, Alexey; Springenberg, Jost Tobias; Tatarchenko, Maxim; Brox, Thomas

    2017-04-01

    We train generative 'up-convolutional' neural networks which are able to generate images of objects given object style, viewpoint, and color. We train the networks on rendered 3D models of chairs, tables, and cars. Our experiments show that the networks do not merely learn all images by heart, but rather find a meaningful representation of 3D models allowing them to assess the similarity of different models, interpolate between given views to generate the missing ones, extrapolate views, and invent new objects not present in the training set by recombining training instances, or even two different object classes. Moreover, we show that such generative networks can be used to find correspondences between different objects from the dataset, outperforming existing approaches on this task.

  14. A fast complex integer convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
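
    The hybrid algorithm itself works over the Galois field GF(q²), but the convolution theorem it exploits is the same one familiar from the complex DFT. A sketch over the complex numbers (not the finite-field arithmetic of the paper):

```python
import cmath

def cyclic_convolution_direct(a, b):
    """O(n^2) definition of the cyclic (circular) convolution."""
    n = len(a)
    return [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]

def dft(x, inverse=False):
    """Naive DFT; the inverse carries the 1/n normalisation."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[j] * cmath.exp(s * 2j * cmath.pi * i * j / n) for j in range(n))
           for i in range(n)]
    return [v / n for v in out] if inverse else out

def cyclic_convolution_dft(a, b):
    """Convolution theorem: pointwise product in the transform domain."""
    A, B = dft(a), dft(b)
    return [v.real for v in dft([x * y for x, y in zip(A, B)], inverse=True)]

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
print(cyclic_convolution_direct(a, b))   # [66, 68, 66, 60]
print([round(v, 6) for v in cyclic_convolution_dft(a, b)])
```

    Finite-field transforms replace the complex roots of unity with elements of GF(q²), trading round-off error for exact integer arithmetic, which is the accuracy advantage the abstract refers to.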

  15. Convolutional Architecture Exploration for Action Recognition and Image Classification

    DTIC Science & Technology

    2015-01-01

    Convolutional Architecture Exploration for Action Recognition and Image Classification. JT Turner, David Aha, Leslie Smith, and Kalyan Moy Gupta. ...Intelligence; Naval Research Laboratory (Code 5514); Washington, DC 20375. Abstract: Convolutional Architecture for Fast Feature Encoding (CAFFE) [11] is a soft... This is especially true with convolutional neural networks, which depend upon the architecture to detect edges and objects in the same way the human

  17. Semileptonic decays of double heavy baryons in a relativistic constituent three-quark model

    SciTech Connect

    Faessler, Amand; Gutsche, Thomas; Lyubovitskij, Valery E.; Ivanov, Mikhail A.; Koerner, Juergen G.

    2009-08-01

    We study the semileptonic decays of double-heavy baryons using a manifestly Lorentz covariant constituent three-quark model. We present complete results on transition form factors between double-heavy baryons for finite values of the heavy quark/baryon masses and in the heavy quark symmetry limit, which is valid at and close to zero recoil. Decay rates are calculated and compared to each other in the full theory, keeping masses finite, and also in the heavy quark limit.

  18. Inverse scattering method and soliton double solution family for the general symplectic gravity model

    SciTech Connect

    Gao Yajun

    2008-08-15

    A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward and effective to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.

  19. Neutrinoless double beta decay in the left-right symmetric models for linear seesaw

    NASA Astrophysics Data System (ADS)

    Gu, Pei-Hong

    2016-09-01

    In a class of left-right symmetric models for linear seesaw, a neutrinoless double beta decay induced by the left- and right-handed charged currents together will only depend on the breaking details of left-right and electroweak symmetries. This neutrinoless double beta decay can reach the experimental sensitivities if the right-handed charged gauge boson is below the 100 TeV scale.

  20. A fluence-convolution method to calculate radiation therapy dose distributions that incorporate random set-up error

    NASA Astrophysics Data System (ADS)

    Beckham, W. A.; Keall, P. J.; Siebers, J. V.

    2002-10-01

    The International Commission on Radiation Units and Measurements Report 62 (ICRU 1999) introduced the concept of expanding the clinical target volume (CTV) to form the planning target volume by a two-step process. The first step is adding a clinically definable internal margin, which produces an internal target volume that accounts for the size, shape and position of the CTV in relation to anatomical reference points. The second is the use of a set-up margin (SM) that incorporates the uncertainties of patient beam positioning, i.e. systematic and random set-up errors. We propose to replace the random set-up error component of the SM by explicitly incorporating the random set-up error into the dose-calculation model by convolving the incident photon beam fluence with a Gaussian set-up error kernel. This fluence-convolution method was implemented into a Monte Carlo (MC) based treatment-planning system. Also implemented for comparison purposes was a dose-matrix-convolution algorithm similar to that described by Leong (1987 Phys. Med. Biol. 32 327-34). Fluence and dose-matrix-convolution agree in homogeneous media. However, for the heterogeneous phantom calculations, discrepancies of up to 5% in the dose profiles were observed with a 0.4 cm set-up error value. Fluence-convolution mimics reality more closely, as dose perturbations at interfaces are correctly predicted (Wang et al 1999 Med. Phys. 26 2626-34, Sauer 1995 Med. Phys. 22 1685-90). Fluence-convolution effectively decouples the treatment beams from the patient, and more closely resembles the reality of particle fluence distributions for many individual beam-patient set-ups. However, dose-matrix-convolution reduces the random statistical noise in MC calculations. Fluence-convolution can easily be applied to convolution/superposition based dose-calculation algorithms.
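
    The fluence-convolution step can be sketched in one dimension: an incident fluence profile is convolved with a normalised Gaussian set-up-error kernel. The bin size, field width, and sigma below are assumed values for illustration, not those of the study:

```python
import math

def gaussian_kernel(sigma, dx, half_width=4.0):
    """Discrete, normalised Gaussian set-up-error kernel (±half_width·sigma)."""
    m = int(half_width * sigma / dx)
    k = [math.exp(-0.5 * (i * dx / sigma) ** 2) for i in range(-m, m + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve_fluence(fluence, kernel):
    """Convolve an incident fluence profile with the set-up-error kernel
    (edges padded with the end values)."""
    m = len(kernel) // 2
    padded = [fluence[0]] * m + fluence + [fluence[-1]] * m
    return [sum(kernel[j] * padded[i + j] for j in range(len(kernel)))
            for i in range(len(fluence))]

# Idealised 1-D beam profile: open field of 40 bins inside a 100-bin grid.
fluence = [0.0] * 30 + [1.0] * 40 + [0.0] * 30
blurred = convolve_fluence(fluence, gaussian_kernel(sigma=4.0, dx=1.0))
print(max(blurred), blurred[30])  # plateau stays ~1, field edge smears to ~0.5
```

    Because the kernel is normalised, the total fluence is conserved; only the penumbra is broadened, which is exactly the random set-up error effect being modelled.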

  1. Modelling the nonlinear behaviour of double walled carbon nanotube based resonator with curvature factors

    NASA Astrophysics Data System (ADS)

    Patel, Ajay M.; Joshi, Anand Y.

    2016-10-01

    This paper deals with the nonlinear vibration analysis of a double-walled carbon nanotube based mass sensor with curvature factor, or waviness, which is doubly clamped at a source and a drain. The nonlinear vibrational behaviour of a double-walled carbon nanotube excited harmonically near its primary resonance is considered. The double-walled carbon nanotube is harmonically excited by the addition of an excitation force. The modelling accounts for stretching of the mid-plane and for damping. The equation of motion involves four nonlinear terms for the inner and outer tubes of the DWCNT due to the curved geometry and the stretching of the central plane arising from the boundary conditions. The vibrational behaviour of the double-walled carbon nanotube with different surface deviations along its axis is analyzed in terms of time responses, Poincaré maps, and Fast Fourier Transform diagrams. The appearance of instability and chaos in the dynamic response is observed as the curvature factor of the double-walled carbon nanotube is changed. Period doubling and intermittency are observed as the pathways to chaos. The regions of periodic, sub-harmonic, and chaotic behaviour are clearly seen to depend on the added mass and the curvature factors of the double-walled carbon nanotube. Poincaré maps and frequency spectra are used to illustrate the variety of the system behaviour. With an increase in the curvature factor, the system excitation increases, resulting in an increase in the vibration amplitude with a reduction in excitation frequency.

  2. Human Parsing with Contextualized Convolutional Neural Network.

    PubMed

    Liang, Xiaodan; Xu, Chunyan; Shen, Xiaohui; Yang, Jianchao; Tang, Jinhui; Lin, Liang; Yan, Shuicheng

    2016-03-02

    In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which well integrates the cross-layer context, global image-level context, semantic edge context, within-super-pixel context and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixel-wise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic information and the local fine details across different convolutional layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context. Third, semantic edge context is further incorporated into Co-CNN, where the high-level semantic boundaries are leveraged to guide pixel-wise labeling. Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighbourhood voting are formulated as natural sub-components of the Co-CNN to achieve the local label consistency in both the training and testing processes. Comprehensive evaluations on two public datasets well demonstrate the significant superiority of our Co-CNN over other state-of-the-art methods for human parsing. In particular, the F-1 score on the large dataset [1] reaches 81.72% by Co-CNN, significantly higher than 62.81% and 64.38% by the state-of-the-art algorithms, MCNN [2] and ATR [1], respectively. By utilizing our newly collected large dataset for training, our Co-CNN can achieve 85.36% in F-1 score.

  3. Applications of convolution voltammetry in electroanalytical chemistry.

    PubMed

    Bentley, Cameron L; Bond, Alan M; Hollenkamp, Anthony F; Mahon, Peter J; Zhang, Jie

    2014-02-18

    The robustness of convolution voltammetry for determining accurate values of the diffusivity (D), bulk concentration (C(b)), and stoichiometric number of electrons (n) has been demonstrated by applying the technique to a series of electrode reactions in molecular solvents and room temperature ionic liquids (RTILs). In acetonitrile, the relatively minor contribution of nonfaradaic current facilitates analysis with macrodisk electrodes, thus moderate scan rates can be used without the need to perform background subtraction to quantify the diffusivity of iodide [D = 1.75 (±0.02) × 10(-5) cm(2) s(-1)] in this solvent. In the RTIL 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide, background subtraction is necessary at a macrodisk electrode but can be avoided at a microdisk electrode, thereby simplifying the analytical procedure and allowing the diffusivity of iodide [D = 2.70 (±0.03) × 10(-7) cm(2) s(-1)] to be quantified. Use of a convolutive procedure which simultaneously allows D and nC(b) values to be determined is also demonstrated. Three conditions under which a technique of this kind may be applied are explored and are related to electroactive species which display slow dissolution kinetics, undergo a single multielectron transfer step, or contain multiple noninteracting redox centers using ferrocene in an RTIL, 1,4-dinitro-2,3,5,6-tetramethylbenzene, and an alkynylruthenium trimer, respectively, as examples. The results highlight the advantages of convolution voltammetry over steady-state techniques such as rotating disk electrode voltammetry and microdisk electrode voltammetry, as it is not restricted by the mode of diffusion (planar or radial), hence removing limitations on solvent viscosity, electrode geometry, and voltammetric scan rate.
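
    Convolution voltammetry rests on semi-integrating the sampled current. One common discrete scheme is Grünwald-Letnikov semi-integration, sketched below (the paper's exact algorithm may differ). For a constant current the semi-integral has the closed form M(t) = 2·I0·√(t/π), which provides a handy check:

```python
import math

def semi_integrate(current, dt):
    """Grünwald-Letnikov semi-integration (order -1/2) of a sampled current.
    The half-order weights obey w_0 = 1, w_j = w_{j-1} * (j - 1/2) / j."""
    n = len(current)
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (j - 0.5) / j)
    return [math.sqrt(dt) * sum(w[j] * current[k - j] for j in range(k + 1))
            for k in range(n)]

# Check against the closed form for a constant current I0 = 1.
dt, n = 0.001, 1000
M = semi_integrate([1.0] * n, dt)
print(M[-1], 2 * math.sqrt(n * dt / math.pi))  # should nearly coincide
```

    In practice the limiting value of the semi-integral of the current yields the product nFAC(b)√D, which is how D, C(b), and n are extracted in the paper.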

  4. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Unpolarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in unpolarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.

    Program summary:
    Program title: QCDNUM, version 17.00
    Catalogue identifier: AEHV_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Public Licence
    No. of lines in distributed program, including test data, etc.: 45 736
    No. of bytes in distributed program, including test data, etc.: 911 569
    Distribution format: tar.gz
    Programming language: Fortran-77
    Computer: All
    Operating system: All
    RAM: typically 3 Mbytes
    Classification: 11.5
    Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
    Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline

  5. Fast Convolution Algorithms and Associated VHSIC Architectures.

    DTIC Science & Technology

    1983-05-23

    Finite field, Mersenne prime, Fermat number, primitive element, number-theoretic transform, cyclic convolution, polynomial... elements of order 2 P+p and 2k n in the finite field GF(q^2), where q = 2^p - 1 is a Mersenne prime, p is a prime number, and n is a divisor of 2pl... Abstract: A high-radix f.f.t. algorithm for computing transforms over GF(q^2), where q is a Mersenne prime, is developed to implement fast circular

  6. Bacterial colony counting by Convolutional Neural Networks.

    PubMed

    Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto

    2015-01-01

    Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, but nevertheless fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging, due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNNs) for counting the number of colonies contained in confluent agglomerates, and which achieved an overall accuracy of 92.8% on a large challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, making it a promising approach for many related applications.

  7. Dynamic modelling of a double-pendulum gantry crane system incorporating payload

    SciTech Connect

    Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.

    2011-06-20

    The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.
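
    The double-pendulum effect the abstract describes can be sketched with the standard point-mass double-pendulum equations of motion integrated by classical RK4; this ignores the moving trolley and is not the authors' closed-form crane model, just an illustration of the two-mode sway dynamics. Energy conservation is a quick sanity check on the integration:

```python
import math

# Planar double pendulum: point masses m1, m2 on massless rods l1, l2
# (a crane hook plus payload behaves like this two-mode system).
m1, m2, l1, l2, g = 1.0, 0.5, 1.0, 0.8, 9.81

def deriv(s):
    t1, w1, t2, w2 = s
    d = t1 - t2
    den = 2 * m1 + m2 - m2 * math.cos(2 * d)
    a1 = (-g * (2 * m1 + m2) * math.sin(t1)
          - m2 * g * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * m2 * (w2 * w2 * l2 + w1 * w1 * l1 * math.cos(d))
          ) / (l1 * den)
    a2 = (2 * math.sin(d) * (w1 * w1 * l1 * (m1 + m2)
                             + g * (m1 + m2) * math.cos(t1)
                             + w2 * w2 * l2 * m2 * math.cos(d))
          ) / (l2 * den)
    return [w1, a1, w2, a2]

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv([x + 0.5 * dt * k for x, k in zip(s, k1)])
    k3 = deriv([x + 0.5 * dt * k for x, k in zip(s, k2)])
    k4 = deriv([x + dt * k for x, k in zip(s, k3)])
    return [x + dt / 6 * (a + 2 * b + 2 * c + e)
            for x, a, b, c, e in zip(s, k1, k2, k3, k4)]

def energy(s):
    t1, w1, t2, w2 = s
    T = (0.5 * m1 * (l1 * w1) ** 2
         + 0.5 * m2 * ((l1 * w1) ** 2 + (l2 * w2) ** 2
                       + 2 * l1 * l2 * w1 * w2 * math.cos(t1 - t2)))
    V = -(m1 + m2) * g * l1 * math.cos(t1) - m2 * g * l2 * math.cos(t2)
    return T + V

s = [0.2, 0.0, 0.1, 0.0]   # small initial sway angles, released from rest
e0 = energy(s)
for _ in range(2000):       # 2 s at dt = 1 ms
    s = rk4_step(s, 0.001)
print(energy(s) - e0)       # RK4 conserves energy to high accuracy here
```

    Adding the trolley coordinate and the payload, as in the paper's Lagrangian derivation, extends the state vector but the simulation structure stays the same.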

  8. Dynamic Modelling of a Double-Pendulum Gantry Crane System Incorporating Payload

    NASA Astrophysics Data System (ADS)

    Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.

    2011-06-01

    The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.

  9. Simulations of the flow past a cylinder using an unsteady double wake model

    SciTech Connect

    Ramos-García, N.; Sarlak, H.; Andersen, S. J.; Sørensen, J. N.

    2016-06-08

    In the present work, the in-house UnSteady Double Wake Model (USDWM) is used to simulate flows past a cylinder at subcritical, supercritical, and transcritical Reynolds numbers. The flow model is a two-dimensional panel method which uses the unsteady double wake technique to model flow separation and its dynamics. Here, the separation location is obtained from experimental data and fixed in time. The highly unsteady flow field behind the cylinder is analyzed in detail, comparing the vortex shedding characteristics under the different flow conditions.

  10. Terahertz double-exponential model for adsorption of volatile organic compounds in active carbon

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Zhan, Honglei; Miao, Xinyang; Zhao, Kun; Zhou, Qiong

    2017-06-01

    To evaluate diffusion-controlled adsorption and the diffusion rate, a mathematical model was built in this letter on the basis of the double-exponential kinetics model and the terahertz amplitude. The double-exponential-THz model describes a two-step mechanism controlled by diffusion: a rapid step involving external and internal diffusion, followed by a slow step controlled by intraparticle diffusion. The concentration gradient promoted rapid diffusion of the organic molecules to the external surface of the adsorbent. The solute molecules then transferred across the liquid film. Intraparticle diffusion began and was determined by the molecular sizes, as well as the affinities between the organics and the activated carbon.
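
    A double-exponential uptake curve of the kind described can be sketched as follows; the parameter values are hypothetical, chosen only to make the fast branch roughly ten times quicker than the slow one:

```python
import math

def double_exponential_uptake(t, qe, a1, k1, a2, k2):
    """Two-step adsorption kinetics: a fast (film/external diffusion) branch
    with rate k1 and a slow (intraparticle diffusion) branch with rate k2.
    q(t) -> qe as t -> infinity; a1 + a2 = qe gives q(0) = 0."""
    return qe - a1 * math.exp(-k1 * t) - a2 * math.exp(-k2 * t)

# Hypothetical parameters: fast step ~10x quicker than the slow one.
qe, a1, k1, a2, k2 = 1.0, 0.6, 2.0, 0.4, 0.2
uptake = [double_exponential_uptake(t / 10, qe, a1, k1, a2, k2)
          for t in range(301)]
print(uptake[0], uptake[-1])  # starts at 0, approaches qe
```

    Fitting such a curve to a measured signal (here, the terahertz amplitude) separates the two rate constants, which is the basis of the evaluation described in the abstract.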

  11. Neutrinoless Double Beta Nuclear Matrix Elements Around Mass 80 in the Nuclear Shell Model

    NASA Astrophysics Data System (ADS)

    Yoshinaga, Naotaka; Higashiyama, Koji; Taguchi, Daisuke; Teruya, Eri

    The observation of neutrinoless double-beta decay can determine whether the neutrino is a Majorana particle or not. On the nuclear-theory side, it is particularly important to estimate three types of nuclear matrix elements, namely the Fermi (F), Gamow-Teller (GT), and tensor (T) type matrix elements. Shell model calculations and also pair-truncated shell model calculations are carried out to check the model dependence of the nuclear matrix elements. In this work the neutrinoless double-beta decay for mass A = 82 nuclei is studied. It is found that the matrix elements are quite sensitive to the ground state wavefunctions.

  12. Towards Better Analysis of Deep Convolutional Neural Networks.

    PubMed

    Liu, Mengchen; Shi, Jiaxin; Li, Zhen; Li, Chongxuan; Zhu, Jun; Liu, Shixia

    2017-01-01

    Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.

  13. Double time lag combustion instability model for bipropellant rocket engines

    NASA Technical Reports Server (NTRS)

    Liu, C. K.

    1973-01-01

    A bipropellant stability model is presented in which feed system inertance and capacitance are treated along with injection pressure drop and distinctly different propellant time lags. The model is essentially an extension of Crocco's and Cheng's monopropellant model to the bipropellant case assuming that the feed system inertance and capacitance along with the resistance are located at the injector. The neutral stability boundaries are computed in terms of these parameters to demonstrate the interaction among them.

  14. Molecular modeling of layered double hydroxide intercalated with benzoate, modeling and experiment.

    PubMed

    Kovár, Petr; Pospísil, M; Nocchetti, M; Capková, P; Melánová, Klára

    2007-08-01

    The structure of Zn4Al2 Layered Double Hydroxide intercalated with benzenecarboxylate (C6H5COO-) was solved using molecular modeling combined with experiment (X-ray powder diffraction, IR spectroscopy, TG measurements). Molecular modeling revealed the arrangement of guest molecules, the layer stacking, and the water content and water location in the interlayer space of the host structure. Molecular modeling using an empirical force field was carried out in the Cerius(2) modeling environment. Results of modeling were confronted with experiment, that is, by comparing the calculated and measured diffraction patterns and comparing the calculated water content with the thermogravimetric value. Good agreement has been achieved between the calculated and measured basal spacing: d(calc) = 15.3 A and d(exp) = 15.5 A. The number of water molecules per formula unit (6 H2O per Zn4Al2(OH)12) obtained by modeling (i.e., corresponding to the energy minimum) agrees with the water content estimated by thermogravimetry. The long axes of the guest molecules are almost perpendicular to the LDH layers, anchored to the host layers via the COO- groups. The mutual orientation of the benzoate ring planes in the interlayer space keeps a parquet arrangement. Water molecules are roughly arranged in planes adjacent to the host layers, together with the COO- groups.

  15. A test of the double-shearing model of flow for granular materials

    USGS Publications Warehouse

    Savage, J.C.; Lockner, D.A.

    1997-01-01

    The double-shearing model of flow attributes plastic deformation in granular materials to cooperative slip on conjugate Coulomb shears (surfaces upon which the Coulomb yield condition is satisfied). The strict formulation of the double-shearing model then requires that the slip lines in the material coincide with the Coulomb shears. Three different experiments that approximate simple shear deformation in granular media appear to be inconsistent with this strict formulation. For example, the orientation of the principal stress axes in a layer of sand driven in steady, simple shear was measured subject to the assumption that the Coulomb failure criterion was satisfied on some surfaces (orientation unspecified) within the sand layer. The orientation of the inferred principal compressive axis was then compared with the orientations predicted by the double-shearing model. The strict formulation of the model [Spencer, 1982] predicts that the principal stress axes should rotate in a sense opposite to that inferred from the experiments. A less restrictive formulation of the double-shearing model by de Josselin de Jong [1971] does not completely specify the solution but does prescribe limits on the possible orientations of the principal stress axes. The orientations of the principal compression axis inferred from the experiments are probably within those limits. An elastoplastic formulation of the double-shearing model [de Josselin de Jong, 1988] is reasonably consistent with the experiments, although quantitative agreement was not attained. Thus we conclude that the double-shearing model may be a viable law to describe deformation of granular materials, but the macroscopic slip surfaces will not in general coincide with the Coulomb shears.

  16. Toward understanding the double Intertropical Convergence Zone pathology in coupled ocean-atmosphere general circulation models

    NASA Astrophysics Data System (ADS)

    Zhang, Xuehong; Lin, Wuyin; Zhang, Minghua

    2007-06-01

    This paper first analyzes structures of the double Intertropical Convergence Zone (ITCZ) in the central equatorial Pacific simulated by three coupled ocean-atmosphere general circulation models in terms of sea surface temperatures, surface precipitation, and surface winds. It then describes the projection of the double ITCZ in the equatorial upper ocean. It is shown that the surface wind convergences, associated with the zonally oriented double rainbands on both sides of the equator, also correspond to surface wind curls that are favorable to Ekman pumping immediately poleward of the rainbands. The pumping results in a thermocline ridge south of the equator in the central equatorial Pacific, causing a significant overestimation of the eastward South Equatorial Counter Current that advects warm water eastward. A positive feedback mechanism is then described for the amplification of the double ITCZ in the coupled models from initial biases in stand-alone atmospheric models through the following chain of interactions: precipitation (atmospheric latent heating), surface wind convergences, surface wind curls, Ekman pumping, South Equatorial Counter Current, and eastward advection of ocean temperature. This pathology provides a possible means to address the longstanding double ITCZ problem in coupled models.

  17. Haag duality for Kitaev’s quantum double model for abelian groups

    NASA Astrophysics Data System (ADS)

    Fiedler, Leander; Naaijkens, Pieter

    2015-11-01

    We prove Haag duality for cone-like regions in the ground state representation corresponding to the translational invariant ground state of Kitaev’s quantum double model for finite abelian groups. This property says that if an observable commutes with all observables localized outside the cone region, it actually is an element of the von Neumann algebra generated by the local observables inside the cone. This strengthens locality, which says that observables localized in disjoint regions commute. As an application, we consider the superselection structure of the quantum double model for abelian groups on an infinite lattice in the spirit of the Doplicher-Haag-Roberts program in algebraic quantum field theory. We find that, as is the case for the toric code model on an infinite lattice, the superselection structure is given by the category of irreducible representations of the quantum double.

  18. Double and single pion photoproduction within a dynamical coupled-channels model

    SciTech Connect

    Hiroyuki Kamano; Julia-Diaz, Bruno; Lee, T. -S. H.; Matsuyama, Akihiko; Sato, Toru

    2009-12-16

    Within a dynamical coupled-channels model which has already been fixed from analyzing the data of the πN → πN and γN → πN reactions, we present the predicted double pion photoproduction cross sections up to the second resonance region, W < 1.7 GeV. The roles played by the different mechanisms within our model in determining both the single and double pion photoproduction reactions are analyzed, focusing on the effects due to the direct γN → ππN mechanism, the interplay between the resonant and non-resonant amplitudes, and the coupled-channels effects. As a result, the model parameters which can be determined most effectively in the combined studies of both the single and double pion photoproduction data are identified for future studies.

  19. Double and single pion photoproduction within a dynamical coupled-channels model

    DOE PAGES

    Hiroyuki Kamano; Julia-Diaz, Bruno; Lee, T. -S. H.; ...

    2009-12-16

Within a dynamical coupled-channels model which has already been fixed from analyzing the data of the πN → πN and γN → πN reactions, we present the predicted double pion photoproduction cross sections up to the second resonance region, W < 1.7 GeV. The roles played by the different mechanisms within our model in determining both the single and double pion photoproduction reactions are analyzed, focusing on the effects due to the direct γN → ππN mechanism, the interplay between the resonant and non-resonant amplitudes, and the coupled-channels effects. As a result, the model parameters which can be determined most effectively in the combined studies of both the single and double pion photoproduction data are identified for future studies.

  20. Convolutional fountain distribution over fading wireless channels

    NASA Astrophysics Data System (ADS)

    Usman, Mohammed

    2012-08-01

Mobile broadband has opened the possibility of a rich variety of services to end users. Broadcast/multicast of multimedia data is one such service, which can be used to deliver multimedia to multiple users economically. However, the radio channel poses serious challenges due to its time-varying properties, resulting in each user experiencing different channel characteristics, independent of other users. Conventional methods of achieving reliability in communication, such as automatic repeat request and forward error correction, do not scale well in a broadcast/multicast scenario over radio channels. Fountain codes, being rateless and information additive, overcome these problems. Although the design of fountain codes makes it possible to generate an infinite sequence of encoded symbols, the erroneous nature of radio channels mandates protecting the fountain-encoded symbols so that transmission is feasible. In this article, the performance of fountain codes in combination with convolutional codes, when used over radio channels, is presented. An investigation of various parameters, such as goodput, delay and buffer size requirements, pertaining to the performance of fountain codes in a multimedia broadcast/multicast environment is presented. Finally, a strategy for the use of a 'convolutional fountain' over radio channels is also presented.
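The rateless, information-additive behaviour of fountain codes can be illustrated with a toy LT-style encoder and peeling decoder (the inner convolutional code is omitted, and the uniform degree distribution below is an illustrative simplification, not the scheme evaluated in the article):

```python
import random

def lt_encode(blocks, n_packets, seed=42):
    """Generate rateless packets; each packet XORs a random subset of source blocks."""
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        degree = rng.randint(1, k)                 # toy uniform degree distribution
        idx = frozenset(rng.sample(range(k), degree))
        value = 0
        for i in idx:
            value ^= blocks[i]                     # information-additive XOR combining
        packets.append((idx, value))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: repeatedly resolve degree-1 packets until all blocks are known."""
    work = [[set(idx), value] for idx, value in packets]
    known = {}
    changed = True
    while changed and len(known) < k:
        changed = False
        for entry in work:
            idx, value = entry[0], entry[1]
            for i in list(idx):                    # strip already-recovered blocks
                if i in known:
                    value ^= known[i]
                    idx.discard(i)
            entry[1] = value
            if len(idx) == 1:                      # a degree-1 packet reveals one block
                known[idx.pop()] = value
                changed = True
    return [known.get(i) for i in range(k)]
```

A practical design would use a robust soliton degree distribution and, as the article argues, protect each encoded symbol with an inner convolutional code before transmission over the radio channel.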

  1. Convolution formulations for non-negative intensity.

    PubMed

    Williams, Earl G

    2013-08-01

Previously unknown spatial convolution formulas for a variant of the active normal intensity in planar coordinates have been derived; they use measured pressure or normal-velocity near-field holograms to construct a positive-only (outward) intensity distribution in the plane, quantifying the areas of the vibrating structure that radiate to the far field. This is an extension of the outgoing-only (unipolar) intensity technique recently developed for arbitrary geometries by Steffen Marburg. The method is applied independently to pressure and velocity data measured in a plane close to the surface of a point-driven, unbaffled rectangular plate in the laboratory. It is demonstrated that the sound-producing regions of the structure are clearly revealed using the derived formulas and that the spatial resolution is limited to a half-wavelength. A second set of formulas, called the hybrid-intensity formulas, is also derived; these yield a bipolar intensity using a different spatial convolution operator, again applied to either the measured pressure or the velocity. The experimental results demonstrate that the velocity formula yields the classical active intensity, and the pressure formula yields an interesting hybrid intensity that may be useful for source localization. Computations are fast and carried out in real space without Fourier transforms into wavenumber space.

  2. NUCLEI SEGMENTATION VIA SPARSITY CONSTRAINED CONVOLUTIONAL REGRESSION

    PubMed Central

    Zhou, Yin; Chang, Hang; Barner, Kenneth E.; Parvin, Bahram

    2017-01-01

    Automated profiling of nuclear architecture, in histology sections, can potentially help predict the clinical outcomes. However, the task is challenging as a result of nuclear pleomorphism and cellular states (e.g., cell fate, cell cycle), which are compounded by the batch effect (e.g., variations in fixation and staining). Present methods, for nuclear segmentation, are based on human-designed features that may not effectively capture intrinsic nuclear architecture. In this paper, we propose a novel approach, called sparsity constrained convolutional regression (SCCR), for nuclei segmentation. Specifically, given raw image patches and the corresponding annotated binary masks, our algorithm jointly learns a bank of convolutional filters and a sparse linear regressor, where the former is used for feature extraction, and the latter aims to produce a likelihood for each pixel being nuclear region or background. During classification, the pixel label is simply determined by a thresholding operation applied on the likelihood map. The method has been evaluated using the benchmark dataset collected from The Cancer Genome Atlas (TCGA). Experimental results demonstrate that our method outperforms traditional nuclei segmentation algorithms and is able to achieve competitive performance compared to the state-of-the-art algorithm built upon human-designed features with biological prior knowledge. PMID:28101301
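The pipeline described above — convolutional filters for per-pixel feature extraction, a linear regressor producing a likelihood map, and a final threshold — can be sketched minimally as follows. The filter bank here is hand-fixed rather than jointly learned, and there is no sparsity constraint, so this illustrates only the structure of the method, not the SCCR algorithm itself:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid'-mode 2-D sliding-window correlation."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "filter bank": a centre delta and a 3x3 mean filter (hand-fixed, not learned).
bank = [np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float),
        np.full((3, 3), 1.0 / 9.0)]

def fit_pixel_regressor(img, mask):
    """Least-squares weights mapping per-pixel filter responses to the binary mask."""
    feats = np.stack([conv2d_valid(img, k).ravel() for k in bank], axis=1)
    feats = np.hstack([feats, np.ones((feats.shape[0], 1))])   # bias column
    target = mask[1:-1, 1:-1].ravel()                          # crop to 'valid' region
    w, *_ = np.linalg.lstsq(feats, target, rcond=None)
    return w

def predict_mask(img, w, thresh=0.5):
    """Likelihood map from the linear regressor, thresholded into a segmentation."""
    feats = np.stack([conv2d_valid(img, k).ravel() for k in bank], axis=1)
    feats = np.hstack([feats, np.ones((feats.shape[0], 1))])
    h, wd = img.shape
    return (feats @ w >= thresh).reshape(h - 2, wd - 2)
```

In the paper the filters and the sparse linear regressor are learned jointly from annotated patches; here the least-squares fit stands in for that optimisation to keep the example self-contained.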

  3. Convolution Inequalities for the Boltzmann Collision Operator

    NASA Astrophysics Data System (ADS)

    Alonso, Ricardo J.; Carneiro, Emanuel; Gamba, Irene M.

    2010-09-01

We study integrability properties of a general version of the Boltzmann collision operator for hard and soft potentials in n-dimensions. A reformulation of the collisional integrals allows us to write the weak form of the collision operator as a weighted convolution, where the weight is given by an operator invariant under rotations. Using a symmetrization technique in L^p we prove a Young's inequality for hard potentials, which is sharp for Maxwell molecules in the L^2 case. Further, we find a new Hardy-Littlewood-Sobolev type of inequality for Boltzmann collision integrals with soft potentials. The same method extends to radially symmetric, non-increasing potentials that lie in some weak-L^s space or in L^s. The method we use resembles a Brascamp, Lieb and Luttinger approach for multilinear weighted convolution inequalities and follows a weak formulation setting. Consequently, it is closely connected to the classical analysis of Young and Hardy-Littlewood-Sobolev inequalities. In all cases, the inequality constants are explicitly given by formulas depending on integrability conditions of the angular cross section (in the spirit of Grad cut-off). As an additional application of the technique we also obtain estimates with exponential weights for hard potentials in both conservative and dissipative interactions.
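For reference, the two classical inequalities that these weighted-convolution estimates generalise can be stated in their standard textbook forms (these are the classical statements, not the paper's weighted versions):

```latex
% Young's inequality for convolution on R^n:
\|f * g\|_{L^r} \le \|f\|_{L^p}\,\|g\|_{L^q},
\qquad 1 + \tfrac{1}{r} = \tfrac{1}{p} + \tfrac{1}{q},
\quad 1 \le p, q, r \le \infty .

% Hardy-Littlewood-Sobolev inequality (|x|^{-\lambda} as convolution kernel):
\left| \int_{\mathbb{R}^n}\!\int_{\mathbb{R}^n}
\frac{f(x)\,g(y)}{|x-y|^{\lambda}}\,dx\,dy \right|
\le C_{n,\lambda,p}\,\|f\|_{L^p}\,\|g\|_{L^q},
\qquad \tfrac{1}{p} + \tfrac{1}{q} + \tfrac{\lambda}{n} = 2,
\quad 0 < \lambda < n .
```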

  4. Finite Element Modeling and Exploration of Double Hearing Protection Systems

    DTIC Science & Technology

    2006-02-10

broad frequency range were determined from this method. The elastomeric rubber material was cut into small wafers of 2 to 5 mm thickness. A mass was... material (being 0.1 for soft elastomeric foams), G and E are the shear and elastic moduli of the material, respectively, D is the diameter of the... and to investigate the behavior of the modeled system. The foam earplug material properties for the finite element model are required in the same shear

  5. New quantum MDS-convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Li, Fengwei; Yue, Qin

    2015-12-01

    In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.

  6. Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Hunter, Craig A.

    1999-01-01

An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle, for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring depends on convolution location, Mach number, boattail angle, and NPR. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was only minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.

  7. Communication — Modeling polymer-electrolyte fuel-cell agglomerates with double-trap kinetics

    DOE PAGES

    Pant, Lalit M.; Weber, Adam Z.

    2017-04-14

    A new semi-analytical agglomerate model is presented for polymer-electrolyte fuel-cell cathodes. The model uses double-trap kinetics for the oxygen-reduction reaction, which can capture the observed potential-dependent coverage and Tafel-slope changes. An iterative semi-analytical approach is used to obtain reaction rate constants from the double-trap kinetics, oxygen concentration at the agglomerate surface, and overall agglomerate reaction rate. The analytical method can predict reaction rates within 2% of the numerically simulated values for a wide range of oxygen concentrations, overpotentials, and agglomerate sizes, while saving simulation time compared to a fully numerical approach.

  8. A SPICE model of double-sided Si microstrip detectors

    SciTech Connect

Candelori, A.; Paccagnella, A.; Bonin, F.

    1996-12-31

We have developed a SPICE model for the ohmic side of AC-coupled Si microstrip detectors with interstrip isolation via field plates. The interstrip isolation has been measured in various conditions by varying the field-plate voltage. Simulations have been compared with experimental data in order to determine the values of the model parameters for different voltages applied to the field plates. The model correctly predicts the frequency dependence of the coupling between adjacent strips. Furthermore, we have used the model to study the signal propagation along the detector when a current signal is injected into a strip. Only electrical coupling is considered here, without any contribution from charge sharing due to carrier diffusion. For this purpose, the AC pads of the strips were connected to read-out electronics and the current signal was injected into a DC pad. Good agreement between measurements and simulations has been reached for the central strip and its first neighbors. Experimental tests and computer simulations have been performed for four different strip and field-plate layouts, in order to investigate how the detector geometry affects the parameters of the SPICE model and the signal propagation.
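The frequency dependence of capacitive interstrip coupling that such a lumped-element SPICE model reproduces can be sketched with the simplest possible circuit: a coupling capacitance driving the neighbouring strip's resistance forms a first-order high-pass divider. The component values below are arbitrary placeholders, not the detector's extracted parameters:

```python
import math

def coupling_magnitude(f_hz, c_couple, r_strip):
    """|H(f)| of the high-pass divider: coupling C feeding the neighbour strip's R."""
    z_c = 1.0 / (1j * 2.0 * math.pi * f_hz * c_couple)   # capacitor impedance
    return abs(r_strip / (r_strip + z_c))                # voltage-divider transfer

# Placeholder values: 1 pF coupling capacitance, 1 MOhm effective strip resistance.
CC, RS = 1e-12, 1e6
```

The magnitude rises monotonically with frequency and saturates below unity, which is the qualitative behaviour a full multi-strip network model refines with measured parameters.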

  9. Learning Building Extraction in Aerial Scenes with Convolutional Networks.

    PubMed

    Yuan, Jiangye

    2017-09-11

Extracting buildings from aerial scene images is an important task with many applications. However, this task is highly difficult to automate due to extremely large variations of building appearances, and still heavily relies on manual work. To attack this problem, we design a deep convolutional network with a simple structure that integrates activations from multiple layers for pixel-wise prediction, and introduce the signed distance function of building boundaries as the output representation, which has an enhanced representation power. To train the network, we leverage abundant building footprint data from geographic information systems (GIS) to generate large amounts of labeled data. The trained model achieves a superior performance on datasets that are significantly larger and more complex than those used in prior work, demonstrating that the proposed method provides a promising and scalable solution for automating this labor-intensive task.
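The signed-distance output representation can be computed from a binary building-footprint mask with a Euclidean distance transform. A brute-force sketch is shown below using one common sign convention (negative inside the footprint, positive outside); the convention and the implementation are illustrative assumptions, not necessarily the paper's exact recipe:

```python
import numpy as np

def signed_distance(mask):
    """Signed Euclidean distance to the footprint boundary (brute force, for clarity).

    Positive outside the footprint, negative inside -- an illustrative convention.
    Assumes the mask contains both footprint and background pixels.
    """
    mask = np.asarray(mask, dtype=bool)
    in_y, in_x = np.nonzero(mask)       # footprint pixel coordinates
    out_y, out_x = np.nonzero(~mask)    # background pixel coordinates
    sd = np.zeros(mask.shape)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:              # inside: negative distance to nearest background
                sd[i, j] = -np.sqrt((out_y - i) ** 2 + (out_x - j) ** 2).min()
            else:                       # outside: positive distance to nearest footprint
                sd[i, j] = np.sqrt((in_y - i) ** 2 + (in_x - j) ** 2).min()
    return sd
```

In practice one would use a linear-time distance transform (e.g. `scipy.ndimage.distance_transform_edt`) rather than this O(N·M) loop; the point here is only the shape of the regression target.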

  10. A shallow convolutional neural network for blind image sharpness assessment.

    PubMed

    Yu, Shaode; Wu, Shibin; Wang, Lei; Jiang, Fan; Xie, Yaoqin; Li, Leida

    2017-01-01

Blind image quality assessment can be modeled as feature extraction followed by score prediction. It necessitates considerable expertise and effort to handcraft features for optimal representation of perceptual image quality. This paper addresses blind image sharpness assessment by using a shallow convolutional neural network (CNN). The network takes a single feature layer to unearth intrinsic features for image sharpness representation and utilizes a multilayer perceptron (MLP) to rate image quality. Different from traditional methods, the CNN integrates feature extraction and score prediction into an optimization procedure and retrieves features automatically from raw images. Moreover, its prediction performance can be enhanced by replacing the MLP with a general regression neural network (GRNN) or support vector regression (SVR). Experiments on Gaussian blur images from LIVE-II, CSIQ, TID2008 and TID2013 demonstrate that CNN features with SVR achieve the best overall performance, indicating high correlation with human subjective judgment.

  11. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
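The core idea — choosing the values of a small, spatially constrained kernel so as to minimise mean-square restoration error — can be illustrated with a data-driven analogue that fits a 3×3 kernel by least squares on a degraded/reference image pair. The paper itself derives the kernel analytically from an end-to-end system model (scene statistics, PSF, sampling, noise), not from training data, so this is only a sketch of the optimisation criterion:

```python
import numpy as np

def patches3x3(img):
    """Stack every 3x3 neighbourhood of img as a row (valid region only)."""
    H, W = img.shape
    rows = []
    for i in range(H - 2):
        for j in range(W - 2):
            rows.append(img[i:i + 3, j:j + 3].ravel())
    return np.array(rows)

def fit_restoration_kernel(degraded, reference):
    """3x3 kernel minimising mean-square error between kernel*degraded and reference."""
    A = patches3x3(degraded)
    b = reference[1:-1, 1:-1].ravel()
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k.reshape(3, 3)

def apply_kernel(img, k):
    """Restore by convolving with the small kernel (valid region only)."""
    H, W = img.shape
    return (patches3x3(img) @ k.ravel()).reshape(H - 2, W - 2)
```

Because the kernel has only nine free values, applying it is a cheap convolution, which is exactly the practical advantage over the spatially unconstrained Wiener filter.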

  13. Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks.

    PubMed

    Annunziata, Roberto; Trucco, Emanuele

    2016-11-01

Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation when used as input to a random forest classifier.

  14. Spatial convolution for mirror image suppression in Fourier domain optical coherence tomography.

    PubMed

    Zhang, Miao; Ma, Lixin; Yu, Ping

    2017-02-01

    We developed a spatial convolution approach for mirror image suppression in phase-modulated Fourier domain optical coherence tomography, and demonstrated it in vivo for small animal imaging. Utilizing the correlation among neighboring A-scans, the mirror image suppression process was simplified to a three-parameter convolution. By adjusting the three parameters, we can implement different Fourier domain sideband windows, which is important but complicated in existing approaches. By properly selecting the window size, we validated the spatial convolution approach on both simulated and experimental data, and showed that it is versatile, fast, and effective. The new approach reduced the computational cost by 32% and improved the mirror image suppression by 10%. We adapted the spatial convolution approach to a GPU accelerated system for ultrahigh-speed processing in 0.1 ms. The advantage of the ultrahigh speed was demonstrated in vivo for small animal imaging in a mouse model. The fast scanning and processing speed removed respiratory motion artifacts in the in vivo imaging.

  15. Modeling the drain current and its equation parameters for lightly doped symmetrical double-gate MOSFETs

    NASA Astrophysics Data System (ADS)

    Bhartia, Mini; Chatterjee, Arun Kumar

    2015-04-01

    A 2D model for the potential distribution in silicon film is derived for a symmetrical double gate MOSFET in weak inversion. This 2D potential distribution model is used to analytically derive an expression for the subthreshold slope and threshold voltage. A drain current model for lightly doped symmetrical DG MOSFETs is then presented by considering weak and strong inversion regions including short channel effects, series source to drain resistance and channel length modulation parameters. These derived models are compared with the simulation results of the SILVACO (Atlas) tool for different channel lengths and silicon film thicknesses. Lastly, the effect of the fixed oxide charge on the drain current model has been studied through simulation. It is observed that the obtained analytical models of symmetrical double gate MOSFETs are in good agreement with the simulated results for a channel length to silicon film thickness ratio greater than or equal to 2.

  16. Spiral to ferromagnetic transition in a Kondo lattice model with a double-well potential

    NASA Astrophysics Data System (ADS)

    Caro, R. C.; Franco, R.; Silva-Valencia, J.

    2016-02-01

Using the density matrix renormalization group method, we study a system of 171Yb atoms confined in a one-dimensional optical lattice. The atoms in the 1S0 state move in a double-well potential, whereas the atoms in the 3P0 state are localized. This system is modelled by the Kondo lattice model plus a double-well potential for the free carriers. We obtain phase diagrams composed of ferromagnetic and spiral phases, where the critical points always increase with the interwell tunneling parameter. We conclude that this quantum phase transition can be tuned by the double-well potential parameters as well as by the common parameters: local coupling and density.

  17. Macro-modelling of a double-gimballed electrostatic torsional micromirror

    NASA Astrophysics Data System (ADS)

    Zhou, Guangya; Tay, Francis E. H.; Chau, Fook Siong

    2003-09-01

This paper presents the development of a reduced-order macro-model for the double-gimballed electrostatic torsional micromirror using the hierarchical circuit-based approach. The proposed macro-model permits extremely fast simulation while providing nearly FEM accuracy. The macro-model is coded in the MAST analog hardware description language (AHDL), and the simulations are implemented in the Saber simulator. Both the static and dynamic behaviour of the double-gimballed electrostatic torsional micromirror have been investigated. The dc and frequency analysis results obtained by the proposed macro-model are in good agreement with CoventorWare finite element analysis results. Based on the macro-model we developed, system-level simulation of a closed-loop controlled double-gimballed torsional micromirror is also performed. Decentralized PID controllers are proposed for the control of the micromirror. A sequential-loop-closing method is used for tuning the multiple control loops during the simulation. After tuning, the closed-loop controlled double-gimballed torsional micromirror demonstrates an improved transient performance and satisfactory disturbance rejection ability.
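As a structural illustration of the closed-loop control described above, here is a minimal discrete PID controller driving a toy single-integrator plant. The plant, gains, and time step are arbitrary assumptions for the sketch; the paper's micromirror macro-model and its decentralized, sequentially tuned loops are far more detailed:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # accumulate integral term
        derivative = (error - self.prev_error) / self.dt    # finite-difference derivative
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(setpoint=1.0, n_steps=2000, dt=0.1):
    """Drive a toy single-integrator plant x' = u toward the setpoint."""
    pid = PID(kp=1.0, ki=0.1, kd=0.0, dt=dt)
    x = 0.0
    for _ in range(n_steps):
        u = pid.step(setpoint, x)
        x += u * dt                                         # Euler step of the plant
    return x
```

A real micromirror loop would control a second-order torsional dynamic against an electrostatic actuator, and each gimbal axis would get its own loop, tuned sequentially as in the paper.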

  18. Neutrinoless double beta nuclear matrix elements around mass 80 in the nuclear shell-model

    NASA Astrophysics Data System (ADS)

    Yoshinaga, N.; Higashiyama, K.; Taguchi, D.; Teruya, E.

    2015-05-01

    The observation of the neutrinoless double-beta decay can determine whether the neutrino is a Majorana particle or not. For theoretical nuclear physics it is particularly important to estimate three types of matrix elements, namely Fermi (F), Gamow-Teller (GT), and tensor (T) matrix elements. In this paper, we carry out shell-model calculations and also pair-truncated shell-model calculations to check the model dependence in the case of mass A=82 nuclei.

  19. Two-dimensional models of threshold voltage and subthreshold current for symmetrical double-material double-gate strained Si MOSFETs

    NASA Astrophysics Data System (ADS)

    Yan-hui, Xin; Sheng, Yuan; Ming-tang, Liu; Hong-xia, Liu; He-cai, Yuan

    2016-03-01

The two-dimensional models for symmetrical double-material double-gate (DM-DG) strained Si (s-Si) metal-oxide semiconductor field effect transistors (MOSFETs) are presented. The surface potential and the surface electric field expressions have been obtained by solving Poisson's equation. The models of threshold voltage and subthreshold current are obtained based on the surface potential expression. The surface potential and the surface electric field are compared with those of single-material double-gate (SM-DG) MOSFETs. The effects of different device parameters on the threshold voltage and the subthreshold current are demonstrated. The analytical models give deep insight into device parameter design. The analytical results obtained from the proposed models agree well with simulation results using DESSIS. Project supported by the National Natural Science Foundation of China (Grant Nos. 61376099, 11235008, and 61205003).

  20. Period-doubling bifurcation and high-order resonances in RR Lyrae hydrodynamical models

    NASA Astrophysics Data System (ADS)

    Kolláth, Z.; Molnár, L.; Szabó, R.

    2011-06-01

We investigated period doubling, a well-known phenomenon in dynamical systems, for the first time in RR Lyrae models. These studies provide theoretical background for the recent discovery of period doubling in some Blazhko RR Lyrae stars with the Kepler space telescope. Since period doubling has been observed only in Blazhko-modulated stars so far, the phenomenon can help in understanding the modulation as well. Utilizing the Florida-Budapest turbulent convective hydrodynamical code, we have identified the phenomenon in both radiative and convective models. A period-doubling cascade was also followed up to an eight-period solution, confirming that destabilization of the limit cycle is indeed the underlying phenomenon. Floquet stability roots were calculated to investigate the possible causes and occurrences of the phenomenon. A two-dimensional diagnostic diagram was constructed to illustrate the various resonances between the fundamental mode and the different overtones. Combining the two tools, we confirmed that the period-doubling instability is caused by a 9:2 resonance between the ninth overtone and the fundamental mode. Destabilization of the limit cycle by a resonance of a high-order mode is possible because the overtone is a strange mode. The resonance is found to be strong enough to shift the period of the overtone by up to 10 per cent. Our investigations suggest that a more complex interplay of radial (and presumably non-radial) modes could happen in RR Lyrae stars, which might have connections with the Blazhko effect as well.
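The period-doubling cascade itself can be demonstrated in the simplest possible dynamical system, the logistic map, where the attractor's period doubles at successive parameter values. This is a generic illustration of the phenomenon, not of the RR Lyrae hydrodynamical models:

```python
def attractor_period(r, x0=0.5, transient=2000, max_period=16, tol=1e-9):
    """Iterate the logistic map x -> r*x*(1-x) and measure the attractor's period."""
    x = x0
    for _ in range(transient):          # let the orbit settle onto the attractor
        x = r * x * (1.0 - x)
    orbit = [x]
    for _ in range(max_period):
        x = r * x * (1.0 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):  # smallest p with x[n+p] ~ x[n]
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return None
```

Sweeping `r` through 2.8, 3.2 and 3.5 yields periods 1, 2 and 4: the same limit-cycle destabilisation that the Floquet analysis diagnoses in the stellar models, reduced to one line of algebra.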

  1. Models for 60 double-lined binaries containing giants

    NASA Astrophysics Data System (ADS)

    Eggleton, Peter P.; Yakut, Kadri

    2017-07-01

    The observed masses, radii and temperatures of 60 medium- to long-period binaries, most of which contain a cool, evolved star and a hotter, less evolved one, are compared with theoretical models which include (a) core convective overshooting, (b) mass-loss, possibly driven by dynamo action as in RS CVn binaries, and (c) tidal friction, including its effect on orbital period through magnetic braking. A reasonable fit is found in about 42 cases, but in 11 other cases the primaries appear to have lost either more mass or less mass than the models predict, and in 4 others the orbit is predicted to be either more or less circular than observed. Of the remaining three systems, two (γ Per and HR 8242) have a markedly 'overevolved' secondary, our explanation being that the primary component is the merged remnant of a former short-period sub-binary in a former triple system. The last system (V695 Cyg) defies any agreement at present. Mention is also made of three other systems (V643 Ori, OW Gem and V453 Cep), which are relevant to our discussion.

  2. A bilayer Double Semion model with symmetry-enriched topological order

    SciTech Connect

    Ortiz, L.; Martin-Delgado, M.A.

    2016-12-15

    We construct a new model of two-dimensional quantum spin systems that combines intrinsic topological orders and a global symmetry called flavour symmetry. It is referred to as the bilayer Double Semion model (bDS) and is an instance of symmetry-enriched topological order. A honeycomb bilayer lattice is introduced to combine a Double Semion topological order with a global spin-flavour symmetry to get the fractionalization of its quasiparticles. The bDS model exhibits non-trivial braiding self-statistics of excitations and its dual model constitutes a Symmetry-Protected Topological Order with novel edge states. This dual model gives rise to a bilayer Non-Trivial Paramagnet that is invariant under the flavour symmetry and the well-known spin flip symmetry.

  3. Family Stress and Adaptation to Crises: A Double ABCX Model of Family Behavior.

    ERIC Educational Resources Information Center

    McCubbin, Hamilton I.; Patterson, Joan M.

    Recent developments in family stress and coping research and a review of data and observations of families in a war-induced crisis situation led to an investigation of the relationship between a stressor and family outcomes. The study, based on the Double ABCX Model in which A (the stressor event) interacts with B (the family's crisis-meeting…

  4. Creating a Double-Spring Model to Teach Chromosome Movement during Mitosis & Meiosis

    ERIC Educational Resources Information Center

    Luo, Peigao

    2012-01-01

    The comprehension of chromosome movement during mitosis and meiosis is essential for understanding genetic transmission, but students often find this process difficult to grasp in a classroom setting. I propose a "double-spring model" that incorporates a physical demonstration and can be used as a teaching tool to help students understand this…

  6. Robust smile detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Celona, Luigi; Schettini, Raimondo

    2016-11-01

    We present a fully automated approach for smile detection. Faces are detected using a multiview face detector and aligned and scaled using automatically detected eye locations. Then, we use a convolutional neural network (CNN) to determine whether the face is smiling. To this end, we investigate different shallow CNN architectures that can be trained even when the amount of learning data is limited. We evaluate our complete processing pipeline on the largest publicly available image database for smile detection in an uncontrolled scenario. We investigate the robustness of the method to different kinds of geometric transformations (rotation, translation, and scaling) due to imprecise face localization, and to several kinds of distortions (compression, noise, and blur). To the best of our knowledge, this is the first time that this type of investigation has been performed for smile detection. Experimental results show that our proposal outperforms state-of-the-art methods on both high- and low-quality images.
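The building blocks of such a shallow CNN can be sketched in a few lines of NumPy. This is a generic forward pass (valid convolution, ReLU, 2x2 max-pooling), not the paper's trained architecture; the input image and edge kernel are made-up values:

```python
import numpy as np

# Minimal single-layer CNN forward pass: valid 2-D convolution -> ReLU
# -> 2x2 max-pooling.  Illustrative sketch only, with toy inputs.

def conv2d_valid(img, kernel):
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2x2(x):
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]          # crop to even dimensions
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)   # toy "image"
edge = np.array([[-1.0, 1.0]])                   # horizontal gradient kernel
fmap = maxpool2x2(relu(conv2d_valid(img, edge)))
print(fmap.shape)   # (3, 2)
```

A trained detector stacks a few such layers, learns the kernels from data, and ends with a fully connected smile/non-smile classifier.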

  7. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.
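A convolutional encoder of the kind being benchmarked here is just a sliding parity computation over a shift register. A hedged sketch of a rate-1/2 feedforward encoder, instantiated with the 171/133 octal generator pair commonly cited for the NASA-standard constraint-length-7 code (the PUM constructions themselves are not reproduced here, and tap-ordering conventions vary between references):

```python
# Generic rate-1/2 feedforward convolutional encoder.  Each input bit is
# shifted into a K-bit register; two output bits are the parities of the
# register masked by the two generator polynomials.

def conv_encode(bits, g1=0o171, g2=0o133, K=7):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # shift in newest bit
        out.append(bin(state & g1).count("1") & 1)    # parity of g1 taps
        out.append(bin(state & g2).count("1") & 1)    # parity of g2 taps
    return out

msg = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]   # short message plus flushing zeros
print(conv_encode(msg)[:6])
```

Every input bit produces two channel bits (rate 1/2), and each output depends on the current bit plus the six previous ones, which is what the memory-6 notation (2, 1, 6) records.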

  8. Convolutional neural network for pottery retrieval

    NASA Astrophysics Data System (ADS)

    Benhabiles, Halim; Tabia, Hedi

    2017-01-01

    The effectiveness of the convolutional neural network (CNN) has already been demonstrated in many challenging tasks of computer vision, such as image retrieval, action recognition, and object classification. This paper specifically exploits CNN to design local descriptors for content-based retrieval of complete or nearly complete three-dimensional (3-D) vessel replicas. Based on vector quantization, the designed descriptors are clustered to form a shape vocabulary. Then, each 3-D object is associated to a set of clusters (words) in that vocabulary. Finally, a weighted vector counting the occurrences of every word is computed. The reported experimental results on the 3-D pottery benchmark show the superior performance of the proposed method.

  9. Image statistics decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Pitt, G. H., III; Swanson, L.; Yuen, J. H.

    1987-01-01

    Adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so its feasibility and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.

  10. Ergodic Transition in a Simple Model of the Continuous Double Auction

    PubMed Central

    Radivojević, Tijana; Anselmi, Jonatha; Scalas, Enrico

    2014-01-01

    We study a phenomenological model for the continuous double auction, whose aggregate order process is equivalent to two independent queues. The continuous double auction defines a continuous-time random walk for trade prices. The conditions for ergodicity of the auction are derived and, as a consequence, three possible regimes in the behavior of prices and logarithmic returns are observed. In the ergodic regime, prices are unstable and one can observe a heteroskedastic behavior in the logarithmic returns. On the contrary, non-ergodicity triggers stability of prices, even if two different regimes can be seen. PMID:24558377
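The price process described above is a continuous-time random walk driven by executions of buy and sell orders. A minimal sketch under simplifying assumptions (a symmetric tick-by-tick walk with a tunable buy probability; the paper's actual specification of the two order queues is not reproduced here):

```python
import random

# Toy double-auction price path: each trade moves the log-price by one
# tick, up for a buy-initiated trade, down for a sell-initiated one.
# p_buy != 0.5 mimics an imbalance between the two order streams.

def simulate_price(n_events, p_buy=0.5, seed=42):
    rng = random.Random(seed)
    price = 0
    path = [price]
    for _ in range(n_events):
        price += 1 if rng.random() < p_buy else -1
        path.append(price)
    return path

path = simulate_price(10_000)
print(len(path))
```

In this caricature the balanced case diffuses without bound (the "unstable prices" of the ergodic regime), while the model in the paper derives the regime boundaries from the ergodicity of the underlying queueing system.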

  11. Double Higgs production in the Two Higgs Doublet Model at the linear collider

    SciTech Connect

    Arhrib, Abdesslam; Benbrik, Rachid; Chiang, C.-W.

    2008-04-21

    We study double Higgs-strahlung production at the future Linear Collider in the framework of the Two Higgs Doublet Model through the following channels: e⁺e⁻ → φᵢφⱼZ, with φᵢ = h⁰, H⁰, A⁰. All these processes are sensitive to triple Higgs couplings. Hence, observations of them provide information on the triple Higgs couplings that helps in reconstructing the scalar potential. We also discuss the double Higgs-strahlung e⁺e⁻ → h⁰h⁰Z in the decoupling limit, where h⁰ mimics the SM Higgs boson.

  12. The double-gradient model of flapping instability with oblique wave vector

    NASA Astrophysics Data System (ADS)

    Korovinskiy, Daniil; Kiehas, Stefan

    2017-04-01

    The double-gradient model of magnetotail flapping oscillations/instability is generalized for the case of oblique propagation in the equatorial plane. The transversal direction Y (in the GSM reference system) of the wave vector is found to be preferable, showing the highest growth rates of the kink and sausage double-gradient unstable modes. Growth rates decrease as the wave vector rotates toward the X direction. It is found that neither waves nor instability can develop with a wave vector pointing toward the Earth or the magnetotail.

  13. Effects of Convoluted Divergent Flap Contouring on the Performance of a Fixed-Geometry Nonaxisymmetric Exhaust Nozzle

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.; Hunter, Craig A.

    1999-01-01

    An investigation was conducted in the model preparation area of the Langley 16-Foot Transonic Tunnel to determine the effects of convoluted divergent-flap contouring on the internal performance of a fixed-geometry, nonaxisymmetric, convergent-divergent exhaust nozzle. Testing was conducted at static conditions using a sub-scale nozzle model with one baseline and four convoluted configurations. All tests were conducted with no external flow at nozzle pressure ratios from 1.25 to approximately 9.50. Results indicate that baseline nozzle performance was dominated by unstable, shock-induced, boundary-layer separation at overexpanded conditions. Convoluted configurations were found to significantly reduce, and in some cases totally alleviate separation at overexpanded conditions. This result was attributed to the ability of convoluted contouring to energize and improve the condition of the nozzle boundary layer. Separation alleviation offers potential for installed nozzle aeropropulsive (thrust-minus-drag) performance benefits by reducing drag at forward flight speeds, even though this may reduce nozzle thrust ratio as much as 6.4% at off-design conditions. At on-design conditions, nozzle thrust ratio for the convoluted configurations ranged from 1% to 2.9% below the baseline configuration; this was a result of increased skin friction and oblique shock losses inside the nozzle.

  14. Modelling and control of double-cone dielectric elastomer actuator

    NASA Astrophysics Data System (ADS)

    Branz, F.; Francesconi, A.

    2016-09-01

    Among various dielectric elastomer devices, cone actuators are of large interest for their multi-degree-of-freedom design. These objects combine the common advantages of dielectric elastomers (i.e. solid-state actuation, self-sensing capability, high conversion efficiency, light weight and low cost) with the possibility to actuate more than one degree of freedom in a single device. The potential applications of this feature in robotics are huge, making cone actuators very attractive. This work focuses on rotational degrees of freedom to complement the existing literature and improve the understanding of this aspect. Simple tools are presented for the performance prediction of the device: finite element method simulations and interpolating relations have been used to assess the actuator steady-state behaviour in terms of torque and rotation as a function of geometric parameters. Results are interpolated by fit relations accounting for all the relevant parameters. The obtained data are validated through comparison with experimental results: steady-state torque and rotation are determined at a given actuation voltage. In addition, the transient response to step input has been measured and, as a result, the voltage-to-torque and the voltage-to-rotation transfer functions are obtained. Experimental data are collected and used to validate the prediction capability of the transfer function in terms of time response to step input and frequency response. The developed static and dynamic models have been employed to implement a feedback compensator that controls the device motion; the simulated behaviour is compared to experimental data, resulting in a maximum prediction error of 7.5%.

  15. SCAN-based hybrid and double-hybrid density functionals from models without fitted parameters

    NASA Astrophysics Data System (ADS)

    Hui, Kerwin; Chai, Jeng-Da

    2016-01-01

    By incorporating the nonempirical strongly constrained and appropriately normed (SCAN) semilocal density functional [J. Sun, A. Ruzsinszky, and J. P. Perdew, Phys. Rev. Lett. 115, 036402 (2015)] in the underlying expression of four existing hybrid and double-hybrid models, we propose one hybrid (SCAN0) and three double-hybrid (SCAN0-DH, SCAN-QIDH, and SCAN0-2) density functionals, which are free from any fitted parameters. The SCAN-based double-hybrid functionals consistently outperform their parent SCAN semilocal functional for self-interaction problems and noncovalent interactions. In particular, SCAN0-2, which includes about 79% of Hartree-Fock exchange and 50% of second-order Møller-Plesset correlation, is shown to be reliably accurate for a very diverse range of applications, such as thermochemistry, kinetics, noncovalent interactions, and self-interaction problems.

  16. A diabatic state model for double proton transfer in hydrogen bonded complexes.

    PubMed

    McKenzie, Ross H

    2014-09-14

    Four diabatic states are used to construct a simple model for double proton transfer in hydrogen bonded complexes. Key parameters in the model are the proton donor-acceptor separation R and the ratio, D1/D2, between the proton affinity of a donor with one and two protons. Depending on the values of these two parameters, the model describes four qualitatively different ground state potential energy surfaces, having zero, one, two, or four saddle points. Only for the latter are there four stable tautomers. In the limit D2 = D1 the model reduces to two decoupled hydrogen bonds. As R decreases, a transition can occur from a synchronous concerted to an asynchronous concerted to a sequential mechanism for double proton transfer.
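The algebraic core of a four-diabatic-state model can be sketched as a 4x4 Hamiltonian at one fixed geometry. This is a hedged illustration, not the paper's parametrization: the basis labels which side each proton sits on (LL, LR, RL, RR), and the energy and coupling values below are hypothetical numbers:

```python
import numpy as np

# Hedged four-state sketch for double proton transfer.  eps_mid plays the
# role of the D1/D2 asymmetry (energy cost of the singly transferred
# tautomers); delta couples states differing by one proton hop; the
# simultaneous two-proton hop (LL <-> RR) is neglected.

delta = 0.4      # single-proton hopping amplitude (assumed value)
eps_mid = 0.6    # energy of LR / RL intermediates (assumed value)

H = np.array([
    [0.0,     delta,   delta,   0.0  ],   # LL
    [delta,   eps_mid, 0.0,     delta],   # LR
    [delta,   0.0,     eps_mid, delta],   # RL
    [0.0,     delta,   delta,   0.0  ],   # RR (degenerate with LL)
])
evals, evecs = np.linalg.eigh(H)
print(np.round(evals, 3))
```

Diagonalizing couples the symmetric LL/RR combination to the intermediates and pushes the ground state down; scanning the assumed parameters (standing in for R and D1/D2) is what generates the zero-, one-, two-, or four-saddle-point surfaces discussed in the abstract.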

  17. Double Higgs production at LHC, see-saw type-II and Georgi-Machacek model

    SciTech Connect

    Godunov, S. I.; Vysotsky, M. I.; Zhemchugov, E. V.

    2015-03-15

    The double Higgs production in models with isospin-triplet scalars is studied. It is shown that in the see-saw type-II model, the mode with an intermediate heavy scalar, pp → H + X → 2h + X, may have a cross section comparable with that in the Standard Model. In the Georgi-Machacek model, this cross section could be much larger than in the Standard Model because the vacuum expectation value of the triplet can be large.

  18. Image quality of mixed convolution kernel in thoracic computed tomography

    PubMed Central

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-01-01

    The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernel. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large- and medium-sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT. PMID:27858910

  19. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernel. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large- and medium-sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  20. Parallel double-plate capacitive proximity sensor modelling based on effective theory

    NASA Astrophysics Data System (ADS)

    Li, Nan; Zhu, Haiye; Wang, Wenyu; Gong, Yu

    2014-02-01

    A semi-analytical model for a double-plate capacitive proximity sensor is presented based on effective theory. Three physical models are established to derive the final equation of the sensor, and measured data are used to determine its coefficients and to verify the resulting equation. The average relative error between the calculated and the measured sensor capacitance is less than 7.5%. The equation can be used to guide the engineering design of proximity sensors.
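The baseline that such semi-analytical sensor models refine is the ideal parallel-plate formula C = ε₀εᵣA/d; the fitted coefficients absorb fringe-field and environmental effects. A minimal sketch of that baseline (plate area and gap are assumed example values):

```python
# Ideal parallel-plate capacitance: C = eps0 * eps_r * A / d.
# The semi-analytical sensor model corrects this textbook limit with
# empirically fitted coefficients; dimensions here are assumed values.

EPS0 = 8.854e-12   # F/m, vacuum permittivity

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    return EPS0 * eps_r * area_m2 / gap_m

c = plate_capacitance(area_m2=1e-4, gap_m=1e-3)   # 1 cm^2 plates, 1 mm gap
print(f"{c * 1e12:.3f} pF")   # 0.885 pF
```

Comparing such an ideal prediction against measured capacitance is exactly where the quoted 7.5% average relative error of the fitted model becomes meaningful.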

  1. Extended Holography: Double-Trace Deformation and Brane-Induced Gravity Models

    NASA Astrophysics Data System (ADS)

    Barvinsky, A. O.

    2017-03-01

    We put forward a conjecture that for a special class of models - models of the double-trace deformation and brane-induced gravity types - the principle of holographic duality can be extended beyond conformal invariance and anti-de Sitter (AdS) isometry. Such an extension is based on a special relation between functional determinants of the operators acting in the bulk and on the boundary.

  3. South Asian summer monsoon variability in a model with doubled atmospheric carbon dioxide concentration

    SciTech Connect

    Meehl, G.A.; Washington, W.M.

    1993-05-21

    Doubled atmospheric carbon dioxide concentration in a global coupled ocean-atmosphere climate model produced increased surface temperatures and evaporation and greater mean precipitation in the south Asian summer monsoon region. As a partial consequence, interannual variability of area-averaged monsoon rainfall was enhanced. Consistent with the climate sensitivity results from the model, observations showed a trend of increased interannual variability of Indian monsoon precipitation associated with warmer land and ocean temperatures in the monsoon region. 26 refs., 3 figs., 1 tab.

  4. Convolution kernel design and efficient algorithm for sampling density correction.

    PubMed

    Johnson, Kenneth O; Pipe, James G

    2009-02-01

    Sampling density compensation is an important step in non-Cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, attempting to minimize the error in a fully reconstructed image. The resulting weights obtained using this new kernel are compared with various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.
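The convolution-based weighting scheme referred to here iterates w ← w / (w ⊛ kernel) until the kernel-smoothed weights are flat. A hedged 1-D toy (the sample layout and the triangular kernel are illustrative assumptions, not the paper's designed kernel):

```python
import numpy as np

# Toy 1-D iterative density compensation, Pipe-Menon style:
# repeat w <- w / (w (*) kernel), with the convolution evaluated at the
# nonuniform sample positions.  Kernel choice and layout are assumptions.

pos = np.concatenate([np.linspace(0.0, 1.0, 40),    # densely sampled region
                      np.linspace(1.1, 4.0, 40)])   # sparsely sampled region
width = 0.3

def conv_at_samples(w):
    d = np.abs(pos[:, None] - pos[None, :])
    k = np.clip(1.0 - d / width, 0.0, None)   # triangular kernel
    return k @ w

w = np.ones_like(pos)
for _ in range(100):
    w = w / conv_at_samples(w)

print(w[15] < w[50])   # True: sparse samples end up with larger weights
```

At convergence the weighted sampling density is approximately uniform under the kernel, which is the criterion the designed kernel in the paper optimizes against reconstruction error.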

  5. Modeling sorption of divalent metal cations on hydrous manganese oxide using the diffuse double layer model

    USGS Publications Warehouse

    Tonkin, J.W.; Balistrieri, L.S.; Murray, J.W.

    2004-01-01

    Manganese oxides are important scavengers of trace metals and other contaminants in the environment. The inclusion of Mn oxides in predictive models, however, has been difficult due to the lack of a comprehensive set of sorption reactions consistent with a given surface complexation model (SCM), and the discrepancies between published sorption data and predictions using the available models. The authors have compiled a set of surface complexation reactions for synthetic hydrous Mn oxide (HMO) using a two surface site model and the diffuse double layer SCM which complements databases developed for hydrous Fe(III) oxide, goethite and crystalline Al oxide. This compilation encompasses a range of data observed in the literature for the complex HMO surface and provides an error envelope for predictions not well defined by fitting parameters for single or limited data sets. Data describing surface characteristics and cation sorption were compiled from the literature for the synthetic HMO phases birnessite, vernadite and δ-MnO2. A specific surface area of 746 m2 g-1 and a surface site density of 2.1 mmol g-1 were determined from crystallographic data and considered fixed parameters in the model. Potentiometric titration data sets were adjusted to a pHIEP value of 2.2. Two site types (≡XOH and ≡YOH) were used. The fraction of total sites attributed to ≡XOH (f) and pKa2 were optimized for each of 7 published potentiometric titration data sets using the computer program FITEQL3.2. pKa2 values of 2.35±0.077 (≡XOH) and 6.06±0.040 (≡YOH) were determined at the 95% confidence level. The calculated average f value was 0.64, with high and low values ranging from 1.0 to 0.24, respectively. The pKa2 and f values and published cation sorption data were used subsequently to determine equilibrium surface complexation constants for Ba2+, Ca2+, Cd2+, Co2+, Cu2+, Mg2+, Mn2+, Ni2+, Pb2+, Sr2+ and Zn2+. In addition, average model parameters were used to predict additional
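The mass-action core of the fitted acidity constants can be sketched directly: with the pKa2 values quoted above (2.35 for the XOH sites, 6.06 for the YOH sites), the deprotonated fraction of each site type follows a simple titration curve. This sketch ignores the diffuse-layer electrostatic correction the full DDL model applies:

```python
# Fraction of each surface site type deprotonated versus pH, from simple
# mass action with the pKa2 values quoted in the abstract (2.35 and 6.06).
# No electrostatic (diffuse double layer) correction is included.

def frac_deprotonated(pH, pKa):
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for pH in (3.0, 5.0, 7.0):
    print(pH,
          round(frac_deprotonated(pH, 2.35), 3),   # XOH sites
          round(frac_deprotonated(pH, 6.06), 3))   # YOH sites
```

The strongly acidic XOH sites are almost fully deprotonated above pH 4, while the YOH sites titrate through the circumneutral range, which is why HMO remains an effective cation sorbent at environmental pH.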

  6. Theoretical modeling of the dynamics of a semiconductor laser subject to double-reflector optical feedback

    SciTech Connect

    Bakry, A.; Abdulrhmann, S.; Ahmed, M.

    2016-06-15

    We theoretically model the dynamics of semiconductor lasers subject to the double-reflector feedback. The proposed model is a new modification of the time-delay rate equations of semiconductor lasers under the optical feedback to account for this type of the double-reflector feedback. We examine the influence of adding the second reflector to dynamical states induced by the single-reflector feedback: periodic oscillations, period doubling, and chaos. Regimes of both short and long external cavities are considered. The present analyses are done using the bifurcation diagram, temporal trajectory, phase portrait, and fast Fourier transform of the laser intensity. We show that adding the second reflector pulls the periodic oscillations, period-doubling oscillations, and chaos induced by the first reflector onto a route to continuous-wave operation. During this operation, the periodic-oscillation frequency increases with strengthening the optical feedback. We show that the chaos induced by the double-reflector feedback is more irregular than that induced by the single-reflector feedback. The power spectrum of this chaos state does not reflect information on the geometry of the optical system, which then has potential for use in chaotic (secure) optical data encryption.

  7. Theoretical modeling of the dynamics of a semiconductor laser subject to double-reflector optical feedback

    NASA Astrophysics Data System (ADS)

    Bakry, A.; Abdulrhmann, S.; Ahmed, M.

    2016-06-01

    We theoretically model the dynamics of semiconductor lasers subject to the double-reflector feedback. The proposed model is a new modification of the time-delay rate equations of semiconductor lasers under the optical feedback to account for this type of the double-reflector feedback. We examine the influence of adding the second reflector to dynamical states induced by the single-reflector feedback: periodic oscillations, period doubling, and chaos. Regimes of both short and long external cavities are considered. The present analyses are done using the bifurcation diagram, temporal trajectory, phase portrait, and fast Fourier transform of the laser intensity. We show that adding the second reflector pulls the periodic oscillations, period-doubling oscillations, and chaos induced by the first reflector onto a route to continuous-wave operation. During this operation, the periodic-oscillation frequency increases with strengthening the optical feedback. We show that the chaos induced by the double-reflector feedback is more irregular than that induced by the single-reflector feedback. The power spectrum of this chaos state does not reflect information on the geometry of the optical system, which then has potential for use in chaotic (secure) optical data encryption.

  8. Cascading failures coupled model of interdependent double layered public transit network

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Fu, Bai-Bai; Li, Shu-Bin

    2016-06-01

    Taking an urban public transit network as the research setting, this work modifies the traditional load-capacity cascading-failures (CFs) model by introducing the influence of adjacent stations on the definition of station initial load, the transit capacity of connecting edges, and the coupled capacity. Furthermore, we consider the coupled effect of the lower-layer public transit network on CFs in the upper-layer network, and construct a CFs coupled model of a double-layered public transit network with an “interdependent relationship”. Finally, taking Jinan city’s public transit network as an example, we present dynamic simulations of CFs under different control parameters, based on the measurement indicators of station cascading-failure ratio (abbreviated as CF) and the scale of time-step cascading failures (abbreviated as TCFl), characterize the influence of each control parameter, and verify the feasibility of the CFs coupled model of the double-layered public transit network.
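The load-capacity mechanism underlying such CFs models can be sketched generically (Motter-Lai style): give each node a capacity proportional to its initial load and shed a failed node's load onto its surviving neighbours. This is a hedged toy on a made-up five-node network, not the paper's modified double-layer transit model:

```python
# Minimal load-capacity cascading-failure sketch: initial load = degree,
# capacity = (1 + alpha) * initial load; a failed node sheds its load
# equally onto surviving neighbours.  Generic toy, not the paper's model.

def cascade(adj, seed_node, alpha=0.2):
    load = {n: float(len(nbrs)) for n, nbrs in adj.items()}
    cap = {n: (1.0 + alpha) * l for n, l in load.items()}
    failed = {seed_node}
    frontier = [seed_node]
    while frontier:
        nxt = []
        for f in frontier:
            alive = [n for n in adj[f] if n not in failed]
            for n in alive:
                load[n] += load[f] / len(alive)   # shed load to survivors
        for n in adj:
            if n not in failed and load[n] > cap[n]:
                failed.add(n)
                nxt.append(n)
        frontier = nxt
    return failed

# Hypothetical hub-and-ring network: hub 0 linked to 1..4, ring among 1..4.
adj = {0: [1, 2, 3, 4],
       1: [0, 2, 4], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 1, 3]}
print(sorted(cascade(adj, seed_node=0)))   # total collapse at alpha=0.2
```

Raising the tolerance alpha to 0.5 absorbs the hub's shed load and confines the failure to the seed node, which is the kind of control-parameter dependence the paper's simulations quantify with the CF and TCFl indicators.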

  9. Double-wire sternal closure technique in bovine animal models for total artificial heart implant.

    PubMed

    Karimov, Jamshid H; Sunagawa, Gengo; Golding, Leonard A R; Moazami, Nader; Fukamachi, Kiyotaka

    2015-08-01

    In vivo preclinical testing of mechanical circulatory devices requires large animal models that provide reliable physiological and hemodynamic conditions by which to test the device and investigate design and development strategies. Large bovine species are commonly used for mechanical circulatory support device research. The animals used for chronic in vivo support require high-quality care and excellent surgical techniques as well as advanced methods of postoperative care. These techniques are constantly being updated and new methods are emerging. We report results of our double steel-wire closure technique in large bovine models used for Cleveland Clinic's continuous-flow total artificial heart development program. This is the first report of double-wire sternal fixation used in large bovine models.

  10. Critical and crossover behavior in the double-Gaussian model on a lattice

    NASA Astrophysics Data System (ADS)

    Baker, George A., Jr.; Bishop, A. R.; Fesser, K.; Beale, Paul D.; Krumhansl, J. A.

    1982-09-01

    The double-Gaussian model, as recently introduced by Baker and Bishop, is studied in the context of a lattice-dynamics Hamiltonian belonging to the familiar φ4 class. Advantage is taken of the partition-function factorability (into Ising and Gaussian components) to place bounds on the Ising-class critical temperature for various lattice dimensions and all degrees of displaciveness in the bare Hamiltonian. Further, a simple criterion for a noncritical and nonuniversal crossover from order-disorder to Gaussian behavior is evaluated in numerical detail. In one and two dimensions these critical and crossover properties are compared with predictions based on real-space decimation renormalization-group flows, as previously exploited in the φ4 model by Beale et al. The double-Gaussian model again introduces some unique analytical advantages.

  11. Critical and crossover behavior in the double Gaussian model on a lattice

    SciTech Connect

    Baker, G.A. Jr.; Bishop, A.R.; Fesser, K.; Beale, P.D.; Krumhansl, J.A.

    1982-09-01

    The double-Gaussian model, as recently introduced by Baker and Bishop, is studied in the context of a lattice-dynamics Hamiltonian belonging to the familiar φ4 class. Advantage is taken of the partition-function factorability (into Ising and Gaussian components) to place bounds on the Ising-class critical temperature for various lattice dimensions and all degrees of displaciveness in the bare Hamiltonian. Further, a simple criterion for a noncritical and nonuniversal crossover from order-disorder to Gaussian behavior is evaluated in numerical detail. In one and two dimensions these critical and crossover properties are compared with predictions based on real-space decimation renormalization-group flows, as previously exploited in the φ4 model by Beale et al. The double-Gaussian model again introduces some unique analytical advantages.

  12. A DNA double-strand break kinetic rejoining model based on the local effect model.

    PubMed

    Tommasino, F; Friedrich, T; Scholz, U; Taucher-Scholz, G; Durante, M; Scholz, M

    2013-11-01

    We report here on a DNA double-strand break (DSB) kinetic rejoining model applicable to a wide range of radiation qualities based on the DNA damage pattern predicted by the local effect model (LEM). In the LEM this pattern is derived from the SSB and DSB yields after photon irradiation in combination with an amorphous track structure approach. Together with the assumption of a giant-loop organization to describe the higher order chromatin structure this allows the definition of two different classes of DSB. These classes are defined by the level of clustering on a micrometer scale, i.e., "isolated DSB" (iDSB) are characterized by a single DSB in a giant loop and "clustered DSB" (cDSB) by two or more DSB in a loop. Clustered DSB are assumed to represent a more difficult challenge for the cell repair machinery compared to isolated DSB, and we thus hypothesize here that the fraction of isolated DSB can be identified with the fast component of rejoining, whereas clustered DSB are identified with the slow component of rejoining. The resulting predicted bi-exponential decay functions nicely reproduce the experimental curves of DSB rejoining over time obtained by means of gel electrophoresis elution techniques as reported by different labs, involving different cell types and a wide spectrum of radiation qualities. New experimental data are also presented aimed at investigating the effects of the same ion species accelerated at different energies. The results presented here further support the relevance of the proposed two classes of DSB as a basis for understanding cell response to ion irradiation. Importantly the density of DSB within DNA giant loops of around 2 Mbp size, i.e., on a micrometer scale, is identified as a key parameter for the description of radiation effectiveness.
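    The bi-exponential rejoining curve described above can be sketched numerically; the function name, the isolated-DSB fraction, and the fast/slow time constants below are illustrative placeholders, not values fitted in the paper.

```python
import numpy as np

def dsb_remaining(t, f_isolated, tau_fast, tau_slow):
    """Fraction of DSB still unrejoined at time t: isolated DSB decay with
    the fast time constant, clustered DSB with the slow one."""
    f_clustered = 1.0 - f_isolated
    return f_isolated * np.exp(-t / tau_fast) + f_clustered * np.exp(-t / tau_slow)

# Illustrative parameters: 70% isolated DSB; time constants in hours.
t = np.linspace(0.0, 24.0, 49)
remaining = dsb_remaining(t, f_isolated=0.7, tau_fast=0.5, tau_slow=8.0)
```

    In such a fit, the slow-component amplitude directly estimates the clustered-DSB fraction, which is the quantity the model links to radiation quality.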

  13. Programmable convolution via the chirp Z-transform with CCD's

    NASA Technical Reports Server (NTRS)

    Buss, D. D.

    1977-01-01

    Filtering by convolution in the frequency domain rather than in the time domain presents a possible solution to the problem of programmable transversal filters. The process is accomplished through utilization of the chirp z-transform (CZT) with charge-coupled devices.
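    The underlying idea — filtering by multiplication in the frequency domain — can be illustrated with an ordinary FFT standing in for the CCD chirp z-transform hardware; a minimal sketch:

```python
import numpy as np

def freq_domain_convolve(x, h):
    """Linear convolution computed by frequency-domain filtering: zero-pad
    both sequences to length len(x) + len(h) - 1, multiply their spectra,
    and inverse-transform."""
    n = len(x) + len(h) - 1
    return np.real(np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)))

x = np.array([1.0, 2.0, 3.0])
h = np.array([0.0, 1.0, 0.5])
y = freq_domain_convolve(x, h)  # same result as direct convolution
```

    Reprogramming the filter then amounts to changing the stored spectrum of h, which is the programmability the CZT approach exploits.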

  14. A fast computation of complex convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    The cyclic convolution of complex values was obtained by a hybrid transform that is a combination of a Winograd transform and a fast complex integer transform. This new hybrid algorithm requires fewer multiplications than any previously known algorithm.
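    The convolution theorem that such hybrid transforms exploit can be sketched with an ordinary FFT standing in for the Winograd/complex-integer-transform combination:

```python
import numpy as np

def cyclic_convolve(a, b):
    """Cyclic (circular) convolution of two equal-length complex sequences
    via the convolution theorem: elementwise product of the DFTs,
    inverse-transformed."""
    assert len(a) == len(b)
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

a = np.array([1 + 1j, 2.0, 0.0, -1j])
b = np.array([0.5, 0.0, 1j, 0.0])
c = cyclic_convolve(a, b)
```

    The hybrid algorithm of the paper computes the same quantity while minimising the number of multiplications, which the FFT route does not attempt.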

  16. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    PubMed

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth estimation from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most of it using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.

  17. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. P.; Dixon, R. L.; Samei, Ehsan

    2015-03-01

    Among the various metrics that quantify radiation dose in computed tomography (CT), organ dose is one of the most representative quantities reflecting patient-specific radiation burden.1 Accurate estimation of organ dose requires one to effectively model the patient anatomy and the irradiation field. As illustrated in previous studies, the patient anatomy factor can be modeled using a library of computational phantoms with representative body habitus.2 However, the modeling of the irradiation field can be practically challenging, especially for CT exams performed with tube current modulation (TCM). The central challenge is to effectively quantify the scatter irradiation field created by the dynamic change of tube current. In this study, we present a convolution-based technique to effectively quantify the primary and scatter irradiation field for TCM examinations. The organ dose for a given clinical patient can then be rapidly determined using the convolution-based method, a patient-matching technique, and a library of computational phantoms. 58 adult patients were included in this study (age range: 18-70 y.o., weight range: 60-180 kg). One computational phantom was created based on the clinical images of each patient. Each patient was optimally matched against one of the remaining 57 computational phantoms using a leave-one-out strategy. For each computational phantom, the organ dose coefficients (CTDIvol-normalized organ dose) under fixed tube current were simulated using a validated Monte Carlo simulation program. Such organ dose coefficients were multiplied by a scaling factor, (CTDIvol)organ,convolution, that quantifies the regional irradiation field. The convolution-based organ dose was compared with the organ dose simulated from the Monte Carlo program with TCM profiles explicitly modeled on the original phantom created based on patient images. The estimation error was within 10% across all organs and modulation profiles for abdominopelvic examination. This strategy
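    The regional-scaling idea — smoothing a per-slice tube-current (CTDIvol) profile along z with a scatter kernel to approximate the irradiation field an organ sees — can be sketched as follows; the Gaussian kernel, its width, and the step-shaped modulation profile are hypothetical stand-ins, not the paper's fitted kernels:

```python
import numpy as np

def regional_ctdi(ctdi_profile, dz, kernel_sigma):
    """Convolve a per-slice CTDIvol profile (along z, spacing dz) with a
    normalised Gaussian scatter kernel to estimate the regional field."""
    half = int(4 * kernel_sigma / dz)
    z = np.arange(-half, half + 1) * dz
    kernel = np.exp(-0.5 * (z / kernel_sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(ctdi_profile, kernel, mode="same")

# Hypothetical step change in tube current between two body regions.
profile = np.concatenate([np.full(20, 2.0), np.full(20, 6.0)])
field = regional_ctdi(profile, dz=0.5, kernel_sigma=2.0)
```

    The smoothed field, rather than the raw per-slice value, is what scales the fixed-current organ dose coefficients in the strategy described above.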

  18. Fuzzy Logic Module of Convolutional Neural Network for Handwritten Digits Recognition

    NASA Astrophysics Data System (ADS)

    Popko, E. A.; Weinstein, I. A.

    2016-08-01

    Optical character recognition is one of the important issues in the field of pattern recognition. This paper presents a method for recognizing handwritten digits based on the modeling of a convolutional neural network. An integrated fuzzy logic module based on a structural approach was developed. The system architecture used adjusted the output of the neural network to improve the quality of symbol identification. It was shown that the proposed algorithm was flexible, and a high recognition rate of 99.23% was achieved.

  19. Determination of collisional linewidths and shifts by a convolution method

    NASA Technical Reports Server (NTRS)

    Pickett, H. M.

    1980-01-01

    A technique is described for fitting collisional linewidths and shifts from experimental spectral data. The method involves convoluting a low-pressure reference spectrum with a Lorentz shape function and comparing the convoluted spectrum with higher pressure spectra. Several experimental examples are given. One advantage of the method is that no extra information is needed about the instrument response function or spectral modulation. In addition, the method is shown to be relatively insensitive to the presence of reflections in the sample cell.
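    The convolution-fitting procedure can be sketched as a simple grid search over trial widths; the line shapes, grids, and widths below are synthetic illustrations, not the authors' data or fitting code:

```python
import numpy as np

def lorentzian(x, width):
    """Area-normalised Lorentz shape function of half-width `width`."""
    return (width / np.pi) / (x**2 + width**2)

def best_width(freq, reference, high_pressure, trial_widths):
    """Convolve the low-pressure reference with each trial Lorentzian and
    return the width minimising the sum of squared residuals against the
    high-pressure spectrum."""
    dx = freq[1] - freq[0]
    kernel_x = freq - freq[len(freq) // 2]
    errors = []
    for w in trial_widths:
        kernel = lorentzian(kernel_x, w) * dx
        model = np.convolve(reference, kernel, mode="same")
        errors.append(np.sum((model - high_pressure) ** 2))
    return trial_widths[int(np.argmin(errors))]

# Synthetic check: broaden the reference by a known width, then recover it.
freq = np.linspace(-50.0, 50.0, 2001)
reference = lorentzian(freq, 1.0)
truth = 3.0
dx = freq[1] - freq[0]
high_pressure = np.convolve(reference, lorentzian(freq, truth) * dx, mode="same")
trials = np.linspace(0.5, 6.0, 23)
w = best_width(freq, reference, high_pressure, trials)
```

    Because the instrument response is already contained in the reference spectrum, only the collisional broadening is fitted — the advantage noted in the abstract.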

  20. 2-D analytical modeling of subthreshold current and subthreshold swing for ion-implanted strained-Si double-material double-gate (DMDG) MOSFETs

    NASA Astrophysics Data System (ADS)

    Goel, Ekta; Singh, Kunal; Singh, Balraj; Kumar, Sanjay; Jit, Satyabrata

    2017-09-01

    In this paper, the subthreshold behavior of ion-implanted strained-Si double-material double-gate (DMDG) MOSFETs has been analyzed by means of subthreshold current and subthreshold swing. The surface potential based formulation of subthreshold current and subthreshold swing is done by solving the 2-D Poisson's equations in the channel region using parabolic approximation method. The dependence of subthreshold characteristics on various device parameters such as gate length ratio, Ge mole fraction, peak doping concentration, projected range, straggle parameter etc. has been studied. The modeling results are found to be well matched with the simulation data obtained by a 2-D device simulator, ATLAS™, from SILVACO.

  1. Geodesic acoustic mode in anisotropic plasmas using double adiabatic model and gyro-kinetic equation

    SciTech Connect

    Ren, Haijun; Cao, Jintao

    2014-12-15

    Geodesic acoustic mode in anisotropic tokamak plasmas is theoretically analyzed by using the double adiabatic model and the gyro-kinetic equation. The bi-Maxwellian distribution function for guiding-center ions is assumed to obtain a self-consistent form, yielding pressures satisfying the magnetohydrodynamic (MHD) anisotropic equilibrium condition. The double adiabatic model gives the dispersion relation of the geodesic acoustic mode (GAM), which agrees well with the one derived from the gyro-kinetic equation. The GAM frequency increases with the ratio of pressures, p⊥/p∥, and the Landau damping rate is dramatically decreased by p⊥/p∥. The MHD result shows a low-frequency zonal flow existing for all p⊥/p∥, while according to the kinetic dispersion relation, no low-frequency branch exists for p⊥/p∥ ≳ 2.

  2. High correlation of double Debye model parameters in skin cancer detection.

    PubMed

    Truong, Bao C Q; Tuan, H D; Fitzgerald, Anthony J; Wallace, Vincent P; Nguyen, H T

    2014-01-01

    The double Debye model can be used to capture the dielectric response of human skin in the terahertz regime due to the high water content in the tissue. The increased water proportion is widely considered a biomarker of carcinogenesis, which gives rise to the use of this model in skin cancer detection. Therefore, the goal of this paper is to provide a specific analysis of the double Debye parameters in terms of non-melanoma skin cancer classification. Pearson correlation is applied to investigate the sensitivity of these parameters and their combinations to the variation in tumor percentage of skin samples. The most sensitive parameters are then assessed by using the receiver operating characteristic (ROC) plot to confirm their potential for classifying tumor from normal skin. Our positive outcomes support further steps toward clinical application of terahertz imaging in skin cancer delineation.

  3. Double Folding Potential of Different Interaction Models for 16O + 12C Elastic Scattering

    NASA Astrophysics Data System (ADS)

    Hamada, Sh.; Bondok, I.; Abdelmoatmed, M.

    2016-12-01

    The elastic scattering angular distributions for the 16O + 12C nuclear system have been analyzed using double folding potentials of different interaction models: CDM3Y1, CDM3Y6, DDM3Y1 and BDM3Y1. We have extracted the renormalization factor Nr for the different concerned interaction models. The potential created by the BDM3Y1 model of interaction has the shallowest depth, which reflects the necessity of using a higher renormalization factor. The experimental angular distributions for the 16O + 12C nuclear system in the energy range 115.9-230 MeV exhibited unmistakable refractive features and the rainbow phenomenon.

  4. On the vibration of double-walled carbon nanotubes using molecular structural and cylindrical shell models

    NASA Astrophysics Data System (ADS)

    Ansari, R.; Rouhi, S.; Aryayi, M.

    2016-01-01

    The vibrational behavior of double-walled carbon nanotubes is studied by the use of molecular structural and cylindrical shell models. Spring elements are employed to model the van der Waals interaction. The effects of different parameters such as geometry, chirality, atomic structure and end constraint on the vibration of nanotubes are investigated. In addition, the results of the two aforementioned approaches are compared. It is indicated that, with increasing nanotube side length and radius, the computationally efficient cylindrical shell model gives reasonable results.

  5. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. A software package to compute these kernels is available at https://github.com/aboucaud/pypher
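    A minimal sketch of such a Wiener-filter PSF-matching kernel, using synthetic Gaussian PSFs rather than measured ones (the function names and the regularisation value are illustrative, not the pypher implementation):

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Normalised 2-D Gaussian PSF centred in an n x n array."""
    y, x = np.mgrid[0:n, 0:n] - n // 2
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return g / g.sum()

def psf_matching_kernel(psf_source, psf_target, reg=1e-6):
    """Wiener-filter kernel K such that psf_source * K ~ psf_target:
    K_hat = T . conj(S) / (|S|^2 + reg), with tunable regularisation reg."""
    S = np.fft.fft2(np.fft.ifftshift(psf_source))
    T = np.fft.fft2(np.fft.ifftshift(psf_target))
    k_hat = T * np.conj(S) / (np.abs(S) ** 2 + reg)
    return np.fft.fftshift(np.real(np.fft.ifft2(k_hat)))

narrow = gaussian_psf(32, 1.5)   # stand-in for the sharper instrument
broad = gaussian_psf(32, 3.0)    # stand-in for the target PSF
kernel = psf_matching_kernel(narrow, broad)
```

    Because the full 2-D spectra are used, any asymmetry in the source PSF is carried into the kernel, which is what distinguishes this approach from Gaussian or circularised approximations.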

  6. Piano Transcription with Convolutional Sparse Lateral Inhibition

    DOE PAGES

    Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt Egon

    2017-02-08

    This paper extends our prior work on context-dependent piano transcription to estimate the length of the notes in addition to their pitch and onset. This approach employs convolutional sparse coding along with lateral inhibition constraints to approximate a musical signal as the sum of piano note waveforms (dictionary elements) convolved with their temporal activations. The waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. A dictionary containing multiple waveforms per pitch is generated by truncating a long waveform for each pitch to different lengths. During transcription, the dictionary elements are fixed and their temporal activations are estimated and post-processed to obtain the pitch, onset and note length estimation. A sparsity penalty promotes globally sparse activations of the dictionary elements, and a lateral inhibition term penalizes concurrent activations of different waveforms corresponding to the same pitch within a temporal neighborhood, to achieve note length estimation. Experiments on the MAPS dataset show that the proposed approach significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting in transcription accuracy.

  7. Accelerated unsteady flow line integral convolution.

    PubMed

    Liu, Zhanping; Moorhead, Robert J

    2005-01-01

    Unsteady flow line integral convolution (UFLIC) is a texture synthesis technique for visualizing unsteady flows with high temporal-spatial coherence. Unfortunately, UFLIC requires considerable time to generate each frame due to the huge amount of pathline integration that is computed for particle value scattering. This paper presents Accelerated UFLIC (AUFLIC) for near interactive (1 frame/second) visualization with 160,000 particles per frame. AUFLIC reuses pathlines in the value scattering process to reduce computationally expensive pathline integration. A flow-driven seeding strategy is employed to distribute seeds such that only a few of them need pathline integration while most seeds are placed along the pathlines advected at earlier times by other seeds upstream and, therefore, the known pathlines can be reused for fast value scattering. To maintain a dense scattering coverage to convey high temporal-spatial coherence while keeping the expense of pathline integration low, a dynamic seeding controller is designed to decide whether to advect, copy, or reuse a pathline. At a negligible memory cost, AUFLIC is 9 times faster than UFLIC with comparable image quality.

  8. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have solved a number of optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolution neural networks (CNN), a well-known deep learning method, are still rarely investigated. Deep learning is a type of machine learning that aims to move closer to the artificial-intelligence goal of creating a machine able to perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on the MNIST and CIFAR classification datasets was evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods increase the computation time, they also improve accuracy (by up to 7.14 percent). PMID:27375738

  9. Metaheuristic Algorithms for Convolution Neural Network.

    PubMed

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have solved a number of optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolution neural networks (CNN), a well-known deep learning method, are still rarely investigated. Deep learning is a type of machine learning that aims to move closer to the artificial-intelligence goal of creating a machine able to perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on the MNIST and CIFAR classification datasets was evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods increase the computation time, they also improve accuracy (by up to 7.14 percent).
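    Of the three metaheuristics, simulated annealing is the simplest to sketch; here it minimises a toy one-dimensional cost rather than CNN weights, and the cooling schedule and step size are illustrative choices:

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.95, iters=500, seed=0):
    """Minimise `cost` by random perturbation, accepting uphill moves with
    probability exp(-delta/T) while temperature T is cooled geometrically."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Toy convex cost with its minimum at 3.0.
x, f = simulated_annealing(lambda v: (v - 3.0) ** 2, x0=10.0)
```

    Applying the same loop to a CNN means treating the network weights (or a subset of them) as the state x and validation loss as the cost, which is what makes the method expensive in practice.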

  10. Supernova Type Ia progenitors from merging double white dwarfs. Using a new population synthesis model

    NASA Astrophysics Data System (ADS)

    Toonen, S.; Nelemans, G.; Portegies Zwart, S.

    2012-10-01

    Context. The study of Type Ia supernovae (SNIa) has led to greatly improved insights into many fields in astrophysics, e.g. cosmology, and also into the metal enrichment of the universe. Although a theoretical explanation of the origin of these events is still lacking, there is a general consensus that SNIa are caused by the thermonuclear explosions of carbon/oxygen white dwarfs with masses near the Chandrasekhar mass. Aims: We investigate the potential contribution to the supernova Type Ia rate from the population of merging double carbon-oxygen white dwarfs. We aim to develop a model that fits the observed SNIa progenitors as well as the observed close double white dwarf population. We differentiate between two scenarios for the common envelope (CE) evolution; the α-formalism based on the energy equation and the γ-formalism that is based on the angular momentum equation. In one model we apply the α-formalism throughout. In the second model the γ-formalism is applied, unless the binary contains a compact object or the CE is triggered by a tidal instability, for which the α-formalism is used. Methods: The binary population synthesis code SeBa was used to evolve binary systems from the zero-age main sequence to the formation of double white dwarfs and subsequent mergers. SeBa has been thoroughly updated since the last publication of the content of the code. Results: The limited sample of observed double white dwarfs is better represented by the simulated population using the γ-formalism for the first CE phase than the α-formalism. For both CE formalisms, we find that although the morphology of the simulated delay time distribution matches that of the observations within the errors, the normalisation and time-integrated rate per stellar mass are a factor ~7-12 lower than observed. Furthermore, the characteristics of the simulated populations of merging double carbon-oxygen white dwarfs are discussed and put in the context of alternative SNIa models for merging

  11. An Inverse Model of Double Diffusive Convection in the Beaufort Sea

    DTIC Science & Technology

    2009-12-01

    Master's thesis presenting an inverse model of double diffusive convection in the Beaufort Sea, examining convection and mixing within the homogeneous layers using data from Ice-Tethered Profilers (ITP 1-6), including temperature-salinity plots and histograms of the data.

  12. Dynamics of a stochastic SIS model with double epidemic diseases driven by Lévy jumps

    NASA Astrophysics Data System (ADS)

    Zhang, Xinhong; Jiang, Daqing; Hayat, Tasawar; Ahmad, Bashir

    2017-04-01

    This paper investigates the dynamics of a stochastic SIS epidemic model with a saturated incidence rate and double epidemic diseases, which makes the analysis more complex. The environmental variability in this study is characterized by white noise and jump noise. Sufficient conditions for the extinction and persistence in the mean of the two epidemic diseases are obtained. It is shown that the two diseases can coexist under appropriate conditions. Finally, numerical simulations are introduced to illustrate the results.
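    The white-noise part of such a model can be simulated with a simple Euler-Maruyama scheme; the sketch below uses a single disease, an illustrative saturated-incidence form, hypothetical parameter values, and omits the Lévy jump term:

```python
import numpy as np

def simulate_sis(beta, gamma, alpha, sigma, s0, i0, dt=0.01, steps=5000, seed=1):
    """Euler-Maruyama for dI = (beta*S*I/(1+alpha*I) - gamma*I) dt + sigma*I dW,
    with S + I held at the constant total population N = s0 + i0."""
    rng = np.random.default_rng(seed)
    n = s0 + i0
    i = i0
    path = np.empty(steps + 1)
    path[0] = i
    for k in range(steps):
        s = n - i
        drift = beta * s * i / (1.0 + alpha * i) - gamma * i
        i = i + drift * dt + sigma * i * np.sqrt(dt) * rng.standard_normal()
        i = min(max(i, 0.0), n)  # keep the state in [0, N]
        path[k + 1] = i
    return path

# Illustrative persistence regime (beta/gamma > 1, weak noise).
path = simulate_sis(beta=0.4, gamma=0.1, alpha=0.1, sigma=0.05, s0=0.9, i0=0.1)
```

    Repeating the run with larger sigma illustrates the noise-induced extinction regime that the sufficient conditions in the paper characterise analytically.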

  13. Modeling of the double leakage and leakage spillage flows in axial flow compressors

    NASA Astrophysics Data System (ADS)

    Du, Hui; Yu, Xianjun; Liu, Baojie

    2014-04-01

    A model to predict the double leakage and tip leakage leading edge spillage flows was developed. This model combines a TLV trajectory model with a TLV diameter model and is formulated as a function of compressor one-dimensional design parameters, i.e. the compressor massflow coefficient, ϕ, and compressor loading coefficient, Ψ, and some critical blade geometrical parameters, i.e. blade solidity, σ, stagger angle, βS, blade chord length, C, and blade pitch length, S. By using this model, the double leakage and tip leakage leading edge spillage flow can be predicted even in the compressor preliminary design process. Considering that the leading edge spillage flow usually indicates the inception of spike-type stall, i.e. that the compressor is a tip-critical design, this model can also be used as a tool for designers to choose the critical design parameters. Finally, experimental data from the literature were used to validate the model, and the results proved that the model is reliable.

  14. Synthesising Primary Reflections by Marchenko Redatuming and Convolutional Interferometry

    NASA Astrophysics Data System (ADS)

    Curtis, A.

    2015-12-01

    Standard active-source seismic processing and imaging steps such as velocity analysis and reverse time migration usually provide best results when all reflected waves in the input data are primaries (waves that reflect only once). Multiples (recorded waves that reflect multiple times) represent a source of coherent noise in data that must be suppressed to avoid imaging artefacts. Consequently, multiple-removal methods have been a principal direction of active-source seismic research for decades. We describe a new method to estimate primaries directly, which obviates the need for multiple removal. Primaries are constructed within convolutional interferometry by combining first arriving events of up-going and direct wave down-going Green's functions to virtual receivers in the subsurface. The required up-going wavefields to virtual receivers along discrete subsurface boundaries can be constructed using Marchenko redatuming. Crucially, this is possible without detailed models of the Earth's subsurface velocity structure: similarly to most migration techniques, the method only requires surface reflection data and estimates of direct (non-reflected) arrivals between subsurface sources and the acquisition surface. The method is demonstrated on a stratified synclinal model. It is shown both to improve reverse time migration compared to standard methods, and to be particularly robust against errors in the reference velocity model used.

  15. Dystrophin and dysferlin double mutant mice: a novel model for rhabdomyosarcoma.

    PubMed

    Hosur, Vishnu; Kavirayani, Anoop; Riefler, Jennifer; Carney, Lisa M B; Lyons, Bonnie; Gott, Bruce; Cox, Gregory A; Shultz, Leonard D

    2012-05-01

    Although researchers have yet to establish a link between muscular dystrophy (MD) and sarcomas in human patients, literature suggests that the MD genes dystrophin and dysferlin act as tumor suppressor genes in mouse models of MD. For instance, dystrophin-deficient mdx and dysferlin-deficient A/J mice, models of human Duchenne MD and limb-girdle MD type 2B, respectively, develop mixed sarcomas with variable penetrance and latency. To further establish the correlation between MD and sarcoma development, and to test whether a combined deletion of dystrophin and dysferlin exacerbates MD and augments the incidence of sarcomas, we generated dystrophin and dysferlin double mutant mice (STOCK-Dysf(prmd)Dmd(mdx-5Cv)). Not surprisingly, the double mutant mice develop severe MD symptoms and, moreover, develop rhabdomyosarcoma (RMS) at an average age of 12 months, with an incidence of >90%. Histological and immunohistochemical analyses, using a panel of antibodies against skeletal muscle cell proteins, electron microscopy, cytogenetics, and molecular analysis reveal that the double mutant mice develop RMS. The present finding bolsters the correlation between MD and sarcomas, and provides a model not only to examine the cellular origins but also to identify mechanisms and signal transduction pathways triggering development of RMS.

  16. Internal flow numerical simulation of double-suction centrifugal pump using DES model

    NASA Astrophysics Data System (ADS)

    Zhou, P. J.; Wang, F. J.; Yang, M.

    2012-11-01

    It is a challenging task to simulate the flow in a double-suction centrifugal pump, because the wall effects are strong in this type of pump. Detached-eddy simulation (DES), referred to as a hybrid RANS-LES approach, has emerged recently as a potential compromise between RANS-based turbulence models and large eddy simulation. In this approach, the unsteady RANS model is employed in the boundary layer, while the LES treatment is applied to the separated region. In this paper, the S-A DES method and the SST k-ω DES method are applied to the numerical simulation of the 3D flow in the whole passage of a double-suction centrifugal pump. The unsteady flow field, including velocity and pressure distributions, is obtained. The head and efficiency of the pump are predicted and compared with experimental results. According to the calculated results, the S-A DES model makes it easy to control the partitioning of the simulation when using a near-wall grid with the 30 < y+ < 300 control approach. It also has better performance in efficiency and accuracy than the SST k-ω DES method, and is therefore more suitable for solving the unsteady flow in a double-suction centrifugal pump. The S-A DES method can capture more flow phenomena than the SST k-ω DES method. In addition, it can accurately predict the power performance under different flow conditions, and can reflect pressure fluctuation characteristics.

  17. Experiments and Modeling of Boric Acid Permeation through Double-Skinned Forward Osmosis Membranes.

    PubMed

    Luo, Lin; Zhou, Zhengzhong; Chung, Tai-Shung; Weber, Martin; Staudt, Claudia; Maletzko, Christian

    2016-07-19

    Boron removal is one of the great challenges in modern wastewater treatment, owing to the unique small size and fast diffusion rate of neutral boric acid molecules. As forward osmosis (FO) membranes with a single selective layer are insufficient to reject boron, double-skinned FO membranes with boron rejection up to 83.9% were specially designed for boron permeation studies. The superior boron rejection properties of double-skinned FO membranes were demonstrated by theoretical calculations, and verified by experiments. The double-skinned FO membrane was fabricated using a sulfonated polyphenylenesulfone (sPPSU) polymer as the hydrophilic substrate and polyamide as the selective layer material via interfacial polymerization on top and bottom surfaces. A strong agreement between experimental data and modeling results validates the membrane design and confirms the success of model prediction. The effects of key parameters on boron rejection, such as boron permeability of both selective layers and structure parameter, were also investigated in-depth with the mathematical modeling. This study may provide insights not only for boron removal from wastewater, but also open up the design of next generation FO membranes to eliminate low-rejection molecules in wider applications.

  18. Family Adaptation to Stroke: A Metasynthesis of Qualitative Research based on Double ABCX Model.

    PubMed

    Hesamzadeh, Ali; Dalvandi, Asghar; Bagher Maddah, Sadat; Fallahi Khoshknab, Masoud; Ahmadi, Fazlollah

    2015-09-01

    There is growing interest in synthesizing qualitative research. Stroke is a very common cause of disability, often leaving stroke survivors dependent on their families. This study reports an interpretive review of research into the subjective experience of families of stroke survivors, based on the components of the Double ABCX Model: stressors, resources, perception, coping strategies, and adaptation. Metasynthesis was applied to review qualitative research on family members' experiences of and responses to having a stroke survivor in the family. Electronic databases from 1990 to 2013 were searched and 18 separate studies were identified. Each study was evaluated using methodological criteria to provide a context for interpretation of substantive findings. Principal findings were extracted and synthesized under the Double ABCX Model elements. Loss of independence and uncertainty (as stressors), struggling with a new phase of life (as perception), refocusing time and energy on elements of the recovery process (as coping strategy), combined personal, internal, and external family support (as resources), and striking a balance (as adaptation) were identified as the main categories. Family members of stroke survivors respond cognitively and practically, attempting to keep a balance between the survivor's and their own everyday lives. The results of the study conform with the tenets of the Double ABCX Model. Family adaptation is a dynamic process, and the present findings provide rich information for proper assessment and intervention by practitioners working with families of stroke survivors. Copyright © 2015. Published by Elsevier B.V.

  19. Anomalous transport in discrete arcs and simulation of double layers in a model auroral circuit

    NASA Technical Reports Server (NTRS)

    Smith, Robert A.

    1987-01-01

    The evolution and long-time stability of a double layer in a discrete auroral arc requires that the parallel current in the arc, which may be considered uniform at the source, be diverted within the arc to charge the flanks of the U-shaped double-layer potential structure. A simple model is presented in which this current redistribution is effected by anomalous transport based on electrostatic lower hybrid waves driven by the flank structure itself. This process provides the limiting constraint on the double-layer potential. The flank charging may be represented as that of a nonlinear transmission line. A simplified model circuit, in which the transmission line is represented by a nonlinear impedance in parallel with a variable resistor, is incorporated in a one-dimensional simulation model to give the current density at the DL boundaries. Results are presented for the scaling of the DL potential as a function of the width of the arc and the saturation efficiency of the lower hybrid instability mechanism.

  20. [Verification of the double dissociation model of shyness using the implicit association test].

    PubMed

    Fujii, Tsutomu; Aikawa, Atsushi

    2013-12-01

    The "double dissociation model" of shyness proposed by Asendorpf, Banse, and Mücke (2002) was demonstrated in Japan by Aikawa and Fujii (2011). However, the generalizability of the double dissociation model of shyness was uncertain. The present study examined whether the results reported in Aikawa and Fujii (2011) would replicate. In Study 1, college students (n = 91) completed explicit self-ratings of shyness and other personality scales. In Study 2, forty-eight participants completed an Implicit Association Test (IAT) for shyness, and their friends (n = 141) rated those participants on various personality scales. The results revealed that only the explicit self-concept ratings predicted other-rated low praise-seeking behavior, sociable behavior, and high rejection-avoidance behavior (controlled shy behavior). Only the implicit self-concept measured by the shyness IAT predicted other-rated high interpersonal tension (spontaneous shy behavior). These results are similar to the findings of the previous research, which supports the generalizability of the double dissociation model of shyness.

  3. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (hOrgan) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)organ, convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)organ, convolution with the organ dose coefficients (hOrgan). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using a Monte Carlo program with TCM profiles explicitly modeled. The

  4. Convolution-based estimation of organ dose in tube current modulated CT

    PubMed Central

    Tian, Xiaoyu; Segars, W Paul; Dixon, Robert L; Samei, Ehsan

    2016-01-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460–7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18–70 years, weight range: 60–180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (hOrgan) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)organ, convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)organ, convolution with the organ dose coefficients (hOrgan). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and dose simulated using TCM Monte Carlo program was quantified. We further compared the
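
    The multiplicative structure of this estimate (organ dose = organ dose coefficient hOrgan × regional dose field (CTDIvol)organ, convolution) can be illustrated in a few lines. Below is a minimal 1-D sketch with hypothetical arrays; the overlap integral stands in for the zero-lag value of the convolution of the organ distribution with the dose profile, and all numeric values are illustrative rather than taken from the study:

```python
import numpy as np

def organ_dose_estimate(organ_density, dose_profile, h_organ, ctdi_vol):
    """Sketch of the convolution-based estimate: the regional dose field
    seen by an organ is the overlap of its axial distribution with the
    CTDIvol-normalized TCM dose profile (the zero-lag value of their
    convolution), scaled by CTDIvol and the organ dose coefficient."""
    regional = float(np.sum(np.asarray(organ_density) * np.asarray(dose_profile)))
    return regional * ctdi_vol * h_organ

# toy numbers: an organ spread over five z-slices under a modulated profile
density = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # organ fraction per slice, sums to 1
profile = np.array([0.8, 0.9, 1.0, 1.1, 1.2])   # relative dose per slice under TCM
dose_mgy = organ_dose_estimate(density, profile, h_organ=1.4, ctdi_vol=10.0)
```

    A uniform profile would reduce the estimate to the constant-tube-current case, which is why the same hOrgan coefficients can be reused for TCM exams.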

  5. Two-parameter double-oscillator model of Mathews-Lakshmanan type: Series solutions and supersymmetric partners

    SciTech Connect

    Schulze-Halberg, Axel E-mail: xbataxel@gmail.com; Wang, Jie

    2015-07-15

    We obtain series solutions, the discrete spectrum, and supersymmetric partners for a quantum double-oscillator system. Its potential features a superposition of the one-parameter Mathews-Lakshmanan interaction and a one-parameter harmonic or inverse harmonic oscillator contribution. Furthermore, our results are transferred to a generalized Pöschl-Teller model that is isospectral to the double-oscillator system.

  6. Comparison of the accuracy of the calibration model on the double and single integrating sphere systems

    NASA Astrophysics Data System (ADS)

    Singh, A.; Karsten, A.

    2011-06-01

    The accuracy of the calibration model for the single and double integrating sphere systems is compared for a white light system. A calibration model is created from a matrix of samples with known absorption and reduced scattering coefficients; in this instance the samples are made using different concentrations of intralipid and black ink. The total and diffuse transmittance and reflectance are measured on both setups, and the accuracy of each model is compared by evaluating the prediction errors of the calibration model for the different systems. Current results indicate that the single integrating sphere setup is more accurate than the double-sphere method. This is based on the low prediction errors of the model for the single sphere system for a He-Ne laser as well as a white light source. The model still needs to be refined for more absorption factors. Prediction accuracies were then determined by extracting the optical properties of solid resin-based phantoms on each system. When these properties of the phantoms were used as input to the modeling software, excellent agreement between measured and simulated data was found for the single sphere system.

  7. Mechanistic Modelling and Bayesian Inference Elucidates the Variable Dynamics of Double-Strand Break Repair.

    PubMed

    Woods, Mae L; Barnes, Chris P

    2016-10-01

    DNA double-strand breaks are lesions that form during metabolism, DNA replication and exposure to mutagens. When a double-strand break occurs one of a number of repair mechanisms is recruited, all of which have differing propensities for mutational events. Despite DNA repair being of crucial importance, the relative contribution of these mechanisms and their regulatory interactions remain to be fully elucidated. Understanding these mutational processes will have a profound impact on our knowledge of genomic instability, with implications across health, disease and evolution. Here we present a new method to model the combined activation of non-homologous end joining, single strand annealing and alternative end joining, following exposure to ionising radiation. We use Bayesian statistics to integrate eight biological data sets of double-strand break repair curves under varying genetic knockouts and confirm that our model is predictive by re-simulating and comparing to additional data. Analysis of the model suggests that there are at least three disjoint modes of repair, which we assign as fast, slow and intermediate. Our results show that when multiple data sets are combined, the rate for intermediate repair is variable amongst genetic knockouts. Further analysis suggests that the ratio between slow and intermediate repair depends on the presence or absence of DNA-PKcs and Ku70, which implies that non-homologous end joining and alternative end joining are not independent. Finally, we consider the proportion of double-strand breaks within each mechanism as a time series and predict activity as a function of repair rate. We outline how our insights can be directly tested using imaging and sequencing techniques and conclude that there is evidence of variable dynamics in alternative repair pathways. Our approach is an important step towards providing a unifying theoretical framework for the dynamics of DNA repair processes.
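
    The three disjoint repair modes described above imply that the fraction of unrepaired breaks decays as a weighted mixture of first-order processes. A minimal sketch, with hypothetical weights and rates (illustrative values, not the paper's posterior estimates):

```python
import numpy as np

def breaks_remaining(t, weights, rates):
    """Fraction of double-strand breaks still unrepaired at time t (hours),
    modelled as a mixture of independent first-order repair modes."""
    w = np.asarray(weights, dtype=float)
    k = np.asarray(rates, dtype=float)
    return float(np.sum(w * np.exp(-k * t)))

# hypothetical weights and per-hour rates for the fast, intermediate
# and slow modes
w = [0.6, 0.3, 0.1]
k = [5.0, 0.5, 0.05]
frac_2h = breaks_remaining(2.0, w, k)
```

    Fitting such mixtures to repair curves from different genetic knockouts is what lets the relative activity of each pathway be compared across conditions.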

  8. Shell model analysis of competing contributions to the double-β decay of 48Ca

    NASA Astrophysics Data System (ADS)

    Horoi, Mihai

    2013-01-01

    Background: Neutrinoless double-β decay, if observed, would reveal physics beyond the standard model of particle physics; namely, it would prove that neutrinos are Majorana fermions and that the lepton number is not conserved. Purpose: The analysis of the results of neutrinoless double-β decay observations requires an accurate knowledge of several nuclear matrix elements (NME) for different mechanisms that may contribute to the decay. We provide a complete analysis of these NME for the decay of the ground state (g.s.) of 48Ca to the g.s. 0_1^+ and first excited 0_2^+ state of 48Ti. Method: For the analysis we used the nuclear shell model with effective two-body interactions that were fine-tuned to describe the low-energy spectroscopy of pf-shell nuclei. We checked our model by calculating the two-neutrino transition probability to the g.s. of 48Ti. We also make predictions for the transition to the first excited 0_2^+ state of 48Ti. Results: We present results for all NME relevant for the neutrinoless transitions to the 0_1^+ and 0_2^+ states, and using the lower experimental limit for the g.s. to g.s. half-life, we extract upper limits for the neutrino physics parameters. Conclusions: We provide accurate NME for the two-neutrino and neutrinoless double-β decay transitions in the A=48 system, which can be further used to analyze the experimental results of double-β decay experiments when they become available.

  9. Mechanistic Modelling and Bayesian Inference Elucidates the Variable Dynamics of Double-Strand Break Repair

    PubMed Central

    2016-01-01

    DNA double-strand breaks are lesions that form during metabolism, DNA replication and exposure to mutagens. When a double-strand break occurs one of a number of repair mechanisms is recruited, all of which have differing propensities for mutational events. Despite DNA repair being of crucial importance, the relative contribution of these mechanisms and their regulatory interactions remain to be fully elucidated. Understanding these mutational processes will have a profound impact on our knowledge of genomic instability, with implications across health, disease and evolution. Here we present a new method to model the combined activation of non-homologous end joining, single strand annealing and alternative end joining, following exposure to ionising radiation. We use Bayesian statistics to integrate eight biological data sets of double-strand break repair curves under varying genetic knockouts and confirm that our model is predictive by re-simulating and comparing to additional data. Analysis of the model suggests that there are at least three disjoint modes of repair, which we assign as fast, slow and intermediate. Our results show that when multiple data sets are combined, the rate for intermediate repair is variable amongst genetic knockouts. Further analysis suggests that the ratio between slow and intermediate repair depends on the presence or absence of DNA-PKcs and Ku70, which implies that non-homologous end joining and alternative end joining are not independent. Finally, we consider the proportion of double-strand breaks within each mechanism as a time series and predict activity as a function of repair rate. We outline how our insights can be directly tested using imaging and sequencing techniques and conclude that there is evidence of variable dynamics in alternative repair pathways. Our approach is an important step towards providing a unifying theoretical framework for the dynamics of DNA repair processes. PMID:27741226

  10. Modeling and experimental results of low-background extrinsic double-injection IR detector response

    NASA Astrophysics Data System (ADS)

    Zaletaev, N. B.; Filachev, A. M.; Ponomarenko, V. P.; Stafeev, V. I.

    2006-05-01

    Bias-dependent response of an extrinsic double-injection IR detector under irradiation from the extrinsic and intrinsic responsivity spectral ranges was obtained analytically and through numerical modeling. The model includes the transient response and generation-recombination noise as well. It is shown that a great increase in current responsivity (by orders of magnitude) without essential change in detectivity can take place in the range of extrinsic responsivity for detectors on semiconductor materials with long-lifetime minority charge carriers, if double-injection photodiodes are made on them instead of photoconductive detectors. Field dependence of the lifetimes and mobilities of charge carriers essentially influences detector characteristics, especially in the voltage range where the drift length of majority carriers is greater than the distance between the contacts. The model developed is in good agreement with experimental data obtained for n-Si:Cd, p-Ge:Au, and Ge:Hg diodes, as well as for diamond radiation detectors. A BLIP-detection responsivity of about 2000 A/W (for a wavelength of 10 micrometers) for Ge:Hg diodes has been reached in a frequency range of 500 Hz under a background of 6 × 10^11 cm^-2 s^-1 at a temperature of 20 K. Possibilities for optimization of detector performance are discussed. Extrinsic double-injection photodiodes and other radiation detectors with internal gain based on double injection are reasonable choices for systems subject to strong disturbances, in particular vibrations, because their high responsivity can ensure higher resistance to interference.

  11. Convolution power spectrum analysis for FMRI data based on prior image signal.

    PubMed

    Zhang, Jiang; Chen, Huafu; Fang, Fang; Liao, Wei

    2010-02-01

    Functional MRI (fMRI) data-processing methods based on changes in the time domain involve, among other things, correlation analysis and use of the general linear model with statistical parametric mapping (SPM). Unlike conventional fMRI data analysis methods, which aim to model the blood-oxygen-level-dependent (BOLD) response of voxels as a function of time, the theory of power spectrum (PS) analysis focuses completely on understanding the dynamic energy change of interacting systems. We propose a new convolution PS (CPS) analysis of fMRI data, based on the theory of matched filtering, to detect brain functional activation for fMRI data. First, convolution signals are computed between the measured fMRI signals and the image signal of prior experimental pattern to suppress noise in the fMRI data. Then, the PS density analysis of the convolution signal is specified as the quantitative analysis energy index of BOLD signal change. The data from simulation studies and in vivo fMRI studies, including block-design experiments, reveal that the CPS method enables a more effective detection of some aspects of brain functional activation, as compared with the canonical PS SPM and the support vector machine methods. Our results demonstrate that the CPS method is useful as a complementary analysis in revealing brain functional information regarding the complex nature of fMRI time series.
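
    The two steps described above (matched-filter convolution with the prior pattern, then power spectral density of the result) can be sketched directly. A minimal sketch with a synthetic block-design time series; the normalization and the toy design are assumptions, not the paper's exact pipeline:

```python
import numpy as np

def convolution_power_spectrum(ts, prior):
    """Matched-filtering step of the CPS method: convolve the measured
    voxel time series with the prior experimental pattern to suppress
    noise, then return the power spectral density of the result."""
    conv = np.convolve(ts - ts.mean(), prior - prior.mean(), mode='full')
    return np.abs(np.fft.rfft(conv)) ** 2 / len(conv)

# toy block design: 10 volumes on / 10 off, four cycles, plus noise
rng = np.random.default_rng(0)
prior = np.tile(np.r_[np.ones(10), np.zeros(10)], 4)
ts = prior + 0.5 * rng.standard_normal(prior.size)
spec = convolution_power_spectrum(ts, prior)
peak_bin = int(np.argmax(spec[1:])) + 1   # dominant nonzero-frequency bin
```

    For an activated voxel the spectral energy concentrates at the block-design frequency, which is the quantitative index the CPS method thresholds.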

  12. Using hybrid GPU/CPU kernel splitting to accelerate spherical convolutions

    NASA Astrophysics Data System (ADS)

    Sutter, P. M.; Wandelt, B. D.; Elsner, F.

    2015-06-01

    We present a general method for accelerating by more than an order of magnitude the convolution of pixelated functions on the sphere with a radially-symmetric kernel. Our method splits the kernel into a compact real-space component and a compact spherical harmonic space component. These components can then be convolved in parallel using an inexpensive commodity GPU and a CPU. We provide models for the computational cost of both real-space and Fourier space convolutions and an estimate for the approximation error. Using these models we can determine the optimum split that minimizes the wall clock time for the convolution while satisfying the desired error bounds. We apply this technique to the problem of simulating a cosmic microwave background (CMB) anisotropy sky map at the resolution typical of the high resolution maps produced by the Planck mission. For the main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an acceptable fractional rms error of order 10^-5 in the power spectrum of the output map.
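
    The idea of choosing between real-space and harmonic-space work from a cost model has a simple 1-D analogue: count multiply-adds for direct convolution against the O(N log N) cost of FFT-based convolution and take the cheaper path. A sketch under that analogy; the constant factors in the cost model are illustrative, not the paper's:

```python
import numpy as np

def convolve_auto(signal, kernel):
    """Pick direct or FFT convolution from a simple operation-count model,
    a 1-D analogue of the real-space / harmonic-space split."""
    n, m = len(signal), len(kernel)
    direct_cost = n * m                        # multiply-adds, direct form
    fft_cost = 3.0 * (n + m) * np.log2(n + m)  # three FFTs of padded length
    if direct_cost <= fft_cost:
        return np.convolve(signal, kernel), 'direct'
    size = n + m - 1                           # zero-pad to the full length
    out = np.fft.irfft(np.fft.rfft(signal, size) * np.fft.rfft(kernel, size), size)
    return out, 'fft'

rng = np.random.default_rng(0)
sig = rng.standard_normal(1000)
small, big = rng.standard_normal(4), rng.standard_normal(400)
out_small, path_small = convolve_auto(sig, small)   # compact kernel: direct wins
out_big, path_big = convolve_auto(sig, big)         # wide kernel: FFT wins
```

    On the sphere the same trade-off is made per kernel component, with the compact real-space part handled directly and the remainder in harmonic space.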

  13. Vibro-acoustic modelling of aircraft double-walls with structural links using Statistical Energy Analysis

    NASA Astrophysics Data System (ADS)

    Campolina, Bruno L.

    The prediction of aircraft interior noise involves the vibroacoustic modelling of the fuselage with noise control treatments. This structure is composed of a stiffened metallic or composite panel, lined with a thermal and acoustic insulation layer (glass wool), and structurally connected via vibration isolators to a commercial lining panel (trim). This work aims at tailoring the noise control treatments, taking design constraints such as weight and space optimization into account. For this purpose, a representative aircraft double-wall is modelled using the Statistical Energy Analysis (SEA) method. Laboratory excitations such as a diffuse acoustic field and a point force are addressed, and trends are derived for applications under in-flight conditions, considering turbulent boundary layer excitation. The effect of porous layer compression is addressed first. In aeronautical applications, compression can result from the installation of equipment and cables. It is studied analytically and experimentally, using a single panel and a fibrous layer uniformly compressed over 100% of its surface. When compression increases, a degradation of the transmission loss of up to 5 dB for a 50% compression of the porous thickness is observed, mainly in the mid-frequency range (around 800 Hz). However, for realistic cases, the effect should be smaller since the compression rate is lower and compression occurs locally. Then the transmission through structural connections between panels is addressed using a four-pole approach that links the force-velocity pair at each side of the connection. The modelling integrates experimental dynamic stiffness of the isolators, derived using an adapted test rig. The structural transmission is then experimentally validated and included in the double-wall SEA model as an equivalent coupling loss factor (CLF) between panels. The tested structures being flat, only axial transmission is addressed. Finally, the dominant sound transmission paths are

  14. Double-stranded DNA organization in bacteriophage heads: An alternative toroid-based model

    SciTech Connect

    Hud, N.V.

    1995-10-01

    Studies of the organization of double-stranded DNA within bacteriophage heads during the past four decades have produced a wealth of data. However, despite the presentation of numerous models, the true organization of DNA within phage heads remains unresolved. The observations of toroidal DNA structures in electron micrographs of phage lysates have long been cited as support for the organization of DNA in a spool-like fashion. This particular model, like all other models, has not been found to be consistent with all available data. Recently, the authors proposed that DNA within toroidal condensates produced in vitro is organized in a manner significantly different from that suggested by the spool model. This new toroid model has allowed the development of an alternative model for DNA organization within bacteriophage heads that is consistent with a wide range of biophysical data. Here the authors propose that bacteriophage DNA is packaged in a toroid that is folded into a highly compact structure.

  15. Exploring convolutional neural networks for drug–drug interaction extraction

    PubMed Central

    Segura-Bedmar, Isabel; Martínez, Paloma

    2017-01-01

    Drug–drug interaction (DDI), which is a specific type of adverse drug reaction, occurs when a drug influences the level or activity of another drug. Natural language processing techniques can provide health-care professionals with a novel way of reducing the time spent reviewing the literature for potential DDIs. The current state of the art for the extraction of DDIs is based on feature-engineering algorithms (such as support vector machines), which usually require considerable time and effort. One possible alternative to these approaches is deep learning, which aims to automatically learn the best feature representation from the input data for a given task. The purpose of this paper is to examine whether a convolutional neural network (CNN), which only uses word embeddings as input features, can be applied successfully to classify DDIs from biomedical texts. Proposed herein is a CNN architecture with only one hidden layer, making the model more computationally efficient, and we perform detailed experiments in order to determine the best settings of the model. The goal is to determine the best parameters of this basic CNN that should be considered for future research. The experimental results show that the proposed approach is promising because it attained the second position in the 2013 rankings of the DDI extraction challenge. However, it obtained worse results than previous works using neural networks with more complex architectures. PMID:28605776

  16. Delta function convolution method (DFCM) for fluorescence decay experiments

    NASA Astrophysics Data System (ADS)

    Zuker, M.; Szabo, A. G.; Bramall, L.; Krajcarski, D. T.; Selinger, B.

    1985-01-01

    A rigorous and convenient method of correcting for the wavelength variation of the instrument response function in time-correlated photon counting fluorescence decay measurements is described. The method involves convolution of a modified functional form F̃ of the physical model with a reference data set measured under conditions identical to the measurement of the sample. The method is completely general in that an appropriate functional form may be found for any physical model of the excited-state decay process. The modified function includes a term which is a Dirac delta function and terms which give the correct decay times and preexponential values in which one is interested. None of the data is altered in any way, permitting correct statistical analysis of the fitting. The method is readily adaptable to standard deconvolution procedures. The paper describes the theory and application of the method together with fluorescence decay results obtained from measurements of a number of different samples including diphenylhexatriene, myoglobin, hemoglobin, 4′,6-diamidino-2-phenylindole (DAPI), and lysine-tryptophan-lysine.
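
    For the standard special case of a monoexponential reference of lifetime tau_ref (an assumption here, not a detail taken from this abstract), the modified function can be written F̃(t) = Σ aᵢ [δ(t) + (1/tau_ref − 1/tauᵢ) exp(−t/tauᵢ)], whose convolution with the measured reference reproduces the sample decay without explicit deconvolution. A discretized sketch with illustrative names and values:

```python
import numpy as np

def dfcm_model(ref_decay, amps, taus, tau_ref, dt):
    """Convolve the measured reference decay with the DFCM-modified model
    F~(t) = sum_i a_i [ delta(t) + (1/tau_ref - 1/tau_i) exp(-t/tau_i) ].
    The discrete delta term passes the reference through unchanged; the
    smooth part is a Riemann approximation of the convolution integral."""
    n = len(ref_decay)
    t = np.arange(n) * dt
    smooth = np.zeros(n)
    for a, tau in zip(amps, taus):
        smooth += a * (1.0 / tau_ref - 1.0 / tau) * np.exp(-t / tau)
    return sum(amps) * np.asarray(ref_decay) + dt * np.convolve(smooth, ref_decay)[:n]

# sanity check against the analytic case: with an ideal reference
# exp(-t/tau_ref), the model should reproduce exp(-t/tau) for the sample
dt, n = 0.001, 6000
t = np.arange(n) * dt
ref = np.exp(-t / 1.0)                                   # tau_ref = 1
model = dfcm_model(ref, amps=[1.0], taus=[2.0], tau_ref=1.0, dt=dt)
```

    In a real fit the reference is a measured decay at the same emission wavelength, so the instrument response never has to be recorded separately.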

  17. Convolution Neural Networks With Two Pathways for Image Style Recognition.

    PubMed

    Sun, Tiancheng; Wang, Yulong; Yang, Jian; Hu, Xiaolin

    2017-09-01

    Automatic recognition of an image's style is important for many applications, including artwork analysis, photo organization, and image retrieval. The traditional convolutional neural network (CNN) approach uses only object features for image style recognition. This approach may not be optimal, because the same object in two images may have different styles. We propose a CNN architecture with two pathways extracting object features and texture features, respectively. The object pathway is the standard CNN architecture, and the texture pathway taps into it by outputting the Gram matrices of intermediate features in the object pathway. The two pathways are jointly trained. In experiments, two deep CNNs, AlexNet and VGG-19, pretrained on the ImageNet classification data set, are fine-tuned for this task. For either model, the two-pathway architecture performs much better than the individual pathways, which indicates that the two pathways contain complementary information about an image's style. In particular, the model based on VGG-19 achieves state-of-the-art results on three benchmark data sets: WikiPaintings, Flickr Style, and AVA Style.
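
    The Gram-matrix step at the heart of the texture pathway is easy to state: flatten the (channels × height × width) feature tensor and take channel-by-channel inner products, discarding spatial arrangement. A NumPy sketch; the shapes and normalization are illustrative assumptions:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature tensor: inner
    products between flattened channel maps, normalized by the number of
    spatial positions. Spatial layout is discarded, so the matrix encodes
    texture/style rather than object location."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

# shifting the feature maps spatially (a "moved object") leaves the Gram
# matrix essentially unchanged -- the style signal ignores object position
rng = np.random.default_rng(1)
fmap = rng.standard_normal((8, 4, 4))
g1 = gram_matrix(fmap)
g2 = gram_matrix(np.roll(fmap, shift=2, axis=2))
```

    This position invariance is precisely why Gram features complement the object pathway: two images of the same object in different styles yield different Gram matrices, while the same texture in different places yields the same one.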

  18. Microsecond kinetics in model single- and double-stranded amylose polymers.

    PubMed

    Sattelle, Benedict M; Almond, Andrew

    2014-05-07

    Amylose, a component of starch with increasing biotechnological significance, is a linear glucose polysaccharide that self-organizes into single- and double-helical assemblies. Starch granule packing, gelation and inclusion-complex formation result from finely balanced macromolecular kinetics that have eluded precise experimental quantification. Here, graphics processing unit (GPU) accelerated multi-microsecond aqueous simulations are employed to explore conformational kinetics in model single- and double-stranded amylose. The all-atom dynamics concur with prior X-ray and NMR data while surprising and previously overlooked microsecond helix-coil, glycosidic linkage and pyranose ring exchange are hypothesized. In a dodecasaccharide, single-helical collapse was correlated with linkages and rings transitioning from their expected syn and (4)C1 chair conformers. The associated microsecond exchange rates were dependent on proximity to the termini and chain length (comparing hexa- and trisaccharides), while kinetic features of dodecasaccharide linkage and ring flexing are proposed to be a good model for polymers. Similar length double-helices were stable on microsecond timescales but the parallel configuration was sturdier than the antiparallel equivalent. In both, tertiary organization restricted local chain dynamics, implying that simulations of single amylose strands cannot be extrapolated to dimers. Unbiased multi-microsecond simulations of amylose are proposed as a valuable route to probing macromolecular kinetics in water, assessing the impact of chemical modifications on helical stability and accelerating the development of new biotechnologies.

  19. Computer modelling of double doped SrAl2O4 for phosphor applications

    NASA Astrophysics Data System (ADS)

    Jackson, R. A.; Kavanagh, L. A.; Snelgrove, R. A.

    2017-02-01

    This paper describes a modelling study of SrAl2O4, which has applications as a phosphor material when doped with Eu2+ and Dy3+ ions. The procedure for modelling the pure and doped material is described and then results are presented for the single and double doped material. Solution energies are calculated and used to deduce dopant location, and mean field calculations are used to predict the effect of doping on crystal lattice parameter. Possible charge compensation mechanisms for Dy3+ ions substituting at a Sr2+ site are discussed.

  20. Scale model experiments on the insertion loss of wide and double barriers

    PubMed

    Wadsworth; Chambers

    2000-05-01

    The insertion loss of wide and double barriers is investigated through scale model experiments. Such configurations appear in outdoor sound propagation problems such as highway noise reduction and community noise control. The Biot-Tolstoy-Medwin (BTM) time domain wedge formulation for multiple diffraction [J. Acoust. Soc. Am. 72, 1005-1013 (1982)] is used to predict the acoustic response of an impulsive source. Evaluation of the insertion loss at discrete frequencies is accomplished via the fast Fourier transform (FFT). Good agreement has been found between the BTM model and experimental data for all configurations tested.
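
    Evaluating insertion loss at discrete frequencies from impulse responses via the FFT can be sketched as follows; the two delta-like responses below are illustrative stand-ins, not BTM predictions:

```python
import numpy as np

fs, n = 44100, 1024                  # sample rate (Hz) and FFT length

# Illustrative impulse responses: a free-field reference pulse and a
# delayed, attenuated pulse "behind the barrier" (stand-ins for
# measured or BTM-predicted responses).
h_ref = np.zeros(n); h_ref[10] = 1.0
h_bar = np.zeros(n); h_bar[25] = 0.2

freqs = np.fft.rfftfreq(n, d=1.0 / fs)   # frequency axis, Hz
il_db = 20.0 * np.log10(np.abs(np.fft.rfft(h_ref)) /
                        np.abs(np.fft.rfft(h_bar)))
print(round(il_db[1], 2))   # 13.98 dB = 20*log10(1/0.2), flat for pure deltas
```

    With a realistic diffracted response, the spectrum of `h_bar` varies with frequency and `il_db` becomes the frequency-dependent insertion loss curve.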

  1. Simulation of double layers in a model auroral circuit with nonlinear impedance

    NASA Technical Reports Server (NTRS)

    Smith, R. A.

    1986-01-01

    A reduced circuit description of the U-shaped potential structure of a discrete auroral arc, consisting of the flank transmission line plus parallel-electric-field region, is used to provide the boundary condition for one-dimensional simulations of the double-layer evolution. The model yields asymptotic scalings of the double-layer potential as a function of an anomalous transport coefficient alpha and of the perpendicular length scale l(a) of the arc. The arc potential phi(DL) scales approximately linearly with alpha and, for fixed alpha, as a power z of l(a). Using parameters appropriate to the auroral zone acceleration region, potentials phi(DL) of about 10 kV scale to projected ionospheric dimensions of about 1 km, with power flows of the order of magnitude of substorm dissipation rates.

  2. Quantum Entanglement in Double Quantum Systems and Jaynes-Cummings Model.

    PubMed

    Jakubczyk, Paweł; Majchrowski, Klaudiusz; Tralle, Igor

    2017-12-01

    In this paper, we propose a new approach to producing qubits in electron transport in low-dimensional structures such as double quantum wells or double quantum wires (DQWs). The qubit arises as a result of quantum entanglement of two specific electron states in the DQW structure: the symmetric and antisymmetric (with respect to inversion symmetry) states arising from tunneling across the structure, while the entanglement is produced and controlled by means of a source of nonclassical light. We examined the possibility of producing quantum entanglement in the framework of the Jaynes-Cummings model and have shown that, at least in principle, the entanglement can be achieved through a series of "revivals" and "collapses" in the population inversion due to the interaction of a quantized single-mode EM field with a two-level system.
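
    The collapses and revivals of the population inversion in the resonant Jaynes-Cummings model can be reproduced directly from the standard closed-form sum. A minimal NumPy sketch (atom initially excited, field in a coherent state; the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def inversion(t, nbar=25.0, g=1.0, nmax=200):
    """Population inversion W(t) of the resonant Jaynes-Cummings model:
    atom initially excited, field in a coherent state with mean photon
    number nbar, so W(t) = sum_n p_n * cos(2 g sqrt(n + 1) t)."""
    n = np.arange(nmax)
    logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, nmax)))))
    p = np.exp(n * np.log(nbar) - nbar - logfact)   # Poisson photon statistics
    return p @ np.cos(2.0 * g * np.sqrt(n + 1)[:, None] * t[None, :])

t = np.linspace(0.0, 40.0, 2000)
w = inversion(t)
# Rabi oscillations collapse within a few 1/g, then revive near
# g * t ~ 2 * pi * sqrt(nbar) ~ 31 for nbar = 25.
```

    Plotting `w` against `t` shows the collapse of the initial Rabi oscillations followed by the first revival, the sequence the abstract refers to.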

  3. A double layer model for solar X-ray and microwave pulsations

    NASA Technical Reports Server (NTRS)

    Tapping, K. F.

    1986-01-01

    The wide range of wavelengths over which quasi-periodic pulsations have been observed suggests that the mechanism causing them acts upon the supply of high energy electrons driving the emission processes. A model is described which is based upon the radial shrinkage of a magnetic flux tube. The concentration of the current, along with the reduction in the number of available charge carriers, can rise to a condition where the current demand exceeds the capacity of the thermal electrons. Driven by the large inductance of the external current circuit, an instability takes place in the tube throat, resulting in the formation of a potential double layer, which then accelerates electrons and ions to MeV energies. The double layer can be unstable, collapsing and reforming repeatedly. The resulting pulsed particle beams give rise to pulsating emission which are observed at radio and X-ray wavelengths.

  4. Kinetic model for an auroral double layer that spans many gravitational scale heights

    SciTech Connect

    Robertson, Scott

    2014-12-15

    The electrostatic potential profile and the particle densities of a simplified auroral double layer are found using a relaxation method to solve Poisson's equation in one dimension. The electron and ion distribution functions for the ionosphere and magnetosphere are specified at the boundaries, and the particle densities are found from a collisionless kinetic model. The ion distribution function includes the gravitational potential energy; hence, the unperturbed ionospheric plasma has a density gradient. The plasma potential at the upper boundary is given a large negative value to accelerate electrons downward. The solutions for a wide range of dimensionless parameters show that the double layer forms just above a critical altitude that occurs approximately where the ionospheric density has fallen to the magnetospheric density. Below this altitude, the ionospheric ions are gravitationally confined and have the expected scale height for quasineutral plasma in gravity.
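
    The relaxation approach to Poisson's equation in one dimension can be illustrated with a stripped-down Jacobi iteration; unlike the paper's self-consistent kinetic model, the charge density here is held fixed so the result can be checked against an analytic solution:

```python
import numpy as np

def relax_poisson(rho, phi_a, phi_b, dx, iters=20000):
    """Jacobi relaxation for d^2(phi)/dx^2 = -rho with fixed
    (Dirichlet) boundary potentials phi_a and phi_b."""
    phi = np.linspace(phi_a, phi_b, len(rho))
    for _ in range(iters):
        # right-hand side is evaluated in full before assignment,
        # so this is a true Jacobi sweep
        phi[1:-1] = 0.5 * (phi[:-2] + phi[2:] + dx * dx * rho[1:-1])
    return phi

n, dx = 101, 0.01
rho = np.ones(n)                         # fixed uniform charge density
phi = relax_poisson(rho, 0.0, 0.0, dx)

# Analytic solution for constant rho and zero boundaries: phi = x (L - x) / 2.
x = dx * np.arange(n)
err = np.max(np.abs(phi - 0.5 * x * (x[-1] - x)))
```

    In the kinetic model, `rho` would itself be recomputed from the particle distribution functions at each relaxation step, which is what makes the double-layer solution self-consistent.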

  5. Double layer effects in a model of proton discharge on charged electrodes

    PubMed Central

    2014-01-01

    Summary We report first results on double layer effects on proton discharge reactions from aqueous solutions to charged platinum electrodes. We have extended a recently developed combined proton transfer/proton discharge model on the basis of empirical valence bond theory to include specifically adsorbed sodium cations and chloride anions. For each of four studied systems 800–1000 trajectories of a discharging proton were integrated by molecular dynamics simulations until discharge occurred. The results show significant influences of ion presence on the average behavior of protons prior to the discharge event. Rationalization of the observed behavior cannot be based solely on the electrochemical potential (or surface charge) but needs to resort to the molecular details of the double layer structure. PMID:25161833

  6. Simulation of double layers in a model auroral circuit with nonlinear impedance

    NASA Technical Reports Server (NTRS)

    Smith, R. A.

    1986-01-01

    A reduced circuit description of the U-shaped potential structure of a discrete auroral arc, consisting of the flank transmission line plus parallel-electric-field region, is used to provide the boundary condition for one-dimensional simulations of the double-layer evolution. The model yields asymptotic scalings of the double-layer potential, as a function of an anomalous transport coefficient alpha and of the perpendicular length scale l(a) of the arc. The arc potential phi(DL) scales approximately linearly with alpha, and for alpha fixed phi (DL) about l(a) to the z power. Using parameters appropriate to the auroral zone acceleration region, potentials of phi (DPL) 10 kV scale to projected ionospheric dimensions of about 1 km, with power flows of the order of magnitude of substorm dissipation rates.

  7. Classical mapping for Hubbard operators: Application to the double-Anderson model

    SciTech Connect

    Li, Bin; Miller, William H.; Levy, Tal J.; Rabani, Eran

    2014-05-28

    A classical Cartesian mapping for Hubbard operators is developed to describe the nonequilibrium transport of an open quantum system with many electrons. The mapping of the Hubbard operators representing the many-body Hamiltonian is derived by using analogies from classical mappings of boson creation and annihilation operators vis-à-vis a coherent state representation. The approach provides qualitative results for a double quantum dot array (double Anderson impurity model) coupled to fermionic leads for a range of bias voltages, Coulomb couplings, and hopping terms. While the width and height of the conduction peaks show deviations from the master equation approach considered to be accurate in the limit of weak system-leads couplings and high temperatures, the Hubbard mapping captures all transport channels involving transition between many electron states, some of which are not captured by approximate nonequilibrium Green function closures.

  8. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives.

  9. Double-observer line transect surveys with Markov-modulated Poisson process models for animal availability.

    PubMed

    Borchers, D L; Langrock, R

    2015-12-01

    We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too.
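
    A two-state Markov-modulated Poisson process of the kind used here for availability can be simulated in a few lines; the switching and event rates below are arbitrary illustrative values, not estimates from the whale survey:

```python
import numpy as np

def simulate_mmpp(t_end, q12, q21, lam, rng):
    """Simulate a 2-state Markov-modulated Poisson process on [0, t_end].

    q12, q21 : switching rates between states 0 and 1
    lam      : (lam0, lam1) Poisson event rates in each state
    Returns the event times (e.g. surfacings of an animal)."""
    t, state, events = 0.0, 0, []
    while t < t_end:
        q = q12 if state == 0 else q21
        t_switch = t + rng.exponential(1.0 / q)    # next state change
        te = t
        while True:                                 # events at rate lam[state]
            te += rng.exponential(1.0 / lam[state])
            if te >= min(t_switch, t_end):
                break
            events.append(te)
        t, state = t_switch, 1 - state
    return np.array(events)

rng = np.random.default_rng(1)
ev = simulate_mmpp(1000.0, q12=0.05, q21=0.2, lam=(0.01, 2.0), rng=rng)
# Events cluster in the high-rate state, unlike a plain Poisson process.
```

    The clustering produced by the rate switching is exactly the feature that plain Poisson availability models cannot capture.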

  10. Using ARM Measurements to Understand and Reduce the Double ITCZ Biases in the Community Atmospheric Model

    SciTech Connect

    Zhang, Minghua

    2016-12-08

    1. Understanding the observed variability of the ITCZ in the equatorial eastern Pacific. The annual mean precipitation in the eastern Pacific has a maximum zonal band north of the equator in the ITCZ, where the maximum SST is located. During the boreal spring (referring to February, March, and April throughout the present paper), because of accumulated solar radiation heating and oceanic heat transport, a secondary maximum of SST exists in the southeastern equatorial Pacific. Associated with this warm SST is also a seasonal transitional maximum of precipitation in the same region in boreal spring, exhibited as a weak double ITCZ pattern in the equatorial eastern Pacific. This climatological seasonal variation, however, varies greatly from year to year: a double ITCZ occurs in the boreal spring of some years but not of others; when there is a single ITCZ, it can appear north of, south of, or at the equator. Understanding this observed variability is critical to finding the ultimate cause of the double ITCZ in climate models. Seasonal variation of the ITCZ south of the eastern equatorial Pacific: By analyzing data from satellites, field measurements and atmospheric reanalysis, we have found that in the region where the spurious ITCZ occurs in models, there is a "seasonal cloud transition" — from stratocumulus to shallow cumulus and eventually to deep convection — in the South Equatorial Pacific (SEP) from September to April that is similar to the spatial cloud transition from the California coast to the equator. This seasonal transition is associated with increasing sea surface temperature (SST), decreasing lower tropospheric stability and large-scale subsidence. This finding of a seasonal cloud transition points to the same source of model errors in the ITCZ simulations as in the simulation of the stratocumulus-cumulus-deep convection transition. It provides a test for climate models to simulate the relationships between clouds and large-scale atmospheric fields in a region

  11. A compact model for single material double work function gate MOSFET

    NASA Astrophysics Data System (ADS)

    Changyong, Zheng; Wei, Zhang; Tailong, Xu; Yuehua, Dai; Junning, Chen

    2013-09-01

    An analytical surface potential model for the single material double work function gate (SMDWG) MOSFET is developed based on the exact solution of the two-dimensional Poisson equation. The model includes the effects of drain biases, gate oxide thickness, different combinations of S-gate and D-gate length, and values of substrate doping concentration. Particular attention is paid to explaining the attributes of the SMDWG MOSFET, such as suppressing drain-induced barrier lowering (DIBL) and accelerating carrier drift velocity and device speed. The accuracy of the analytical model is verified by comparison with numerical results from the device simulator MEDICI. The model not only offers physical insight into device physics but also provides basic design guidelines for the device.

  12. A Study on Equivalent Circuit Model of High-Power Density Electric Double Layer Capacitor

    NASA Astrophysics Data System (ADS)

    Yamada, Tetsu; Yamashiro, Susumu; Sasaki, Masakazu; Araki, Shuuichi

    Various models for the equivalent circuit of the EDLC (Electric Double Layer Capacitor) have been presented so far. The multi-stage connection of RC circuits is a representative model for simulating the EDLC's charge-discharge characteristic. However, since high-energy-density EDLCs for electric power storage have electrostatic capacities of thousands of farads, phenomena that are negligible in conventional capacitors appear notably in actual measurements. To overcome this difficulty, we develop in this paper an equivalent circuit based on a nonlinear model that accounts for the voltage dependency of the electrostatic capacity. After various simulations and comparison with experimental results, we confirmed the effectiveness of the proposed model.
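
    The effect of a voltage-dependent capacitance on constant-current charging can be sketched with a simple nonlinear model; the linear dependence C(V) = C0 + kV and the parameter values below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# Hypothetical parameters for illustration only:
C0 = 1500.0    # F, capacitance at V = 0
k  = 300.0     # F/V, voltage dependence C(V) = C0 + k*V
I  = 10.0      # A, constant charging current
dt = 0.1       # s, Euler time step

def charge_curve(t_end):
    """Constant-current charging with voltage-dependent capacitance:
    dV/dt = I / C(V), integrated with forward Euler."""
    v, out = 0.0, []
    for _ in range(int(t_end / dt)):
        v += dt * I / (C0 + k * v)
        out.append(v)
    return np.array(out)

v_nl = charge_curve(600.0)
v_lin = I * dt * np.arange(1, len(v_nl) + 1) / C0   # constant-C reference
# The nonlinear curve rises ever more slowly as C(V) grows with voltage,
# the kind of departure from ideal-capacitor behavior the model targets.
```
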

  13. A modified double distribution lattice Boltzmann model for axisymmetric thermal flow

    NASA Astrophysics Data System (ADS)

    Wang, Zuo; Liu, Yan; Wang, Heng; Zhang, Jiazhong

    2017-04-01

    In this paper, a double distribution lattice Boltzmann model for axisymmetric thermal flow is proposed. In the model, the flow field is solved by a multi-relaxation-time lattice Boltzmann scheme while the temperature field by a newly proposed lattice-kinetic-based Boltzmann scheme. Chapman-Enskog analysis demonstrates that the axisymmetric energy equation in the cylindrical coordinate system can be recovered by the present lattice-kinetic-based Boltzmann scheme for temperature field. Numerical tests, including the thermal Hagen-Poiseuille flow and natural convection in a vertical annulus, have been carried out, and the results predicted by the present model agree well with the existing numerical data. Furthermore, the present model shows better numerical stability than the existing model.

  14. Ambient modal testing of a double-arch dam: the experimental campaign and model updating

    NASA Astrophysics Data System (ADS)

    García-Palacios, Jaime H.; Soria, José M.; Díaz, Iván M.; Tirado-Andrés, Francisco

    2016-09-01

    A finite element model updating of a double-curvature arch dam (La Tajera, Spain) is carried out using the modal parameters obtained from an operational modal analysis. That is, the system's modal dampings, natural frequencies and mode shapes have been identified using output-only identification techniques under environmental loads (wind, vehicles). A finite element model of the dam-reservoir-foundation system was initially created. Then, a testing campaign was carried out, measuring at the most significant test points using high-sensitivity accelerometers, wirelessly synchronized. Afterwards, the initial model was updated using a Monte Carlo based approach in order to match the recorded dynamic behaviour. The updated model may be used within a structural health monitoring system for damage detection or, for instance, for the analysis of the seismic response of the coupled arch dam-reservoir-foundation system.

  15. Fabrication of double-walled section models of the ITER vacuum vessel

    SciTech Connect

    Koizumi, K.; Kanamori, N.; Nakahira, M.; Itoh, Y.; Horie, M.; Tada, E.; Shimamoto, S.

    1995-12-31

    Trial fabrication of double-walled section models has been performed at the Japan Atomic Energy Research Institute (JAERI) for the construction of the ITER vacuum vessel. By employing TIG (tungsten-arc inert gas) welding and EB (electron beam) welding, one for each model, two full-scale section models of a 7.5° toroidal sector in the curved section at the bottom of the vacuum vessel have been successfully fabricated with a final dimensional error within ±5 mm of the nominal values. A sufficient technical database on the candidate fabrication procedures, welding distortion and dimensional stability of full-scale models has been obtained through these fabrications. This paper describes the design and fabrication procedures of both full-scale section models and the major results obtained through the fabrication.

  16. Compact model for short-channel symmetric double-gate junctionless transistors

    NASA Astrophysics Data System (ADS)

    Ávila-Herrera, F.; Cerdeira, A.; Paz, B. C.; Estrada, M.; Íñiguez, B.; Pavanello, M. A.

    2015-09-01

    In this work a compact analytical model for the short-channel double-gate junctionless transistor is presented, considering variable mobility and the main short-channel effects, such as threshold voltage roll-off, series resistance, drain saturation voltage, channel shortening and saturation velocity. The threshold voltage shift and subthreshold slope variation are determined through the minimum value of the potential in the channel. Only eight model parameters are used. The model is physically based, considers the total charge in the Si layer and covers the operating conditions in both depletion and accumulation. The model is validated by 2D simulations in ATLAS for channel lengths from 25 nm to 500 nm, for doping concentrations of 5 × 10^18 and 1 × 10^19 cm^-3, and for Si layer thicknesses of 10 and 15 nm, in order to guarantee normally-off operation of the transistors. The model provides an accurate, continuous description of the transistor behavior in all operating regions.

  17. Formal Uncertainty and Dispersion of Single and Double Difference Models for GNSS-Based Attitude Determination.

    PubMed

    Chen, Wen; Yu, Chao; Dong, Danan; Cai, Miaomiao; Zhou, Feng; Wang, Zhiren; Zhang, Lei; Zheng, Zhengqi

    2017-02-20

    With multi-antenna synchronized global navigation satellite system (GNSS) receivers, the single difference (SD) between two antennas is able to eliminate both satellite and receiver clock errors, so it becomes necessary to reconsider the equivalency problem between the SD and double difference (DD) models. In this paper, we quantitatively compared the formal uncertainties and dispersions between multiple SD models and the DD model, and also carried out static and kinematic short-baseline experiments. The theoretical and experimental results show that under a non-common clock scheme the SD and DD models are equivalent. Under a common clock scheme, if we estimate stochastic uncalibrated phase delay (UPD) parameters every epoch, this SD model is still equivalent to the DD model; but if we estimate only one UPD parameter for all epochs or take it as a known constant, the SD (here called SD2) and DD models are no longer equivalent. For the vertical component of baseline solutions, the formal uncertainties of the SD2 model are a factor of two smaller than those of the DD model, and the dispersions of the SD2 model are smaller than those of the DD model by more than a factor of two. In addition, to obtain baseline solutions, the SD2 model requires a minimum of three satellites, while the DD model requires a minimum of four satellites, which makes the SD2 model more advantageous for attitude determination in sheltered environments.
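
    Forming the SD and DD observables themselves is straightforward; a sketch with synthetic phase values (in cycles), where satellite 0 serves as the reference for the double differences:

```python
import numpy as np

# Carrier-phase observations (cycles) from two synchronized antennas,
# one value per tracked satellite (synthetic values for illustration).
phase_ant1 = np.array([1234.10, 2201.55, 1875.30, 1502.80])
phase_ant2 = np.array([1233.85, 2201.10, 1875.45, 1502.20])

# Single differences (between antennas, per satellite): satellite clock
# errors cancel; with a common receiver clock the receiver clock error
# cancels too, up to a line-bias (UPD) term.
sd = phase_ant1 - phase_ant2           # ~ [0.25, 0.45, -0.15, 0.60]

# Double differences: subtract the SD of a reference satellite (index 0),
# removing the remaining common terms at the cost of one observable.
dd = sd[1:] - sd[0]                    # ~ [0.20, -0.40, 0.35]
```

    The loss of one observable in the DD step is why the DD model needs four satellites where the SD2 model needs only three.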

  19. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance
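
    The core LIC idea, averaging a noise texture along streamlines, fits in a short sketch. This is a naive 2D CPU version with unit Euler steps and periodic wrapping, far from the paper's output-sensitive 3D GPU method:

```python
import numpy as np

def lic_2d(vx, vy, noise, length=10):
    """Naive line integral convolution: average a noise texture along
    streamlines of (vx, vy), traced both ways with unit Euler steps
    and periodic wrapping at the borders."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            acc, cnt = 0.0, 0
            for sign in (1.0, -1.0):       # trace forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)) % h, int(round(px)) % w
                    acc += noise[i, j]
                    cnt += 1
                    u, v = vx[i, j], vy[i, j]
                    norm = np.hypot(u, v) or 1.0
                    px += sign * u / norm
                    py += sign * v / norm
            out[y, x] = acc / cnt
    return out

rng = np.random.default_rng(2)
n = 32
noise = rng.random((n, n))
vx, vy = np.ones((n, n)), np.zeros((n, n))   # uniform horizontal flow
img = lic_2d(vx, vy, noise)
# Averaging along the flow smooths the noise into streaks aligned
# with the velocity field.
```

    The output-sensitive method in the paper evaluates exactly this integral, but lazily and only for the LIC samples that actually contribute to the rendered image.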

  20. Accuracy assessment of single and double difference models for the single epoch GPS compass

    NASA Astrophysics Data System (ADS)

    Chen, Wantong; Qin, Honglei; Zhang, Yanzhong; Jin, Tian

    2012-02-01

    The single epoch GPS compass is an important field of study, since it is a valuable technique for the orientation estimation of vehicles and can guarantee total independence from carrier phase slips in practical applications. To achieve highly accurate angular estimates, the unknown integer ambiguities of the carrier phase observables need to be resolved. Past research has focused on ambiguity resolution for a single epoch; however, accuracy is another significant problem for many challenging applications. In this contribution, the accuracy is evaluated for both the non-common clock scheme and the common clock scheme of the receivers. We focus on three comparisons for either scheme: single difference model vs. double difference model, single frequency model vs. multiple frequency model, and optimal linear combinations vs. traditional triple-frequency least squares. We deduce the short baseline precision for a number of different available models and analyze the differences in accuracy among those models. Compared with the single or double difference model of the non-common clock scheme, the single difference model of the common clock scheme can greatly reduce the vertical component error of the baseline vector, which results in higher elevation accuracy. The least squares estimator can also reduce the error of the fixed baseline vector with the aid of multi-frequency observations, thereby improving the attitude accuracy. In essence, the "accuracy improvement" is attributed to the difference in accuracy between models, not a real improvement for any specific model. If the noise levels of all GPS triple-frequency carrier phases are assumed to be the same in units of cycles, it can be proved that the optimal linear combination approach is equivalent to traditional triple-frequency least squares, no matter which scheme is utilized. Both simulations and actual experiments have been performed to verify the correctness of the theoretical analysis.

  1. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    PubMed

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-05-01

    Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch of a fixed size is obtained using the center-proliferation segmentation (CPS) method, and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which takes sufficient account of the multi-scale contextual information of deep layer maps. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sample method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture.

  2. Convolutional neural network approach for buried target recognition in FL-LWIR imagery

    NASA Astrophysics Data System (ADS)

    Stone, K.; Keller, J. M.

    2014-05-01

    A convolutional neural network (CNN) approach to the recognition of buried explosive hazards in forward-looking long-wave infrared (FL-LWIR) imagery is presented. The convolutional filters in the first layer of the network are learned in the frequency domain, making enforcement of zero-phase and zero-DC response characteristics much easier. The spatial domain representations of the filters are forced to have unit l2 norm, and penalty terms are added to the online gradient descent update to encourage orthonormality among the convolutional filters, as well as smooth first and second order derivatives in the spatial domain. The impact of these modifications on the generalization performance of the CNN model is investigated. The CNN approach is compared to a second recognition algorithm utilizing shearlet and log-Gabor decomposition of the image coupled with cell-structured feature extraction and support vector machine classification. Results are presented for multiple FL-LWIR data sets recently collected from US Army test sites. These data sets include vehicle position information, allowing accurate transformation between image and world coordinates and realistic evaluation of detection and false alarm rates.
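
    Enforcing a zero-DC response in the frequency domain together with a unit l2 norm can be sketched as a projection step; a minimal NumPy illustration, where the 7 × 7 random filter is a stand-in for a learned one:

```python
import numpy as np

def constrain_filter(w):
    """Project a spatial filter onto zero-DC response (via the frequency
    domain) and unit l2 norm, mimicking the constraints described."""
    spec = np.fft.fft2(w)
    spec[0, 0] = 0.0                  # zero the response at DC
    w = np.real(np.fft.ifft2(spec))   # back to the spatial domain
    return w / np.linalg.norm(w)      # unit l2 norm

rng = np.random.default_rng(3)
w = constrain_filter(rng.standard_normal((7, 7)))
# w now sums to ~0 (zero-DC: its coefficients cancel) and has unit l2 norm.
```

    Zeroing the DC bin is equivalent to forcing the spatial coefficients to sum to zero, so the filter ignores uniform background intensity.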

  3. A quantum algorithm for Viterbi decoding of classical convolutional codes

    NASA Astrophysics Data System (ADS)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, codes with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm where the state space is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model), which is in general much less than the size of the state space. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
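For reference, the classical Viterbi decoder that the QVA accelerates can be sketched for the textbook rate-1/2, constraint-length-3 code with generators 7 and 5 (octal), using hard-decision decoding over the 4-state trellis. This is the standard classical algorithm, not the quantum variant.

```python
G = (0b111, 0b101)  # generators of the rate-1/2, constraint-length-3 code (octal 7, 5)

def encode(bits):
    """Convolutionally encode a bit list, flushing with two zeros."""
    state, out = 0, []
    for b in bits + [0, 0]:
        reg = (b << 2) | state                       # [current, prev, prev-prev]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    cost, paths = [0, INF, INF, INF], [[], [], [], []]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_cost, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if cost[s] == INF:
                continue
            for b in (0, 1):                         # both branches out of state s
                reg = (b << 2) | s
                out = [bin(reg & g).count("1") & 1 for g in G]
                d = cost[s] + sum(o != x for o, x in zip(out, r))
                ns = reg >> 1
                if d < new_cost[ns]:                 # keep the survivor path
                    new_cost[ns], new_paths[ns] = d, paths[s] + [b]
        cost, paths = new_cost, new_paths
    best = min(range(4), key=lambda s: cost[s])
    return paths[best][:-2]                          # drop the two flush bits

msg = [1, 0, 1, 1, 0, 0, 1]
decoded = viterbi(encode(msg))
```

Decoding `encode(msg)` with a single flipped bit still recovers `msg`; the QVA replaces this exhaustive survivor search with superposition preparation and amplitude amplification.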

  4. Classification of breast cancer cytological specimen using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Żejmo, Michał; Kowal, Marek; Korbicz, Józef; Monczak, Roman

    2017-01-01

    The paper presents a deep learning approach for automatic classification of breast tumors based on fine needle cytology. The main aim of the system is to distinguish benign from malignant cases based on microscopic images. The experiment was carried out on cytological samples derived from 50 patients (25 benign cases + 25 malignant cases) diagnosed in the Regional Hospital in Zielona Góra. To classify microscopic images, we used convolutional neural networks (CNN) of two types: GoogLeNet and AlexNet. Due to the very large size of images of cytological specimens (on average 200000 × 100000 pixels), they were divided into smaller patches of size 256 × 256 pixels. Breast cancer classification is usually based on morphometric features of nuclei. Therefore, training and validation patches were selected using a Support Vector Machine (SVM) so that a suitable amount of cell material was depicted. Neural classifiers were tuned using a GPU-accelerated implementation of the gradient descent algorithm. Training error was defined as a cross-entropy classification loss. Classification accuracy was defined as the percentage ratio of successfully classified validation patches to the total number of validation patches. The best accuracy rate of 83% was obtained by the GoogLeNet model. We observed that more misclassified patches belong to malignant cases.
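The patch-division step can be sketched as non-overlapping tiling. The border policy below (discarding incomplete tiles) is one possible choice; the paper does not specify how borders were handled.

```python
import numpy as np

def extract_patches(image, size=256):
    """Tile a large image into non-overlapping size x size patches,
    discarding incomplete border tiles."""
    h, w = image.shape[:2]
    return [image[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

img = np.zeros((600, 520), dtype=np.uint8)   # small stand-in for a gigapixel slide
patches = extract_patches(img)               # 2 x 2 = 4 full 256x256 tiles
```

In the paper's pipeline, an SVM then filters these tiles so that only patches with enough cell material enter training and validation.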

  5. HLA class I binding prediction via convolutional neural networks.

    PubMed

    Vang, Yeeleng S; Xie, Xiaohui

    2017-09-01

    Many biological processes are governed by protein-ligand interactions. One such example is the recognition of self and non-self cells by the immune system. This immune response process is regulated by the major histocompatibility complex (MHC) protein which is encoded by the human leukocyte antigen (HLA) complex. Understanding the binding potential between MHC and peptides can lead to the design of more potent, peptide-based vaccines and immunotherapies for infectious and autoimmune diseases. We apply machine learning techniques from the natural language processing (NLP) domain to address the task of MHC-peptide binding prediction. More specifically, we introduce a new distributed representation of amino acids, named HLA-Vec, that can be used for a variety of downstream proteomic machine learning tasks. We then propose a deep convolutional neural network architecture, named HLA-CNN, for the task of HLA class I-peptide binding prediction. Experimental results show combining the new distributed representation with our HLA-CNN architecture achieves state-of-the-art results in the majority of the latest two Immune Epitope Database (IEDB) weekly automated benchmark datasets. We further apply our model to predict binding on the human genome and identify 15 genes with potential for self binding. Codes to generate the HLA-Vec and HLA-CNN are publicly available at: https://github.com/uci-cbcl/HLA-bind . xhx@ics.uci.edu. Supplementary data are available at Bioinformatics online.
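A distributed amino-acid representation turns a peptide into a matrix of per-residue vectors that a CNN can consume. The sketch below uses a random embedding matrix as a placeholder; the actual HLA-Vec vectors are learned word2vec-style from peptide corpora, and the embedding dimension here is illustrative.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"            # the 20 standard residues
IDX = {a: i for i, a in enumerate(AMINO_ACIDS)}

rng = np.random.default_rng(1)
embedding = rng.normal(size=(20, 15))           # placeholder for learned vectors

def encode_peptide(seq):
    """Look up a distributed representation for each residue, producing a
    (len(seq), dim) matrix suitable as CNN input."""
    return embedding[[IDX[a] for a in seq]]

x = encode_peptide("SLYNTVATL")                 # a 9-mer -> 9 x 15 matrix
```

The CNN then applies 1-D convolutions along the sequence axis of this matrix, analogous to text classification over word embeddings.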

  6. Fully automated quantitative cephalometry using convolutional neural networks.

    PubMed

    Arık, Sercan Ö; Ibragimov, Bulat; Xing, Lei

    2017-01-01

    Quantitative cephalometry plays an essential role in clinical diagnosis, treatment, and surgery. Development of fully automated techniques for these procedures is important to enable consistently accurate computerized analyses. We study the application of deep convolutional neural networks (CNNs) for fully automated quantitative cephalometry for the first time. The proposed framework utilizes CNNs for detection of landmarks that describe the anatomy of the depicted patient and yield quantitative estimation of pathologies in the jaws and skull base regions. We use a publicly available cephalometric x-ray image dataset to train CNNs for recognition of landmark appearance patterns. CNNs are trained to output probabilistic estimations of different landmark locations, which are combined using a shape-based model. We evaluate the overall framework on the test set and compare with other proposed techniques. We use the estimated landmark locations to assess anatomically relevant measurements and classify them into different anatomical types. Overall, our results demonstrate high anatomical landmark detection accuracy ([Formula: see text] to 2% higher success detection rate for a 2-mm range compared with the top benchmarks in the literature) and high anatomical type classification accuracy ([Formula: see text] average classification accuracy for test set). We demonstrate that CNNs, which require only raw image patches as input, are promising for accurate quantitative cephalometry.

  7. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields

    NASA Astrophysics Data System (ADS)

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as the input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only the complex sequence-structure relationship by a deep hierarchical architecture, but also the interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.

  8. Very Deep Convolutional Neural Networks for Morphologic Classification of Erythrocytes.

    PubMed

    Durant, Thomas J S; Olson, Eben M; Schulz, Wade L; Torres, Richard

    2017-09-06

    Morphologic profiling of the erythrocyte population is a widely used and clinically valuable diagnostic modality, but one that relies on a slow manual process associated with significant labor cost and limited reproducibility. Automated profiling of erythrocytes from digital images by capable machine learning approaches would augment the throughput and value of morphologic analysis. To this end, we sought to evaluate the performance of leading implementation strategies for convolutional neural networks (CNNs) when applied to classification of erythrocytes based on morphology. Erythrocytes were manually classified into 1 of 10 classes using a custom-developed Web application. Using recent literature to guide architectural considerations for neural network design, we implemented a "very deep" CNN, consisting of >150 layers, with dense shortcut connections. The final database comprised 3737 labeled cells. Ensemble model predictions on unseen data demonstrated a harmonic mean of recall and precision metrics of 92.70% and 89.39%, respectively. Of the 748 cells in the test set, 23 misclassification errors were made, with a correct classification frequency of 90.60%, represented as a harmonic mean across the 10 morphologic classes. These findings indicate that erythrocyte morphology profiles could be measured with a high degree of accuracy with "very deep" CNNs. Further, these data support future efforts to expand classes and optimize practical performance in a clinical environment as a prelude to full implementation as a clinical tool. © 2017 American Association for Clinical Chemistry.
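The summary statistic used above, a harmonic mean of recall and precision, is the familiar F1 score. A quick sketch (the numeric inputs below are only illustrative values in the reported range):

```python
def harmonic_mean(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

f1 = harmonic_mean(0.8939, 0.9270)   # illustrative per-class averages
```

Reporting the harmonic rather than arithmetic mean penalizes models that trade one metric sharply against the other.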

  9. A Hierarchical Convolutional Neural Network for vesicle fusion event classification.

    PubMed

    Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke

    2017-09-01

    Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. Firstly, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied to each image patch of the patch sequence, with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by GMM and the visual appearance features embedded in some key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists, and our method achieves better performance than three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
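Robust Gaussian fitting with outlier rejection can be sketched with iterative sigma clipping. This is one simple rejection scheme chosen for illustration; the paper's exact rejection rule is not specified here.

```python
import numpy as np

def robust_gaussian_fit(x, n_sigma=3.0, iters=3):
    """Estimate a Gaussian's mean and std with iterative sigma clipping:
    refit, then drop points farther than n_sigma from the current mean."""
    x = np.asarray(x, dtype=float)
    mask = np.ones(x.size, dtype=bool)
    for _ in range(iters):
        mu, sd = x[mask].mean(), x[mask].std()
        mask = np.abs(x - mu) <= n_sigma * sd
    return x[mask].mean(), x[mask].std()

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(5.0, 1.0, 1000), [100.0, -100.0]])  # 2 outliers
mu, sd = robust_gaussian_fit(data)
```

The clipped fit recovers the underlying mean and spread despite the gross outliers, which is the property needed before the GMM intensity features are extracted.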

  10. Building Extraction from Remote Sensing Data Using Fully Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Bittner, K.; Cui, S.; Reinartz, P.

    2017-05-01

    Building detection and footprint extraction are highly demanded for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints still remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology, which automatically generates a full resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolutional Network (FCN) architecture. The advantage of using the depth information is that it provides geometrical silhouettes and allows a better separation of buildings from background as well as through its invariance to illumination and color variations. The proposed framework has mainly two steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) as inputs and available ground truth building masks as target outputs. Secondly, the generated predictions from the FCN are viewed as unary terms for a fully connected Conditional Random Field (FCRF), which enables us to create a final binary building mask. A series of experiments demonstrate that our methodology is able to extract accurate building footprints which are close to the buildings' original shapes to a high degree. The quantitative and qualitative analyses show the significant improvements of the results in contrast to the multi-layer fully connected network from our previous work.

  11. Study of multispectral convolution scatter correction in high resolution PET

    SciTech Connect

    Yao, R.; Lecomte, R.; Bentourkia, M.

    1996-12-31

    PET images acquired with a high resolution scanner based on arrays of small discrete detectors are obtained at the cost of low sensitivity and increased detector scatter. It has been postulated that these limitations can be overcome by using enlarged discrimination windows to include more low energy events and by developing more efficient energy-dependent methods to correct for scatter. In this work, we investigate one such method based on the frame-by-frame scatter correction of multispectral data. Images acquired in the conventional, broad and multispectral window modes were processed by the stationary and nonstationary consecutive convolution scatter correction methods. Broad and multispectral window acquisition with a low energy threshold of 129 keV improved system sensitivity by up to 75% relative to the conventional window with a ~350 keV threshold. The degradation of image quality due to the added scatter events can be almost fully recovered by the subtraction-restoration scatter correction. The multispectral method was found to be more sensitive to the nonstationarity of scatter and its performance was not as good as that of the broad window. It is concluded that new scatter degradation models and correction methods need to be established to fully take advantage of multispectral data.
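The subtraction step of a stationary convolution scatter correction can be sketched in 1-D: model scatter as the measured projection profile convolved with a broad kernel, then subtract a fixed fraction of it. The monoexponential kernel shape and the scatter fraction below are illustrative, not fitted values from this study.

```python
import numpy as np

def scatter_correct(profile, scatter_fraction=0.2, kernel_scale=5.0):
    """Subtraction-style convolution scatter correction for one projection
    profile: scatter ~ profile convolved with a broad unit-area kernel."""
    taps = np.arange(-20, 21)
    kernel = np.exp(-np.abs(taps) / kernel_scale)
    kernel /= kernel.sum()                      # unit-area scatter kernel
    scatter = scatter_fraction * np.convolve(profile, kernel, mode="same")
    return profile - scatter

profile = np.zeros(101)
profile[50] = 1000.0                            # point-source projection
corrected = scatter_correct(profile)
```

A nonstationary variant would let `scatter_fraction` and `kernel_scale` vary with position and energy window, which is the extension the abstract argues multispectral data require.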

  12. Fast convolution-superposition dose calculation on graphics hardware.

    PubMed

    Hissoiny, Sami; Ozell, Benoît; Després, Philippe

    2009-06-01

    The numerical calculation of dose is central to treatment planning in radiation therapy and is at the core of optimization strategies for modern delivery techniques. In a clinical environment, dose calculation algorithms are required to be accurate and fast. The accuracy is typically achieved through the integration of patient-specific data and extensive beam modeling, which generally results in slower algorithms. In order to alleviate execution speed problems, the authors have implemented a modern dose calculation algorithm on a massively parallel hardware architecture. More specifically, they have implemented a convolution-superposition photon beam dose calculation algorithm on a commodity graphics processing unit (GPU). They have investigated a simple porting scenario as well as slightly more complex GPU optimization strategies. They have achieved speed improvement factors ranging from 10 to 20 times with GPU implementations compared to central processing unit (CPU) implementations, with higher values corresponding to larger kernel and calculation grid sizes. In all cases, they preserved the numerical accuracy of the GPU calculations with respect to the CPU calculations. These results show that streaming architectures such as GPUs can significantly accelerate dose calculation algorithms and suggest benefits for numerically intensive processes such as optimization strategies, in particular for complex delivery techniques such as IMRT and arc therapy.
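Stripped of the GPU specifics and the spatially variant kernels used clinically, the core convolution-superposition operation is a convolution of TERMA (total energy released per unit mass) with an energy-deposition kernel. A minimal 1-D sketch with illustrative, non-physical parameter values:

```python
import numpy as np

depth = np.arange(0.0, 20.0, 0.1)                    # depth grid, cm
mu = 0.05                                            # attenuation coeff (illustrative)
terma = np.exp(-mu * depth)                          # primary energy released per mass
kernel = np.exp(-np.abs(np.arange(-50, 51)) * 0.08)  # deposition kernel (illustrative)
kernel /= kernel.sum()                               # unit-energy kernel

dose = np.convolve(terma, kernel, mode="same")       # dose = TERMA (*) kernel
```

Because the kernel spreads deposited energy away from the interaction site, the dose curve exhibits a build-up region before falling off, unlike the monotone TERMA; the clinical algorithm superposes density-scaled, spatially variant kernels and is what the GPU port parallelizes.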

  13. Infimal convolution of total generalized variation functionals for dynamic MRI.

    PubMed

    Schloegl, Matthias; Holler, Martin; Schwarzl, Andreas; Bredies, Kristian; Stollberger, Rudolf

    2017-07-01

    To accelerate dynamic MR applications using infimal convolution of total generalized variation functionals (ICTGV) as spatio-temporal regularization for image reconstruction. ICTGV comprises a new image prior tailored to dynamic data that achieves regularization via optimal local balancing between spatial and temporal regularity. Here it is applied for the first time to the reconstruction of dynamic MRI data. CINE and perfusion scans were investigated to study the influence of time dependent morphology and temporal contrast changes. ICTGV regularized reconstruction from subsampled MR data is formulated as a convex optimization problem. Global solutions are obtained by employing a duality based non-smooth optimization algorithm. The reconstruction error remains on a low level with acceleration factors up to 16 for both CINE and dynamic contrast-enhanced MRI data. The GPU implementation of the algorithm suits clinical demands by reducing reconstruction times of one dataset to less than 4 min. ICTGV based dynamic magnetic resonance imaging reconstruction allows for vast undersampling and therefore enables very high spatial and temporal resolutions, spatial coverage and reduced scan time. With the proposed distinction of model and regularization parameters it offers a new and robust method of flexible decomposition into components with different degrees of temporal regularity. Magn Reson Med 78:142-155, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  14. Infimal Convolution of Total Generalized Variation Functionals for Dynamic MRI

    PubMed Central

    Schloegl, Matthias; Holler, Martin; Schwarzl, Andreas; Bredies, Kristian; Stollberger, Rudolf

    2017-01-01

    Purpose To accelerate dynamic MR applications using infimal convolution of total generalized variation functionals (ICTGV) as spatio-temporal regularization for image reconstruction. Theory and Methods ICTGV comprises a new image prior tailored to dynamic data that achieves regularization via optimal local balancing between spatial and temporal regularity. Here it is applied for the first time to the reconstruction of dynamic MRI data. CINE and perfusion scans were investigated to study the influence of time dependent morphology and temporal contrast changes. ICTGV regularized reconstruction from subsampled MR data is formulated as a convex optimization problem. Global solutions are obtained by employing a duality based non-smooth optimization algorithm. Results The reconstruction error remains on a low level with acceleration factors up to 16 for both CINE and dynamic contrast-enhanced MRI data. The GPU implementation of the algorithm suits clinical demands by reducing reconstruction times of one dataset to less than 4 min. Conclusion ICTGV based dynamic magnetic resonance imaging reconstruction allows for vast undersampling and therefore enables very high spatial and temporal resolutions, spatial coverage and reduced scan time. With the proposed distinction of model and regularization parameters it offers a new and robust method of flexible decomposition into components with different degrees of temporal regularity. PMID:27476450

  15. Medtentia double helix mitral annuloplasty system evaluated in a porcine experimental model.

    PubMed

    Jensen, Henrik; Simpanen, Jarmo; Smerup, Morten; Bjerre, Marianne; Bramsen, Morten; Werkkala, Kalervo; Vainikka, Tiina; Hasenkam, J Michael; Wierup, Per

    2010-03-01

    To further develop and improve minimally invasive surgical procedures, dedicated appropriate surgical devices are mandatory. In this study, the safety and feasibility of implanting the novel Medtentia double helix mitral annuloplasty ring, which uses the key-ring principle to potentially allow faster and sutureless implantation, were assessed using both minimally invasive and conventional surgical techniques. Because of ethical concerns, a human-compatible porcine experimental model of mitral valve surgery was used. Twelve 50-kg pigs were allocated to implantation of the Medtentia double helix annuloplasty ring using conventional midline sternotomy including cardioplegic arrest or a minimally invasive approach using peripheral cannulation and left ventricular fibrillation. Ten weeks after surgery, echocardiography was performed to assess mitral valve function. Animals were then killed, and gross mitral valve anatomy was examined ex vivo. All animals survived 10 weeks without developing mitral regurgitation, structural leaflet damage, ring dehiscence, or endocarditis. In the minimally invasive group compared with the midline sternotomy group (mean ± SD), a significantly reduced recovery time (80 ± 16 vs. 327 ± 23 minutes, P < 0.01) and a tendency toward increased operating time (199 ± 33 vs. 168 ± 15 minutes, P > 0.05) and cardiopulmonary bypass time (98 ± 12 vs. 91 ± 11 minutes, P > 0.05) were observed. Using both minimally invasive and conventional midline sternotomy implantation techniques, the Medtentia double helix annuloplasty ring showed no mitral valve dysfunction or tissue damage 10 weeks postoperatively.

  16. Modelling of side-wall angle for optical proximity correction for self-aligned double patterning

    NASA Astrophysics Data System (ADS)

    Moulis, Sylvain; Farys, Vincent; Belledent, Jérôme; Foucher, Johann

    2012-03-01

    The pursuit of ever smaller transistors has pushed technological innovation in the field of lithography. In order to continue following the path of Moore's law, several solutions have been proposed: EUV, e-beam and double patterning lithography. As EUV and e-beam lithography are still not ready for mass production at the 20 nm and 14 nm nodes, double patterning lithography will play an important role for these nodes. In this work, we focus on Self-Aligned Double-Patterning (SADP) processes, which consist in depositing a spacer material on each side of a mandrel exposed during a first lithography step, so that the pitch is divided by two after transfer into the substrate, the cutting of unwanted patterns being addressed through a second lithography exposure. In the specific case where spacers are deposited directly on the flanks of the resist, it is crucial to control the resist profile, as deviations could induce final CD errors or even spacer collapse. In this work, we will first study with a simple model the influence of the resist profile on the post-etch spacer CD. Then we will show that the placement of Sub-Resolution Assist Features (SRAF) can influence the resist profile, and finally, we will see how much control of the spacer and inter-spacer CD we can achieve by tuning SRAF placement.

  17. Inequalities and consequences of new convolutions for the fractional Fourier transform with Hermite weights

    NASA Astrophysics Data System (ADS)

    Anh, P. K.; Castro, L. P.; Thao, P. T.; Tuan, N. M.

    2017-01-01

    This paper presents new convolutions for the fractional Fourier transform which are somehow associated with the Hermite functions. Consequent inequalities and properties are derived for these convolutions, among which we emphasize two new types of Young's convolution inequalities. The results guarantee a general framework where the present convolutions are well-defined, allowing larger possibilities than the known ones for other convolutions. Furthermore, we exemplify the use of our convolutions by providing explicit solutions of some classes of integral equations which appear in engineering problems.
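For context, the classical Young convolution inequality for the ordinary Fourier convolution reads as follows; the paper establishes analogues of this inequality for its new fractional-Fourier convolutions with Hermite weights:

```latex
\|f \ast g\|_{r} \;\le\; \|f\|_{p}\,\|g\|_{q},
\qquad \frac{1}{p} + \frac{1}{q} = 1 + \frac{1}{r},
\qquad p, q, r \in [1, \infty].
```

Inequalities of this type are what guarantee that the convolutions are well-defined on the stated function spaces.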

  18. Numerical Well Testing Interpretation Model and Applications in Crossflow Double-Layer Reservoirs by Polymer Flooding

    PubMed Central

    Guo, Hui; He, Youwei; Li, Lei; Du, Song; Cheng, Shiqing

    2014-01-01

    This work presents a numerical well testing interpretation model and analysis techniques to evaluate formation by using pressure transient data acquired with logging tools in crossflow double-layer reservoirs by polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effect, and skin factors. The type curves were then developed based on this model, and parameter sensitivity is analyzed. Our research shows that the type curves have five segments with different flow status: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from low permeability layer to high permeability layer), and (V) systematic radial flow section. The polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs by polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparison of the interpreted permeability with initial layered permeability before polymer flooding. Comparison of the analysis of numerical solution based on flow mechanism with observed polymer flooding field test data highlights the potential for the application of this interpretation method in formation evaluation and enhanced oil recovery (EOR). PMID:25302335

  19. Analytical model of LDMOS with a double step buried oxide layer

    NASA Astrophysics Data System (ADS)

    Yuan, Song; Duan, Baoxing; Cao, Zhen; Guo, Haijun; Yang, Yintang

    2016-09-01

    In this paper, a two-dimensional analytical model is established for the Buried Oxide Double Step Silicon On Insulator (BODS) structure proposed by the authors. Based on the two-dimensional Poisson equation, analytic expressions for the surface electric field and potential distributions of the device are obtained. In the BODS structure, the buried oxide layer thickness changes stepwise along the drift region, and positive charge in the drift region accumulates at the corners of the steps. These accumulated charges function as space charge in the depleted drift region. At the same time, the electric field in the oxide layer also varies with the different drift region thickness. These variations, especially the accumulated charge, modulate the surface electric field distribution through the electric field modulation effect, which makes the surface electric field distribution more uniform. As a result, the breakdown voltage of the device is improved by 30% compared with the conventional SOI structure. To verify the accuracy of the analytical model, the device simulation software ISE TCAD is utilized; the analytical values are in good agreement with the simulation results. This confirms that the established two-dimensional analytical model for the BODS structure is valid, and it also sufficiently illustrates the breakdown voltage enhancement by the electric field modulation effect. The established analytical model will provide the physical and mathematical basis for further analysis of new power devices with patterned buried oxide layers.

  20. Poisson-Helmholtz-Boltzmann model of the electric double layer: analysis of monovalent ionic mixtures.

    PubMed

    Bohinc, Klemen; Shrestha, Ahis; Brumen, Milan; May, Sylvio

    2012-03-01

    In the classical mean-field description of the electric double layer, known as the Poisson-Boltzmann model, ions interact exclusively through their Coulomb potential. Ion specificity can arise through solvent-mediated, nonelectrostatic interactions between ions. We employ the Yukawa pair potential to model the presence of nonelectrostatic interactions. The combination of Yukawa and Coulomb potential on the mean-field level leads to the Poisson-Helmholtz-Boltzmann model, which employs two auxiliary potentials: one electrostatic and the other nonelectrostatic. In the present work we apply the Poisson-Helmholtz-Boltzmann model to ionic mixtures, consisting of monovalent cations and anions that exhibit different Yukawa interaction strengths. As a specific example we consider a single charged surface in contact with a symmetric monovalent electrolyte. From the minimization of the mean-field free energy we derive the Poisson-Boltzmann and Helmholtz-Boltzmann equations. These nonlinear equations can be solved analytically in the weak perturbation limit. This together with numerical solutions in the nonlinear regime suggests an intricate interplay between electrostatic and nonelectrostatic interactions. The structure and free energy of the electric double layer depends sensitively on the Yukawa interaction strengths between the different ion types and on the nonelectrostatic interactions of the mobile ions with the surface.
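In the weak-perturbation limit mentioned above, the electrostatic part reduces to the familiar linearized (Debye-Hückel) form; this standard result is shown for orientation only, with the nonelectrostatic Helmholtz-Boltzmann potential obeying an analogous screened equation with its own decay length:

```latex
\frac{d^{2}\psi}{dx^{2}} = \kappa^{2}\psi
\quad\Longrightarrow\quad
\psi(x) = \psi_{0}\,e^{-\kappa x},
\qquad
\kappa^{2} = \frac{2 n_{0} e^{2}}{\varepsilon \varepsilon_{0} k_{B} T},
```

where \(n_{0}\) is the bulk concentration of the symmetric monovalent electrolyte and \(1/\kappa\) is the Debye screening length. The intricate interplay described in the abstract arises when the electrostatic and nonelectrostatic screening lengths differ between ion types.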

  1. Numerical well testing interpretation model and applications in crossflow double-layer reservoirs by polymer flooding.

    PubMed

    Yu, Haiyang; Guo, Hui; He, Youwei; Xu, Hainan; Li, Lei; Zhang, Tiantian; Xian, Bo; Du, Song; Cheng, Shiqing

    2014-01-01

    This work presents a numerical well testing interpretation model and analysis techniques to evaluate formation by using pressure transient data acquired with logging tools in crossflow double-layer reservoirs by polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effect, and skin factors. The type curves were then developed based on this model, and parameter sensitivity is analyzed. Our research shows that the type curves have five segments with different flow status: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from low permeability layer to high permeability layer), and (V) systematic radial flow section. The polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs by polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparison of the interpreted permeability with initial layered permeability before polymer flooding. Comparison of the analysis of numerical solution based on flow mechanism with observed polymer flooding field test data highlights the potential for the application of this interpretation method in formation evaluation and enhanced oil recovery (EOR).

  2. Atomistic simulation of nanoporous layered double hydroxide materials and their properties. I. Structural modeling.

    PubMed

    Kim, Nayong; Kim, Yongman; Tsotsis, Theodore T; Sahimi, Muhammad

    2005-06-01

    An atomistic model of layered double hydroxides, an important class of nanoporous materials, is presented. These materials have wide applications, ranging from adsorbents for gases and liquid ions to nanoporous membranes and catalysts. They consist of two types of metallic cations that are accommodated by a close-packed configuration of OH- and other anions in a positively charged brucitelike layer. Water and various anions are distributed in the interlayer space for charge compensation. A modified form of the consistent-valence force field, together with energy minimization and molecular dynamics simulations, is utilized for developing an atomistic model of the materials. To test the accuracy of the model, we compare the vibrational frequencies, x-ray diffraction patterns, and the basal spacing of the material, computed using the atomistic model, with our experimental data over a wide range of temperature. Good agreement is found between the computed and measured quantities.

  3. Atomistic simulation of nanoporous layered double hydroxide materials and their properties. I. Structural modeling

    NASA Astrophysics Data System (ADS)

    Kim, Nayong; Kim, Yongman; Tsotsis, Theodore T.; Sahimi, Muhammad

    2005-06-01

    An atomistic model of layered double hydroxides, an important class of nanoporous materials, is presented. These materials have wide applications, ranging from adsorbents for gases and liquid ions to nanoporous membranes and catalysts. They consist of two types of metallic cations that are accommodated by a close-packed configuration of OH- and other anions in a positively charged brucitelike layer. Water and various anions are distributed in the interlayer space for charge compensation. A modified form of the consistent-valence force field, together with energy minimization and molecular dynamics simulations, is utilized for developing an atomistic model of the materials. To test the accuracy of the model, we compare the vibrational frequencies, x-ray diffraction patterns, and the basal spacing of the material, computed using the atomistic model, with our experimental data over a wide range of temperature. Good agreement is found between the computed and measured quantities.

  4. Parity retransmission hybrid ARQ using rate 1/2 convolutional codes on a nonstationary channel

    NASA Astrophysics Data System (ADS)

    Lugand, Laurent R.; Costello, Daniel J., Jr.; Deng, Robert H.

    1989-07-01

    A parity retransmission hybrid automatic repeat request (ARQ) scheme is proposed which uses rate 1/2 convolutional codes and Viterbi decoding. A protocol is described which is capable of achieving higher throughputs than previously proposed parity retransmission schemes. The performance analysis is based on a two-state Markov model of a nonstationary channel. This model constitutes a first approximation to a nonstationary channel. The two-state channel model is used to analyze the throughput and undetected error probability of the protocol presented when the receiver has both an infinite and a finite buffer size. It is shown that the throughput improves as the channel becomes more bursty.
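The two-state Markov channel described above is easy to simulate; a minimal sketch, with transition and error probabilities that are illustrative rather than taken from the paper:

```python
import random

def simulate_channel(n_bits, p_gb, p_bg, e_good, e_bad, seed=0):
    """Two-state Markov channel: a 'good' and a 'bad' state with
    different bit-error probabilities. p_gb is the per-bit probability
    of switching good->bad, p_bg of switching bad->good. Returns the
    observed bit-error rate."""
    rng = random.Random(seed)
    bad = False
    errors = 0
    for _ in range(n_bits):
        # State transition at each bit interval
        bad = (rng.random() >= p_bg) if bad else (rng.random() < p_gb)
        if rng.random() < (e_bad if bad else e_good):
            errors += 1
    return errors / n_bits
```

Holding the stationary error rate fixed while shrinking p_gb and p_bg makes the channel burstier: errors cluster into fewer frames, more frames arrive intact, and ARQ throughput improves, which is the effect the abstract reports.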

  5. Parity retransmission hybrid ARQ using rate 1/2 convolutional codes on a nonstationary channel

    NASA Technical Reports Server (NTRS)

    Lugand, Laurent R.; Costello, Daniel J., Jr.; Deng, Robert H.

    1989-01-01

    A parity retransmission hybrid automatic repeat request (ARQ) scheme is proposed which uses rate 1/2 convolutional codes and Viterbi decoding. A protocol is described which is capable of achieving higher throughputs than previously proposed parity retransmission schemes. The performance analysis is based on a two-state Markov model of a nonstationary channel. This model constitutes a first approximation to a nonstationary channel. The two-state channel model is used to analyze the throughput and undetected error probability of the protocol presented when the receiver has both an infinite and a finite buffer size. It is shown that the throughput improves as the channel becomes more bursty.

  7. Double-sigmoid model for fitting fatigue profiles in mouse fast- and slow-twitch muscle.

    PubMed

    Cairns, S P; Robinson, D M; Loiselle, D S

    2008-07-01

    We present a curve-fitting approach that permits quantitative comparisons of fatigue profiles obtained with different stimulation protocols in isolated slow-twitch soleus and fast-twitch extensor digitorum longus (EDL) muscles of mice. Profiles from our usual stimulation protocol (125 Hz for 500 ms, evoked once every second for 100-300 s) could be fitted by single-term functions (sigmoids or exponentials) but not by a double exponential. A clearly superior fit, as confirmed by the Akaike Information Criterion, was achieved using a double-sigmoid function. Fitting accuracy was exceptional; mean square errors were typically <1% and r(2) > 0.9995. The first sigmoid (early fatigue) involved approximately 10% decline of isometric force to an intermediate plateau in both muscle types; the second sigmoid (late fatigue) involved a reduction of force to a final plateau, the decline being 83% of initial force in EDL and 63% of initial force in soleus. The maximal slope of each sigmoid was seven- to eightfold greater in EDL than in soleus. The general applicability of the model was tested by fitting profiles with a severe force loss arising from repeated tetanic stimulation evoked at different frequencies or rest periods, or with excitation via nerve terminals in soleus. Late fatigue, which was absent at 30 Hz, occurred earlier and to a greater extent at 125 Hz than at 50 Hz. The model captured small changes in rate of late fatigue for nerve terminal versus sarcolemmal stimulation. We conclude that a double-sigmoid expression is a useful and accurate model to characterize fatigue in isolated muscle preparations.
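A fit of this general shape is easy to reproduce with standard tools; a sketch using scipy.optimize.curve_fit on synthetic data (the parameterization below is illustrative, not the authors' exact one):

```python
import numpy as np
from scipy.optimize import curve_fit

def double_sigmoid(t, f0, a1, k1, t1, a2, k2, t2):
    """Initial force f0 minus two logistic declines: an early phase
    (amplitude a1, slope k1, midpoint t1) and a late phase (a2, k2, t2)."""
    return (f0
            - a1 / (1.0 + np.exp(-k1 * (t - t1)))
            - a2 / (1.0 + np.exp(-k2 * (t - t2))))

# Synthetic fatigue profile: ~10% early decline, ~60% late decline, plus noise
t = np.linspace(0.0, 100.0, 400)
rng = np.random.default_rng(1)
y = double_sigmoid(t, 1.0, 0.10, 0.8, 15.0, 0.60, 0.3, 60.0)
y = y + rng.normal(0.0, 0.005, t.size)

p0 = [1.0, 0.1, 1.0, 10.0, 0.5, 0.5, 50.0]   # rough initial guess
popt, _ = curve_fit(double_sigmoid, t, y, p0=p0, maxfev=20000)
r2 = 1.0 - np.var(y - double_sigmoid(t, *popt)) / np.var(y)
```

As in the abstract, the two plateaus and the two maximal slopes fall directly out of the fitted parameters, which is what makes protocols comparable.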

  8. Modeling the double charge exchange response function for a tetraneutron system

    NASA Astrophysics Data System (ADS)

    Lazauskas, R.; Carbonell, J.; Hiyama, E.

    2017-07-01

    This work is an attempt to model the 4n response function of a recent RIKEN experimental study of the double charge exchange 4He(8He,8Be)4n reaction, in order to reveal a possible enhancement mechanism of the zero-energy cross section, including a near-threshold resonance. This resonance can indeed be reproduced only by adding to the standard nuclear Hamiltonian an unphysically large T = 3/2 attractive 3n force that destroys the neighboring nuclear chart. No other mechanisms, like cusps or related structures, were found.

  9. Theoretical model for a background noise limited laser-excited optical filter for doubled Nd lasers

    NASA Astrophysics Data System (ADS)

    Shay, Thomas M.; Garcia, Daniel F.

    1990-06-01

    A simple theoretical model for the calculation of the dependence of filter quantum efficiency versus laser pump power in an atomic Rb vapor laser-excited optical filter is reported. Calculations for Rb filter transitions that can be used to detect the practical and important frequency-doubled Nd lasers are presented. The results of these calculations show the filter's quantum efficiency versus the laser pump power. The required laser pump powers range from 2.4 to 60 mW/sq cm of filter aperture.

  10. Experimental investigation of shock wave diffraction over a single- or double-sphere model

    NASA Astrophysics Data System (ADS)

    Zhang, L. T.; Wang, T. H.; Hao, L. N.; Huang, B. Q.; Chen, W. J.; Shi, H. H.

    2017-01-01

    In this study, the unsteady drag produced by the interaction of a shock wave with a single- and a double-sphere model is measured using imbedded accelerometers. The shock wave is generated in a horizontal circular shock tube with an inner diameter of 200 mm. The effect of the shock Mach number and the dimensionless distance between spheres is investigated. The time-history of the drag coefficient is obtained based on Fast Fourier Transformation (FFT) band-block filtering and polynomial fitting of the measured acceleration. The measured peak values of the drag coefficient, with the associated uncertainty, are reported.

  11. Double pendulum model for a tennis stroke including a collision process

    NASA Astrophysics Data System (ADS)

    Youn, Sun-Hyun

    2015-10-01

    By means of adding a collision process between the ball and racket in the double pendulum model, we analyzed the tennis stroke. The ball and the racket system may be accelerated during the collision time; thus, the speed of the rebound ball does not simply depend on the angular velocity of the racket. A higher angular velocity sometimes gives a lower rebound ball speed. We numerically showed that the proper time-lagged racket rotation increased the speed of the rebound ball by 20%. We also showed that the elbow should move in the proper direction in order to add the angular velocity of the racket.
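A minimal one-dimensional restitution model of the ball-racket collision step, with illustrative masses and restitution coefficient; note that it deliberately omits the in-collision acceleration of the racket, which is the mechanism the paper identifies for a higher angular velocity sometimes producing a lower rebound speed:

```python
def rebound_speed(u_ball, v_racket, m_ball=0.057, m_eff=0.30, e=0.8):
    """Outgoing ball speed after a 1-D collision with restitution e.
    u_ball: incoming ball speed (the ball travels in -x),
    v_racket: racket-head speed (+x), m_eff: effective racket mass at
    the impact point. All parameter values are illustrative."""
    u = -abs(u_ball)        # ball moves toward the racket
    v = abs(v_racket)       # racket head moves toward the ball
    return ((m_ball - e * m_eff) * u + (1.0 + e) * m_eff * v) / (m_ball + m_eff)
```

In this simplified model the rebound speed grows monotonically with racket-head speed (and hence with angular velocity); reproducing the non-monotonic behavior reported above requires resolving the finite collision time.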

  12. Numerical modeling of Subthreshold region of junctionless double surrounding gate MOSFET (JLDSG)

    NASA Astrophysics Data System (ADS)

    Rewari, Sonam; Haldar, Subhasis; Nath, Vandana; Deswal, S. S.; Gupta, R. S.

    2016-02-01

    In this paper, a numerical model for the electric potential, subthreshold current, and subthreshold swing of the Junctionless Double Surrounding Gate (JLDSG) MOSFET has been developed using the superposition method. The results have also been evaluated for different silicon film thicknesses, oxide film thicknesses, and channel lengths. The numerical results so obtained are in good agreement with the simulated data. The JLDSG MOSFET has also been compared with the conventional Junctionless Surrounding Gate (JLSG) MOSFET, and it is observed that the JLDSG MOSFET has improved drain current, transconductance, output conductance, Transconductance Generation Factor (TGF), and subthreshold slope.

  13. Theoretical model for a background noise limited laser-excited optical filter for doubled Nd lasers

    NASA Technical Reports Server (NTRS)

    Shay, Thomas M.; Garcia, Daniel F.

    1990-01-01

    A simple theoretical model for the calculation of the dependence of filter quantum efficiency versus laser pump power in an atomic Rb vapor laser-excited optical filter is reported. Calculations for Rb filter transitions that can be used to detect the practical and important frequency-doubled Nd lasers are presented. The results of these calculations show the filter's quantum efficiency versus the laser pump power. The required laser pump powers range from 2.4 to 60 mW/sq cm of filter aperture.

  14. Numerical model of wind-induced entrainment in a double-diffusive thermohaline system

    SciTech Connect

    Hullender, T.A.; Laster, W.R. (School of Mechanical Engineering)

    1994-01-01

    A low Reynolds number k-ε model has been used to predict the wind-induced entrainment in a double-diffusive system. The calculated results are compared with experimental results from wind-induced entrainment in a finite length tank and with shear-induced entrainment in an annular tank. Overall agreement is good for wind speeds less than 10 m/s. Above this value, multidimensional effects tend to dominate. The scale of the turbulence at the surface is found to significantly affect the entrainment rate. This indicates that the suppression of waves on the surface can significantly reduce the rate of entrainment.

  15. Spatially variant convolution with scaled B-splines.

    PubMed

    Muñoz-Barrutia, Arrate; Artaechevarria, Xabier; Ortiz-de-Solorzano, Carlos

    2010-01-01

    We present an efficient algorithm to compute multidimensional spatially variant convolutions--or inner products--between N-dimensional signals and B-splines--or their derivatives--of any order and arbitrary sizes. The multidimensional B-splines are computed as tensor products of 1-D B-splines, and the input signal is expressed in a B-spline basis. The convolution is then computed by using an adequate combination of integration and scaled finite differences as to have, for moderate and large scale values, a computational complexity that does not depend on the scaling factor. To show in practice the benefit of using our spatially variant convolution approach, we present an adaptive noise filter that adjusts the kernel size to the local image characteristics and a high sensitivity local ridge detector.
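The brute-force form of such a spatially variant inner product is straightforward to state; a sketch with a cubic B-spline kernel whose scale varies per sample (the paper's contribution is precisely avoiding this cost growing with the scale, via integration and scaled finite differences):

```python
import numpy as np

def bspline3(x):
    """Cubic B-spline B3(x), support [-2, 2]."""
    ax = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(ax)
    m1 = ax < 1
    m2 = (ax >= 1) & (ax < 2)
    out[m1] = 2.0 / 3.0 - ax[m1] ** 2 + 0.5 * ax[m1] ** 3
    out[m2] = (2.0 - ax[m2]) ** 3 / 6.0
    return out

def variant_smooth(signal, scales):
    """At each sample i, take the inner product of the signal with a
    cubic B-spline dilated by scales[i] (normalized so constants are
    preserved). Brute force: the cost grows with the kernel scale."""
    signal = np.asarray(signal, dtype=float)
    idx = np.arange(signal.size)
    out = np.empty(signal.size)
    for i, s in enumerate(scales):
        w = bspline3((idx - i) / s)   # dilated kernel centered at i
        out[i] = np.dot(w, signal) / w.sum()
    return out
```

Adapting `scales` to local image characteristics is the idea behind the adaptive noise filter mentioned in the abstract: large kernels in flat regions, small kernels near detail.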

  16. Two dimensional convolute integers for machine vision and image recognition

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This technology generates a family of digital operators for addressing optical images and related two-dimensional data sets. The operators are regression-generated, integer-valued, zero-phase-shifting, convolving, frequency-sensitive, two-dimensional low-pass, high-pass, and band-pass filters that are mathematically equivalent to surface-fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing-value calculators (compensation for dead array element values). These operators exhibit frequency-sensitive, scale-invariant feature-selection properties. Such tasks as boundary/edge enhancement and removal of noise or small pixel disturbances can readily be accomplished. For feature selection, tight band-pass operators are essential. Results from test cases are given.

  18. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
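For concreteness, a rate 1/2 convolutional encoder is only a few lines; this sketch uses the classic (7,5) code with constraint length K = 3, giving a 2^(K-1) = 4-state trellis, and is not the Wyner-Ash code of the abstract:

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate 1/2 convolutional encoder. g1 and g2 are the generator
    polynomials (here the standard (7,5) octal pair); a Viterbi
    decoder for this code has 2**(K-1) trellis states."""
    state = 0
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state      # newest bit at the MSB
        out.append(bin(reg & g1).count("1") % 2)   # parity of tapped bits
        out.append(bin(reg & g2).count("1") % 2)
        state = reg >> 1                  # shift the register
    return out
```

Each input bit produces two output bits; a single 1 followed by zeros yields the code's impulse response 11 10 11. The state-count comparison in the abstract (64 versus 7) is about exactly this trellis size, which grows exponentially with the encoder memory.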

  19. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  1. Predicting polarization signatures for double-detonation and delayed-detonation models of Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Bulla, M.; Sim, S. A.; Kromer, M.; Seitenzahl, I. R.; Fink, M.; Ciaraldi-Schoolmann, F.; Röpke, F. K.; Hillebrandt, W.; Pakmor, R.; Ruiter, A. J.; Taubenberger, S.

    2016-10-01

    Calculations of synthetic spectropolarimetry are one means to test multidimensional explosion models for Type Ia supernovae. In a recent paper, we demonstrated that the violent merger of a 1.1 and 0.9 M⊙ white dwarf binary system is too asymmetric to explain the low polarization levels commonly observed in normal Type Ia supernovae. Here, we present polarization simulations for two alternative scenarios: the sub-Chandrasekhar mass double-detonation and the Chandrasekhar mass delayed-detonation model. Specifically, we study a 2D double-detonation model and a 3D delayed-detonation model, and calculate polarization spectra for multiple observer orientations in both cases. We find modest polarization levels (<1 per cent) for both explosion models. Polarization in the continuum peaks at ˜0.1-0.3 per cent and decreases after maximum light, in excellent agreement with spectropolarimetric data of normal Type Ia supernovae. Higher degrees of polarization are found across individual spectral lines. In particular, the synthetic Si II λ6355 profiles are polarized at levels that match remarkably well the values observed in normal Type Ia supernovae, while the low degrees of polarization predicted across the O I λ7774 region are consistent with the non-detection of this feature in current data. We conclude that our models can reproduce many of the characteristics of both flux and polarization spectra for well-studied Type Ia supernovae, such as SN 2001el and SN 2012fr. However, the two models considered here cannot account for the unusually high level of polarization observed in extreme cases such as SN 2004dt.

  2. Modeling and simulation study of novel Double Gate Ferroelectric Junctionless (DGFJL) transistor

    NASA Astrophysics Data System (ADS)

    Mehta, Hema; Kaur, Harsupreet

    2016-09-01

    In this work we have proposed an analytical model for the Double Gate Ferroelectric Junctionless (DGFJL) transistor, a novel device which incorporates the advantages of both the Junctionless (JL) transistor and the negative capacitance phenomenon. A complete drain current model has been developed by using the Landau-Khalatnikov equation and the parabolic potential approximation to analyze device behavior in different operating regions. It has been demonstrated that the DGFJL transistor acts as a step-up voltage transformer and exhibits subthreshold slope values less than 60 mV/dec. In order to assess the advantages offered by the proposed device, an extensive comparative study has been done with an equivalent Double Gate Junctionless (DGJL) transistor whose gate insulator thickness is the same as the ferroelectric gate stack thickness of the DGFJL transistor. It is shown that incorporation of a ferroelectric layer can overcome the variability issues observed in JL transistors. The device has been studied over a wide range of parameters and bias conditions to comprehensively investigate device design guidelines and to obtain a better insight into the application of the DGFJL as a potential candidate for future technology nodes. The analytical results derived from the model have been verified against simulation results obtained using the ATLAS TCAD simulator, and good agreement has been found.

  3. Simulation of the conformation and dynamics of a double-helical model for DNA.

    PubMed Central

    Huertas, M L; Navarro, S; Lopez Martinez, M C; García de la Torre, J

    1997-01-01

    We propose a partially flexible, double-helical model for describing the conformational and dynamic properties of DNA. In this model, each nucleotide is represented by one element (bead), and the known geometrical features of the double helix are incorporated in the equilibrium conformation. Each bead is connected to a few neighbor beads in both strands by means of stiff springs that maintain the connectivity but still allow for some extent of flexibility and internal motion. We have used Brownian dynamics simulation to sample the conformational space and monitor the overall and internal dynamics of short DNA pieces, with up to 20 basepairs. From Brownian trajectories, we calculate the dimensions of the helix and estimate its persistence length. We obtain the translational diffusion coefficient and various rotational relaxation times, including both overall rotation and internal motion. Although we have not carried out a detailed parameterization of the model, the calculated properties agree rather well with experimental data available for those oligomers. PMID:9414226

  4. Application of a site-binding, electrical, double-layer model to nuclear waste disposal

    SciTech Connect

    Relyea, J.F.; Silva, R.J.

    1981-09-01

    A site-binding, electrical double-layer adsorption model has been applied to adsorption of Cs for both a montmorillonite clay and powdered SiO2. Agreement between experimental and predicted results indicates that Cs+ is adsorbed by a simple cation-exchange mechanism. Further application of a combined equilibrium thermodynamic model and site-binding, electrical double-layer adsorption model has been made to predict the behavior of U(VI) in solutions contacting either the montmorillonite clay or powdered SiO2. Experimentally determined U solution concentrations have been used to select what is felt to be the best available thermodynamic data for U under oxidizing conditions. Given the existing information about the probable U solution species, it was possible to determine that UO2^2+ is most likely adsorbed by cation exchange at pH 5. At higher values (pH 7 and 9), it was shown that UO2(OH)2^0 is probably the most strongly adsorbed U solution species. It was also found that high NaCl solution concentrations at higher pH values lowered U concentrations (either because of enhanced sorption or lowered solubility); however, the mechanism responsible for this behavior has not been determined.

  5. Nuclear mean field and double-folding model of the nucleus-nucleus optical potential

    NASA Astrophysics Data System (ADS)

    Khoa, Dao T.; Phuc, Nguyen Hoang; Loan, Doan Thi; Loc, Bui Minh

    2016-09-01

    Realistic density dependent CDM3Yn versions of the M3Y interaction have been used in an extended Hartree-Fock (HF) calculation of nuclear matter (NM), with the nucleon single-particle potential determined from the total NM energy based on the Hugenholtz-van Hove theorem that gives rise naturally to a rearrangement term (RT). Using the RT of the single-nucleon potential obtained exactly at different NM densities, the density and energy dependence of the CDM3Yn interactions was modified to account properly for both the RT and observed energy dependence of the nucleon optical potential. Based on a local density approximation, the double-folding model of the nucleus-nucleus optical potential has been extended to take into account consistently the rearrangement effect and energy dependence of the nuclear mean-field potential, using the modified CDM3Yn interactions. The extended double-folding model was applied to study the elastic 12C+12C and 16O+12C scattering at the refractive energies, where the Airy structure of the nuclear rainbow has been well established. The RT was found to affect significantly the real nucleus-nucleus optical potential at small internuclear distances, giving a potential strength close to that implied by the realistic optical model description of the Airy oscillation.

  6. A bilayer Double Semion model with symmetry-enriched topological order

    NASA Astrophysics Data System (ADS)

    Ortiz, L.; Martin-Delgado, M. A.

    2016-12-01

    We construct a new model of two-dimensional quantum spin systems that combines intrinsic topological order with a global symmetry called flavour symmetry. It is referred to as the bilayer Doubled Semion (bDS) model and is an instance of symmetry-enriched topological order. A honeycomb bilayer lattice is introduced to combine a Double Semion topological order with a global spin-flavour symmetry to obtain the fractionalization of its quasiparticles. The bDS model exhibits non-trivial braiding self-statistics of excitations, and its dual model constitutes a Symmetry-Protected Topological Order with novel edge states. This dual model gives rise to a bilayer Non-Trivial Paramagnet that is invariant under the flavour symmetry and the well-known spin-flip symmetry.

  7. A DOUBLE-RING ALGORITHM FOR MODELING SOLAR ACTIVE REGIONS: UNIFYING KINEMATIC DYNAMO MODELS AND SURFACE FLUX-TRANSPORT SIMULATIONS

    SciTech Connect

    Munoz-Jaramillo, Andres; Martens, Petrus C. H.; Nandy, Dibyendu; Yeates, Anthony R.

    2010-09-01

    The emergence of tilted bipolar active regions (ARs) and the dispersal of their flux, mediated via processes such as diffusion, differential rotation, and meridional circulation, is believed to be responsible for the reversal of the Sun's polar field. This process (commonly known as the Babcock-Leighton mechanism) is usually modeled as a near-surface, spatially distributed {alpha}-effect in kinematic mean-field dynamo models. However, this formulation leads to a relationship between polar field strength and meridional flow speed which is opposite to that suggested by physical insight and predicted by surface flux-transport simulations. With this in mind, we present an improved double-ring algorithm for modeling the Babcock-Leighton mechanism based on AR eruption, within the framework of an axisymmetric dynamo model. Using surface flux-transport simulations, we first show that an axisymmetric formulation-which is usually invoked in kinematic dynamo models-can reasonably approximate the surface flux dynamics. Finally, we demonstrate that our treatment of the Babcock-Leighton mechanism through double-ring eruption leads to an inverse relationship between polar field strength and meridional flow speed as expected, reconciling the discrepancy between surface flux-transport simulations and kinematic dynamo models.

  8. Preliminary results from a four-working space, double-acting piston, Stirling engine controls model

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Lorenzo, C. F.

    1980-01-01

    A four-working-space, double-acting-piston Stirling engine simulation is being developed for controls studies. The development method is to construct two simulations: one for detailed fluid behavior, and a second model with simple fluid behavior but containing the four-working-space aspects and engine inertias; validate these models separately; and then upgrade the four-working-space model by incorporating the detailed fluid behavior model for all four working spaces. The single-working-space (SWS) model contains the detailed fluid dynamics. It has seven control volumes in which continuity, energy, and pressure-loss effects are simulated. Comparison of the SWS model with experimental data shows reasonable agreement in net power versus speed characteristics for various mean pressure levels in the working space. The four-working-space (FWS) model was built to observe the behavior of the whole engine. The drive dynamics and vehicle inertia effects are simulated. To reduce calculation time, only three volumes are used in each working space and the gas temperatures are fixed (no energy equation). Comparison of the FWS model's predicted power with experimental data shows reasonable agreement. Since all four working spaces are simulated, the unique capabilities of the model are exercised to examine working-fluid supply transients, short-circuit transients, and piston-ring leakage effects.

  9. Double-layer parallelization for hydrological model calibration on HPC systems

    NASA Astrophysics Data System (ADS)

    Zhang, Ang; Li, Tiejian; Si, Yuan; Liu, Ronghua; Shi, Haiyun; Li, Xiang; Li, Jiaye; Wu, Xia

    2016-04-01

    Large-scale problems that demand high precision have remarkably increased the computational time of numerical simulation models. Therefore, the parallelization of models has been widely implemented in recent years. However, computing time remains a major challenge when a large model is calibrated using optimization techniques. To overcome this difficulty, we proposed a double-layer parallel system for hydrological model calibration using high-performance computing (HPC) systems. The lower-layer parallelism is achieved using a hydrological model, the Digital Yellow River Integrated Model, which was parallelized by decomposing river basins. The upper-layer parallelism is achieved by simultaneous hydrological simulations with different parameter combinations in the same generation of the genetic algorithm and is implemented using the job scheduling functions of an HPC system. The proposed system was applied to the upstream of the Qingjian River basin, a sub-basin of the middle Yellow River, to calibrate the model effectively by making full use of the computing resources in the HPC system and to investigate the model's behavior under various parameter combinations. This approach is applicable to most of the existing hydrology models for many applications.
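The upper layer of such a scheme, evaluating every parameter combination of a GA generation as an independent job, can be sketched with Python's multiprocessing (the objective function here is a stand-in, not the Digital Yellow River Integrated Model):

```python
from multiprocessing import Pool

def run_model(params):
    """Stand-in for one hydrological simulation run: returns a
    calibration error for one parameter combination. In the paper's
    setup, each such job is itself parallelized over sub-basins."""
    a, b = params
    return (a - 1.5) ** 2 + (b + 0.5) ** 2

def evaluate_generation(population, workers=4):
    """Score all parameter combinations of one GA generation
    concurrently; on an HPC system these would be scheduled jobs."""
    with Pool(workers) as pool:
        return pool.map(run_model, population)

if __name__ == "__main__":
    generation = [(1.0, 0.0), (1.5, -0.5), (2.0, 1.0)]
    scores = evaluate_generation(generation)   # lowest score = best fit
```

Because individuals within a generation are independent, this layer scales with the population size, on top of whatever speedup the basin-decomposed model itself achieves.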

  10. Single-center model for double photoionization of the H{sub 2} molecule

    SciTech Connect

    Kheifets, A.S.

    2005-02-01

    We present a single-center model of double photoionization (DPI) of the H2 molecule which combines a multiconfiguration expansion of the molecular ground state with the convergent close-coupling description of the two-electron continuum. Because the single-center final-state wave function is only correct in the asymptotic region of large distances, the model cannot predict the magnitude of the DPI cross sections. However, we expect the model to account for the angular correlation in the two-electron continuum and to reproduce correctly the shape of the fully differential DPI cross sections. We test this assumption in kinematics of recent DPI experiments on the randomly oriented and fixed-in-space hydrogen molecule in the isotopic form of D2.

  11. Shell-Model Calculations of Two-Nucleon Transfer Related to Double Beta Decay

    NASA Astrophysics Data System (ADS)

    Brown, Alex

    2013-10-01

    I will discuss theoretical results for two-nucleon transfer cross sections for nuclei in the regions of 48Ca, 76Ge and 136Xe of interest for testing the wave functions used for the nuclear matrix elements in double-beta decay. Various reaction models are used. A simple cluster transfer model gives relative cross sections. Thompson's code Fresco with direct and sequential transfer is used for absolute cross sections. Wave functions are obtained in large-basis proton-neutron coupled model spaces with the code NuShellX with realistic effective Hamiltonians such as those used for the recent results for 136Xe [M. Horoi and B. A. Brown, Phys. Rev. Lett. 110, 222502 (2013)]. I acknowledge support from NSF grant PHY-1068217.

  12. Minimal model for double diffusion and its application to Kivu, Nyos, and Powell Lake

    NASA Astrophysics Data System (ADS)

    Toffolon, Marco; Wüest, Alfred; Sommer, Tobias

    2015-09-01

    Double diffusion originates from the markedly different molecular diffusion rates of heat and salt in water, producing staircase structures under favorable conditions. The phenomenon essentially consists of two processes: molecular diffusion across sharp interfaces and convective transport in the gravitationally unstable layers. In this paper, we propose a model that is based on the one-dimensional description of these two processes only, and—by self-organization—is able to reproduce both the large-scale dynamics and the structure of individual layers, while accounting for different boundary conditions. Two parameters characterize the model, describing the time scale for the formation of unstable water parcels and the optimal spatial resolution. Theoretical relationships allow for the identification of the influence of these parameters on the layer structure and on the mass and heat fluxes. The performance of the model is tested for three different lakes (Powell, Kivu, and Nyos), showing a remarkable agreement with actual microstructure measurements.
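The two processes named in this record — molecular diffusion at very different rates for heat and salt, plus convective mixing of unstable parcels — can be sketched in one dimension. The grid size, diffusivities (heat roughly 100× faster than salt) and the linear equation of state below are illustrative stand-ins, not the paper's calibrated model.

```python
# 1-D sketch: diffuse temperature T and salinity S with very different
# diffusivities, then mix any gravitationally unstable neighbours.
# Index 0 is the top of the water column.

def diffuse(field, kappa, dt=1.0, dx=1.0):
    """One explicit diffusion step with mirror (no-flux) boundaries."""
    f = field[:]
    for i in range(1, len(field) - 1):
        f[i] = field[i] + kappa * dt / dx**2 * (
            field[i + 1] - 2 * field[i] + field[i - 1])
    f[0], f[-1] = f[1], f[-2]
    return f

def density(T, S, alpha=2e-4, beta=8e-4):
    """Linear equation of state (relative units; beta > alpha assumed)."""
    return [-alpha * t + beta * s for t, s in zip(T, S)]

def convective_adjust(T, S):
    """Mix any pair of cells where denser water sits on top."""
    rho = density(T, S)
    for i in range(len(rho) - 1):
        if rho[i] > rho[i + 1]:          # unstable: heavy over light
            T[i] = T[i + 1] = 0.5 * (T[i] + T[i + 1])
            S[i] = S[i + 1] = 0.5 * (S[i] + S[i + 1])
    return T, S

# Cold fresh water above warm salty water: the stable, diffusive regime
# of lakes like Powell, where heat leaks upward faster than salt.
n = 20
T = [0.0] * (n // 2) + [1.0] * (n // 2)
S = [0.0] * (n // 2) + [1.0] * (n // 2)
for _ in range(200):
    T = diffuse(T, kappa=0.14)    # heat diffuses ~100x faster than salt
    S = diffuse(S, kappa=0.0014)
    T, S = convective_adjust(T, S)
```

After the loop the temperature interface has spread far more than the salinity interface, which is the prerequisite for staircase formation.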

  13. Effects of a random porosity model on double diffusive natural convection in a porous medium enclosure

    SciTech Connect

    Fu, W.S.; Ke, W.W.

    2000-01-01

    A double diffusive natural convection in a rectangular enclosure filled with porous medium is investigated numerically. The distribution of porosity is based upon the random porosity model. The Darcy-Brinkman-Forchheimer model is used and the factors of heat flux, mean porosity and standard deviation are taken into consideration. The SIMPLEC method with iterative processes is adopted to solve the governing equations. The effects of the random porosity model on the distributions of the local Nusselt number are remarkable, and the variations of the local Nusselt number become disordered. The contribution of latent heat transfer to the total heat transfer is larger at high Rayleigh number than at low Rayleigh number, and the variations of the latent heat transfer are likewise disordered.

  14. Emulating the one-dimensional Fermi-Hubbard model by a double chain of qubits

    NASA Astrophysics Data System (ADS)

    Reiner, Jan-Michael; Marthaler, Michael; Braumüller, Jochen; Weides, Martin; Schön, Gerd

    2016-09-01

    The Jordan-Wigner transformation maps a one-dimensional (1D) spin-1/2 system onto a fermionic model without spin degree of freedom. A double chain of quantum bits with XX and ZZ couplings of neighboring qubits along and between the chains, respectively, can be mapped on a spinful 1D Fermi-Hubbard model. The qubit system can thus be used to emulate the quantum properties of this model. We analyze physical implementations of such analog quantum simulators, including one based on transmon qubits, where the ZZ interaction arises due to an inductive coupling and the XX interaction due to a capacitive interaction. We propose protocols to gain confidence in the results of the simulation through measurements of local operators.

  15. Impact of stray charge on interconnect wire via probability model of double-dot system

    NASA Astrophysics Data System (ADS)

    Xiangye, Chen; Li, Cai; Qiang, Zeng; Xinqiao, Wang

    2016-02-01

    The behavior of quantum cellular automata (QCA) under the influence of a stray charge is quantified. A new time-independent switching paradigm, a probability model of the double-dot system, is developed. Compared with previous stray-charge analyses utilizing the ICHA or full-basis calculations, the probability model greatly eases the computational burden. Simulation results illustrate that there is a 186-nm-wide region surrounding a QCA wire in which a stray charge will cause the target cell to switch unsuccessfully. The failure is exhibited by two new states dominating the target cell. Therefore, a bistable saturation model is no longer applicable for stray charge analysis. Project supported by the National Natural Science Foundation of China (No. 61172043) and the Key Program of Shaanxi Provincial Natural Science for Basic Research (No. 2011JZ015).

  16. Frequency analysis of tick quotes on foreign currency markets and the double-threshold agent model

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2006-09-01

    Power spectrum densities for the number of tick quotes per minute (market activity) on three currency markets (USD/JPY, EUR/USD, and JPY/EUR) are analyzed for periods from January 2000 to December 2000. We find peaks on the power spectrum densities at periods of a few minutes. We develop the double-threshold agent model and confirm that the corresponding periodicity can be observed in the activity of this model even when market participants perceive common periodic information that is weaker than their decision-making threshold. The model is studied numerically and investigated theoretically using the mean-field approximation. We propose the hypothesis that the periodicities found on the power spectrum densities arise from the nonlinearity and diversity of market participants.
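The spectral analysis step in this record can be sketched directly: compute the power spectrum of a per-minute tick-count series and locate the peak. The synthetic series below, with an assumed 3-minute periodicity buried in noise, is invented for illustration; a plain DFT periodogram stands in for whatever estimator the paper used.

```python
import cmath, math, random

def periodogram(x):
    """Naive DFT power spectrum |X_k|^2 / N for k = 1 .. N//2
    of a mean-removed series (slow O(N^2), fine for a sketch)."""
    n = len(x)
    mean = sum(x) / n
    xs = [v - mean for v in x]
    out = []
    for k in range(1, n // 2 + 1):
        xk = sum(v * cmath.exp(-2j * math.pi * k * i / n)
                 for i, v in enumerate(xs))
        out.append(abs(xk) ** 2 / n)
    return out

# Synthetic per-minute tick counts with an assumed 3-minute periodicity.
rng = random.Random(0)
n = 240
counts = [10 + 4 * math.cos(2 * math.pi * i / 3) + rng.gauss(0, 1)
          for i in range(n)]
psd = periodogram(counts)
peak_k = 1 + psd.index(max(psd))   # frequency index of the strongest peak
period_minutes = n / peak_k        # peak period recovered from the PSD
```

The strongest spectral line lands at k = n/3, i.e. the injected 3-minute period, which is the kind of few-minute peak the record reports.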

  17. DEVELOPMENT OF ANSYS FINITE ELEMENT MODELS FOR SINGLE SHELL TANK (SST) & DOUBLE SHELL TANK (DST) TANKS

    SciTech Connect

    JULYK, L.J.; MACKEY, T.C.

    2003-06-19

    Summary report of ANSYS finite element models developed for dome load analysis of Hanford 100-series single-shell tanks and double-shell tanks. Document provides user interface for selecting proper tank model and changing of analysis parameters for tank specific analysis. Current dome load restrictions for the Hanford Site underground waste storage tanks are based on existing analyses of record (AOR) that evaluated the tanks for a specific set of design load conditions. However, greater flexibility is required in controlling dome loadings applied to the tanks due to day-to-day operations and waste retrieval activities. This requires the development of an analytical model with sufficient detail to evaluate various dome loading conditions not specifically addressed in the AOR.

  18. A double hit model for the distribution of time to AIDS onset

    NASA Astrophysics Data System (ADS)

    Chillale, Nagaraja Rao

    2013-09-01

    Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection, this is a random variable and is probably the longest one. The probability distribution of incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS. It is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in the estimation of AIDS onset time, and stresses the need for its precise estimation, ii) highlights some of the modelling features of the onset distribution, including the immune failure mechanism, and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.
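For the independent case (a), a "double hit" onset time can be read as the later of two marker times, so its CDF is the product of the marginal CDFs. The sketch below assumes exponential marker times with made-up rates purely for illustration; the article's actual marker distributions are not specified here.

```python
import math

def onset_cdf(t, rate1=0.2, rate2=0.35):
    """Case (a), independent hits: onset at T = max(T1, T2), so
    F(t) = F1(t) * F2(t).  Exponential marginals are an assumption."""
    f1 = 1.0 - math.exp(-rate1 * t)
    f2 = 1.0 - math.exp(-rate2 * t)
    return f1 * f2

# Onset probability by 1, 5 and 10 years under the assumed rates.
probs = [onset_cdf(t) for t in (1.0, 5.0, 10.0)]
```

Dependent markers, case (b), would require a joint distribution (e.g. a copula) in place of the simple product.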

  19. Convolution/superposition using the Monte Carlo method.

    PubMed

    Naqvi, Shahid A; Earl, Matthew A; Shepard, David M

    2003-07-21

    The convolution/superposition calculations for radiotherapy dose distributions are traditionally performed by convolving polyenergetic energy deposition kernels with TERMA (total energy released per unit mass) precomputed in each voxel of the irradiated phantom. We propose an alternative method in which the TERMA calculation is replaced by random sampling of photon energy, direction and interaction point. Then, a direction is randomly sampled from the angular distribution of the monoenergetic kernel corresponding to the photon energy. The kernel ray is propagated across the phantom, and energy is deposited in each voxel traversed. An important advantage of the explicit sampling of energy is that spectral changes with depth are automatically accounted for. No spectral or kernel hardening corrections are needed. Furthermore, the continuous sampling of photon direction allows us to model sharp changes in fluence, such as those due to collimator tongue-and-groove. The use of explicit photon direction also facilitates modelling of situations where a given voxel is traversed by photons from many directions. Extra-focal radiation, for instance, can therefore be modelled accurately. Our method also allows efficient calculation of a multi-segment/multi-beam IMRT plan by sampling of beam angles and field segments according to their relative weights. For instance, an IMRT plan consisting of seven 14 x 12 cm2 beams with a total of 300 field segments can be computed in 15 min on a single CPU, with 2% statistical fluctuations at the isocentre of the patient's CT phantom divided into 4 x 4 x 4 mm3 voxels. The calculation contains all aperture-specific effects, such as tongue and groove, leaf curvature and head scatter. This contrasts with deterministic methods in which each segment is given equal importance, and the time taken scales with the number of segments. Thus, the Monte Carlo superposition provides a simple, accurate and efficient method for complex radiotherapy dose calculations.

  20. Convolution/superposition using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Naqvi, Shahid A.; Earl, Matthew A.; Shepard, David M.

    2003-07-01

    The convolution/superposition calculations for radiotherapy dose distributions are traditionally performed by convolving polyenergetic energy deposition kernels with TERMA (total energy released per unit mass) precomputed in each voxel of the irradiated phantom. We propose an alternative method in which the TERMA calculation is replaced by random sampling of photon energy, direction and interaction point. Then, a direction is randomly sampled from the angular distribution of the monoenergetic kernel corresponding to the photon energy. The kernel ray is propagated across the phantom, and energy is deposited in each voxel traversed. An important advantage of the explicit sampling of energy is that spectral changes with depth are automatically accounted for. No spectral or kernel hardening corrections are needed. Furthermore, the continuous sampling of photon direction allows us to model sharp changes in fluence, such as those due to collimator tongue-and-groove. The use of explicit photon direction also facilitates modelling of situations where a given voxel is traversed by photons from many directions. Extra-focal radiation, for instance, can therefore be modelled accurately. Our method also allows efficient calculation of a multi-segment/multi-beam IMRT plan by sampling of beam angles and field segments according to their relative weights. For instance, an IMRT plan consisting of seven 14 × 12 cm2 beams with a total of 300 field segments can be computed in 15 min on a single CPU, with 2% statistical fluctuations at the isocentre of the patient's CT phantom divided into 4 × 4 × 4 mm3 voxels. The calculation contains all aperture-specific effects, such as tongue and groove, leaf curvature and head scatter. This contrasts with deterministic methods in which each segment is given equal importance, and the time taken scales with the number of segments. Thus, the Monte Carlo superposition provides a simple, accurate and efficient method for complex radiotherapy dose calculations.
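The per-history sampling described in these two records — draw a photon energy from the spectrum, draw a free path to the interaction point, then spread energy along a kernel ray — can be caricatured in one dimension. The two-line spectrum, attenuation coefficients and the decreasing-fraction "kernel" below are all invented; a real implementation works in 3-D with measured kernels.

```python
import random

# Hypothetical two-line spectrum and per-energy attenuation (per mm).
SPECTRUM = [(2.0, 0.7), (6.0, 0.3)]        # (energy in MeV, probability)
MU = {2.0: 0.05, 6.0: 0.03}

def one_history(rng, voxels, voxel_mm=4.0):
    """Sample energy and interaction depth, then deposit energy along a
    toy forward-directed kernel ray.  Spectral change with depth is
    automatic: deep interactions come mostly from the harder line."""
    energy, = rng.choices([e for e, _ in SPECTRUM],
                          [p for _, p in SPECTRUM], k=1)
    depth = rng.expovariate(MU[energy])    # free path to interaction point
    i = int(depth // voxel_mm)
    if i >= len(voxels):
        return                             # photon left the phantom
    remaining = energy
    for j in range(i, min(i + 4, len(voxels))):
        dep = 0.5 * remaining              # toy kernel: halve per voxel
        voxels[j] += dep
        remaining -= dep

rng = random.Random(42)
voxels = [0.0] * 25                        # 25 voxels of 4 mm = 100 mm
for _ in range(20000):
    one_history(rng, voxels)
```

Even this caricature reproduces the qualitative depth-dose fall-off, and no hardening correction is ever applied because each history carries an explicit energy.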

  1. Spectral interpolation - Zero fill or convolution. [image processing

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
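One pass of interpolation by convolution can be sketched as: insert zeros between samples, then convolve with a short kernel; repeating the pass halves the spacing again. The three-tap kernel [0.5, 1, 0.5] below (which yields linear interpolation) is a stand-in for whatever kernel the paper uses.

```python
def upsample2(x, kernel=(0.5, 1.0, 0.5)):
    """Double the sample density of x by zero insertion followed by
    convolution.  With the default kernel this is linear interpolation;
    original samples are preserved at the even output indices."""
    zero_filled = []
    for v in x:
        zero_filled += [v, 0.0]
    zero_filled.pop()                       # keep endpoints aligned
    k = len(kernel) // 2
    out = []
    for i in range(len(zero_filled)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(zero_filled):
                acc += w * zero_filled[idx]
        out.append(acc)
    return out

x = [0.0, 1.0, 4.0, 9.0]
x2 = upsample2(x)           # spacing halved: midpoints are neighbour averages
x4 = upsample2(x2)          # repeat the convolution for quarter spacing
```

Each pass touches only a short kernel's worth of neighbours per output point, which is why the memory and operation counts compare favorably with transform-domain zero fill.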

  2. Flow and particle deposition patterns in a realistic human double bifurcation airway model.

    PubMed

    Choi, L T; Tu, J Y; Li, H F; Thien, F

    2007-02-01

    Velocity profiles, local deposition efficiencies (DE), and deposition patterns of aerosol particles in the first three generations (i.e., double bifurcations) of an airway model have been simulated numerically, in which the airway model was constructed from computed tomography (CT) scan data of real human tracheobronchial airways. Three steady inhalation conditions, 15, 30, and 60 L/min, were simulated, and a range of micrometer particle sizes (1-20 μm diameter) were injected into the model. Results were then compared with experimental and other numerical results which had employed either similar model geometry or test conditions. The effects of inhalation conditions on velocity profiles and particle deposition were studied. The data indicated that the local deposition efficiencies in the first bifurcation increased with a rise in the Stokes number (St) within the St range from 0.0004 to 0.7. Within the same St range, DE in the second bifurcations (both left and right) dropped dramatically after St increased to 0.17. Also, the second bifurcation on the right side (B2.1, closer to the first bifurcation than the left side, B2.2) was found to show a much higher (almost double) DE than the left side. This may be due to the fact that the left main bronchus is longer and has greater angulation than the right main bronchus. Generally, the present simulation using a computational fluid dynamics (CFD) technique obtained results concurrent with other works, with subtle differences. However, due to the omission of the larynx in the model, which is known to significantly modify airflow and hence particle deposition, the present model may only serve as a "stepping stone" toward simulating and analyzing dose-response or inhalation risk assessment visually for clinical researchers.

  3. Modeling and interpretation of Q logs in carbonate rock using a double porosity model and well logs

    NASA Astrophysics Data System (ADS)

    Parra, Jorge O.; Hackert, Chris L.

    2006-03-01

    Attenuation data extracted from full waveform sonic logs is sensitive to vuggy and matrix porosities in a carbonate aquifer. This is consistent with the synthetic attenuation (1/Q) as a function of depth at the borehole-sonic source-peak frequency of 10 kHz. We use velocity and densities versus porosity relationships based on core and well log data to determine the matrix, secondary, and effective bulk moduli. The attenuation model requires the bulk modulus of the primary and secondary porosities. We use a double porosity model that allows us to investigate attenuation at the mesoscopic scale. Thus, the secondary and primary porosities in the aquifer should respond with different changes in fluid pressure. The results show a high permeability region with a Q that varies from 25 to 50 and correlates with the stiffer part of the carbonate formation. This pore structure permits water to flow between the interconnected vugs and the matrix. In this region the double porosity model predicts a decrease in the attenuation at lower frequencies that is associated with fluid flowing from the more compliant high-pressure regions (interconnected vug space) to the relatively stiff, low-pressure regions (matrix). The chalky limestone with a low Q of 17 is formed by a muddy porous matrix with soft pores. This low permeability region correlates with the low matrix bulk modulus. A low Q of 18 characterizes the soft sandy carbonate rock above the vuggy carbonate. This paper demonstrates the use of attenuation logs for discriminating between lithology and provides information on the pore structure when integrated with cores and other well logs. In addition, the paper demonstrates the practical application of a new double porosity model to interpret the attenuation at sonic frequencies by achieving a good match between measured and modeled attenuation.

  4. Double ITCZ in Coupled Ocean-Atmosphere Models: From CMIP3 to CMIP5

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxiao; Liu, Hailong; Zhang, Minghua

    2015-10-01

    Recent progress in reducing the double Intertropical Convergence Zone bias in coupled climate models is examined based on multimodel ensembles of historical climate simulations from Phase 3 and Phase 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5). Biases common to CMIP3 and CMIP5 models include spurious precipitation maximum in the southeastern Pacific, warmer sea surface temperature (SST), weaker easterly, and stronger meridional wind divergences away from the equator relative to observations. It is found that there is virtually no improvement in all these measures from the CMIP3 ensemble to the CMIP5 ensemble models. The five best models in the two ensembles as measured by the spatial correlations are also assessed. No progress can be identified in the subensembles of the five best models from CMIP3 to CMIP5 even though more models participated in CMIP5; the biases of excessive precipitation and overestimated SST in southeastern Pacific are even worse in the CMIP5 models.

  5. Dynamic Flow Modeling Using Double POD and ANN-ARX System Identification

    NASA Astrophysics Data System (ADS)

    Siegel, Stefan; Seidel, Jürgen; Cohen, Kelly; Aradag, Selin; McLaughlin, Thomas

    2007-11-01

    Double Proper Orthogonal Decomposition (DPOD), a modification of conventional POD, is a powerful tool for modeling of transient flow field spatial features, in particular, a 2D cylinder wake at a Reynolds number of 100. To develop a model for control design, the interaction of DPOD mode amplitudes with open-loop control inputs needs to be captured. Traditionally, Galerkin projection onto the Navier Stokes equations has been used for that purpose. Given the stability problems as well as issues in correctly modeling actuation input, we propose a different approach. We demonstrate that the ARX (Auto Regressive eXternal input) system identification method in connection with an Artificial Neural Network (ANN) nonlinear structure leads to a model that captures the dynamic behavior of the unforced and transient forced open loop data used for model development. Moreover, we also show that the model is valid at different Reynolds numbers, for different open loop forcing parameters, as well as for closed loop flow states with excellent accuracy. Thus, we present with this DPOD-ANN-ARX model a paradigm shift for laminar circular cylinder wake modeling that is proven valid for feedback flow controller development.

  6. Average Temperature Model of Double-Row-Pipe Frozen Soil Wall by Equivalent Trapezoid Method

    NASA Astrophysics Data System (ADS)

    Hu, Xiang-dong

    2010-05-01

    Average temperature is a prerequisite for obtaining the mechanical parameters and bearing capacity of frozen soil, from which the safety of the frozen soil wall can then be evaluated. This paper introduces Bakholdin's analytical solution for the temperature field under double-row-pipe freezing and its correction to account for the actual freezing temperature of soil. On this basis, an analytical model, namely the equivalent trapezoid model, was developed to calculate the average temperature of a frozen soil wall under double-row-pipe freezing. The approach uses Bakholdin's formula and the equivalent trapezoid method to calculate the average temperature of a certain cross-section that represents the condition of the whole frozen soil wall. Furthermore, for the range of freezing-pipe layouts that might be applied in actual construction, this paper compares the average temperatures of the frozen soil wall obtained by the equivalent trapezoid method and by numerical integration of Bakholdin's analytical solution. The results show that the discrepancy is small enough (<1.32%) to be ignored and that the accuracy of the equivalent trapezoid method is sufficient for engineering practice.
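The averaging idea in this record — approximate the cross-sectional temperature profile by straight segments and take its mean — can be sketched with the trapezoidal rule. The sample profile below (a cold plateau between the pipe rows, warming toward both wall faces) is invented for illustration and is not Bakholdin's solution.

```python
def trapezoid_mean(xs, Ts):
    """Average temperature over a sampled cross-section of the wall
    (xs: positions in m, Ts: temperatures in deg C), trapezoidal rule."""
    area = 0.0
    for (x0, t0), (x1, t1) in zip(zip(xs, Ts), zip(xs[1:], Ts[1:])):
        area += 0.5 * (t0 + t1) * (x1 - x0)
    return area / (xs[-1] - xs[0])

# Hypothetical trapezoid-shaped profile across a 1.2 m wall.
avg = trapezoid_mean([0.0, 0.4, 0.8, 1.2], [-4.0, -12.0, -12.0, -4.0])
```

Against a numerically integrated analytical profile, the record reports that this kind of trapezoidal averaging differs by less than 1.32%.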

  7. A dynamic double helical band as a model for cardiac pumping.

    PubMed

    Grosberg, Anna; Gharib, Morteza

    2009-06-01

    We address here, by means of finite-element computational modeling, two features of heart mechanics and, most importantly, their timing relationship: one of them is the ejected volume and the other is the twist of the heart. The cornerstone of our approach is to take the double helical muscle fiber band as the dominant active macrostructure behind the pumping function. We show that this double helical model easily reproduces a physiological maximal ejection fraction of up to 60% without exceeding the limit on local muscle fiber contraction of 15%. Moreover, a physiological ejection fraction can be achieved independently of the excitation pattern. The left ventricular twist is also largely independent of the type of excitation. However, the physiological relationship between the ejection fraction and twist can only be reproduced with Purkinje-type excitation schemes. Our results indicate that the proper timing coordination between twist and ejection dynamics can be reproduced only if the excitation front originates in the septum region near the apex. This shows that the timing of the excitation is directly related to the productive pumping operation of the heart and illustrates the direction for possible bioinspired pump design.

  8. Nonresonant Double Hopf Bifurcation in Toxic Phytoplankton-Zooplankton Model with Delay

    NASA Astrophysics Data System (ADS)

    Yuan, Rui; Jiang, Weihua; Wang, Yong

    This paper investigates a toxic phytoplankton-zooplankton model with Michaelis-Menten type phytoplankton harvesting. The model has rich dynamical behaviors: it undergoes transcritical, saddle-node, fold, Hopf, fold-Hopf and double Hopf bifurcations, and as the parameters pass through certain critical values, the dynamical properties of the system, such as stability, equilibrium points and periodic orbits, change as well. We first study the stability of the equilibria and analyze the critical conditions for the above bifurcations at each equilibrium. In addition, the stability and direction of local Hopf bifurcations, and the complete bifurcation set obtained by calculating the universal unfolding near the double Hopf bifurcation point, are given by normal form theory and the center manifold theorem. We find that the stable coexistent equilibrium point and a stable periodic orbit alternate regularly when the digestion time delay is within some finite range; that is, we derive the pattern for the occurrence and disappearance of a stable periodic orbit. Furthermore, we calculate an approximate expression for the critical bifurcation curve using the digestion time delay and the harvesting rate as parameters, and determine a large range of harvesting rates for which the phytoplankton and zooplankton coexist in the long term.

  9. Mathematical modeling of macrosegregation of iron carbon binary alloy: Role of double diffusive convection

    SciTech Connect

    Singh, A.K.; Basu, B.

    1995-10-01

    During alloy solidification, macrosegregation results from long-range transport of solute under the influence of convective flow and leads to nonuniform quality of the solidified material. The present study is an attempt to understand the role of double diffusive convection, resulting from solutal rejection, in the evolution of macrosegregation in an iron-carbon system. The solidification process of an alloy is governed by conservation of heat, mass, momentum, and species, and is accompanied by the evolution of latent heat and the rejection or incorporation of solute at the solid-liquid interface. Using a continuum formulation, the governing equations were solved using the finite volume method. The numerical model was validated by simulating experiments on an ammonium chloride-water system reported in the literature. The model was further used to study the role of double diffusive convection in the evolution of macrosegregation during solidification of an Fe-1 wt pct C alloy in a rectangular cavity. Simulation of this transient process was carried out until complete solidification, and the results, depicting the influence of the flow field on the thermal and solutal fields and vice versa, are shown at various stages of solidification. Under the given set of parameters, it was found that the thermal buoyancy affects the macrosegregation field globally, whereas the solutal buoyancy has a localized effect.

  10. Model of a double-sided surface plasmon resonance fiber-optic sensor

    NASA Astrophysics Data System (ADS)

    Ciprian, Dalibor; Hlubina, Petr

    2014-12-01

    A model of a surface plasmon resonance fiber-optic sensor with a double-sided metallic layer is presented. Most such fiber-optic sensing configurations are based on a symmetric circular metal layer deposited on a bare fiber core and used for excitation of surface plasmon waves. To deposit a homogeneous layer, the fiber sample has to be continually rotated during the deposition process, so the deposition chamber has to be equipped with an appropriate positioning device. This difficulty can be avoided when the layer is deposited in two steps without rotation during the deposition (double-sided deposition). The technique is simpler, but in this case the layer is not flat and a radial thickness gradient is imposed. Consequently, the sensor becomes sensitive to the polarization of the excitation light beam. A theoretical model is used to explain the polarization properties of such a sensing configuration. The analysis is carried out in the framework of the optics of layered media. Because a multimode optical fiber with large core diameter is assumed, the eccentricity of the outer metal layer boundary imposed by the thickness gradient is low and the contribution of skew rays in the layer is neglected. The effect of the layer thickness gradient on the performance of the sensor is studied using numerical simulations.

  11. Modeling avian detection probabilities as a function of habitat using double-observer point count data

    USGS Publications Warehouse

    Heglund, P.J.; Nichols, J.D.; Hines, J.E.; Sauer, J.; Fallon, J.; Fallon, F.; Field, Rebecca; Warren, Robert J.; Okarma, Henryk; Sievert, Paul R.

    2001-01-01

    Point counts are a controversial sampling method for bird populations because the counts are not censuses, and the proportion of birds missed during counting generally is not estimated. We applied a double-observer approach to estimate detection rates of birds from point counts in Maryland, USA, and test whether detection rates differed between point counts conducted in field habitats as opposed to wooded habitats. We conducted 2 analyses. The first analysis was based on 4 clusters of counts (routes) surveyed by a single pair of observers. A series of models was developed with differing assumptions about sources of variation in detection probabilities and fit using program SURVIV. The most appropriate model was selected using Akaike's Information Criterion. The second analysis was based on 13 routes (7 woods and 6 field routes) surveyed by various observers in which average detection rates were estimated by route and compared using a t-test. In both analyses, little evidence existed for variation in detection probabilities in relation to habitat. Double-observer methods provide a reasonable means of estimating detection probabilities and testing critical assumptions needed for analysis of point counts.
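The estimation logic behind double-observer point counts can be sketched with the independent-observer, capture-recapture style estimator: birds seen by only one observer or by both yield per-observer detection probabilities and an abundance estimate. Note this is a simplified variant for illustration; the record's models (fit in program SURVIV and compared via AIC) are richer, and the counts below are made up.

```python
def double_observer(x1, x2, x12):
    """Independent double-observer estimates.
    x1:  birds detected by observer 1 only
    x2:  birds detected by observer 2 only
    x12: birds detected by both observers"""
    p1 = x12 / (x2 + x12)            # detection probability, observer 1
    p2 = x12 / (x1 + x12)            # detection probability, observer 2
    p_any = 1.0 - (1.0 - p1) * (1.0 - p2)
    n_hat = (x1 + x2 + x12) / p_any  # estimated birds present at the point
    return p1, p2, n_hat

p1, p2, n_hat = double_observer(20, 10, 40)
```

Habitat effects of the kind tested in the record would enter by letting p1 and p2 differ between field and wooded routes and comparing model fits.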

  12. Dynamic Characteristics of Mechanical Ventilation System of Double Lungs with Bi-Level Positive Airway Pressure Model

    PubMed Central

    Shen, Dongkai; Zhang, Qian

    2016-01-01

    In recent studies on the dynamic characteristics of ventilation systems, the human was considered to have only one lung, so the coupling effect of double lungs on the air flow could not be illustrated, even though it is regarded as vital to the life support of patients. In this article, to illustrate the coupling effect of double lungs on the flow dynamics of a mechanical ventilation system, a mathematical model of a mechanical ventilation system, consisting of double lungs and a bi-level positive airway pressure (BIPAP) controlled ventilator, is proposed. To verify the mathematical model, a prototype BIPAP system with a double-lung simulator and a BIPAP ventilator was set up for experimental study. Lastly, the influences of key parameters of the BIPAP system on the dynamic characteristics were studied. The study can be referred to in the development of research on BIPAP ventilation treatment and real respiratory diagnostics. PMID:27660646
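A common minimal form of such a double-lung model treats each lung as a resistance-compliance unit driven by one shared bi-level pressure source, which is where the coupling enters. The sketch below uses that textbook R-C form with illustrative parameter values (unequal R and C between the two lungs); it is not the paper's model or its parameters.

```python
# Two-compartment sketch: a shared BIPAP pressure waveform drives two
# R-C "lungs" (R in cmH2O*s/L, C in L/cmH2O).  Forward-Euler integration.

def bipap_pressure(t, ipap=15.0, epap=5.0, period=4.0, duty=0.4):
    """Square bi-level airway pressure: IPAP for the first 40% of each
    breath cycle, EPAP for the rest (illustrative settings)."""
    return ipap if (t % period) < duty * period else epap

def simulate(R=(8.0, 12.0), C=(0.05, 0.04), t_end=12.0, dt=0.001):
    v = [0.0, 0.0]                  # lung volumes above FRC (L)
    for step in range(int(t_end / dt)):
        p_aw = bipap_pressure(step * dt)
        for i in range(2):
            flow = (p_aw - v[i] / C[i]) / R[i]   # L/s into lung i
            v[i] += flow * dt
    return v

v_end = simulate()   # end-expiratory volumes of the two unequal lungs
```

At the end of an expiratory phase each volume has relaxed toward C x EPAP, so the stiffer lung holds less gas; making the two compartments interact through a common ventilator line is exactly the coupling effect the record studies.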

  13. Single-trial EEG RSVP classification using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William

    2016-05-01

    Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion where physiological data from healthy subjects is combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision have been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.
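The core operation of the CNN described in this record is a learned convolution over multi-channel EEG. The toy below shows just that primitive — a 1-D valid convolution across channels followed by a ReLU — with made-up shapes and weights; it is not the paper's architecture, which stacks many such layers and is trained across subjects.

```python
def conv1d_valid(signal, kernel):
    """signal: [channels][time]; kernel: [channels][width].
    Returns one output feature map of length time - width + 1,
    summing over channels and applying a ReLU."""
    channels, width = len(kernel), len(kernel[0])
    t_out = len(signal[0]) - width + 1
    out = []
    for t in range(t_out):
        acc = sum(signal[c][t + j] * kernel[c][j]
                  for c in range(channels) for j in range(width))
        out.append(max(0.0, acc))          # ReLU nonlinearity
    return out

# Two hypothetical "EEG channels", five samples; one 3-wide kernel.
sig = [[0, 1, 2, 1, 0],
       [1, 0, 1, 0, 1]]
ker = [[0.5, 0.5, 0.5],
       [0.2, 0.2, 0.2]]
feat = conv1d_valid(sig, ker)
```

In a full network many such feature maps are stacked, pooled, and fed to a classifier that outputs target/non-target scores per RSVP trial.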

  14. A three-dimensional statistical mechanical model of folding double-stranded chain molecules

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbing; Chen, Shi-Jie

    2001-05-01

    Based on a graphical representation of intrachain contacts, we have developed a new three-dimensional model for the statistical mechanics of double-stranded chain molecules. The theory has been tested and validated for cubic lattice chain conformations. The statistical mechanical model can be applied to the equilibrium folding thermodynamics of a large class of chain molecules, including protein β-hairpin conformations and RNA secondary structures. The application of a previously developed two-dimensional model to RNA secondary structure folding thermodynamics generally overestimates the breadth of the melting curves [S.-J. Chen and K. A. Dill, Proc. Natl. Acad. Sci. U.S.A. 97, 646 (2000)], suggesting an underestimation of the sharpness of the conformational transitions. In this work, we show that the new three-dimensional model gives much sharper melting curves than the two-dimensional model. We believe that the new three-dimensional model may give much better predictions for the thermodynamic properties of RNA conformational changes than the previous two-dimensional model.
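
    The notion of melting-curve sharpness can be illustrated with a simple two-state (van 't Hoff) model. This is not the authors' graphical-representation theory, only a stand-in showing how a larger effective transition enthalpy sharpens the curve around the midpoint:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def unfolded_fraction(T, dH, Tm):
    """Two-state melting curve: fraction unfolded at temperature T (K),
    for van 't Hoff enthalpy dH (kcal/mol) and melting midpoint Tm (K)."""
    K = math.exp(-(dH / R) * (1.0 / T - 1.0 / Tm))
    return K / (1.0 + K)

# Five degrees above a Tm of 330 K, a larger effective dH puts the
# system further into the unfolded state, i.e. a sharper transition.
broad = unfolded_fraction(335.0, 30.0, 330.0)
sharp = unfolded_fraction(335.0, 60.0, 330.0)
```

    A model that underestimates the effective enthalpy of the transition therefore predicts melting curves that are too broad, which is the deficiency of the two-dimensional model noted above.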

  15. Verilog-A implementation of a double-gate junctionless compact model for DC circuit simulations

    NASA Astrophysics Data System (ADS)

    Alvarado, J.; Flores, P.; Romero, S.; Ávila-Herrera, F.; González, V.; Soto-Cruz, B. S.; Cerdeira, A.

    2016-07-01

    A physically based model of the double-gate junctionless transistor, capable of describing both the accumulation and depletion regions, is implemented in Verilog-A in order to perform DC circuit simulations. An analytical description of the potential difference between the center and the surface of the silicon layer allows the mobile charges to be determined. Furthermore, mobility degradation, series resistance, threshold voltage roll-off, drain saturation voltage, channel shortening and velocity saturation are also considered. To make this model available to the whole community, it is implemented in Ngspice, a free circuit simulator with an ADMS interface for integrating Verilog-A models. Validation of the model implementation is done through 2D numerical simulations of transistors with 1 μm and 40 nm silicon channel lengths, 1 × 10^19 or 5 × 10^18 cm^-3 doping concentrations of the silicon layer, and 10 and 15 nm silicon thicknesses. Good agreement between the numerically simulated behavior and the model implementation is obtained, with only eight model parameters used.

  16. An enhanced lumped element electrical model of a double barrier memristive device

    NASA Astrophysics Data System (ADS)

    Solan, Enver; Dirkmann, Sven; Hansen, Mirko; Schroeder, Dietmar; Kohlstedt, Hermann; Ziegler, Martin; Mussenbrock, Thomas; Ochs, Karlheinz

    2017-05-01

    The massively parallel approach of neuromorphic circuits leads to effective methods for solving complex problems. It has turned out that resistive switching devices with a continuous resistance range are potential candidates for such applications. These devices are memristive systems—nonlinear resistors with memory. They are fabricated at the nanoscale, and hence parameter spread during fabrication may hamper reproducible analyses. This issue makes simulation models of memristive devices worthwhile. Kinetic Monte-Carlo simulations based on a distributed model of the device can be used to understand the underlying physical and chemical phenomena. However, such simulations are very time-consuming and convenient neither for investigations of whole circuits nor for real-time applications, e.g. emulation purposes. Instead, a concentrated (lumped-element) model of the device can be used for both fast simulations and real-time applications. We introduce an enhanced electrical model of a valence change mechanism (VCM) based double barrier memristive device (DBMD) with a continuous resistance range. This device consists of an ultra-thin memristive layer sandwiched between a tunnel barrier and a Schottky contact. The introduced model leads to very fast simulations in standard circuit simulation tools while maintaining physically meaningful parameters. Kinetic Monte-Carlo simulations based on a distributed model and experimental data have been utilized as references to verify the concentrated model.
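
    The abstract does not reproduce the model equations, but the lumped-element idea can be sketched with a generic memristive system (HP-style linear ionic drift, a common textbook stand-in, not the DBMD's physics; all parameter values are illustrative): a single state variable sets the resistance, and a state equation advances it under the applied voltage.

```python
import numpy as np

# Generic memristive system (illustrative parameters, not the DBMD's):
R_on, R_off = 100.0, 16e3   # bounding resistances, ohms
mu, D = 1e-14, 1e-8          # ionic drift mobility, device thickness
dt = 1e-5                    # Euler time step, s

def step(w, v):
    """Advance the normalized state w in [0, 1] by one Euler step
    under applied voltage v; return (new_w, current)."""
    R = R_on * w + R_off * (1.0 - w)       # lumped resistance
    i = v / R
    w = w + dt * mu * R_on / D**2 * i      # linear ionic drift
    return min(max(w, 0.0), 1.0), i        # keep state bounded

# Drive with a 100 Hz, 2 V sine for 10 ms.
w = 0.1
for t in np.arange(0.0, 1e-2, dt):
    w, i = step(w, 2.0 * np.sin(2 * np.pi * 100 * t))
```

    The appeal of such a concentrated model is exactly what the abstract claims: a handful of scalar updates per time step, fast enough for whole-circuit or real-time use, while a distributed kinetic Monte-Carlo model serves as the physical reference.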

  17. A double-layer based model of ion confinement in electron cyclotron resonance ion source

    SciTech Connect

    Mascali, D.; Neri, L.; Celona, L.; Castro, G.; Gammino, S.; Ciavola, G.; Torrisi, G.; Sorbello, G.

    2014-02-15

    The paper proposes a new model of ion confinement in ECRIS, which can be easily generalized to any magnetic configuration characterized by closed magnetic surfaces. Traditionally, ion confinement in B-min configurations is ascribed to a negative potential dip due to superhot electrons adiabatically confined by the magnetostatic field. However, kinetic simulations including RF heating shaped by cavity mode structures indicate that high-energy electrons populate just a thin slab overlapping the ECR layer, while their density drops by more than one order of magnitude outside it. Ions, instead, diffuse across the electron layer because of their high collisionality. This is the proper physical condition to establish a double-layer (DL) configuration, which self-consistently generates a potential barrier; this “barrier” confines the ions inside the plasma core surrounded by the ECR surface. The paper will describe a simplified ion confinement model based on plasma density non-homogeneity and DL formation.

  18. The dynamics of double slab subduction from numerical and semi-analytic models

    NASA Astrophysics Data System (ADS)

    Holt, A.; Royden, L.; Becker, T. W.

    2015-12-01

    Regional interactions between multiple subducting slabs have been proposed to explain enigmatic slab kinematics in a number of subduction zones, a pertinent example being the rapid pre-collisional plate convergence of India and Eurasia. However, dynamically consistent 3-D numerical models of double subduction have yet to be explored, and so the physics of such double slab systems remains poorly understood. Here we build on the comparison of a fully numerical finite element model (CitcomCU) and a time-dependent semi-analytic subduction model (FAST) presented for single subduction systems (Royden et al., 2015 AGU Fall Abstract) to explore how subducting slab kinematics, particularly trench and plate motions, can be affected by the presence of an additional slab, for all of the possible slab dip direction permutations. A second subducting slab gives rise to more complex dynamic pressure and mantle flow fields, and to an additional slab pull force that is transmitted across the subduction zone interface. While the general relationships among plate velocity, trench velocity, asthenospheric pressure drop, and plate coupling modes are similar to those observed for the single slab case, we find that multiple subducting slabs can interact with each other and indeed induce slab kinematics that deviate significantly from those observed for the equivalent single slab models. References: Jagoutz, O., Royden, L. H., Holt, A. F. & Becker, T. W., 2015, Nature Geosci., 8, doi:10.1038/NGEO2418. Moresi, L. N. & Gurnis, M., 1996, Earth Planet. Sci. Lett., 138, 15-28. Royden, L. H. & Husson, L., 2006, Geophys. J. Int., 167, 881-905. Zhong, S., 2006, J. Geophys. Res., 111, doi:10.1029/2005JB003972.

  19. Application of the Convolution Formalism to the Ocean Tide Potential: Results from the Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Desai, S. D.; Yuan, D. -N.

    2006-01-01

    A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
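
    The convolution formalism can be sketched in a few lines: the tide height is modeled as a weighted sum of lagged samples of the tide-generating potential. The toy potential and the response weights below are hypothetical stand-ins, not values from the GRACE analysis:

```python
import numpy as np

# Hourly samples of a toy tide-generating potential: two spectral lines
# near the semidiurnal band (periods ~12.42 h and 12.00 h).
t = np.arange(0, 24 * 30)  # 30 days, hourly
potential = np.cos(2 * np.pi * t / 12.42) + 0.3 * np.cos(2 * np.pi * t / 12.0)

# Convolution formalism: h(t) = sum_k w_k * V(t - k).  A short, smooth
# set of response weights (hypothetical values; a causal stand-in for
# Munk and Cartwright's weights, which also span future values of the
# potential) represents the full response across the band at once.
weights = np.array([0.1, 0.25, 0.3, 0.25, 0.1])
height = np.convolve(potential, weights, mode="valid")
```

    This is the computational advantage claimed above: a handful of weights reproduces the response at every frequency in the band, whereas a harmonic model must sum explicit terms for each tidal frequency and omits the rest.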

  1. A predictive model to inform adaptive management of double-crested cormorants and fisheries in Michigan

    USGS Publications Warehouse

    Tsehaye, Iyob; Jones, Michael L.; Irwin, Brian J.; Fielder, David G.; Breck, James E.; Luukkonen, David R.

    2015-01-01

    The proliferation of double-crested cormorants (DCCOs; Phalacrocorax auritus) in North America has raised concerns over their potential negative impacts on game, cultured and forage fishes, island and terrestrial resources, and other colonial water birds, leading to increased public demands to reduce their abundance. By combining fish surplus production and bird functional feeding response models, we developed a deterministic predictive model representing bird–fish interactions to inform an adaptive management process for the control of DCCOs in multiple colonies in Michigan. Comparisons of model predictions with observations of changes in DCCO numbers under management measures implemented from 2004 to 2012 suggested that our relatively simple model was able to accurately reconstruct past DCCO population dynamics. These comparisons helped discriminate among alternative parameterizations of demographic processes that were poorly known, especially site fidelity. Using sensitivity analysis, we also identified remaining critical uncertainties (mainly in the spatial distributions of fish vs. DCCO feeding areas) that can be used to prioritize future research and monitoring needs. Model forecasts suggested that continuation of existing control efforts would be sufficient to achieve long-term DCCO control targets in Michigan and that DCCO control may be necessary to achieve management goals for some DCCO-impacted fisheries in the state. Finally, our model can be extended by accounting for parametric or ecological uncertainty and including more complex assumptions on DCCO–fish interactions as part of the adaptive management process.
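
    A toy version of such a coupled bird-fish model (all parameters hypothetical, not those fitted for Michigan) combines logistic surplus production with a Holling type II functional feeding response and a proportional annual cull:

```python
def simulate(years, B0, P0, r=0.4, K=1.0e6, c_max=200.0, B_half=1.0e5,
             growth=0.1, cull=0.0):
    """Toy bird-fish dynamics (all parameters hypothetical): logistic
    surplus production for the fish biomass B, a Holling type II
    functional feeding response for per-bird consumption, and an
    optional proportional cull applied to the bird population P."""
    B, P = B0, P0
    for _ in range(years):
        intake = c_max * B / (B_half + B)          # per-bird consumption
        B = max(B + r * B * (1.0 - B / K) - intake * P, 0.0)
        P = P * (1.0 + growth) * (1.0 - cull)      # bird growth, then cull
    return B, P

# Ten-year projection with and without a 20% annual cull.
B_nc, P_nc = simulate(10, 5.0e5, 100.0)
B_c, P_c = simulate(10, 5.0e5, 100.0, cull=0.2)
```

    Running such projections under alternative parameterizations (e.g. of site fidelity or spatial overlap between bird feeding areas and fish) is the essence of the adaptive management loop described above.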

  2. Study of the pure double folding optical model for 100 MeV/u deuteron scattering

    NASA Astrophysics Data System (ADS)

    Howard, Kevin; Patel, Darshana; Garg, Umesh

    2014-09-01

    The centroid energies of the giant monopole resonance (GMR) in nuclei are important because they are directly related to the nuclear incompressibility, an important quantity in the nuclear equation of state. It is necessary to examine the properties of the GMR in nuclei far from stability using advanced experimental techniques. The optical model for deuteron scattering is important from the point of view of performing these studies in inverse kinematics. Most studies of deuteron optical potentials have been done at lower energies, using the phenomenological optical model; however, this model has been shown to overestimate the cross sections for low-lying discrete states. Recent developments in theory allow the real and imaginary volume terms of the optical potential to be calculated using a double folding model with the help of the computer code dfpd5. For the first time, these calculations are used to model the elastic and inelastic angular distributions in the 28Si, 58Ni, and 116Sn nuclei. The experiment was performed at the Research Center for Nuclear Physics, Osaka University, Japan, using a 100 MeV/u deuteron beam. Results of the analysis will be presented.
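
    The folding idea itself is just convolution: the optical potential is obtained by folding an effective nucleon-nucleon interaction with the nuclear densities ("double" because both projectile and target densities are folded in). A one-dimensional sketch with illustrative Gaussian shapes and strengths (not the dfpd5 inputs):

```python
import numpy as np

x = np.linspace(-10, 10, 401)            # coordinate grid, fm
dx = x[1] - x[0]
gauss = lambda r, s: np.exp(-r**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

rho_p = 2 * gauss(x, 1.0)                # deuteron-like projectile density
rho_t = 28 * gauss(x, 3.0)               # target density (28 nucleons)
v_nn = -30.0 * gauss(x, 0.8)             # attractive effective NN interaction

# Double folding = two successive convolutions; the dx factors turn the
# discrete sums into approximations of the folding integrals.
V = np.convolve(np.convolve(rho_p, v_nn, mode="same") * dx,
                rho_t, mode="same") * dx
```

    The result is an attractive potential well whose depth and range follow from the densities and the interaction, rather than from phenomenological parameters fitted to elastic scattering data.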

  3. Double-stranded DNA organization in bacteriophage heads: an alternative toroid-based model.

    PubMed Central

    Hud, N V

    1995-01-01

    Studies of the organization of double-stranded DNA within bacteriophage heads during the past four decades have produced a wealth of data. However, despite the presentation of numerous models, the true organization of DNA within phage heads remains unresolved. The observations of toroidal DNA structures in electron micrographs of phage lysates have long been cited as support for the organization of DNA in a spool-like fashion. This particular model, like all other models, has not been found to be consistent with all available data. Recently we proposed that DNA within toroidal condensates produced in vitro is organized in a manner significantly different from that suggested by the spool model. This new toroid model has allowed the development of an alternative model for DNA organization within bacteriophage heads that is consistent with a wide range of biophysical data. Here we propose that bacteriophage DNA is packaged in a toroid that is folded into a highly compact structure. PMID:8534805

  4. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, these estimates can then be used as preliminary inputs to various other numerical techniques that further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" introduced by Atherton and Kerbyson, who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE, as well as to the Cramér-Rao Lower Bound, in ideal images and in both real and synthetic digital images.
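
    Of the estimators compared above, the Delogne-Kåsa estimator has a simple closed form: a linear least-squares fit to the circle equation. A minimal sketch:

```python
import numpy as np

def kasa_fit(x, y):
    """Delogne-Kasa circle fit: linear least squares on
    x^2 + y^2 = 2*a*x + 2*b*y + c, giving center (a, b) and
    radius sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# Noiseless points on a circle of radius 3 centred at (1, -2).
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
cx, cy, r = kasa_fit(1 + 3 * np.cos(theta), -2 + 3 * np.sin(theta))
```

    The DKE is fast but biased for noisy, partial arcs, which is why the paper treats it alongside the MLE and benchmarks both against the Cramér-Rao Lower Bound.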

  5. Hardy's inequalities for the twisted convolution with Laguerre functions.

    PubMed

    Xiao, Jinsen; He, Jianxun

    2017-01-01

    In this article, two types of Hardy's inequalities for the twisted convolution with Laguerre functions are studied. The proofs are mainly based on an estimate for the Heisenberg left-invariant vectors of the special Hermite functions deduced by the Heisenberg group approach.

  6. Die and telescoping punch form convolutions in thin diaphragm

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.

  7. An Interactive Graphics Program for Assistance in Learning Convolution.

    ERIC Educational Resources Information Center

    Frederick, Dean K.; Waag, Gary L.

    1980-01-01

    A program has been written for the interactive computer graphics facility at Rensselaer Polytechnic Institute that is designed to assist the user in learning the mathematical technique of convolving two functions. Because convolution can be represented graphically by a sequence of steps involving folding, shifting, multiplying, and integrating, it…
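
    The fold-shift-multiply-integrate sequence such a program animates can be written out directly; the sketch below reproduces discrete convolution step by step and agrees with a library routine:

```python
import numpy as np

def convolve_by_steps(f, g, dt):
    """Graphical convolution: fold g, shift it across f, multiply the
    overlapping samples, and integrate (sum times dt) at each shift."""
    g_folded = g[::-1]                            # fold
    n = len(f) + len(g) - 1
    out = np.zeros(n)
    # Zero-pad f so the folded g can slide fully past both ends.
    fp = np.concatenate([np.zeros(len(g) - 1), f, np.zeros(len(g) - 1)])
    for k in range(n):                            # shift
        window = fp[k:k + len(g)]
        out[k] = np.sum(window * g_folded) * dt   # multiply and integrate
    return out

f = np.ones(5)                       # unit pulse
g = np.ones(5)
tri = convolve_by_steps(f, g, dt=1.0)  # two pulses convolve to a triangle
```

    Convolving two equal rectangular pulses yields the familiar triangle, the standard first example in exactly this kind of teaching tool.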

  8. Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification

    NASA Astrophysics Data System (ADS)

    Salamon, Justin; Bello, Juan Pablo

    2017-03-01

    The ability of deep convolutional neural networks (CNN) to learn discriminative spectro-temporal patterns makes them well suited to environmental sound classification. However, the relative scarcity of labeled data has impeded the exploitation of this family of high-capacity models. This study has two primary contributions: first, we propose a deep convolutional neural network architecture for environmental sound classification. Second, we propose the use of audio data augmentation for overcoming the problem of data scarcity and explore the influence of different augmentations on the performance of the proposed CNN architecture. Combined with data augmentation, the proposed model produces state-of-the-art results for environmental sound classification. We show that the improved performance stems from the combination of a deep, high-capacity model and an augmented training set: this combination outperforms both the proposed CNN without augmentation and a "shallow" dictionary learning model with augmentation. Finally, we examine the influence of each augmentation on the model's classification accuracy for each class, and observe that the accuracy for each class is influenced differently by each augmentation, suggesting that the performance of the model could be improved further by applying class-conditional data augmentation.
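
    The abstract does not enumerate the specific augmentations used; the sketch below applies three generic waveform-level deformations (random gain, additive noise, circular time shift — illustrative choices only) to show how a scarce labeled set is enlarged:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(clip, sr):
    """Return a randomly perturbed copy of a mono waveform:
    random gain (+/- 6 dB), low-level additive noise, and a circular
    time shift of up to 100 ms.  Labels are unchanged by design."""
    gain = 10 ** (rng.uniform(-6, 6) / 20)
    noise = rng.standard_normal(len(clip)) * 0.005
    shift = int(rng.integers(0, sr // 10))
    return np.roll(gain * clip + noise, shift)

sr = 16000
clip = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)   # 1 s test tone
batch = np.stack([augment(clip, sr) for _ in range(4)])
```

    Each call yields a distinct training example with the same class label, which is the mechanism by which augmentation counteracts the data scarcity discussed above; the paper's further point is that which deformations help depends on the class.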

  9. Double Point Source W-phase Inversion: Real-time Implementation and Automated Model Selection

    NASA Astrophysics Data System (ADS)

    Nealy, J. L.; Hayes, G. P.

    2015-12-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever-evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquakes. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match previously published analyses of the same events.
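
    The AIC comparison can be sketched directly: for a least-squares fit under a Gaussian misfit assumption, AIC = n·ln(RSS/n) + 2k, and the double-source model is preferred only when its misfit reduction outweighs its extra parameters. The misfit numbers and parameter counts below are hypothetical:

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit with
    residual sum of squares rss, n data points, k free parameters."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical misfits: the double-source model fits better but costs
# more parameters; AIC asks whether the improvement justifies them.
n = 500
aic_single = aic(rss=40.0, n=n, k=10)   # one point source
aic_double = aic(rss=30.0, n=n, k=20)   # two point sources
prefer_double = aic_double < aic_single
```

    Because the penalty term 2k is explicit, the test guards against always selecting the more flexible double-source model, which by construction can never fit worse.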

  12. Neutrinoless Double Beta Decay and Lepton Flavour Violation in Broken μ - τ Symmetric Neutrino Mass Models

    NASA Astrophysics Data System (ADS)

    Borgohain, Happy; Das, Mrinal Kumar

    2017-09-01

    We have studied neutrinoless double beta decay and charged lepton flavour violation in broken μ-τ symmetric neutrino masses in a generic left-right symmetric model (LRSM). The leading order μ-τ symmetric mass matrix originates from the type I (II) seesaw mechanism, whereas the perturbations to μ-τ symmetry needed to generate a non-zero reactor mixing angle θ13, as required by the latest neutrino oscillation data, originate from the type II (I) seesaw mechanism. In our work, we considered four different realizations of μ-τ symmetry, viz. Tribimaximal Mixing (TBM), Bimaximal Mixing (BM), Hexagonal Mixing (HM) and Golden Ratio Mixing (GRM). We then studied the new physics contributions to neutrinoless double beta decay (NDBD), ignoring the left-right gauge boson mixing and the heavy-light neutrino mixing within the framework of the LRSM. We considered the masses of the gauge bosons and scalars to be around the TeV scale, studied the effects of the new physics contributions on the effective mass and the NDBD half-life, and compared them with the current experimental limit imposed by KamLAND-Zen. We further extended our analysis by correlating the lepton flavour violation of the decay processes (μ → 3e) and (μ → eγ) with the lightest neutrino mass and the atmospheric mixing angle θ23, respectively.

  13. Quantitative cellular uptake of double fluorescent core-shelled model submicronic particles

    NASA Astrophysics Data System (ADS)

    Leclerc, Lara; Boudard, Delphine; Pourchez, Jérémie; Forest, Valérie; Marmuse, Laurence; Louis, Cédric; Bin, Valérie; Palle, Sabine; Grosseau, Philippe; Bernache-Assollant, Didier; Cottier, Michèle

    2012-11-01

    The relationship between particles' physicochemical parameters, their uptake by cells and their degree of biological toxicity represents a crucial issue, especially for the development of new technologies such as the fabrication of micro- and nanoparticles in the promising field of drug delivery systems. This work was aimed at developing a proof-of-concept for a novel model of double-fluorescence submicronic particles that could be spotted inside phagolysosomes. Fluorescein isothiocyanate (FITC) particles were synthesized and then conjugated with a fluorescent pHrodo™ probe, whose red fluorescence increases in acidic conditions such as within lysosomes. After validation in acellular conditions by spectral analysis with confocal microscopy and dynamic light scattering, quantification of phagocytosis was conducted on a macrophage cell line in vitro. The biological impact of pHrodo functionalization (cytotoxicity, inflammatory response, and oxidative stress) was also investigated. The results validate the proof-of-concept of double fluorescent particles (FITC + pHrodo), allowing detection of entirely engulfed pHrodo particles (green and red labeling). Moreover, incorporation of pHrodo had no major effects on cytotoxicity compared to particles without pHrodo, making them a powerful tool for micro- and nanotechnologies.

  15. Use of two-dimensional transmission photoelastic models to study stresses in double-lap bolted joints

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Liu, D. H.

    1981-01-01

    The stress distribution in two hole connectors in a double lap joint configuration was studied. The following steps are described: (1) fabrication of photoelastic models of double lap double hole joints designed to determine the stresses in the inner lap; (2) assessment of the effects of joint geometry on the stresses in the inner lap; and (3) quantification of differences in the stresses near the two holes. The two holes were on the centerline of the joint and the joints were loaded in tension, parallel to the centerline. Acrylic slip fit pins through the holes served as fasteners. Two dimensional transmission photoelastic models were fabricated by using transparent acrylic outer laps and a photoelastic model material for the inner laps. It is concluded that the photoelastic fringe patterns which are visible when the models are loaded are due almost entirely to stresses in the inner lap.

  16. Modeling of Single Event Transients With Dual Double-Exponential Current Sources: Implications for Logic Cell Characterization

    NASA Astrophysics Data System (ADS)

    Black, Dolores A.; Robinson, William H.; Wilcox, Ian Z.; Limbrick, Daniel B.; Black, Jeffrey D.

    2015-08-01

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. An accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. A small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. The parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
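
    A double-exponential current source is the difference of two exponentials with separate rise and fall time constants; the dual-source model places two such pulses in parallel. The amplitudes and time constants below are illustrative, not extracted characterization parameters:

```python
import numpy as np

def double_exp(t, I0, tau_r, tau_f):
    """Single double-exponential pulse: rises on tau_r, decays on tau_f
    (tau_f > tau_r), starting from zero at t = 0."""
    return I0 * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

t = np.linspace(0.0, 2e-9, 2001)   # 2 ns window

# Dual-source model (illustrative parameters): a fast, high-amplitude
# prompt component in parallel with a slower, lower-amplitude tail.
i_set = (double_exp(t, 1.2e-3, 5e-12, 50e-12)
         + double_exp(t, 0.2e-3, 50e-12, 500e-12))

# Collected charge is the integral of the injected current.
charge = float(np.sum(i_set) * (t[1] - t[0]))
```

    The parallel tail component is what a single double-exponential source cannot capture: it lets the waveform match both the prompt peak and the slow restoring-current-limited recovery for a given collected charge.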

  17. Text-Attentional Convolutional Neural Network for Scene Text Detection.

    PubMed

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (CE-MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.
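    The multi-task formulation described above amounts to optimizing a weighted sum of the main text/non-text loss and the auxiliary region-mask and character-label losses. A minimal numeric sketch, where the weights and loss values are invented purely for illustration:

```python
def multitask_loss(loss_main, loss_mask, loss_char, w_mask=0.1, w_char=0.1):
    """Combine the main text/non-text loss with auxiliary region-mask and
    character-label losses. The weights are hypothetical, not from the paper."""
    return loss_main + w_mask * loss_mask + w_char * loss_char

# Auxiliary terms add w_mask*2.0 + w_char*1.5 = 0.35 on top of the 0.7 main loss.
total = multitask_loss(loss_main=0.7, loss_mask=2.0, loss_char=1.5)
```

    During training, gradients from the auxiliary terms shape the shared low-level features, which is how the low-level supervision facilitates the main classification task.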

  18. Text-Attentional Convolutional Neural Networks for Scene Text Detection.

    PubMed

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-03-28

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this work, we present a new system for scene text detection by proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called Contrast-Enhancement Maximally Stable Extremal Regions (CE-MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 dataset, with an F-measure of 0.82, improving the state-of-the-art results substantially.

  19. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.

    PubMed

    Pereira, Sergio; Pinto, Adriano; Alves, Victor; Silva, Carlos A

    2016-05-01

    Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage to improve the quality of life of oncological patients. Magnetic resonance imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which, though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated in the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in the Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. It also obtained the overall first position on the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining second place, with Dice Similarity Coefficients of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.
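    The small-kernel reasoning above can be checked with simple arithmetic: stacking two 3×3 convolutions (stride 1) covers the same 5×5 receptive field as a single 5×5 convolution, but with fewer weights per filter path, which is what permits a deeper architecture without inflating the parameter count. A minimal sketch:

```python
def receptive_field(num_layers, kernel=3):
    """Receptive field (side length) of num_layers stacked stride-1 convolutions."""
    return 1 + num_layers * (kernel - 1)

def weights_per_filter(num_layers, kernel=3):
    """Weights along a single-channel filter path through the stacked convolutions."""
    return num_layers * kernel * kernel

# Two 3x3 layers see as far as one 5x5 layer, with fewer weights (18 vs. 25):
assert receptive_field(2, kernel=3) == receptive_field(1, kernel=5)
assert weights_per_filter(2, kernel=3) < weights_per_filter(1, kernel=5)
```

    The stacked version also inserts an extra nonlinearity between the two layers, a further benefit the parameter count alone does not show.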

  20. Brain Tumor Segmentation using Convolutional Neural Networks in MRI Images.

    PubMed

    Pereira, Sergio; Pinto, Adriano; Alves, Victor; Silva, Carlos A

    2016-03-04

    Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage to improve the quality of life of oncological patients. Magnetic Resonance Imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which, though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated in the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in the Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. It also obtained the overall first position on the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining second place, with Dice Similarity Coefficients of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.
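    Intensity normalization matters in MRI because raw intensities are not comparable across scanners or acquisitions. As a stand-in illustration only, a common simple choice is zero-mean, unit-variance scaling of each volume's intensities; the paper's actual normalization scheme may differ.

```python
def zscore_normalize(values):
    """Zero-mean, unit-variance intensity normalization: an illustrative
    pre-processing sketch, not necessarily the paper's exact scheme."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0  # guard against division by zero for constant inputs
    return [(v - mean) / std for v in values]

normalized = zscore_normalize([10.0, 20.0, 30.0, 40.0])
```

    After this step, patches drawn from different scans feed the CNN on a comparable intensity scale, which is what makes the learned filters transfer across subjects.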