Science.gov

Sample records for double convolution model

  1. A model of traffic signs recognition with convolutional neural network

    NASA Astrophysics Data System (ADS)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors are challenging for automated recognition algorithms for traffic signs. Deep learning has recently provided a new way to solve this kind of problem. A deep network can automatically learn features from a large number of data samples and obtain excellent recognition performance. We therefore approach the task of traffic sign recognition as a general vision problem, with few assumptions specific to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features for recognizing traffic sign images. The proposed model includes an input layer, three convolutional layers, three subsampling layers, a fully-connected layer, and an output layer. To validate the proposed model, experiments were conducted on the public dataset of the China competition on fuzzy image processing. Experimental results show that the proposed model produces a recognition accuracy of 99.01% on the training dataset and a score of 92% in the preliminary contest, placing within the top four.
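
    The alternating "convolution then subsampling" structure the abstract describes can be sketched in a few lines of pure Python; the image and kernel values below are illustrative stand-ins, not the model's trained weights.

```python
# One convolution + subsampling stage, as in the alternating CNN layers
# described above. The kernel is a crude hand-picked edge detector, purely
# for illustration; a real CNN learns its kernels from data.

def conv2d_valid(image, kernel):
    """2-D 'valid' convolution (really cross-correlation, as in CNNs)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + u][j + v] * kernel[u][v]
                    for u in range(kh) for v in range(kw))
            row.append(s)
        out.append(row)
    return out

def maxpool2x2(fmap):
    """2x2 max-pooling, the usual form of a subsampling layer."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 1]]                    # responds to a vertical edge
fmap = conv2d_valid(image, edge_kernel)    # 4x3 feature map
pooled = maxpool2x2(fmap)                  # 2x1 map after subsampling
```

Stacking three such stages, then a fully-connected layer and an output layer, gives the architecture the abstract lists.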

  2. Surrogacy theory and models of convoluted organic systems.

    PubMed

    Konopka, Andrzej K

    2007-03-01

    The theory of surrogacy is briefly outlined as one of the conceptual foundations of systems biology that has been developed for the last 30 years in the context of Hertz-Rosen modeling relationship. Conceptual foundations of modeling convoluted (biologically complex) systems are briefly reviewed and discussed in terms of current and future research in systems biology. New as well as older results that pertain to the concepts of modeling relationship, sequence of surrogacies, cascade of representations, complementarity, analogy, metaphor, and epistemic time are presented together with a classification of models in a cascade. Examples of anticipated future applications of surrogacy theory in life sciences are briefly discussed.

  3. A fast double template convolution isocenter evaluation algorithm with subpixel accuracy

    SciTech Connect

    Winey, Brian; Sharp, Greg; Bussiere, Marc

    2011-01-15

    Purpose: To design a fast Winston Lutz (fWL) algorithm for accurate analysis of radiation isocenter from images without edge detection or center of mass calculations. Methods: An algorithm has been developed to implement the Winston Lutz test for mechanical/radiation isocenter agreement using an electronic portal imaging device (EPID). The algorithm detects the position of the radiation shadow of a tungsten ball within a stereotactic cone. The fWL algorithm employs a double convolution to independently find the positions of the sphere and cone centers. Subpixel estimation is used to achieve high accuracy. Results of the algorithm were compared to (1) a human observer with template guidance and (2) an edge detection/center of mass (edCOM) algorithm. Testing was performed with high resolution (0.05 mm/px, film) and low resolution (0.78 mm/px, EPID) image sets. Results: Sphere and cone center relative positions were calculated with the fWL algorithm for high resolution test images with an accuracy of 0.002 ± 0.061 mm, compared to 0.042 ± 0.294 mm for the human observer and 0.003 ± 0.038 mm for the edCOM algorithm. The fWL algorithm required 0.01 s per image compared to 5 s for the edCOM algorithm and 20 s for the human observer. For lower resolution images the fWL algorithm localized the centers with an accuracy of 0.083 ± 0.12 mm compared to 0.03 ± 0.5514 mm for the edCOM algorithm. Conclusions: A fast (subsecond) subpixel algorithm has been developed that can accurately determine the center locations of the ball and cone in Winston Lutz test images without edge detection or COM calculations.
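
    The abstract does not specify the subpixel estimator; a common choice, shown here purely as an assumption, is a three-point parabolic fit around the discrete maximum of a matching (convolution) score.

```python
def subpixel_peak(scores):
    """Refine the integer argmax of a 1-D score profile to subpixel
    precision by fitting a parabola through the peak and its neighbours."""
    i = max(range(len(scores)), key=scores.__getitem__)
    if i == 0 or i == len(scores) - 1:
        return float(i)  # peak at the border: no neighbours to fit
    ym, y0, yp = scores[i - 1], scores[i], scores[i + 1]
    denom = ym - 2 * y0 + yp
    if denom == 0:
        return float(i)
    return i + 0.5 * (ym - yp) / denom

# A matching score whose true peak lies at x = 2.25, sampled on integer pixels:
true_x = 2.25
profile = [-(x - true_x) ** 2 for x in range(5)]
est = subpixel_peak(profile)   # exact for a parabolic peak
```

For a locally parabolic correlation surface this recovers the off-grid peak exactly, which is why such fits routinely beat per-pixel localization.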

  4. Forecasting natural aquifer discharge using a numerical model and convolution.

    PubMed

    Boggs, Kevin G; Johnson, Gary S; Van Kirk, Rob; Fairley, Jerry P

    2014-01-01

    If the nature of groundwater sources and sinks can be determined or predicted, the data can be used to forecast natural aquifer discharge. We present a procedure to forecast the relative contribution of individual aquifer sources and sinks to natural aquifer discharge. Using these individual aquifer recharge components, along with observed aquifer heads for each January, we generate a 1-year, monthly spring discharge forecast for the upcoming year with an existing numerical model and convolution. The results indicate that a forecast of natural aquifer discharge can be developed using only the dominant aquifer recharge sources combined with the effects of aquifer heads (initial conditions) at the time the forecast is generated. We also estimate how our forecast will perform in the future using a jackknife procedure, which indicates that the future performance of the forecast is good (Nash-Sutcliffe efficiency of 0.81). We develop a forecast and demonstrate important features of the procedure by presenting an application to the Eastern Snake Plain Aquifer in southern Idaho.
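
    The convolution step itself is simple: each recharge component is routed through an impulse-response function (IRF) and the responses are superposed. A toy sketch with hypothetical numbers:

```python
# Discrete convolution of a recharge time series with a unit impulse-response
# function (IRF). The IRF weights below are illustrative, not calibrated values.

def convolve_discrete(recharge, irf):
    """discharge[t] = sum_k recharge[t-k] * irf[k]."""
    n = len(recharge)
    return [sum(recharge[t - k] * irf[k]
                for k in range(min(t + 1, len(irf))))
            for t in range(n)]

irf = [0.5, 0.3, 0.2]            # weights sum to 1: mass-conservative routing
recharge = [10.0, 0.0, 0.0, 4.0]  # hypothetical monthly recharge pulses
discharge = convolve_discrete(recharge, irf)
```

Summing one such convolution per recharge source, plus the response to the observed initial heads, yields the forecast described above.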

  5. Designing the optimal convolution kernel for modeling the motion blur

    NASA Astrophysics Data System (ADS)

    Jelinek, Jan

    2011-06-01

    Motion blur acts on an image like a two-dimensional low pass filter, whose spatial frequency characteristic depends both on the trajectory of the relative motion between the scene and the camera and on the velocity vector variation along it. When motion during exposure is permitted, the conventional, static notions of both the image exposure and the scene-to-image mapping become unsuitable and must be revised to accommodate the image formation dynamics. This paper develops an exact image formation model for arbitrary object-camera relative motion with arbitrary velocity profiles. Moreover, for any motion the camera may operate in either continuous or flutter shutter exposure mode. Its result is a convolution kernel, which is optimally designed for both the given motion and the sensor array geometry, and hence permits the most accurate computational undoing of the blurring effects for the given camera, as required in forensic and high security applications. The theory has been implemented and a few examples are shown in the paper.
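
    For the simplest case of constant-velocity linear motion, the blur kernel reduces to a normalized box along the motion path, and a flutter-shutter open/closed pattern simply re-weights it. The sketch below illustrates that special case only, not the paper's general construction.

```python
def motion_blur_kernel(length, shutter=None):
    """1-D blur kernel for constant-velocity motion over `length` pixels.
    `shutter` is an optional open(1)/closed(0) flutter pattern."""
    if shutter is None:
        shutter = [1] * length        # continuous exposure -> box kernel
    total = sum(shutter)
    return [s / total for s in shutter]

def blur(signal, kernel):
    """Convolve, replicating edge pixels at the borders."""
    n, m = len(signal), len(kernel)
    return [sum(kernel[k] * signal[min(max(t - k, 0), n - 1)]
                for k in range(m))
            for t in range(n)]

box = motion_blur_kernel(4)                      # continuous shutter
flutter = motion_blur_kernel(4, [1, 0, 1, 0])    # flutter shutter
```

The flutter pattern preserves high spatial frequencies that the box kernel annihilates, which is what makes the blur better invertible.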

  6. A staggered-grid convolutional differentiator for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions will influence the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative and comparing the windows with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th order staggered-grid CD operator can achieve the same accuracy as a 16th order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagation.
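
    As a rough illustration of the construction: inverse Fourier transforming the band-limited spectrum of the first-derivative operator onto a staggered grid gives closed-form coefficients c_m = (-1)^m / (pi (m + 1/2)^2), which are then tapered. The sketch below uses a Hanning-type taper as a stand-in for the paper's optimized Gaussian windows.

```python
import math

def staggered_cd_coeffs(n_taps):
    """Band-limited staggered-grid differentiator coefficients
    c_m = (-1)^m / (pi * (m + 1/2)^2), with a Hanning-type taper."""
    coeffs = []
    for m in range(n_taps):
        c = (-1) ** m / (math.pi * (m + 0.5) ** 2)
        w = 0.5 * (1 + math.cos(math.pi * (m + 0.5) / n_taps))  # taper window
        coeffs.append(c * w)
    return coeffs

def staggered_derivative(f, x, h, coeffs):
    """f'(x) from samples of f on the staggered grid x +/- (m + 1/2) h."""
    return sum(c * (f(x + (m + 0.5) * h) - f(x - (m + 0.5) * h))
               for m, c in enumerate(coeffs)) / h

coeffs = staggered_cd_coeffs(8)                             # 8th-order CD
approx = staggered_derivative(math.sin, 0.3, 0.1, coeffs)   # d/dx sin = cos
exact = math.cos(0.3)
```

Without the taper the truncated stencil's spectral error is a few percent; the window pushes it down by orders of magnitude at low wavenumbers, which is the point the abstract makes about window selection.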

  7. A convolution model for computing the far-field directivity of a parametric loudspeaker array.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2015-02-01

    This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity with Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model achieves significantly improved agreement with measured directivity at a negligible computational cost.
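
    On a uniform angular grid the proposed model is just a 1-D convolution of two directivity curves. Both curves below are hypothetical shapes, only meant to show the operation:

```python
# Convolving a (hypothetical) product directivity with a (hypothetical)
# Westervelt directivity, sampled on the same uniform angle grid.

def convolve_same(a, b):
    """Linear convolution truncated to the length of `a`, centred."""
    n, m = len(a), len(b)
    full = [sum(a[k] * b[j - k]
                for k in range(max(0, j - m + 1), min(n, j + 1)))
            for j in range(n + m - 1)]
    start = (m - 1) // 2
    return full[start:start + n]

product_dir = [0.0, 0.2, 1.0, 0.2, 0.0]   # illustrative product directivity
westervelt = [0.25, 0.5, 0.25]            # illustrative Westervelt directivity
combined = convolve_same(product_dir, westervelt)
```

The convolution widens the main lobe relative to the product directivity alone, which is qualitatively the correction the paper reports.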

  8. The Brain's Representations May Be Compatible With Convolution-Based Memory Models.

    PubMed

    Kato, Kenichi; Caplan, Jeremy B

    2017-02-13

    Convolution is a mathematical operation used in vector models of memory that have been successful in explaining a broad range of behaviour, including memory for associations between pairs of items, an important primitive of memory upon which a broad range of everyday memory behaviour depends. However, convolution models have trouble with naturalistic item representations, which are highly auto-correlated (as one finds, e.g., with photographs), and this has cast doubt on their neural plausibility. Consequently, modellers working with convolution have used item representations composed of randomly drawn values, but introducing such noise-like representations raises the question of how those random-like values might relate to actual item properties. We propose that a compromise solution to this problem may already exist. It has long been known that the brain tends to reduce auto-correlations in its inputs. For example, centre-surround cells in the retina approximate a Difference-of-Gaussians (DoG) transform. This enhances edges, but also turns natural images into images that are closer to being statistically like white noise. We show that DoG-transformed images, although not optimal compared to noise-like representations, survive the convolution model better than naturalistic images. This is a proof of principle that the pervasive tendency of the brain to reduce auto-correlations may result in representations of information that are already adequately compatible with convolution, supporting the neural plausibility of convolution-based association-memory.
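
    The decorrelating effect of a DoG filter is easy to demonstrate in one dimension; a seeded random walk stands in for a highly auto-correlated naturalistic signal (all parameters below are illustrative, not those used in the paper):

```python
import math
import random

def dog_kernel(sigma1, sigma2, radius):
    """Difference-of-Gaussians kernel: a centre-surround-style filter."""
    def gauss(s):
        k = [math.exp(-x * x / (2 * s * s)) for x in range(-radius, radius + 1)]
        total = sum(k)
        return [v / total for v in k]
    centre, surround = gauss(sigma1), gauss(sigma2)
    return [a - b for a, b in zip(centre, surround)]

def convolve_valid(signal, kernel):
    m = len(kernel)
    return [sum(signal[i + k] * kernel[k] for k in range(m))
            for i in range(len(signal) - m + 1)]

def lag1_autocorr(x):
    mean = sum(x) / len(x)
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(len(x) - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

# A highly auto-correlated 1-D stand-in for a naturalistic image: a random walk.
random.seed(0)
walk = [0.0]
for _ in range(599):
    walk.append(walk[-1] + random.gauss(0.0, 1.0))

whitened = convolve_valid(walk, dog_kernel(1.0, 2.0, 4))
```

The zero-mean DoG kernel removes the slowly varying component, pulling the lag-1 autocorrelation of the output well below that of the input, i.e. closer to white noise.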

  9. Digital Tomosynthesis System Geometry Analysis Using Convolution-Based Blur-and-Add (BAA) Model.

    PubMed

    Wu, Meng; Yoon, Sungwon; Solomon, Edward G; Star-Lack, Josh; Pelc, Norbert; Fahrig, Rebecca

    2016-01-01

    Digital tomosynthesis is a three-dimensional imaging technique with a lower radiation dose than computed tomography (CT). Due to the missing data in tomosynthesis systems, out-of-plane structures in the depth direction cannot be completely removed by the reconstruction algorithms. In this work, we analyzed the impulse responses of common tomosynthesis systems on a plane-to-plane basis and proposed a fast and accurate convolution-based blur-and-add (BAA) model to simulate the backprojected images. In addition, the analysis formalism describing the impulse response of out-of-plane structures can be generalized to both rotating and parallel gantries. We implemented a ray tracing forward projection and backprojection (ray-based model) algorithm and the convolution-based BAA model to simulate the shift-and-add (backproject) tomosynthesis reconstructions. The convolution-based BAA model with proper geometry distortion correction provides reasonably accurate estimates of the tomosynthesis reconstruction. A numerical comparison indicates that the simulated images using the two models differ by less than 6% in terms of the root-mean-squared error. This convolution-based BAA model can be used in efficient system geometry analysis, reconstruction algorithm design, out-of-plane artifacts suppression, and CT-tomosynthesis registration.
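
    A one-dimensional caricature of the blur-and-add idea (with illustrative box-blur kernels standing in for the system's plane-to-plane impulse responses):

```python
# Blur-and-add: the shift-and-add reconstruction of one plane is modeled as
# the in-focus plane plus each out-of-plane slice blurred by a kernel whose
# width grows with distance from the plane of interest.

def box_blur(row, width):
    if width <= 1:
        return list(row)
    half = width // 2
    n = len(row)
    return [sum(row[min(max(i + k, 0), n - 1)] for k in range(-half, half + 1))
            / (2 * half + 1) for i in range(n)]

def blur_and_add(planes, focus_index, blur_per_plane=2):
    """Simulate the reconstruction of one plane from a stack of planes."""
    n = len(planes[0])
    recon = [0.0] * n
    for z, plane in enumerate(planes):
        width = 1 + blur_per_plane * abs(z - focus_index)
        recon = [r + b for r, b in zip(recon, box_blur(plane, width))]
    return recon

planes = [
    [0, 0, 0, 0, 0, 0, 0],   # plane 0: empty
    [0, 0, 0, 9, 0, 0, 0],   # plane 1: in-focus impulse
    [0, 9, 0, 0, 0, 0, 0],   # plane 2: out-of-plane impulse
]
recon = blur_and_add(planes, focus_index=1)
```

The in-focus impulse survives sharply while the out-of-plane impulse is smeared, which is exactly the residual-artifact behaviour the BAA model is built to predict cheaply.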

  10. Transient electromagnetic modeling of the ZR accelerator water convolute and stack.

    SciTech Connect

    Lehr, Jane Marie; Elizondo-Decanini, Juan Manuel; Turner, C. David; Coats, Rebecca Sue; Bohnhoff, William J.; Pointon, Timothy David; Pasik, Michael Francis; Johnson, William Arthur; Savage, Mark Edward

    2005-06-01

    The ZR accelerator is a refurbishment of Sandia National Laboratories Z accelerator [1]. The ZR accelerator components were designed using electrostatic and circuit modeling tools. Transient electromagnetic modeling has played a complementary role in the analysis of ZR components [2]. In this paper we describe a 3D transient electromagnetic analysis of the ZR water convolute and stack using edge-based finite element techniques.

  11. Convolution modeling of two-domain, nonlinear water-level responses in karst aquifers (Invited)

    NASA Astrophysics Data System (ADS)

    Long, A. J.

    2009-12-01

    Convolution modeling is a useful method for simulating the hydraulic response of water levels to sinking streamflow or precipitation infiltration at the macro scale. This approach is particularly useful in karst aquifers, where the complex geometry of the conduit and pore network is not well characterized but can be represented approximately by a parametric impulse-response function (IRF) with very few parameters. For many applications, one-dimensional convolution models can be equally effective as complex two- or three-dimensional models for analyzing water-level responses to recharge. Moreover, convolution models are well suited for identifying and characterizing the distinct domains of quick flow and slow flow (e.g., conduit flow and diffuse flow). Two superposed lognormal functions were used in the IRF to approximate the impulses of the two flow domains. Nonlinear response characteristics of the flow domains were assessed by observing temporal changes in the IRFs. Precipitation infiltration was simulated by filtering the daily rainfall record with a backward-in-time exponential function that weights each day’s rainfall with the rainfall of previous days and thus accounts for the effects of soil moisture on aquifer infiltration. The model was applied to the Edwards aquifer in Texas and the Madison aquifer in South Dakota. Simulations of both aquifers showed similar characteristics, including a separation on the order of years between the quick-flow and slow-flow IRF peaks and temporal changes in the IRF shapes when water levels increased and empty pore spaces became saturated.
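
    The two ingredients described above, a two-lognormal IRF and a backward-in-time exponential rainfall filter, can be sketched as follows (all parameter values are hypothetical, not the calibrated Edwards or Madison aquifer values):

```python
import math

def lognormal_irf(t, mu, sigma):
    """Lognormal impulse-response value at time t > 0."""
    if t <= 0:
        return 0.0
    return (math.exp(-(math.log(t) - mu) ** 2 / (2 * sigma ** 2))
            / (t * sigma * math.sqrt(2 * math.pi)))

def two_domain_irf(t, quick=(1.0, 0.5), slow=(5.0, 0.7), w_quick=0.4):
    """Superposition of quick-flow and slow-flow lognormal impulses."""
    return (w_quick * lognormal_irf(t, *quick)
            + (1 - w_quick) * lognormal_irf(t, *slow))

def antecedent_rainfall(rain, decay=0.1):
    """Backward-in-time exponential weighting of the daily rainfall record,
    a simple proxy for soil-moisture memory before infiltration."""
    out, state = [], 0.0
    for r in rain:
        state = state * math.exp(-decay) + r
        out.append(state)
    return out

infiltration = antecedent_rainfall([10.0, 0.0, 0.0, 5.0])
```

Convolving the filtered infiltration series with the two-domain IRF then produces the simulated water-level response; temporal changes in the fitted IRF parameters are what reveal the nonlinearity discussed above.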

  12. Vehicle detection based on visual saliency and deep sparse convolution hierarchical model

    NASA Astrophysics Data System (ADS)

    Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long

    2016-07-01

    Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted features to train classifiers for candidate verification. Such methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small set of vehicle candidate areas. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, outperforming existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multi-scale features, which have broad application prospects for target recognition in the field of intelligent vehicles.

  13. The Gaussian streaming model and convolution Lagrangian effective field theory

    NASA Astrophysics Data System (ADS)

    Vlah, Zvonimir; Castorina, Emanuele; White, Martin

    2016-12-01

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  14. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high-resolution data acquisition technologies, many approaches and algorithms have been presented to extract accurate and up-to-date 3D building models, a key element of city structures, for numerous urban mapping applications. In this paper, a novel model-based approach is proposed for automatic recognition of building roof models, such as flat, gable, hip, and pyramid-hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach are building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that allow the convolutional neural network to localize the boundary of each individual roof. A CNN is a feed-forward neural network built on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the learning time can be decreased significantly using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach in detecting and extracting the pattern of buildings' roofs automatically, considering the complementary nature of height and RGB information.

  15. Models For Diffracting Aperture Identification : A Comparison Between Ideal And Convolutional Observations

    NASA Astrophysics Data System (ADS)

    Crosta, Giovanni

    1983-09-01

    We consider a number of inverse diffraction problems where different models are compared. Ideal measurements yield Cauchy data, to which corresponds a unique solution. If a convolutional observation map is chosen, uniqueness can no longer be ensured. We also briefly examine a non-linear, non-invertible observation map, which describes a quadratic detector. In all of these cases we discuss the link between aperture identification and optimal control theory, which leads to regularised functional minimisation. This task can be performed by a discrete gradient algorithm, of which we give the flow chart.

  16. Revision of the theory of tracer transport and the convolution model of dynamic contrast enhanced magnetic resonance imaging

    PubMed Central

    Bammer, Roland; Stollberger, Rudolf

    2012-01-01

    Counterexamples are used to motivate the revision of the established theory of tracer transport. Then dynamic contrast enhanced magnetic resonance imaging in particular is conceptualized in terms of a fully distributed convection–diffusion model from which a widely used convolution model is derived using, alternatively, compartmental discretizations or semigroup theory. On this basis, applications and limitations of the convolution model are identified. For instance, it is proved that perfusion and tissue exchange states cannot be identified on the basis of a single convolution equation alone. Yet under certain assumptions, particularly that flux is purely convective at the boundary of a tissue region, physiological parameters such as mean transit time, effective volume fraction, and volumetric flow rate per unit tissue volume can be deduced from the kernel. PMID:17429633

  17. A convolutional code-based sequence analysis model and its application.

    PubMed

    Liu, Xiao; Geng, Xiaoli

    2013-04-16

    A new approach for encoding DNA sequences as input for DNA sequence analysis is proposed using the error correction coding theory of communication engineering. The encoder was designed as a convolutional code model whose generator matrix is designed based on the degeneracy of codons, with a codon treated in the model as an informational unit. The utility of the proposed model was demonstrated through the analysis of twelve prokaryote and nine eukaryote DNA sequences having different GC contents. Distinct differences in code distances were observed near the initiation and termination sites in the open reading frame, which provided a well-regulated characterization of the DNA sequences. Clearly distinguished period-3 features appeared in the coding regions, and the characteristic average code distances of the analyzed sequences were approximately proportional to their GC contents, particularly in the selected prokaryotic organisms, presenting the potential utility as an added taxonomic characteristic for use in studying the relationships of living organisms.
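
    For readers unfamiliar with convolutional codes, the sketch below shows a textbook rate-1/2 binary encoder (generators 7 and 5 in octal). The paper's encoder differs: its generator matrix is designed around codon degeneracy and treats a codon as the informational unit, so this is only the generic code structure the model builds on.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder: each input bit yields two output
    bits, computed as mod-2 taps over a shift register."""
    state = [0] * (len(g1) - 1)          # shift-register memory
    out = []
    for b in bits:
        window = [b] + state
        out.append(sum(x & t for x, t in zip(window, g1)) % 2)
        out.append(sum(x & t for x, t in zip(window, g2)) % 2)
        state = window[:-1]              # shift the register
    return out

encoded = conv_encode([1, 0, 1, 1])
```

Code-distance profiles computed from such encoded streams are the quantity the paper tracks along the DNA sequence.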

  18. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    SciTech Connect

    Jacques, Robert; Wong, John; Taylor, Russell; McNutt, Todd

    2011-01-15

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ≈24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation.

  19. Compressed convolution

    NASA Astrophysics Data System (ADS)

    Elsner, Franz; Wandelt, Benjamin D.

    2014-01-01

    We introduce the concept of compressed convolution, a technique to convolve a given data set with a large number of non-orthogonal kernels. In typical applications our technique drastically reduces the effective number of computations. The new method is applicable to convolutions with symmetric and asymmetric kernels and can be easily controlled for an optimal trade-off between speed and accuracy. It is based on linear compression of the collection of kernels into a small number of coefficients in an optimal eigenbasis. The final result can then be decompressed in constant time for each desired convolved output. The method is fully general and suitable for a wide variety of problems. We give explicit examples in the context of simulation challenges for upcoming multi-kilo-detector cosmic microwave background (CMB) missions. For a CMB experiment with detectors with similar beam properties, we demonstrate that the algorithm can decrease the costs of beam convolution by two to three orders of magnitude with negligible loss of accuracy. Likewise, it has the potential to allow the reduction of disk space required to store signal simulations by a similar amount. Applications in other areas of astrophysics and beyond are optimal searches for a large number of templates in noisy data, e.g. from a parametrized family of gravitational wave templates; or calculating convolutions with highly overcomplete wavelet dictionaries, e.g. in methods designed to uncover sparse signal representations.
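
    The core trick is linearity: if many kernels lie (approximately) in the span of a few basis kernels, the data is convolved with the basis once and each kernel's output is decompressed as a linear combination. The toy below makes the span exact (three kernels, two basis kernels), whereas the paper compresses into an optimal eigenbasis.

```python
def convolve_full(signal, kernel):
    n, m = len(signal), len(kernel)
    return [sum(signal[k] * kernel[j - k]
                for k in range(max(0, j - m + 1), min(n, j + 1)))
            for j in range(n + m - 1)]

basis = [[1.0, 0.0, -1.0], [0.25, 0.5, 0.25]]     # two basis kernels
coeffs = [(1.0, 0.0), (0.0, 1.0), (0.5, 2.0)]     # three kernels' coefficients

signal = [3.0, 1.0, 4.0, 1.0, 5.0]
basis_out = [convolve_full(signal, b) for b in basis]  # expensive step, once

# Decompression: each kernel's output is the same linear combination of
# the precomputed basis outputs, obtained in constant time per kernel.
outputs = [[a * u + b * v for u, v in zip(*basis_out)] for a, b in coeffs]

# Direct check for the third kernel, built explicitly from the basis:
kernel3 = [0.5 * basis[0][i] + 2.0 * basis[1][i] for i in range(3)]
direct = convolve_full(signal, kernel3)
```

With thousands of near-degenerate kernels (e.g. similar detector beams) and a handful of basis elements, the two-to-three order-of-magnitude savings quoted above follow from this substitution.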

  20. Dose convolution filter: Incorporating spatial dose information into tissue response modeling

    SciTech Connect

    Huang Yimei; Joiner, Michael; Zhao Bo; Liao Yixiang; Burmeister, Jay

    2010-03-15

    Purpose: A model is introduced to integrate biological factors such as cell migration and bystander effects into physical dose distributions, and to incorporate spatial dose information in plan analysis and optimization. Methods: The model consists of a dose convolution filter (DCF) with a single parameter σ. Tissue response is calculated by an existing NTCP model with the DCF-applied dose distribution as input. The authors determined σ of rat spinal cord from published data. The authors also simulated the GRID technique, in which an open field is collimated into many pencil beams. Results: After applying the DCF, the NTCP model successfully fits the rat spinal cord data with a predicted value of σ = 2.6 ± 0.5 mm, consistent with the 2 mm migration distances of remyelinating cells. Moreover, it enables the appropriate prediction of a high relative seriality for spinal cord. The model also predicts the sparing of normal tissues by the GRID technique when the size of each pencil beam becomes comparable to σ. Conclusions: The DCF model incorporates spatial dose information and offers an improved way to estimate tissue response from complex radiotherapy dose distributions. It does not alter the prediction of tissue response in large homogenous fields, but successfully predicts increased tissue tolerance in small or highly nonuniform fields.
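
    In one dimension the DCF is simply a normalized Gaussian smoothing of the dose profile with width σ before the NTCP model is evaluated. The GRID-like stripe pattern below is illustrative; the abstract does not give the actual beam geometry.

```python
import math

def dose_convolution_filter(dose, sigma, radius=None):
    """Apply a 1-D Gaussian dose convolution filter of width sigma
    (in voxel units), replicating edge voxels at the borders."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    kernel = [math.exp(-x * x / (2 * sigma * sigma))
              for x in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]          # normalize: preserves mean dose
    n = len(dose)
    return [sum(kernel[x + radius] * dose[min(max(i + x, 0), n - 1)]
                for x in range(-radius, radius + 1))
            for i in range(n)]

# A GRID-like dose pattern: narrow high-dose stripes in a cold background.
grid_dose = [100.0 if i % 6 < 2 else 0.0 for i in range(30)]
effective = dose_convolution_filter(grid_dose, sigma=2.6)
```

When the stripe width is comparable to σ, the filtered peaks fall well below the physical peak dose, which is how the model captures the normal-tissue sparing of GRID fields; in a large uniform field the filter leaves the dose essentially unchanged.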

  1. The Luminous Convolution Model-The light side of dark matter

    NASA Astrophysics Data System (ADS)

    Cisneros, Sophia; Oblath, Noah; Formaggio, Joe; Goedecke, George; Chester, David; Ott, Richard; Ashley, Aaron; Rodriguez, Adrianna

    2014-03-01

    We present a heuristic model for predicting the rotation curves of spiral galaxies. The Luminous Convolution Model (LCM) utilizes Lorentz-type transformations of very small changes in the photons' frequencies from curved space-times to construct a dynamic mass model of galaxies. These frequency changes are derived using the exact solution to the exterior Kerr wave equation, as opposed to a linearized treatment. The LCM Lorentz-type transformations map between the emitter and receiver rotating galactic frames, and then to the associated flat frames in each galaxy where the photons are emitted and received. This treatment necessarily rests upon estimates of the luminous matter in both the emitter and receiver galaxies. The LCM is tested on a sample of 22 randomly chosen galaxies, represented in 33 different data sets. LCM fits are compared to the Navarro, Frenk & White (NFW) Dark Matter Model and to the Modified Newtonian Dynamics (MOND) model when possible. The high degree of sensitivity of the LCM to the initial assumption of the luminous mass-to-light ratio (M/L) of the given galaxy is demonstrated. We demonstrate that the LCM is successful across a wide range of spiral galaxies in predicting the observed rotation curves. This work was made possible through the generous support of the MIT Dr. Martin Luther King Jr. Fellowship program.

  2. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network

    PubMed Central

    Li, Na; Yang, Yongjia

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this is quite difficult for computers. As a hot and difficult problem, object classification has been receiving extensive interest, with broad prospects. Inspired by neuroscience, the deep learning concept was proposed. The convolutional neural network (CNN), as one method of deep learning, can be used to solve classification problems. But most deep learning methods, including CNN, ignore the human visual information processing mechanism used when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we bring forth a new classification method which combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method has apparent advantages in biology. Experimental results demonstrate that our method improves classification efficiency significantly. PMID:27803711

  3. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer

    PubMed Central

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-01-01

    This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and convolutional perfectly matched layer (CPML). In the process to strengthen CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; the conclusion was that this dispersion was positively related to the real stretch and was little affected by grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively amplified the dispersion laws through wave field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for CPML from the established criteria. Based on its physical nature, the CPML method of simply warping space-time was predicted to be a promising approach to achieve ideal absorption, although it was still difficult to entirely remove the dispersion. PMID:27585538

  4. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer

    NASA Astrophysics Data System (ADS)

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-09-01

    This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and a convolutional perfectly matched layer (CPML). While strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; we concluded that this dispersion was positively related to the real stretch and little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively amplified the dispersion laws through wave-field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for the CPML from the established criteria. Given its physical nature, the CPML method of simply warping space-time was predicted to be a promising approach to achieving ideal absorption, although it remained difficult to remove the dispersion entirely.

  5. Embedded Analytical Solutions Improve Accuracy in Convolution-Based Particle Tracking Models using Python

    NASA Astrophysics Data System (ADS)

    Starn, J. J.

    2013-12-01

    Particle tracking often is used to generate particle-age distributions that are used as impulse-response functions in convolution. A typical application is to produce groundwater solute breakthrough curves (BTC) at endpoint receptors such as pumping wells or streams. The commonly used semi-analytical particle-tracking algorithm based on the assumption of linear velocity gradients between opposing cell faces is computationally very fast when used in combination with finite-difference models. However, large gradients near pumping wells in regional-scale groundwater-flow models often are not well represented because of cell-size limitations. This leads to inaccurate velocity fields, especially at weak sinks. Accurate analytical solutions for velocity near a pumping well are available, and various boundary conditions can be imposed using image-well theory. Python can be used to embed these solutions into existing semi-analytical particle-tracking codes, thereby maintaining the integrity and quality-assurance of the existing code. Python (and associated scientific computational packages NumPy, SciPy, and Matplotlib) is an effective tool because of its wide-ranging capabilities. Python text processing allows complex and database-like manipulation of model input and output files, including binary and HDF5 files. High-level functions in the language include ODE solvers to solve first-order particle-location ODEs, Gaussian kernel density estimation to compute smooth particle-age distributions, and convolution. The highly vectorized nature of NumPy arrays and functions minimizes the need for computationally expensive loops. A modular Python code base has been developed to compute BTCs using embedded analytical solutions at pumping wells based on an existing well-documented finite-difference groundwater-flow simulation code (MODFLOW) and a semi-analytical particle-tracking code (MODPATH). The Python code base is tested by comparing BTCs with highly discretized synthetic steady
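
    The convolution step described above can be sketched in a few lines of NumPy. The age distribution and input history below are invented for illustration, not output from MODPATH:

    ```python
    # Minimal sketch: a particle-age distribution acts as the impulse-response
    # function, and the breakthrough curve (BTC) at a receptor is its
    # convolution with the solute input history.
    import numpy as np

    dt = 1.0                                  # years per step
    ages = np.arange(0, 50, dt)               # travel-time axis
    # impulse response: smoothed particle-age histogram (here, a gamma-like shape)
    h = ages * np.exp(-ages / 5.0)
    h /= h.sum() * dt                         # normalize to unit mass

    # input history: solute concentration step beginning at t = 10 yr
    t = np.arange(0, 100, dt)
    c_in = np.where(t >= 10, 1.0, 0.0)

    # breakthrough curve at the receptor
    btc = np.convolve(c_in, h)[: t.size] * dt
    print(btc[-1])   # approaches the input concentration at late time
    ```

    In practice `h` would come from a kernel density estimate of simulated particle ages rather than an analytic shape.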

  6. Age-distribution estimation for karst groundwater: Issues of parameterization and complexity in inverse modeling by convolution

    NASA Astrophysics Data System (ADS)

    Long, Andrew J.; Putnam, Larry D.

    2009-10-01

    Convolution modeling is useful for investigating the temporal distribution of groundwater age based on environmental tracers. The framework of a quasi-transient convolution model that is applicable to two-domain flow in karst aquifers is presented. The model was designed to provide an acceptable level of statistical confidence in parameter estimates when only chlorofluorocarbon (CFC) and tritium (3H) data are available. We show how inverse modeling and uncertainty assessment can be used to constrain model parameterization to a level warranted by available data while allowing major aspects of the flow system to be examined. As an example, the model was applied to water from a pumped well open to the Madison aquifer in central USA with input functions of CFC-11, CFC-12, CFC-113, and 3H, and was calibrated to several samples collected during a 16-year period. A bimodal age distribution was modeled to represent quick and slow flow less than 50 years old. The effects of pumping and hydraulic head on the relative volumetric fractions of these domains were found to be influential factors for transient flow. Quick flow and slow flow were estimated to be distributed mainly within the age ranges of 0-2 and 26-41 years, respectively. The fraction of long-term flow (>50 years) was estimated but was not dateable. The different tracers had different degrees of influence on parameter estimation and uncertainty assessments, where 3H was the most critical, and CFC-113 was least influential.
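
    The bimodal, two-domain idea above can be sketched as a volumetric mixture of a quick-flow and a slow-flow age component convolved with a tracer input function. The component shapes, fractions, and input series below are invented, not the calibrated Madison-aquifer values:

    ```python
    # Hedged sketch of a two-domain (quick/slow) age-distribution convolution.
    import numpy as np

    dt = 1.0
    ages = np.arange(0, 60, dt)

    def boxcar(lo, hi):
        """Uniform age distribution over [lo, hi) years, unit area."""
        w = ((ages >= lo) & (ages < hi)).astype(float)
        return w / (w.sum() * dt)

    f_quick, f_slow = 0.4, 0.5          # volumetric fractions (0.1 is old water)
    g = f_quick * boxcar(0, 2) + f_slow * boxcar(26, 41)

    t = np.arange(0, 120, dt)
    tracer_in = np.clip((t - 40) / 40.0, 0.0, 1.0)   # ramping atmospheric input

    # the old (>50 yr) fraction carries essentially no modern tracer
    c_out = np.convolve(tracer_in, g)[: t.size] * dt
    print(c_out[-1])   # diluted below the modern input by the old-water fraction
    ```

    Inverse modeling would adjust the fractions and age ranges until `c_out` matches observed tracer concentrations at the sampling times.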

  7. Age-distribution estimation for karst groundwater: Issues of parameterization and complexity in inverse modeling by convolution

    USGS Publications Warehouse

    Long, A.J.; Putnam, L.D.

    2009-01-01

    Convolution modeling is useful for investigating the temporal distribution of groundwater age based on environmental tracers. The framework of a quasi-transient convolution model that is applicable to two-domain flow in karst aquifers is presented. The model was designed to provide an acceptable level of statistical confidence in parameter estimates when only chlorofluorocarbon (CFC) and tritium (3H) data are available. We show how inverse modeling and uncertainty assessment can be used to constrain model parameterization to a level warranted by available data while allowing major aspects of the flow system to be examined. As an example, the model was applied to water from a pumped well open to the Madison aquifer in central USA with input functions of CFC-11, CFC-12, CFC-113, and 3H, and was calibrated to several samples collected during a 16-year period. A bimodal age distribution was modeled to represent quick and slow flow less than 50 years old. The effects of pumping and hydraulic head on the relative volumetric fractions of these domains were found to be influential factors for transient flow. Quick flow and slow flow were estimated to be distributed mainly within the age ranges of 0-2 and 26-41 years, respectively. The fraction of long-term flow (>50 years) was estimated but was not dateable. The different tracers had different degrees of influence on parameter estimation and uncertainty assessments, where 3H was the most critical, and CFC-113 was least influential.

  8. Experimental validation of a convolution- based ultrasound image formation model using a planar arrangement of micrometer-scale scatterers.

    PubMed

    Gyöngy, Miklós; Makra, Ákos

    2015-06-01

    The shift-invariant convolution model of ultrasound is widely used in the literature, for instance to generate fast simulations of ultrasound images. However, comparison of the resulting simulations with experiments is either qualitative or based on aggregate descriptors such as envelope statistics or spectral components. In the current work, a planar arrangement of 49-μm polystyrene microspheres was imaged using macrophotography and a 4.7-MHz ultrasound linear array. The macrophotograph allowed estimation of the scattering function (SF) necessary for simulations. Using the coefficient of determination R(2) between real and simulated ultrasound images, different estimates of the SF and point spread function (PSF) were tested. All estimates of the SF performed similarly, whereas the best estimate of the PSF was obtained by Hanning windowing the deconvolution of the real ultrasound image with the SF: this yielded R(2) = 0.43 for the raw simulated image and R(2) = 0.65 for the envelope-detected ultrasound image. R(2) was highly dependent on microsphere concentration, with values of up to 0.99 for regions with scatterers. The results validate the use of the shift-invariant convolution model for the realistic simulation of ultrasound images. However, care needs to be taken in experiments to reduce the relative effects of other sources of scattering such as from multiple reflections, either by increasing the concentration of imaged scatterers or by more careful experimental design.
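
    A synthetic rendering of the model evaluated above: a sparse scattering function (SF) is convolved with a point spread function (PSF) to simulate an RF image, and R² quantifies agreement with a noisy "measured" image. All quantities are invented, not the paper's experimental estimates:

    ```python
    # Hedged sketch of the shift-invariant convolution image-formation model.
    import numpy as np

    rng = np.random.default_rng(1)
    # sparse random scatterers with Gaussian amplitudes (the SF)
    sf = (rng.random((64, 64)) < 0.02) * rng.standard_normal((64, 64))

    # oscillating, Gaussian-apodized separable PSF (axial x lateral)
    ax = np.arange(-8, 9)
    psf = np.outer(np.cos(2 * np.pi * ax / 6) * np.exp(-(ax / 4) ** 2),
                   np.exp(-(ax / 5) ** 2))

    # circular 2-D convolution via the FFT (adequate for this illustration)
    rf_sim = np.real(np.fft.ifft2(np.fft.fft2(sf) * np.fft.fft2(psf, sf.shape)))
    rf_meas = rf_sim + 0.05 * rng.standard_normal(rf_sim.shape)  # add noise

    # coefficient of determination between simulated and "measured" images
    ss_res = ((rf_meas - rf_sim) ** 2).sum()
    ss_tot = ((rf_meas - rf_meas.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    print(round(r2, 3))
    ```

    In the paper the SF comes from a macrophotograph and the PSF from windowed deconvolution; here both are synthetic, so the R² is illustrative only.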

  9. Harmonic domain modelling of three phase thyristor-controlled reactors by means of switching vectors and discrete convolutions

    SciTech Connect

    Rico, J.J.; Acha, E.; Miller, T.J.E.

    1996-07-01

    The main objective of this paper is to report on a newly developed three phase Thyristor Controlled Reactor (TCR) model which is based on the use of harmonic switching vectors and discrete convolutions. This model is amenable to direct frequency domain operations and provides a fast and reliable means for assessing 6- and 12-pulse TCR plant performance at harmonic frequencies. The use of alternate time domain and frequency domain representations is avoided as well as the use of FFTs. In this approach, each single phase unit of the TCR is modelled as a voltage-dependent harmonic Norton equivalent where all the harmonics and cross-couplings between harmonics are explicitly shown. This model is suitable for direct incorporation into the harmonic domain frame of reference where all the busbars, phases, harmonics and cross-couplings between harmonics are combined together for a unified iterative solution through a Newton-Raphson technique exhibiting quadratic convergence.
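
    The core operation referenced above (avoiding time/frequency round trips by working on harmonic vectors directly) rests on a standard identity: multiplying two periodic waveforms in time is a discrete circular convolution of their Fourier-coefficient vectors. A simplified, hedged illustration with a toy switching function, not the TCR model itself:

    ```python
    # Multiplication in time == circular convolution of harmonic vectors.
    import numpy as np

    N = 64
    t = np.arange(N)
    v = np.cos(2 * np.pi * t / N)                        # fundamental voltage
    s = (np.cos(2 * np.pi * t / N) > 0.5).astype(float)  # toy switching vector

    # harmonic (DFT) coefficient vectors
    V, S = np.fft.fft(v) / N, np.fft.fft(s) / N

    # circular convolution of the harmonic vectors, computed via the FFT
    conv_harmonics = np.fft.ifft(np.fft.fft(V) * np.fft.fft(S))

    # reference: harmonics of the pointwise time-domain product
    direct_harmonics = np.fft.fft(v * s) / N
    print(np.allclose(conv_harmonics, direct_harmonics))
    ```

    The harmonic-domain model exploits exactly this: the switching vector acts on voltage harmonics through a convolution, exposing all cross-couplings between harmonics explicitly.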

  10. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to
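
    The reoptimization idea above can be sketched as follows: the measured profile is modeled as the true profile convolved with a detector response function, and the beam-model penumbra parameter is tuned so that the *convolved* model profile matches the measurement. Profile shape, penumbra width, and chamber size below are invented numbers, not the CC13 commissioning data:

    ```python
    # Hedged sketch of convolution-based penumbra reoptimization.
    import numpy as np

    dx = 0.1                                  # mm per sample
    x = np.arange(-30, 30, dx)

    def edge_profile(sigma):
        """Field edge modeled as a smoothed step; sigma sets the penumbra."""
        return 0.5 * (1 + np.tanh(x / sigma))

    # detector response: uniform averaging over a 6 mm chamber cavity
    n = int(6.0 / dx)
    response = np.ones(n) / n

    true_sigma = 3.0
    measured = np.convolve(edge_profile(true_sigma), response, mode="same")

    # reoptimize the model penumbra so the *convolved* model matches measurement
    best, err = None, np.inf
    for sigma in np.arange(1.0, 6.01, 0.1):
        model = np.convolve(edge_profile(sigma), response, mode="same")
        e = ((model - measured) ** 2).sum()
        if e < err:
            best, err = sigma, e
    print(best)   # recovers a penumbra close to the true value
    ```

    Because model and measurement are subject to the same volume-averaging convolution, the optimum recovers the underlying profile without explicit deconvolution.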

  11. Real-time hybrid simulation of a complex bridge model with MR dampers using the convolution integral method

    NASA Astrophysics Data System (ADS)

    Jiang, Zhaoshuo; Kim, Sung Jig; Plude, Shelley; Christenson, Richard

    2013-10-01

    Magneto-rheological (MR) fluid dampers can be used to reduce traffic-induced vibration in highway bridges and protect critical structural components from fatigue. Experimental verification is needed to establish the applicability of MR dampers for this purpose. Real-time hybrid simulation (RTHS), where the MR dampers are physically tested while dynamically linked to a numerical model of the highway bridge and truck traffic, provides an efficient and effective means to experimentally examine the efficacy of MR dampers for fatigue protection of highway bridges. In this paper a complex highway bridge model with 263,178 degrees-of-freedom under truck loading is tested using the proposed convolution integral (CI) method of RTHS for a semiactive structural control strategy employing two large-scale 200 kN MR dampers. The formulation of RTHS using the CI method is first presented, followed by details of the various components in the RTHS and a description of the implementation of the CI method for this particular test. The experimental results confirm the practicability of the CI method for conducting RTHS of complex systems.
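
    The convolution-integral formulation at the heart of the method above can be illustrated on a single-degree-of-freedom substructure: the numerical response is the convolution of the system's impulse response with the applied force history. The parameters below are arbitrary, not the 263,178-DOF bridge model:

    ```python
    # Discrete convolution integral (Duhamel) for a damped SDOF oscillator.
    import numpy as np

    m, c, k = 1.0, 0.4, 40.0                  # mass, damping, stiffness
    wn = np.sqrt(k / m)
    zeta = c / (2 * m * wn)
    wd = wn * np.sqrt(1 - zeta ** 2)

    dt = 0.005
    t = np.arange(0, 10, dt)
    # unit impulse response of the oscillator
    h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)

    f = np.where(t < 0.1, 100.0, 0.0)          # short truck-impact-like pulse

    # discrete convolution integral: x(t) = integral of h(t - tau) f(tau) dtau
    x = np.convolve(f, h)[: t.size] * dt
    print(x.max())
    ```

    In RTHS the same convolution is evaluated in real time, with the physically measured damper force folded into `f` at each step.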

  12. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
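
    The termination idea above, viewing a block code as a convolutional code whose trellis is driven back to the zero state, can be sketched with a standard rate-1/2, constraint-length-3 encoder (generators 7 and 5 in octal; this particular code is a common textbook example, not necessarily the coset codes treated in the paper):

    ```python
    # Terminated convolutional encoding: message bits plus zero tail bits,
    # so the trellis ends in the all-zero state.
    def conv_encode_terminated(bits, memory=2):
        g1, g2 = (1, 1, 1), (1, 0, 1)         # octal 7 and 5
        state = [0] * memory
        out = []
        for b in list(bits) + [0] * memory:   # zero tail forces termination
            window = [b] + state
            out.append(sum(w * x for w, x in zip(g1, window)) % 2)
            out.append(sum(w * x for w, x in zip(g2, window)) % 2)
            state = window[:-1]               # shift the register
        return out

    codeword = conv_encode_terminated([1, 0, 1, 1])
    print(codeword)
    ```

    Termination makes the set of codewords a finite block code, which is what allows trellis-based block (coset) decoding, including the two-stage and bidirectional strategies described above.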

  13. NONSTATIONARY SPATIAL MODELING OF ENVIRONMENTAL DATA USING A PROCESS CONVOLUTION APPROACH

    EPA Science Inventory

    Traditional approaches to modeling spatial processes involve the specification of the covariance structure of the field. Although such methods are straightforward to understand and effective in some situations, there are often problems in incorporating non-stationarity and in ma...
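
    The process-convolution construction referenced above builds a spatial field by smoothing independent latent variables on a coarse grid with a kernel; nonstationarity can then enter through a spatially varying kernel. A minimal one-dimensional sketch with a fixed-width Gaussian kernel (all settings invented):

    ```python
    # Process convolution: y(x) = sum_j k(x - u_j) z_j over latent knots u_j.
    import numpy as np

    rng = np.random.default_rng(2)
    knots = np.linspace(0, 10, 12)            # coarse latent grid
    z = rng.standard_normal(knots.size)       # independent latent effects

    def field(x, bandwidth=1.0):
        """Evaluate the convolved process at locations x."""
        k = np.exp(-0.5 * ((x[:, None] - knots[None, :]) / bandwidth) ** 2)
        return k @ z

    x = np.linspace(0, 10, 201)
    y = field(x)
    print(y.shape)
    ```

    Letting `bandwidth` vary with location is one simple route to the nonstationary covariance structures the abstract mentions.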

  14. Hypertrophy in the Distal Convoluted Tubule of an 11β-Hydroxysteroid Dehydrogenase Type 2 Knockout Model.

    PubMed

    Hunter, Robert W; Ivy, Jessica R; Flatman, Peter W; Kenyon, Christopher J; Craigie, Eilidh; Mullins, Linda J; Bailey, Matthew A; Mullins, John J

    2015-07-01

    Na(+) transport in the renal distal convoluted tubule (DCT) by the thiazide-sensitive NaCl cotransporter (NCC) is a major determinant of total body Na(+) and BP. NCC-mediated transport is stimulated by aldosterone, the dominant regulator of chronic Na(+) homeostasis, but the mechanism is controversial. Transport may also be affected by epithelial remodeling, which occurs in the DCT in response to chronic perturbations in electrolyte homeostasis. Hsd11b2(-/-) mice, which lack the enzyme 11β-hydroxysteroid dehydrogenase type 2 (11βHSD2) and thus exhibit the syndrome of apparent mineralocorticoid excess, provided an ideal model in which to investigate the potential for DCT hypertrophy to contribute to Na(+) retention in a hypertensive condition. The DCTs of Hsd11b2(-/-) mice exhibited hypertrophy and hyperplasia and the kidneys expressed higher levels of total and phosphorylated NCC compared with those of wild-type mice. However, the striking structural and molecular phenotypes were not associated with an increase in the natriuretic effect of thiazide. In wild-type mice, Hsd11b2 mRNA was detected in some tubule segments expressing Slc12a3, but 11βHSD2 and NCC did not colocalize at the protein level. Thus, the phosphorylation status of NCC may not necessarily equate to its activity in vivo, and the structural remodeling of the DCT in the knockout mouse may not be a direct consequence of aberrant corticosteroid signaling in DCT cells. These observations suggest that the conventional concept of mineralocorticoid signaling in the DCT should be revised to recognize the complexity of NCC regulation by corticosteroids.

  15. Hypertrophy in the Distal Convoluted Tubule of an 11β-Hydroxysteroid Dehydrogenase Type 2 Knockout Model

    PubMed Central

    Ivy, Jessica R.; Flatman, Peter W.; Kenyon, Christopher J.; Craigie, Eilidh; Mullins, Linda J.; Bailey, Matthew A.; Mullins, John J.

    2015-01-01

    Na+ transport in the renal distal convoluted tubule (DCT) by the thiazide-sensitive NaCl cotransporter (NCC) is a major determinant of total body Na+ and BP. NCC-mediated transport is stimulated by aldosterone, the dominant regulator of chronic Na+ homeostasis, but the mechanism is controversial. Transport may also be affected by epithelial remodeling, which occurs in the DCT in response to chronic perturbations in electrolyte homeostasis. Hsd11b2−/− mice, which lack the enzyme 11β-hydroxysteroid dehydrogenase type 2 (11βHSD2) and thus exhibit the syndrome of apparent mineralocorticoid excess, provided an ideal model in which to investigate the potential for DCT hypertrophy to contribute to Na+ retention in a hypertensive condition. The DCTs of Hsd11b2−/− mice exhibited hypertrophy and hyperplasia and the kidneys expressed higher levels of total and phosphorylated NCC compared with those of wild-type mice. However, the striking structural and molecular phenotypes were not associated with an increase in the natriuretic effect of thiazide. In wild-type mice, Hsd11b2 mRNA was detected in some tubule segments expressing Slc12a3, but 11βHSD2 and NCC did not colocalize at the protein level. Thus, the phosphorylation status of NCC may not necessarily equate to its activity in vivo, and the structural remodeling of the DCT in the knockout mouse may not be a direct consequence of aberrant corticosteroid signaling in DCT cells. These observations suggest that the conventional concept of mineralocorticoid signaling in the DCT should be revised to recognize the complexity of NCC regulation by corticosteroids. PMID:25349206

  16. Conductivity depth imaging of Airborne Electromagnetic data with double pulse transmitting current based on model fusion

    NASA Astrophysics Data System (ADS)

    Li, Jing; Dou, Mei; Lu, Yiming; Peng, Cong; Yu, Zining; Zhu, Kaiguang

    2017-01-01

    Airborne electromagnetic (AEM) systems have traditionally been used in mineral exploration. Typically the system transmits a single pulse waveform to detect conductive anomalies, and conductivity-depth imaging (CDI) of the data is generally applied to identify conductive targets. Here, a CDI algorithm with a double-pulse transmitting current, based on model fusion, is developed. The double pulse is made up of a half-sine pulse of high power and a trapezoid pulse of low power, so the algorithm recovers more shallow information than traditional CDI with a single pulse. The electromagnetic response for the double-pulse transmitting current is calculated by linear convolution based on forward modeling. The CDI results for the half-sine and trapezoid pulses are obtained by a look-up-table method, and the two results are fused to form a double-pulse conductivity-depth image. This makes it possible to obtain accurate conductivity and depth. Tests on synthetic data demonstrate that the double-pulse CDI algorithm based on model fusion maps a wider range of conductivities and reflects the overall geological conductivity changes better than CDI with a single-pulse transmitting current.
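
    The forward-modeling step described above can be sketched as a linear convolution of the earth's impulse response with each transmit waveform. The impulse response here is a generic decaying stand-in, not a layered-earth solution, and all amplitudes and durations are invented:

    ```python
    # Receiver response = earth impulse response (*) transmit waveform,
    # evaluated for both pulses of a double-pulse system.
    import numpy as np

    dt = 1e-5
    t = np.arange(0, 2e-2, dt)
    earth = np.exp(-t / 2e-3) / 2e-3          # generic unit-area impulse response

    # half-sine pulse (high power, short) and trapezoid pulse (low power, long)
    half_sine = np.where(t < 1e-3, np.sin(np.pi * t / 1e-3), 0.0) * 10.0
    ramp = np.minimum.reduce([t / 1e-3, np.ones_like(t), (8e-3 - t) / 1e-3])
    trapezoid = np.clip(ramp, 0.0, 1.0)

    resp_hs = np.convolve(half_sine, earth)[: t.size] * dt
    resp_tz = np.convolve(trapezoid, earth)[: t.size] * dt
    print(resp_hs.max(), resp_tz.max())
    ```

    A CDI scheme would invert each response against a look-up table of half-space responses and then fuse the two conductivity-depth sections.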

  17. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
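
    A minimal NumPy sketch of a graph convolution of the kind described above (a simplified neighborhood-aggregation layer, not the paper's exact architecture): atom features are updated from degree-normalized sums over bonded neighbors. The molecule, features, and weights are toy data:

    ```python
    # One graph-convolution layer: aggregate over neighbors, transform, ReLU.
    import numpy as np

    rng = np.random.default_rng(3)

    # propane-like graph: 3 atoms, bonds 0-1 and 1-2
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], float)
    A_hat = A + np.eye(3)                      # add self-loops
    d = A_hat.sum(1)
    A_norm = A_hat / d[:, None]                # row-normalized aggregation

    H = rng.standard_normal((3, 4))            # per-atom input features
    W = rng.standard_normal((4, 8))            # learnable weights (random here)

    H_next = np.maximum(A_norm @ H @ W, 0.0)   # aggregate, transform, ReLU
    print(H_next.shape)
    ```

    Because the same `W` is applied at every atom, the layer is invariant to atom ordering, which is the property that lets such models consume raw molecular graphs instead of fixed fingerprints.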

  18. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  19. Convolution-deconvolution in DIGES

    SciTech Connect

    Philippacopoulos, A.J.; Simos, N.

    1995-05-01

    Convolution and deconvolution operations are an important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES and SHAKE. For certain configurations, however, there is no need to perform such analyses, since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention, since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities.
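
    The frequency-domain convolution/deconvolution idea above can be illustrated with a toy transfer function. Here H(f) is a simple delay-plus-attenuation stand-in, not a layered-soil solution; deconvolution divides it back out with a small "water level" for numerical stability:

    ```python
    # Convolution applies a transfer function H(f); deconvolution inverts it.
    import numpy as np

    n = 1024
    dt = 0.01
    f = np.fft.rfftfreq(n, dt)

    # toy transfer function: 0.05 s delay and mild frequency-dependent damping
    H = np.exp(-2j * np.pi * f * 0.05) * np.exp(-0.02 * f)

    rng = np.random.default_rng(4)
    rock = rng.standard_normal(n)              # stand-in free-field motion

    surface = np.fft.irfft(np.fft.rfft(rock) * H, n)          # convolution

    eps = 1e-3 * np.abs(H).max()               # water level
    rock_rec = np.fft.irfft(np.fft.rfft(surface) * np.conj(H)
                            / (np.abs(H) ** 2 + eps ** 2), n)  # deconvolution

    print(np.max(np.abs(rock_rec - rock)))     # small recovery error
    ```

    In SSI practice H(f) is the closed-form layer transfer function for the chosen wave type and incidence angle; the water level matters when |H| nearly vanishes at some frequencies.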

  20. A double pendulum model of tennis strokes

    NASA Astrophysics Data System (ADS)

    Cross, Rod

    2011-05-01

    The physics of swinging a tennis racquet is examined by modeling the forearm and the racquet as a double pendulum. We consider differences between a forehand and a serve, and show how they differ from the swing of a bat and a golf club. It is also shown that the swing speed of a racquet, like that of a bat or a club, depends primarily on its moment of inertia rather than on its mass.
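
    A toy calculation in the spirit of the result above (not the paper's double pendulum model): for a racquet swung through a fixed angle by a roughly constant couple, the tip speed follows from the work-energy theorem and depends on the moment of inertia about the handle, not on mass alone. All numbers are invented:

    ```python
    # Work-energy sketch: omega = sqrt(2 * torque * angle / I), tip speed = omega * L.
    import math

    def tip_speed(torque, angle, inertia, length):
        """Tip speed after a swing through `angle` under constant `torque`."""
        omega = math.sqrt(2 * torque * angle / inertia)
        return omega * length

    torque, angle, length = 30.0, math.pi / 2, 0.9    # N*m, rad, m

    # same mass, different mass distribution (head-light vs head-heavy)
    v_light_head = tip_speed(torque, angle, inertia=0.045, length=length)
    v_heavy_head = tip_speed(torque, angle, inertia=0.065, length=length)
    print(round(v_light_head, 2), round(v_heavy_head, 2))
    ```

    The double pendulum model refines this by letting the forearm and racquet rotate about separate axes, but the inertia dependence of swing speed survives.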

  1. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  2. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  3. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  4. Determinate-state convolutional codes

    NASA Technical Reports Server (NTRS)

    Collins, O.; Hizlan, M.

    1991-01-01

    A determinate-state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate-state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity and free distances of these new codes are analyzed, and extensive simulation results are provided on their performance at the low signal-to-noise ratios where a real communication system would operate. Concise, practical examples are provided.

  5. Modeling interconnect corners under double patterning misalignment

    NASA Astrophysics Data System (ADS)

    Hyun, Daijoon; Shin, Youngsoo

    2016-03-01

    Interconnect corners should accurately reflect the effect of misalignment in the LELE double patterning process. Misalignment is usually considered separately from interconnect structure variations; this incurs too much pessimism and fails to reflect the large increase in total capacitance for asymmetric interconnect structures. We model interconnect corners by taking account of misalignment in conjunction with interconnect structure variations, and we characterize the misalignment effect more accurately by handling the metal pitch at both sides of a target metal independently.

  6. Convoluted accommodation structures in folded rocks

    NASA Astrophysics Data System (ADS)

    Dodwell, T. J.; Hunt, G. W.

    2012-10-01

    A simplified variational model for the formation of convoluted accommodation structures, as seen in the hinge zones of larger-scale geological folds, is presented. The model encapsulates some important and intriguing nonlinear features, notably: infinite critical loads, formation of plastic hinges, and buckling on different length-scales. An inextensible elastic beam is forced by uniform overburden pressure and axial load into a V-shaped geometry dictated by formation of a plastic hinge. Using variational methods developed by Dodwell et al., upon which this paper leans heavily, energy minimisation leads to representation as a fourth-order nonlinear differential equation with free boundary conditions. Equilibrium solutions are found using numerical shooting techniques. Under the Maxwell stability criterion, it is recognised that global energy minimisers can exist with convoluted physical shapes. For such solutions, parallels can be drawn with some of the accommodation structures seen in exposed escarpments of real geological folds.

  7. Evaluating the double Poisson generalized linear model.

    PubMed

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data.
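
    The computational obstacle discussed above, the double Poisson normalizing constant having no closed form, can be illustrated by summing Efron's unnormalized mass function directly. This simple truncated summation is a generic stand-in, not the paper's proposed approximation method:

    ```python
    # Double Poisson kernel (Efron's form) and its truncated normalizing sum,
    # computed in log space to avoid overflow at large counts.
    import math

    def dp_log_kernel(y, mu, theta):
        """Log of the unnormalized double Poisson mass at count y."""
        lg = 0.5 * math.log(theta) - theta * mu
        if y > 0:
            lg += -y + y * math.log(y) - math.lgamma(y + 1)        # log(e^-y y^y / y!)
            lg += theta * y * (1.0 + math.log(mu) - math.log(y))   # log((e*mu/y)^(theta*y))
        return lg

    def dp_norm_recip(mu, theta, y_max=200):
        """Approximate 1/c(mu, theta) by truncated summation of the kernel."""
        return sum(math.exp(dp_log_kernel(y, mu, theta)) for y in range(y_max + 1))

    total = dp_norm_recip(mu=4.0, theta=0.7)   # theta < 1: over-dispersion
    print(round(total, 4))                     # near, but not exactly, 1
    ```

    At theta = 1 the kernel collapses to the ordinary Poisson pmf and the sum is exactly 1 (up to the truncated tail), which makes a convenient correctness check.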

  8. Double Higgs boson production in the models with isotriplets

    SciTech Connect

    Godunov, S. I. Vysotsky, M. I. Zhemchugov, E. V.

    2015-12-15

The enhancement of double Higgs boson production in extensions of the Standard Model with extra isotriplets is studied. It is found that in the see-saw type II model, decays of the new heavy Higgs boson can contribute to the double Higgs production cross section as much as the Standard Model channels. In the Georgi–Machacek model the cross section can be much larger, since the custodial symmetry is preserved and the strongest limitation on the triplet parameters is removed.

  9. Experimental study of current loss and plasma formation in the Z machine post-hole convolute

    NASA Astrophysics Data System (ADS)

    Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.

    2017-01-01

The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H2O, H2, and hydrocarbons. Plasma densities increase from 1 × 10^16 cm^-3 (level of detectability) just before peak current to over 1 × 10^17 cm^-3 at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35-50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.

  10. Inhibitor Discovery by Convolution ABPP.

    PubMed

    Chandrasekar, Balakumaran; Hong, Tram Ngoc; van der Hoorn, Renier A L

    2017-01-01

Activity-based protein profiling (ABPP) has emerged as a powerful proteomic approach to study the active proteins in their native environment by using chemical probes that label active site residues in proteins. Traditionally, ABPP is classified as either comparative or competitive ABPP. In this protocol, we describe a simple method called convolution ABPP, which combines the benefits of both competitive and comparative ABPP. Convolution ABPP allows one to detect whether a reduced signal observed during comparative ABPP could be due to the presence of inhibitors. In convolution ABPP, the proteomes are analyzed by comparing labeling intensities in two mixed proteomes that were labeled either before or after mixing. A reduction of labeling in the mix-and-label sample when compared to the label-and-mix sample indicates the presence of an inhibitor excess in one of the proteomes. This method is broadly applicable for detecting inhibitors in any proteome containing protein activities of interest. As a proof of concept, we applied convolution ABPP to secreted proteomes from Pseudomonas syringae-infected Nicotiana benthamiana leaves to reveal the presence of a beta-galactosidase inhibitor.

  11. Generalized Valon Model for Double Parton Distributions

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof

    2016-06-01

    We show how the double parton distributions may be obtained consistently from the many-body light-cone wave functions. We illustrate the method on the example of the pion with two Fock components. The procedure, by construction, satisfies the Gaunt-Stirling sum rules. The resulting single parton distributions of valence quarks and gluons are consistent with a phenomenological parametrization at a low scale.

  12. Zebrafish tracking using convolutional neural networks

    PubMed Central

    XU, Zhiping; Cheng, Xi En

    2017-01-01

Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize accuracy for our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable. PMID:28211462

  13. Zebrafish tracking using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Xu, Zhiping; Cheng, Xi En

    2017-02-01

Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize accuracy for our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable.

  14. Discrete singular convolution mapping methods for solving singular boundary value and boundary layer problems

    NASA Astrophysics Data System (ADS)

    Pindza, Edson; Maré, Eben

    2017-03-01

A modified discrete singular convolution method is proposed. The method is based on single (SE) and double (DE) exponential transformations to speed up the convergence of existing methods. Numerical computations are performed on a wide variety of singular boundary value and singularly perturbed problems in one and two dimensions. The results obtained from discrete singular convolution methods based on single and double exponential transformations are compared with each other and with existing methods. Numerical results confirm that these methods are considerably efficient and accurate in solving singular and regular problems. Moreover, the method can be applied to a wide class of nonlinear partial differential equations.
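The double exponential (DE) transformation underlying such convergence acceleration can be illustrated with the classic tanh-sinh substitution, which crushes endpoint singularities double-exponentially. This is only a sketch of the underlying map applied to a singular model integral, not the authors' modified discrete singular convolution scheme:

```python
import math

def tanh_sinh(f, h=0.05, t_max=3.0):
    # Double-exponential (tanh-sinh) quadrature for the integral of f over (-1, 1):
    # substitute x = tanh((pi/2) sinh t), then apply the trapezoidal rule in t.
    # t_max is kept moderate so x never rounds to exactly +/-1 in floating point.
    n = int(t_max / h)
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2  # dx/dt
        total += f(x) * w
    return h * total

# Endpoint-singular test case: the integral of 1/sqrt(1 - x^2) over (-1, 1) is pi.
approx = tanh_sinh(lambda x: 1.0 / math.sqrt(1.0 - x * x))
```

With roughly a hundred samples the substitution already recovers π to several digits, which is the behaviour the SE/DE mappings exploit.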

  15. Simplified Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.

    1986-01-01

Some complicated intermediate steps shortened or eliminated. Decoding of convolutional error-correcting digital codes simplified by new error-trellis syndrome technique. In new technique, syndrome vector not computed. Instead, advantage taken of newly derived mathematical identities to simplify decision tree, folding it back on itself into form called "error trellis." This trellis is graph of all path solutions of syndrome equations. Each path through trellis corresponds to specific set of decisions as to received digits. Existing decoding algorithms combined with new mathematical identities reduce number of combinations of errors considered and enable computation of correction vector directly from data and check bits as received.

  16. A review of molecular modelling of electric double layer capacitors.

    PubMed

    Burt, Ryan; Birkett, Greg; Zhao, X S

    2014-04-14

Electric double-layer capacitors are a family of electrochemical energy storage devices that offer a number of advantages, such as high power density and long cyclability. In recent years, research and development of electric double-layer capacitor technology has been growing rapidly, in response to the increasing demand for energy storage devices from emerging industries, such as hybrid and electric vehicles, renewable energy, and smart grid management. The past few years have witnessed a number of significant research breakthroughs in terms of novel electrodes, new electrolytes, and fabrication of devices, thanks to the discovery of innovative materials (e.g. graphene, carbide-derived carbon, and templated carbon) and the availability of advanced experimental and computational tools. However, some experimental observations could not be clearly understood and interpreted due to limitations of traditional theories, some of which were developed more than one hundred years ago. This has led to significant research efforts in computational simulation and modelling, aimed at developing new theories, or improving the existing ones to help interpret experimental results. This review article provides a summary of research progress in molecular modelling of the physical phenomena taking place in electric double-layer capacitors. An introduction to electric double-layer capacitors and their applications, alongside a brief description of electric double layer theories, is presented first. Second, molecular modelling of ion behaviours of various electrolytes interacting with electrodes under different conditions is reviewed. Finally, key conclusions and outlooks are given. Simulations comparing electric double-layer structure at planar and porous electrode surfaces under equilibrium conditions have revealed significant structural differences between the two electrode types, and porous electrodes have been shown to store charge more efficiently. Accurate electrolyte and

  17. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  18. Helium in double-detonation models of type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Boyle, Aoife; Sim, Stuart A.; Hachinger, Stephan; Kerzendorf, Wolfgang

    2017-02-01

The double-detonation explosion model has been considered a candidate for explaining astrophysical transients with a wide range of luminosities. In this model, a carbon-oxygen white dwarf star explodes following detonation of a surface layer of helium. One potential signature of this explosion mechanism is the presence of unburned helium in the outer ejecta, left over from the surface helium layer. In this paper we present simple approximations to estimate the optical depths of important He I lines in the ejecta of double-detonation models. We use these approximations to compute synthetic spectra, including the He I lines, for double-detonation models obtained from hydrodynamical explosion simulations. Specifically, we focus on photospheric-phase predictions for the near-infrared 10 830 Å and 2 μm lines of He I. We first consider a double-detonation model with a luminosity corresponding roughly to normal SNe Ia. This model has a post-explosion unburned He mass of 0.03 M⊙, and our calculations suggest that the 2 μm feature is expected to be very weak but that the 10 830 Å feature may have modest opacity in the outer ejecta. Consequently, we suggest that a moderate-to-weak He I 10 830 Å feature may be expected to form in double-detonation explosions at epochs around maximum light. However, the high velocities of unburned helium predicted by the model (~19 000 km s^-1) mean that the He I 10 830 Å feature may be confused or blended with the C I 10 690 Å line forming at lower velocities. We also present calculations for the He I 10 830 Å and 2 μm lines for a lower mass (low luminosity) double-detonation model, which has a post-explosion He mass of 0.077 M⊙. In this case, both He I features we consider are strong and can provide a clear observational signature of the double-detonation mechanism.

  19. A large deformation viscoelastic model for double-network hydrogels

    NASA Astrophysics Data System (ADS)

    Mao, Yunwei; Lin, Shaoting; Zhao, Xuanhe; Anand, Lallit

    2017-03-01

We present a large deformation viscoelasticity model for recently synthesized double network hydrogels which consist of a covalently-crosslinked polyacrylamide network with long chains, and an ionically-crosslinked alginate network with short chains. Such double-network gels are highly stretchable and at the same time tough, because when stretched the crosslinks in the ionically-crosslinked alginate network rupture, which results in distributed internal microdamage that dissipates a substantial amount of energy, while the configurational entropy of the covalently-crosslinked polyacrylamide network allows the gel to return to its original configuration after deformation. In addition to the large hysteresis during loading and unloading, these double network hydrogels also exhibit a substantial rate-sensitive response during loading, but exhibit almost no rate-sensitivity during unloading. These features of large hysteresis and asymmetric rate-sensitivity are quite different from the response of conventional hydrogels. We limit our attention to modeling the complex viscoelastic response of such hydrogels under isothermal conditions. Our model is restricted in the sense that we have limited our attention to conditions under which one might neglect any diffusion of the water in the hydrogel - as might occur when the gel has a uniform initial value of the concentration of water, and the mobility of the water molecules in the gel is low relative to the time scale of the mechanical deformation. We also do not attempt to model the final fracture of such double-network hydrogels.

  20. A Simple Double-Source Model for Interference of Capillaries

    ERIC Educational Resources Information Center

    Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua

    2012-01-01

    A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An…

  1. Stacked Convolutional Denoising Auto-Encoders for Feature Representation.

    PubMed

    Du, Bo; Xiong, Wei; Wu, Jia; Zhang, Lefei; Zhang, Liangpei; Tao, Dacheng

    2016-03-16

Deep networks have achieved excellent performance in learning representation from visual data. However, supervised deep models like convolutional neural networks require large quantities of labeled data, which are very expensive to obtain. To solve this problem, this paper proposes an unsupervised deep network, called the stacked convolutional denoising auto-encoders, which can map images to hierarchical representations without any label information. The network, optimized by layer-wise training, is constructed by stacking layers of denoising auto-encoders in a convolutional way. In each layer, high dimensional feature maps are generated by convolving features of the lower layer with kernels learned by a denoising auto-encoder. The auto-encoder is trained on patches extracted from feature maps in the lower layer to learn robust feature detectors. To better train the large network, a layer-wise whitening technique is introduced into the model. Before each convolutional layer, a whitening layer is embedded to sphere the input data. By layers of mapping, raw images are transformed into high-level feature representations which would boost the performance of the subsequent support vector machine classifier. The proposed algorithm is evaluated by extensive experiments and demonstrates superior classification performance to state-of-the-art unsupervised networks.
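The per-layer operation described above, convolving lower-layer features with learned kernels to produce feature maps, reduces to a "valid" 2-D cross-correlation (the usual CNN convention). A minimal sketch, where the all-ones kernel is just a stand-in for a kernel the denoising auto-encoder would learn:

```python
import numpy as np

def conv_feature_map(x, k):
    # One "valid" feature map: slide kernel k over input x and, at each
    # position, sum the elementwise products (CNN-style cross-correlation).
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
k = np.ones((2, 2))                            # stand-in for a learned kernel
fmap = conv_feature_map(x, k)                  # shape (3, 3)
```

Stacking such layers, with whitening applied to each layer's input, yields the hierarchical representations the paper feeds to an SVM.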

  2. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    DOE PAGES

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; ...

    2014-12-08

Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed

  3. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  4. Cantilever tilt causing amplitude related convolution in dynamic mode atomic force microscopy.

    PubMed

    Wang, Chunmei; Sun, Jielin; Itoh, Hiroshi; Shen, Dianhong; Hu, Jun

    2011-01-01

It is well known that the topography in atomic force microscopy (AFM) is a convolution of the tip's shape and the sample's geometry. The classical convolution model was established in contact mode assuming a static probe, but it is no longer valid in dynamic mode AFM. It is still not well understood whether or how the vibration of the probe in dynamic mode affects the convolution. Such ignorance complicates the interpretation of the topography. Here we propose a convolution model for dynamic mode by taking into account the typical design of the cantilever tilt in AFMs, which leads to a different convolution from that in contact mode. Our model indicates that the cantilever tilt results in a dynamic convolution affected by the absolute value of the amplitude, especially in the case that the corresponding contact convolution has sharp edges beyond a certain angle. The effect was experimentally demonstrated by a perpendicular SiO2/Si super-lattice structure. Our model is useful for quantitative characterizations in dynamic mode, especially in probe characterization and critical dimension measurements.
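The classical contact-mode convolution referred to above is a grayscale dilation of the surface by the tip shape: the apex records the height at which any part of the tip outline first touches the sample. A one-dimensional sketch with illustrative profiles (this is the static contact-mode baseline, not the paper's dynamic-mode model):

```python
def afm_image(surface, tip):
    # Contact-mode tip convolution as grayscale dilation:
    # image[i] = max over u of surface[i + u] - tip[u], where tip[u] is the
    # height of the tip outline above its apex at lateral offset u.
    c = len(tip) // 2                       # apex sits at the tip's centre
    n = len(surface)
    return [max(surface[i + u - c] - tip[u]
                for u in range(len(tip)) if 0 <= i + u - c < n)
            for i in range(n)]

surface = [0] * 5 + [5] + [0] * 5           # a single sharp spike
tip = [4, 1, 0, 1, 4]                       # parabolic tip outline, apex at centre
image = afm_image(surface, tip)             # the spike is broadened by the tip
```

The dilated image never dips below the true surface and the spike acquires the tip's flanks, which is exactly the broadening that tip-shape deconvolution tries to undo.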

  5. Multilabel Image Annotation Based on Double-Layer PLSA Model

    PubMed Central

    Zhang, Jing; Li, Da; Hu, Weiwei; Chen, Zhihua; Yuan, Yubo

    2014-01-01

Due to the semantic gap between visual features and semantic concepts, automatic image annotation has become a difficult issue in computer vision recently. We propose a new image multilabel annotation method based on double-layer probabilistic latent semantic analysis (PLSA) in this paper. The new double-layer PLSA model is constructed to bridge the low-level visual features and high-level semantic concepts of images for effective image understanding. The low-level features of images are represented as visual words by a bag-of-words model; latent semantic topics are obtained by the first layer PLSA from the visual and texture aspects, respectively. Furthermore, we adopt the second layer PLSA to fuse the visual and texture latent semantic topics and achieve a top-layer latent semantic topic. By the double-layer PLSA, the relationships between visual features and semantic concepts of images are established, and we can predict the labels of new images by their low-level features. Experimental results demonstrate that our automatic image annotation model based on double-layer PLSA can achieve promising performance for labeling and outperform previous methods on the standard Corel dataset. PMID:24999490

  6. Deep learning for steganalysis via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis: learning features automatically via deep learning models. We propose a novel customized convolutional neural network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms: HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.

  7. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  8. Dosimetric comparison of Acuros XB deterministic radiation transport method with Monte Carlo and model-based convolution methods in heterogeneous media

    PubMed Central

    Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas

    2011-01-01

Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC were within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in the bone region were up to 1.1, 6.4, and 1.6%; in the lung region they were up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to have better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes of 6 MV and in the lung region for most field sizes of both energies. The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm2 fields (over 26% passed) and in the bone region for 5 × 5 and 10

  9. Rationale-Augmented Convolutional Neural Networks for Text Classification

    PubMed Central

    Zhang, Ye; Marshall, Iain; Wallace, Byron C.

    2016-01-01

    We present a new Convolutional Neural Network (CNN) model for text classification that jointly exploits labels on documents and their constituent sentences. Specifically, we consider scenarios in which annotators explicitly mark sentences (or snippets) that support their overall document categorization, i.e., they provide rationales. Our model exploits such supervision via a hierarchical approach in which each document is represented by a linear combination of the vector representations of its component sentences. We propose a sentence-level convolutional model that estimates the probability that a given sentence is a rationale, and we then scale the contribution of each sentence to the aggregate document representation in proportion to these estimates. Experiments on five classification datasets that have document labels and associated rationales demonstrate that our approach consistently outperforms strong baselines. Moreover, our model naturally provides explanations for its predictions. PMID:28191551

  10. Double scaling in tensor models with a quartic interaction

    NASA Astrophysics Data System (ADS)

    Dartois, Stéphane; Gurau, Razvan; Rivasseau, Vincent

    2013-09-01

In this paper we identify and analyze in detail the subleading contributions in the 1/N expansion of random tensors, in the simple case of a quartically interacting model. The leading order for this 1/N expansion is made of graphs, called melons, which are dual to particular triangulations of the D-dimensional sphere, closely related to the "stacked" triangulations. For D < 6 the subleading behavior is governed by a larger family of graphs, hereafter called cherry trees, which are also dual to the D-dimensional sphere. They can be resummed explicitly through a double scaling limit. In sharp contrast with random matrix models, this double scaling limit is stable. Apart from its unexpected upper critical dimension 6, it displays a singularity at fixed distance from the origin and is clearly the first step in a richer set of yet to be discovered multi-scaling limits.

  11. Double porosity modeling in elastic wave propagation for reservoir characterization

    SciTech Connect

    Berryman, J. G., LLNL

    1998-06-01

Phenomenological equations for the poroelastic behavior of a double porosity medium have been formulated and the coefficients in these linear equations identified. The generalization from a single porosity model increases the number of independent coefficients from three to six for an isotropic applied stress. In a quasistatic analysis, the physical interpretations are based upon considerations of extremes in both spatial and temporal scales. The limit of very short times is the one most relevant for wave propagation, and in this case both matrix porosity and fractures behave in an undrained fashion. For the very long times more relevant for reservoir drawdown, the double porosity medium behaves as an equivalent single porosity medium. At the macroscopic spatial level, the pertinent parameters (such as the total compressibility) may be determined by appropriate field tests. At the mesoscopic scale pertinent parameters of the rock matrix can be determined directly through laboratory measurements on core, and the compressibility can be measured for a single fracture. We show explicitly how to generalize the quasistatic results to incorporate wave propagation effects and how effects that are usually attributed to squirt flow under partially saturated conditions can be explained alternatively in terms of the double-porosity model. The result is therefore a theory that generalizes, but is completely consistent with, Biot's theory of poroelasticity and is valid for analysis of elastic wave data from highly fractured reservoirs.

  12. Parallel architectures for computing cyclic convolutions

    NASA Technical Reports Server (NTRS)

    Yeh, C.-S.; Reed, I. S.; Truong, T. K.

    1983-01-01

    In the paper two parallel architectural structures are developed to compute one-dimensional cyclic convolutions. The first structure is based on the Chinese remainder theorem and Kung's pipelined array. The second structure is a direct mapping from the mathematical definition of a cyclic convolution to a computational architecture. To compute a d-point cyclic convolution the first structure needs d/2 inner product cells, while the second structure and Kung's linear array require d cells. However, to compute a cyclic convolution, the second structure requires less time than both the first structure and Kung's linear array. Another application of the second structure is to multiply a Toeplitz matrix by a vector. A table is listed to compare these two structures and Kung's linear array. Both structures are simple and regular and are therefore suitable for VLSI implementation.
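The second structure's direct mapping from the mathematical definition of a cyclic convolution can be illustrated with a short sketch (plain Python for clarity, not the paper's systolic-array design):

```python
def cyclic_convolution(x, h):
    """d-point cyclic convolution: y[n] = sum_k x[k] * h[(n - k) mod d]."""
    d = len(x)
    assert len(h) == d, "both sequences must have length d"
    return [sum(x[k] * h[(n - k) % d] for k in range(d)) for n in range(d)]

# Identity kernel leaves the input unchanged; a one-step shift kernel rotates it.
print(cyclic_convolution([1, 2, 3, 4], [1, 0, 0, 0]))  # [1, 2, 3, 4]
print(cyclic_convolution([1, 2, 3, 4], [0, 1, 0, 0]))  # [4, 1, 2, 3]
```

The same index arithmetic underlies multiplying a circulant matrix by a vector, which is how the Toeplitz-matrix application mentioned above reduces to convolution.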

  13. Two potential quark models for double heavy baryons

    SciTech Connect

    Puchkov, A. M.; Kozhedub, A. V.

    2016-01-22

    Baryons containing two heavy quarks (QQ′q) are treated in the Born-Oppenheimer approximation. Two non-relativistic potential models are proposed, in which the Schrödinger equation admits a separation of variables in prolate and oblate spheroidal coordinates, respectively. In the first model, the potential is equal to the sum of the Coulomb potentials of the two heavy quarks, separated from each other by a distance R, and a linear confinement potential. In the second model the center distance parameter R is assumed to be purely imaginary. In this case, the potential is defined by a two-sheeted mapping with singularities concentrated on a circle rather than at separate points. Thus, in the first model the diquark appears as a segment, and in the second as a circle. In this paper we calculate the mass spectrum of double heavy baryons in both models, and compare it with previous results.

  14. An effective mesoscopic model of double-stranded DNA.

    PubMed

    Jeon, Jae-Hyung; Sung, Wokyung

    2014-01-01

    Watson and Crick's epochal presentation of the double helix structure in 1953 has paved the way to intense exploration of DNA's vital functions in cells. Also, recent advances of single molecule techniques have made it possible to probe structures and mechanics of constrained DNA at length scales ranging from nanometers to microns. There have been a number of atomistic scale quantum chemical calculations or molecular level simulations, but they are too computationally demanding or analytically unfeasible to describe the DNA conformation and mechanics at mesoscopic levels. At micron scales, on the other hand, the wormlike chain model has been very instrumental in describing analytically the DNA mechanics but lacks certain molecular details that are essential in describing the hybridization, nano-scale confinement, and local denaturation. To fill this fundamental gap, we present a workable and predictive mesoscopic model of double-stranded DNA where the nucleotide beads constitute the basic degrees of freedom. With the inter-strand stacking given by an interaction between diagonally opposed monomers, the model explains with analytical simplicity the helix formation and produces a generalized wormlike chain model with the concomitant large bending modulus given in terms of the helical structure and stiffness. It also explains how the helical conformation undergoes overstretch transition to the ladder-like conformation at a force plateau, in agreement with the experiment.

  15. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.

  16. Generalized double-gradient model of flapping oscillations: Oblique waves

    NASA Astrophysics Data System (ADS)

    Korovinskiy, D. B.; Kiehas, S. A.

    2016-09-01

    The double-gradient model of flapping oscillations is generalized for oblique plane waves propagating in the equatorial plane. It is found that longitudinal propagation (ky = 0) is prohibited, while transversal (kx = 0) or nearly transversal waves should possess a maximum frequency, diminishing as the |ky/kx| ratio is reduced. It turns out that the sausage mode may propagate only in a narrow range of directions, |ky/kx| ≫ 1. A simple analytical expression for the dispersion relation of the kink mode, valid over most of the wave-number range |ky/kx| < 9, is derived.

  17. Convolution of large 3D images on GPU and its decomposition

    NASA Astrophysics Data System (ADS)

    Karas, Pavel; Svoboda, David

    2011-12-01

    In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
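The frequency-domain approach rests on the convolution theorem; a minimal 1-D sketch (our own dependency-free illustration; the paper's CUDA kernels and 3D decomposition are not reproduced):

```python
import cmath

def fft(a, invert=False):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def convolve(x, h):
    """Linear convolution via the convolution theorem: y = IFFT(FFT(x) * FFT(h))."""
    n = 1
    while n < len(x) + len(h) - 1:
        n *= 2  # zero-pad to a power of two to avoid circular wrap-around
    X = fft([complex(v) for v in x] + [0j] * (n - len(x)))
    H = fft([complex(v) for v in h] + [0j] * (n - len(h)))
    y = fft([a * b for a, b in zip(X, H)], invert=True)
    # the inverse butterflies omit the 1/2 factors, so divide by n once at the end
    return [round((v / n).real, 10) for v in y[:len(x) + len(h) - 1]]
```

For large inputs it is the padded transform length, not the raw image size, that drives memory use, which is why decomposing the problem matters on a memory-limited GPU.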

  18. Analytical threshold voltage modeling of ion-implanted strained-Si double-material double-gate (DMDG) MOSFETs

    NASA Astrophysics Data System (ADS)

    Goel, Ekta; Singh, Balraj; Kumar, Sanjay; Singh, Kunal; Jit, Satyabrata

    2017-04-01

    A two-dimensional threshold voltage model of ion-implanted strained-Si double-material double-gate MOSFETs has been developed based on the solution of the two-dimensional Poisson's equation in the channel region using the parabolic approximation method. The novelty of the proposed device structure lies in the amalgamation of the advantages of both the strained-Si channel and the double-material double-gate structure with a vertical Gaussian-like doping profile. The effects of different device parameters (such as device channel length, gate length ratios, and germanium mole fraction) and doping parameters (such as projected range and straggle parameter) on the threshold voltage of the proposed structure have been investigated. It is observed that the subthreshold performance of the device can be improved by simply controlling the doping parameters while keeping the other device parameters constant. The modeling results show good agreement with numerical simulation data obtained using ATLAS™, a 2D device simulator from SILVACO.

  20. Investigating GPDs in the framework of the double distribution model

    NASA Astrophysics Data System (ADS)

    Nazari, F.; Mirjalili, A.

    2016-06-01

    In this paper, we construct the generalized parton distribution (GPD) in terms of the kinematical variables x, ξ, and t, using the double distribution model. By employing these functions, we can extract quantities which make it possible to gain a three-dimensional insight into the nucleon structure at the parton level. The main purpose of GPDs is to combine and generalize the concepts of ordinary parton distributions and form factors. They also provide an exclusive framework to describe the nucleons in terms of quarks and gluons. Here, we first calculate, in the double distribution model, the GPD based on the usual parton distributions arising from the GRV and CTEQ phenomenological models. Obtaining the quark and gluon angular momenta from the GPD, we are able to calculate the scattering observables which are related to spin asymmetries of the produced quarkonium. These quantities are represented by AN and ALS. We also calculate the Pauli and Dirac form factors in deeply virtual Compton scattering. Finally, in order to compare our results with the existing experimental data, we use the difference of the polarized cross-sections for an initial longitudinal leptonic beam and unpolarized target particles (ΔσLU). In all cases, our results are in good agreement with the available experimental data.

  1. Three-Triplet Model with Double SU(3) Symmetry

    DOE R&D Accomplishments Database

    Han, M. Y.; Nambu, Y.

    1965-01-01

    With a view to avoiding some of the kinematical and dynamical difficulties involved in the single triplet quark model, a model for the low lying baryons and mesons based on three triplets with integral charges is proposed, somewhat similar to the two-triplet model introduced earlier by one of us (Y. N.). It is shown that in a U(3) scheme of triplets with integral charges, one is naturally led to three triplets located symmetrically about the origin of the I₃-Y diagram under the constraint that the Nishijima-Gell-Mann relation remains intact. A double SU(3) symmetry scheme is proposed in which the large mass splittings between different representations are ascribed to one of the SU(3), while the other SU(3) is the usual one for the mass splittings within a representation of the first SU(3).

  2. Event Discrimination using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Menon, Hareesh; Hughes, Richard; Daling, Alec; Winer, Brian

    2017-01-01

    Convolutional Neural Networks (CNNs) are computational models that have been shown to be effective at classifying different types of images. We present a method to use CNNs to distinguish events involving the production of a top quark pair and a Higgs boson from events involving the production of a top quark pair and several quark and gluon jets. To do this, we generate and simulate data using MADGRAPH and DELPHES for a general purpose LHC detector at 13 TeV. We produce images using a particle flow algorithm by binning the particles geometrically based on their position in the detector and weighting the bins by the energy of each particle within each bin, and by defining channels based on particle types (charged track, neutral hadronic, neutral EM, lepton, heavy flavor). Our classification results are competitive with standard machine learning techniques. We have also looked into the classification of the substructure of the events, in a process known as scene labeling. In this context, we look for the presence of boosted objects (such as top quarks) with substructure encompassed within single jets. Preliminary results on substructure classification will be presented.

  3. The Double Counting Problem in Neighborhood Scale Air Quality Modeling

    NASA Astrophysics Data System (ADS)

    Du, S.; Hughes, V.; Woodhouse, L.; Servin, A.

    2004-12-01

    Air quality varies considerably within megacities. In certain neighborhoods concentrations of toxic air contaminants (TACs) can be appreciably higher than in other neighborhoods of the same city. These pockets of high concentrations are associated with both transport of TACs from other areas and local emissions. In order to assess the health risks posed by TACs at the neighborhood scale and to develop abatement strategies, neighborhood scale air quality modeling is needed. In 1999, the California Air Resources Board (ARB) established the Neighborhood Assessment Program (NAP) - a program designed to develop assessment tools for evaluating and understanding air quality in California communities. As part of the Neighborhood Assessment Program, ARB is conducting research on neighborhood-scale modeling methodologies. Two criteria are suggested for selecting a neighborhood scale air quality modeling system that can be used to assess concentrations of TACs: scientific soundness and balanced computational requirements. The latter criterion ensures that as many interested parties as possible can participate in the process of air quality modeling, so that they have a better understanding of air quality issues and can make the best use of air quality modeling results in their neighborhoods. Based on these two selection criteria a hybrid approach is recommended. This hybrid approach combines a regional scale air quality model, used to assess the contributions from sources not located within the neighborhood of interest, with a microscale model used to assess the impact of the local sources within the neighborhood. However, one of the modeling system selection criteria, balancing computational requirements, dictates that all sources (both within and outside the neighborhood of interest) must be included in the regional scale modeling. A potential problem, referred to as double counting, arises because some local sources are included in both the regional-scale and the microscale modeling.

  4. On the growth and form of cortical convolutions

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.

    2016-06-01

    The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. All together, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.

  5. A Double Scattering Analytical Model For Elastic Recoil Detection Analysis

    SciTech Connect

    Barradas, N. P.; Lorenz, K.; Alves, E.; Darakchieva, V.

    2011-06-01

    We present an analytical model for calculation of double scattering in elastic recoil detection measurements. Only events involving the beam particle and the recoil are considered, i.e. 1) an ion scatters off a target element and then produces a recoil, and 2) an ion produces a recoil which then scatters off a target element. Events involving intermediate recoils are not considered, i.e. when the primary ion produces a recoil which then produces a second recoil. If the recoil element is also present in the stopping foil, recoil events in the stopping foil are also calculated. We included the model in the standard code for IBA data analysis NDF, and applied it to the measurement of hydrogen in Si.

  6. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    PubMed

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no exact O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk² to nk log(k), and has potential application to the all-pairs shortest paths problem.
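The idea behind the numerical method can be sketched as follows: raising both vectors to a power p turns max-convolution into an ordinary convolution followed by a p-th root, and the estimate tightens as p grows. The p value and the direct inner sum below are illustrative only; the published method obtains the O(k log(k)) bound by computing the inner convolution with an FFT.

```python
def max_convolve_exact(u, v):
    """Exact max-convolution: m[k] = max over i+j=k of u[i]*v[j] (O(k^2))."""
    m = [0.0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            m[i + j] = max(m[i + j], ui * vj)
    return m

def max_convolve_pnorm(u, v, p=64):
    """p-norm estimate: (sum_{i+j=k} (u[i]*v[j])^p)^(1/p) -> max as p -> inf.
    The inner sum is an ordinary convolution of u^p and v^p, so in practice it
    can be computed in O(k log k) with an FFT; a direct sum is used here."""
    up = [x ** p for x in u]
    vp = [x ** p for x in v]
    m = [0.0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(up):
        for j, vj in enumerate(vp):
            m[i + j] += ui * vj
    return [s ** (1.0 / p) for s in m]
```

For two vectors with entries in (0, 1], the p-norm estimate overshoots the true maximum by at most a factor k^(1/p), which for p = 64 is already within a few percent.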

  7. Cyclic Cocycles on Twisted Convolution Algebras

    NASA Astrophysics Data System (ADS)

    Angel, Eitan

    2013-01-01

    We give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. For proper étale groupoids, Tu and Xu (Adv Math 207(2):455-483, 2006) provide a map between the periodic cyclic cohomology of a gerbe-twisted convolution algebra and twisted cohomology groups which is similar to the construction of Mathai and Stevenson (Adv Math 200(2):303-335, 2006). When the groupoid is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial techniques to construct a simplicial curvature 3-form representing the class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial curvature 3-form to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.

  8. Astronomical Image Subtraction by Cross-Convolution

    NASA Astrophysics Data System (ADS)

    Yuan, Fang; Akerlof, Carl W.

    2008-04-01

    In recent years, there has been a proliferation of wide-field sky surveys to search for a variety of transient objects. Using relatively short focal lengths, the optics of these systems produce undersampled stellar images often marred by a variety of aberrations. As participants in such activities, we have developed a new algorithm for image subtraction that no longer requires high-quality reference images for comparison. The computational efficiency is comparable with similar procedures currently in use. The general technique is cross-convolution: two convolution kernels are generated to make a test image and a reference image separately transform to match as closely as possible. In analogy to the optimization technique for generating smoothing splines, the inclusion of an rms width penalty term constrains the diffusion of stellar images. In addition, by evaluating the convolution kernels on uniformly spaced subimages across the total area, these routines can accommodate point-spread functions that vary considerably across the focal plane.

  9. Medical image fusion using the convolution of Meridian distributions.

    PubMed

    Agrawal, Mayank; Tsakalides, Panagiotis; Achim, Alin

    2010-01-01

    The aim of this paper is to introduce a novel non-Gaussian statistical model-based approach for medical image fusion based on the Meridian distribution. The paper also includes a new approach to estimate the parameters of generalized Cauchy distribution. The input images are first decomposed using the Dual-Tree Complex Wavelet Transform (DT-CWT) with the subband coefficients modelled as Meridian random variables. Then, the convolution of Meridian distributions is applied as a probabilistic prior to model the fused coefficients, and the weights used to combine the source images are optimised via Maximum Likelihood (ML) estimation. The superior performance of the proposed method is demonstrated using medical images.

  10. Colonoscopic polyp detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Park, Sun Young; Sargent, Dusty

    2016-03-01

    Computer aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician's interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domainspecific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general application and require a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors when compared to the eigenimage model. The CNN features are used as input to the CRF classifier as before. 

  11. Double resonance in the infinite-range quantum Ising model.

    PubMed

    Han, Sung-Guk; Um, Jaegon; Kim, Beom Jun

    2012-08-01

    We study quantum resonance behavior of the infinite-range kinetic Ising model at zero temperature. Numerical integration of the time-dependent Schrödinger equation in the presence of an external magnetic field in the z direction is performed at various transverse field strengths g. It is revealed that two resonance peaks occur when the energy gap matches the external driving frequency at two distinct values of g, one below and the other above the quantum phase transition. From the similar observations already made in classical systems with phase transitions, we propose that the double resonance peaks should be a generic feature of continuous transitions, for both quantum and classical many-body systems.

  12. A hybrid double-observer sightability model for aerial surveys

    USGS Publications Warehouse

    Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine

    2013-01-01

    Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
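The DO side of the hybrid model rests on two independent detection probabilities per group; a minimal sketch of the correction (illustrative probabilities and a plain Horvitz-Thompson estimator, not the fitted model MH):

```python
def combined_detection(p_front, p_back):
    """Probability that at least one of two independent observer pairs detects a group."""
    return 1.0 - (1.0 - p_front) * (1.0 - p_back)

def horvitz_thompson_estimate(group_sizes, detection_probs):
    """Abundance estimate: each detected group is weighted by 1/p(detection)."""
    return sum(n / p for n, p in zip(group_sizes, detection_probs))
```

With p_front = 0.6 and p_back = 0.5, a group is detected with probability 0.8, so a raw count understates abundance, which is consistent in spirit with the 80-93% ratios reported above.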

  13. Continuous speech recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong

    2015-07-01

    Convolutional Neural Networks (CNNs), which have shown success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have proven successful in many speech recognition tasks, CNNs can reduce the NN model size significantly while achieving even better recognition accuracy. Experiments on the standard speech corpus TIMIT showed that CNNs outperformed DNNs in terms of accuracy even with a smaller model size.

  14. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.
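For concreteness, here is a minimal rate-1/2 convolutional encoder, the textbook (7, 5) octal code with constraint length 3. The code choice is illustrative rather than taken from the paper; syndrome decoding operates on the parity structure of such output streams.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder: two output bits per input bit.

    g1 and g2 are the generator polynomials (taps on the k-bit shift register).
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)  # shift the new bit in
        out.append(bin(state & g1).count("1") % 2)   # parity of taps g1
        out.append(bin(state & g2).count("1") % 2)   # parity of taps g2
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit yields the coded pair (11, 10, 00, 01 for input 1011 from the all-zero state), the standard example used to illustrate trellis and stack decoders.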

  15. Effectiveness of Convolutional Code in Multipath Underwater Acoustic Channel

    NASA Astrophysics Data System (ADS)

    Park, Jihyun; Seo, Chulwon; Park, Kyu-Chil; Yoon, Jong Rak

    2013-07-01

    Forward error correction (FEC) is achieved by increasing the redundancy of the transmitted information. Convolutional coding with Viterbi decoding is a typical FEC technique for channels corrupted by additive white Gaussian noise. But the effectiveness of convolutional codes is questionable in a multipath frequency-selective fading channel. In this paper, we examine how a convolutional code performs in an underwater multipath channel. Bit error rates (BER) with and without a rate-1/2 convolutional code are analyzed as a function of channel bandwidth, which parameterizes the frequency selectivity. It is found that the convolutional code performs well in the non-selective channel and is also effective in the selective channel.

  16. Convolutional neural network architectures for predicting DNA–protein binding

    PubMed Central

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
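The first layer of such networks effectively slides position-weight-matrix kernels along one-hot-encoded DNA; a dependency-free sketch with a toy "TATA" kernel (illustrative only, not one of the benchmarked architectures):

```python
def one_hot(seq):
    """Encode a DNA string as a list of length-4 indicator vectors (A, C, G, T)."""
    table = {"A": 0, "C": 1, "G": 2, "T": 3}
    return [[1.0 if table[ch] == r else 0.0 for r in range(4)] for ch in seq]

def motif_scores(seq, kernel):
    """Cross-correlate a (width x 4) kernel along the sequence, as a conv layer does."""
    x = one_hot(seq)
    w = len(kernel)
    return [
        sum(kernel[i][r] * x[pos + i][r] for i in range(w) for r in range(4))
        for pos in range(len(x) - w + 1)
    ]

# Toy kernel matching "TATA": weight 1 on the motif base at each position, 0 elsewhere
tata = [[0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
scores = motif_scores("GGTATACC", tata)  # peaks where the motif occurs
```

Taking the maximum of `scores` plays the role of max-pooling over positions; here it peaks at the motif occurrence at offset 2.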

  17. Applying the Post-Modern Double ABC-X Model to Family Food Insecurity

    ERIC Educational Resources Information Center

    Hutson, Samantha; Anderson, Melinda; Swafford, Melinda

    2015-01-01

    This paper develops the argument that using the Double ABC-X model in family and consumer sciences (FCS) curricula is a way to educate nutrition and dietetics students regarding a family's perceptions of food insecurity. The Double ABC-X model incorporates ecological theory as a basis to explain family stress and the resulting adjustment and…

  18. Deep Convolutional Neural Networks for large-scale speech tasks.

    PubMed

    Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana

    2015-04-01

    Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks. Specifically, we focus on how many convolutional layers are needed, what an appropriate number of hidden units is, and what the best pooling strategy is. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results on these 3 tasks.

  19. Digital Correlation By Optical Convolution/Correlation

    NASA Astrophysics Data System (ADS)

    Trimble, Joel; Casasent, David; Psaltis, Demetri; Caimi, Frank; Carlotto, Mark; Neft, Deborah

    1980-12-01

    Attention is given to various methods by which the accuracy achievable and the dynamic range requirements of an optical computer can be enhanced. A new time position coding acousto-optic technique for optical residue arithmetic processing is presented and experimental demonstration is included. Major attention is given to the implementation of a correlator operating on digital or decimal encoded signals. Using a convolution description of multiplication, we realize such a correlator by optical convolution in one dimension and optical correlation in the other dimension of an optical system. A coherent matched spatial filter system operating on digital encoded signals, a noncoherent processor operating on complex-valued digital-encoded data, and a real-time multi-channel acousto-optic system for such operations are described and experimental verifications are included.

  20. Performance of convolutionally coded unbalanced QPSK systems

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1980-01-01

    An evaluation is presented of the performance of three representative convolutionally coded unbalanced quadri-phase-shift-keying (UQPSK) systems in the presence of noisy carrier reference and crosstalk. The use of a coded UQPSK system for transmitting two telemetry data streams with different rates and different powers has been proposed for the Venus Orbiting Imaging Radar mission. Analytical expressions for bit error rates in the presence of a noisy carrier phase reference are derived for three representative cases: (1) I and Q channels are coded independently; (2) I channel is coded, Q channel is uncoded; and (3) I and Q channels are coded by a common 1/2 code. For rate 1/2 convolutional codes, QPSK modulation can be used to reduce the bandwidth requirement.
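    The rate 1/2 convolutional codes mentioned above can be sketched as a shift-register encoder that emits two coded bits per input bit. The generator polynomials below, (7, 5) in octal, are a common textbook choice used purely for illustration, not necessarily the code analyzed in the paper.

    ```python
    # Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5)
    # octal: g1 = 1 + D + D^2, g2 = 1 + D^2. Two output bits per input bit.

    def encode_rate_half(bits):
        s1 = s2 = 0  # shift-register state (the previous two input bits)
        out = []
        for b in bits:
            out.append(b ^ s1 ^ s2)  # output of g1 = 111 (octal 7)
            out.append(b ^ s2)       # output of g2 = 101 (octal 5)
            s1, s2 = b, s1           # shift the register
        return out

    print(encode_rate_half([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
    ```

    Because every input bit produces exactly two coded bits, mapping the two output streams onto the I and Q channels of a QPSK symbol keeps the coded symbol rate equal to the uncoded bit rate, which is the bandwidth argument made in the abstract.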

  1. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; ...

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  2. A convolutional neural network neutrino event classifier

    SciTech Connect

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  3. A Construction of MDS Quantum Convolutional Codes

    NASA Astrophysics Data System (ADS)

    Zhang, Guanghui; Chen, Bocong; Li, Liangchen

    2015-09-01

    In this paper, two new families of MDS quantum convolutional codes are constructed. The first one can be regarded as a generalization of [36, Theorem 6.5], in the sense that we do not assume that q ≡ 1 (mod 4). More specifically, we obtain two classes of MDS quantum convolutional codes with parameters: (i) [(q^2+1, q^2-4i+3, 1; 2, 2i+2)]_q, where q ≥ 5 is an odd prime power and 2 ≤ i ≤ (q-1)/2; (ii) , where q is an odd prime power of the form q = 10m+3 or 10m+7 (m ≥ 2), and 2 ≤ i ≤ 2m-1.

  4. A convolutional neural network neutrino event classifier

    NASA Astrophysics Data System (ADS)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  5. Multichannel Convolutional Neural Network for Biological Relation Extraction

    PubMed Central

    Quan, Chanqin; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical work was restricted to traditional machine learning techniques. However, these methods are susceptible to the issues of the "vocabulary gap" and data sparseness, and feature extraction cannot be automated. To address these issues, in this work, we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model makes the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) it obviates the need for manual feature engineering through automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall F-score of 70.2% on the DDIExtraction 2013 challenge dataset, compared to 67.0% for a standard linear SVM-based system. For the PPI task, we evaluated our system on the AIMed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in F-score. PMID:28053977

  6. [Application of numerical convolution in in vivo/in vitro correlation research].

    PubMed

    Yue, Peng

    2009-01-01

    This paper introduces the concept and principle of in vivo/in vitro correlation (IVIVC) and of convolution/deconvolution methods, elucidates in detail a convolution strategy and method for calculating, in Excel, the in vivo absorption performance of pharmaceutics from their pharmacokinetic data, and then applies the results to IVIVC research. Firstly, the pharmacokinetic data were fitted with mathematical software to interpolate missing points. Secondly, the parameters of the optimal fitted input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, not only is the application of this method demonstrated in detail, but its simplicity and effectiveness are also proved by comparison with the compartment model method and the deconvolution method. It proves to be a powerful tool for IVIVC research.
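    The convolution step described above, predicting a concentration profile from an input (absorption) rate and a unit-impulse response, reduces to a discrete convolution sum. The sketch below is a generic plain-Python version with made-up sample values; it is not the paper's Excel workflow, and the rate and impulse-response values are hypothetical.

    ```python
    # Discrete convolution: response[n] = sum_k rate[k] * uir[n - k] * dt,
    # the operation underlying convolution-based IVIVC prediction.
    # All numeric values below are illustrative, not from the paper.

    def convolve(rate, uir, dt=1.0):
        n = len(rate) + len(uir) - 1
        out = [0.0] * n
        for i, r in enumerate(rate):
            for j, u in enumerate(uir):
                out[i + j] += r * u * dt
        return out

    absorption_rate = [0.0, 1.0, 0.5]    # hypothetical in vivo input rate
    unit_impulse    = [1.0, 0.5, 0.25]   # hypothetical unit-impulse response
    print(convolve(absorption_rate, unit_impulse))
    ```

    The trial-and-error fitting described in the abstract amounts to adjusting the parameters of the input-rate function until this convolution reproduces the observed plasma profile.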

  7. Blind separation of convolutive sEMG mixtures based on independent vector analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaomei; Guo, Yina; Tian, Wenyan

    2015-12-01

    An independent vector analysis (IVA) method based on a variable-step gradient algorithm is proposed in this paper. According to the physiological properties of sEMG, the IVA model is applied to the frequency-domain separation of convolutive sEMG mixtures to extract motor unit action potential information from sEMG signals. The decomposition capability of the proposed method is compared to that of independent component analysis (ICA), and experimental results show that the variable-step gradient IVA method outperforms ICA in blind separation of convolutive sEMG mixtures.

  8. Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval.

    PubMed

    Wei, Xiu-Shen; Luo, Jian-Hao; Wu, Jianxin; Zhou, Zhi-Hua

    2017-03-27

    Deep convolutional neural network models pretrained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone retrieve without supervision. We propose the Selective Convolutional Descriptor Aggregation (SCDA) method. SCDA first localizes the main object in a fine-grained image, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and reduced in dimensionality to a short feature vector using the best practices we found. SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained datasets confirm the effectiveness of SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval datasets, SCDA achieves retrieval results comparable to state-of-the-art general image retrieval approaches.
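    The select-then-aggregate idea, keeping only the deep descriptors at positions whose activation exceeds a threshold and pooling them into one short vector, can be sketched in a few lines. The mean-activation threshold and plain average pooling below are simplifications chosen for illustration, not SCDA's exact procedure.

    ```python
    # Toy selective descriptor aggregation: a 2x2 grid of 3-D "deep"
    # descriptors. Positions whose channel-sum exceeds the grid mean are
    # kept; the kept descriptors are average-pooled into one feature vector.
    # Values and the thresholding rule are illustrative, not SCDA's.

    descriptors = {
        (0, 0): [0.1, 0.2, 0.1],   # background-like (low activation)
        (0, 1): [1.0, 0.8, 0.9],   # object-like
        (1, 0): [0.9, 1.1, 1.0],   # object-like
        (1, 1): [0.2, 0.1, 0.2],   # background-like
    }

    sums = {p: sum(d) for p, d in descriptors.items()}
    threshold = sum(sums.values()) / len(sums)     # mean aggregation-map value
    kept = [d for p, d in descriptors.items() if sums[p] > threshold]

    dim = len(next(iter(descriptors.values())))
    feature = [sum(d[c] for d in kept) / len(kept) for c in range(dim)]
    print(len(kept), feature)   # the two object-like positions survive
    ```

    Thresholding on summed activations is what lets the method drop background regions without any label or bounding-box supervision.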

  9. Classifications of Multispectral Colorectal Cancer Tissues Using Convolution Neural Network

    PubMed Central

    Haj-Hassan, Hawraa; Chaddad, Ahmad; Harkouss, Youssef; Desrosiers, Christian; Toews, Matthew; Tanougast, Camel

    2017-01-01

    Background: Colorectal cancer (CRC) is the third most common cancer among men and women. Its diagnosis in early stages, typically done through the analysis of colon biopsy images, can greatly improve the chances of a successful treatment. This paper proposes to use convolution neural networks (CNNs) to predict three tissue types related to the progression of CRC: benign hyperplasia (BH), intraepithelial neoplasia (IN), and carcinoma (Ca). Methods: Multispectral biopsy images of thirty CRC patients were retrospectively analyzed. Images of tissue samples were divided into three groups, based on their type (10 BH, 10 IN, and 10 Ca). An active contour model was used to segment image regions containing pathological tissues. Tissue samples were classified using a CNN containing convolution, max-pooling, and fully-connected layers. Available tissue samples were split into a training set, for learning the CNN parameters, and a test set, for evaluating its performance. Results: An accuracy of 99.17% was obtained from segmented image regions, outperforming existing approaches based on traditional feature extraction and classification techniques. Conclusions: Experimental results demonstrate the effectiveness of CNNs for the classification of CRC tissue types, in particular when using presegmented regions of interest.

  10. Convolutional Neural Network Based Fault Detection for Rotating Machinery

    NASA Astrophysics Data System (ADS)

    Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie

    2016-09-01

    Vibration analysis is a well-established technique for condition monitoring of rotating machines, as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually-engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis, and crest factor, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but also healthy bearings and rotor imbalance are included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.
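    The manually engineered statistics the article contrasts with learned features (RMS, kurtosis, and crest factor) are simple to compute. The sketch below uses plain Python on a made-up vibration snippet; the population-moment definitions used here are one common convention.

    ```python
    import math

    # Classic hand-crafted vibration features: RMS, kurtosis, crest factor.
    # Population (divide-by-n) moments are used; the signal is a toy example.

    def rms(x):
        return math.sqrt(sum(v * v for v in x) / len(x))

    def kurtosis(x):
        mu = sum(x) / len(x)
        var = sum((v - mu) ** 2 for v in x) / len(x)
        return sum((v - mu) ** 4 for v in x) / len(x) / var ** 2

    def crest_factor(x):
        return max(abs(v) for v in x) / rms(x)

    signal = [1.0, -1.0, 1.0, -1.0]   # toy square-wave-like snippet
    print(rms(signal), kurtosis(signal), crest_factor(signal))
    ```

    A bearing fault typically shows up as impulsive spikes, which raise kurtosis and crest factor well above the values of a smooth signal like this one; that sensitivity is exactly why these features are traditionally hand-picked.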

  11. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  12. A fast complex integer convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; K Truong, T.

    1978-01-01

    It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
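    The cyclic (circular) convolution that such transform methods accelerate is, by definition, c[m] = Σ_k a[k]·b[(m−k) mod N]. A direct O(N²) reference implementation, useful as a correctness check for any fast algorithm, is:

    ```python
    # Direct cyclic convolution of two length-N sequences (O(N^2) reference).
    # A fast method (FFT-based, number-theoretic, or the hybrid Winograd /
    # Galois-field approach described above) must reproduce this result.

    def cyclic_convolve(a, b):
        n = len(a)
        assert len(b) == n, "sequences must have equal length"
        return [sum(a[k] * b[(m - k) % n] for k in range(n)) for m in range(n)]

    print(cyclic_convolve([1, 2, 3], [4, 5, 6]))  # [31, 31, 28]
    ```

    Transform methods replace the double loop with pointwise multiplication in the transform domain; working over a Galois field, as in the paper, keeps all arithmetic exact for integer data.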

  13. Learning to Generate Chairs, Tables and Cars with Convolutional Networks.

    PubMed

    Dosovitskiy, Alexey; Springenberg, Jost; Tatarchenko, Maxim; Brox, Thomas

    2016-05-12

    We train generative 'up-convolutional' neural networks which are able to generate images of objects given object style, viewpoint, and color. We train the networks on rendered 3D models of chairs, tables, and cars. Our experiments show that the networks do not merely learn all images by heart, but rather find a meaningful representation of 3D models allowing them to assess the similarity of different models, interpolate between given views to generate the missing ones, extrapolate views, and invent new objects not present in the training set by recombining training instances, or even two different object classes. Moreover, we show that such generative networks can be used to find correspondences between different objects from the dataset, outperforming existing approaches on this task.

  14. Human Parsing with Contextualized Convolutional Neural Network.

    PubMed

    Liang, Xiaodan; Xu, Chunyan; Shen, Xiaohui; Yang, Jianchao; Tang, Jinhui; Lin, Liang; Yan, Shuicheng

    2016-03-02

    In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which integrates the cross-layer context, global image-level context, semantic edge context, within-super-pixel context, and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixel-wise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic information and the local fine details across different convolutional layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context. Third, semantic edge context is further incorporated into Co-CNN, where the high-level semantic boundaries are leveraged to guide pixel-wise labeling. Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighbourhood voting are formulated as natural sub-components of the Co-CNN to achieve local label consistency in both training and testing. Comprehensive evaluations on two public datasets demonstrate the significant superiority of our Co-CNN over other state-of-the-art methods for human parsing. In particular, the F-1 score on the large dataset [1] reaches 81.72% by Co-CNN, significantly higher than 62.81% and 64.38% by the state-of-the-art algorithms MCNN [2] and ATR [1], respectively. By utilizing our newly collected large dataset for training, our Co-CNN can achieve an F-1 score of 85.36%.

  15. Applications of convolution voltammetry in electroanalytical chemistry.

    PubMed

    Bentley, Cameron L; Bond, Alan M; Hollenkamp, Anthony F; Mahon, Peter J; Zhang, Jie

    2014-02-18

    The robustness of convolution voltammetry for determining accurate values of the diffusivity (D), bulk concentration (C(b)), and stoichiometric number of electrons (n) has been demonstrated by applying the technique to a series of electrode reactions in molecular solvents and room temperature ionic liquids (RTILs). In acetonitrile, the relatively minor contribution of nonfaradaic current facilitates analysis with macrodisk electrodes, thus moderate scan rates can be used without the need to perform background subtraction to quantify the diffusivity of iodide [D = 1.75 (±0.02) × 10(-5) cm(2) s(-1)] in this solvent. In the RTIL 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide, background subtraction is necessary at a macrodisk electrode but can be avoided at a microdisk electrode, thereby simplifying the analytical procedure and allowing the diffusivity of iodide [D = 2.70 (±0.03) × 10(-7) cm(2) s(-1)] to be quantified. Use of a convolutive procedure which simultaneously allows D and nC(b) values to be determined is also demonstrated. Three conditions under which a technique of this kind may be applied are explored and are related to electroactive species which display slow dissolution kinetics, undergo a single multielectron transfer step, or contain multiple noninteracting redox centers using ferrocene in an RTIL, 1,4-dinitro-2,3,5,6-tetramethylbenzene, and an alkynylruthenium trimer, respectively, as examples. The results highlight the advantages of convolution voltammetry over steady-state techniques such as rotating disk electrode voltammetry and microdisk electrode voltammetry, as it is not restricted by the mode of diffusion (planar or radial), hence removing limitations on solvent viscosity, electrode geometry, and voltammetric scan rate.
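    The convolution ("semi-integration") transform at the heart of convolution voltammetry, M(t) = π^(-1/2) ∫₀ᵗ I(u)(t−u)^(-1/2) du, can be approximated numerically with Oldham's G1 algorithm. The sketch below is a generic discretisation written for illustration, not the authors' software. For a constant current I the semi-integral tends to 2I·√(t/π), which serves as a sanity check.

    ```python
    import math

    # G1 semi-integration: M(t_N) ~ sqrt(dt) * sum_j w_{N-1-j} * I_j, with
    # weights w_k = Gamma(k + 1/2) / (sqrt(pi) * k!), built by recursion.
    # Newest current sample gets w_0 = 1, matching the (t-u)^(-1/2) kernel.

    def semi_integrate(current, dt):
        n = len(current)
        w = [1.0] * n
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 0.5) / k      # Gamma-ratio recursion
        return math.sqrt(dt) * sum(w[n - 1 - j] * current[j] for j in range(n))

    # Sanity check: unit constant current over t = 1 s.
    n, t = 2000, 1.0
    m = semi_integrate([1.0] * n, t / n)
    print(m, 2 * math.sqrt(t / math.pi))  # both close to 1.128
    ```

    For a cyclic voltammogram of a diffusion-controlled couple, this transform turns the peaked current into a sigmoidal curve whose plateau gives n·C(b)·D^(1/2), which is the basis of the analysis described above.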

  16. Bacterial colony counting by Convolutional Neural Networks.

    PubMed

    Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto

    2015-01-01

    Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, but nevertheless fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging, due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNNs) for counting the number of colonies contained in confluent agglomerates, which scored an overall accuracy of 92.8% on a large challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, making it a promising approach for many related applications.

  17. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summary: Program title: QCDNUM, version 17.00. Catalogue identifier: AEHV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Public Licence. No. of lines in distributed program, including test data, etc.: 45 736. No. of bytes in distributed program, including test data, etc.: 911 569. Distribution format: tar.gz. Programming language: Fortran-77. Computer: all. Operating system: all. RAM: typically 3 MB. Classification: 11.5. Nature of problem: evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD; computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline

  18. Semi-analytical model for quasi-double-layer surface electrode ion traps

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Chen, Shuming; Wang, Yaohua

    2016-11-01

    To realize large-scale quantum processors, the surface-electrode ion trap is an effective scaling approach, with variants including single-layer, double-layer, and quasi-double-layer traps. To calculate critical trap parameters such as the trap center and trap depth, finite element method (FEM) simulation has been widely used; however, it is time-consuming. Moreover, FEM simulation is incapable of exhibiting the direct relationship between the geometry dimensions and these parameters. To eliminate these problems, House and Madsen et al. have respectively provided analytic models for single-layer traps and double-layer traps. In this paper, we propose a semi-analytical model for quasi-double-layer traps. This model can be applied to calculate the important parameters above during the trap design process. With this model, we can quickly and precisely find the optimum geometry for the trap electrodes in various cases.

  19. A 3D Model of Double-Helical DNA Showing Variable Chemical Details

    ERIC Educational Resources Information Center

    Cady, Susan G.

    2005-01-01

    Since the first DNA model was created approximately 50 years ago using molecular models, students and teachers have been building simplified DNA models from various practical materials. A 3D double-helical DNA model, made by placing beads on a wire and stringing beads through holes in plastic canvas, is described. Suggestions are given to enhance…

  20. Convolutional fountain distribution over fading wireless channels

    NASA Astrophysics Data System (ADS)

    Usman, Mohammed

    2012-08-01

    Mobile broadband has opened the possibility of a rich variety of services to end users. Broadcast/multicast of multimedia data is one such service which can be used to deliver multimedia to multiple users economically. However, the radio channel poses serious challenges due to its time-varying properties, resulting in each user experiencing different channel characteristics, independent of other users. Conventional methods of achieving reliability in communication, such as automatic repeat request and forward error correction do not scale well in a broadcast/multicast scenario over radio channels. Fountain codes, being rateless and information additive, overcome these problems. Although the design of fountain codes makes it possible to generate an infinite sequence of encoded symbols, the erroneous nature of radio channels mandates the need for protecting the fountain-encoded symbols, so that the transmission is feasible. In this article, the performance of fountain codes in combination with convolutional codes, when used over radio channels, is presented. An investigation of various parameters, such as goodput, delay and buffer size requirements, pertaining to the performance of fountain codes in a multimedia broadcast/multicast environment is presented. Finally, a strategy for the use of 'convolutional fountain' over radio channels is also presented.

  1. NUCLEI SEGMENTATION VIA SPARSITY CONSTRAINED CONVOLUTIONAL REGRESSION

    PubMed Central

    Zhou, Yin; Chang, Hang; Barner, Kenneth E.; Parvin, Bahram

    2017-01-01

    Automated profiling of nuclear architecture, in histology sections, can potentially help predict clinical outcomes. However, the task is challenging as a result of nuclear pleomorphism and cellular states (e.g., cell fate, cell cycle), which are compounded by the batch effect (e.g., variations in fixation and staining). Present methods for nuclear segmentation are based on human-designed features that may not effectively capture intrinsic nuclear architecture. In this paper, we propose a novel approach, called sparsity constrained convolutional regression (SCCR), for nuclei segmentation. Specifically, given raw image patches and the corresponding annotated binary masks, our algorithm jointly learns a bank of convolutional filters and a sparse linear regressor, where the former is used for feature extraction, and the latter aims to produce a likelihood of each pixel belonging to a nuclear region or the background. During classification, the pixel label is simply determined by a thresholding operation applied to the likelihood map. The method has been evaluated using the benchmark dataset collected from The Cancer Genome Atlas (TCGA). Experimental results demonstrate that our method outperforms traditional nuclei segmentation algorithms and is able to achieve competitive performance compared to the state-of-the-art algorithm built upon human-designed features with biological prior knowledge. PMID:28101301
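    The pipeline described above (convolutional filters for feature extraction, a linear regressor producing a per-pixel likelihood, then a threshold) can be sketched on a toy one-dimensional "image". The filters, regressor weights, and threshold below are hand-picked for illustration; in SCCR they are jointly learned, and sparsity-constrained, which this sketch omits.

    ```python
    # Toy convolutional-regression pipeline: filter bank -> linear regressor
    # -> threshold. All weights are invented for illustration, not learned.

    def conv1d(x, w):
        n = len(x) - len(w) + 1
        return [sum(x[i + j] * w[j] for j in range(len(w))) for i in range(n)]

    filters = [[1.0, 1.0, 1.0], [-1.0, 2.0, -1.0]]  # blur-like and edge-like
    regressor = [0.2, 0.5]                          # one weight per filter
    threshold = 0.5

    image = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]     # bright "nucleus" blob
    responses = [conv1d(image, f) for f in filters]
    likelihood = [sum(w * r[i] for w, r in zip(regressor, responses))
                  for i in range(len(responses[0]))]
    mask = [1 if v > threshold else 0 for v in likelihood]
    print(mask)   # the blob interior is labeled foreground
    ```

    Learning makes the difference in practice: SCCR fits the filter bank and regressor jointly from annotated masks, so the likelihood map separates nuclei from background far better than any hand-picked weights could.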

  2. Convolution Inequalities for the Boltzmann Collision Operator

    NASA Astrophysics Data System (ADS)

    Alonso, Ricardo J.; Carneiro, Emanuel; Gamba, Irene M.

    2010-09-01

    We study integrability properties of a general version of the Boltzmann collision operator for hard and soft potentials in n dimensions. A reformulation of the collisional integrals allows us to write the weak form of the collision operator as a weighted convolution, where the weight is given by an operator invariant under rotations. Using a symmetrization technique in L^p we prove a Young's inequality for hard potentials, which is sharp for Maxwell molecules in the L^2 case. Further, we find a new Hardy-Littlewood-Sobolev type of inequality for Boltzmann collision integrals with soft potentials. The same method extends to radially symmetric, non-increasing potentials that lie in some weak-L^s space or in L^s. The method we use resembles a Brascamp, Lieb and Luttinger approach for multilinear weighted convolution inequalities and follows a weak formulation setting. Consequently, it is closely connected to the classical analysis of Young and Hardy-Littlewood-Sobolev inequalities. In all cases, the inequality constants are explicitly given by formulas depending on integrability conditions of the angular cross section (in the spirit of the Grad cut-off). As an additional application of the technique we also obtain estimates with exponential weights for hard potentials in both conservative and dissipative interactions.
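    For reference, the classical Young convolution inequality that the paper sharpens for collision operators reads, in its standard form:

    ```latex
    % Young's convolution inequality on R^n:
    \|f * g\|_{L^r} \;\le\; \|f\|_{L^p}\,\|g\|_{L^q},
    \qquad 1 + \frac{1}{r} = \frac{1}{p} + \frac{1}{q},
    \qquad p, q, r \ge 1.
    ```

    The weighted-convolution reformulation in the abstract places the weak form of the collision operator in exactly this framework, with the rotation-invariant weight absorbed into the constants.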

  3. Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Hunter, Craig A.

    1999-01-01

    An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle, for drag reduction. Boattail angles of 15° and 22° were tested, with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios (NPRs) and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring depends on convolution location, Mach number, boattail angle, and NPR. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced by 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased by 10 percent at M = 0.95 and NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was only minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.

  4. New quantum MDS-convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Li, Fengwei; Yue, Qin

    2015-12-01

    In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.

  5. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
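
    A minimal numpy sketch of the idea, under simplifying assumptions (flat scene power spectrum, circular convolution): build the unconstrained Wiener filter in the frequency domain, then restrict it to a small spatial support. The names `embed`, `conv2`, and `small_wiener_kernel` are illustrative, not the paper's code, and the central crop is only an approximation to the true constrained-MSE optimum.

    ```python
    import numpy as np

    def embed(kernel, shape):
        """Place a small centred kernel into a full-size array with its
        centre at the (0, 0) origin, as circular convolution expects."""
        out = np.zeros(shape)
        r, c = kernel.shape
        out[:r, :c] = kernel
        return np.roll(out, (-(r // 2), -(c // 2)), axis=(0, 1))

    def conv2(img, kernel):
        # Circular convolution via the FFT.
        return np.real(np.fft.ifft2(
            np.fft.fft2(img) * np.fft.fft2(embed(kernel, img.shape))))

    def small_wiener_kernel(psf, nsr, size, shape=(64, 64)):
        """Small-support crop of the Wiener restoration filter; with a flat
        scene spectrum this approximates the MSE-optimal small kernel."""
        H = np.fft.fft2(embed(psf, shape))
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # unconstrained Wiener filter
        w = np.fft.fftshift(np.real(np.fft.ifft2(W)))
        cy, cx = shape[0] // 2, shape[1] // 2
        r = size // 2
        k = w[cy - r:cy + r + 1, cx - r:cx + r + 1]
        return k / k.sum()                        # preserve mean intensity
    ```

    Restoring a Gaussian-blurred step edge with a 7×7 crop of this kind typically recovers much of the fidelity of the full, unconstrained filter.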

  6. Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks.

    PubMed

    Annunziata, Roberto; Trucco, Emanuele

    2016-11-01

    Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the amount of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation, when used as input to a random forest classifier.

  7. Modeling and simulation of a double auction artificial financial market

    NASA Astrophysics Data System (ADS)

    Raberto, Marco; Cincotti, Silvano

    2005-09-01

    We present a double-auction artificial financial market populated by heterogeneous agents who trade one risky asset in exchange for cash. Agents issue random orders subject to budget constraints. The limit prices of orders may depend on past market volatility. Limit orders are stored in the book, whereas market orders give immediate rise to transactions. We show that fat tails and volatility clustering are recovered by means of very simple assumptions. We also investigate two important stylized facts of the limit order book, i.e., the distribution of waiting times between two consecutive transactions and the instantaneous price impact function. We show both theoretically and through simulations that if the order waiting times are exponentially distributed, then trading waiting times are also exponentially distributed.
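
    The closing claim is essentially the thinning property of Poisson processes, which a few lines of numpy can illustrate. This is a toy sketch, not the paper's order-book model: orders arrive with exponential waiting times of rate `lam`, and each order is assumed to trigger a transaction with probability `p`.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    lam, p, n = 2.0, 0.3, 200_000    # order rate, trade probability, #orders

    # Orders arrive as a Poisson process: exponential inter-arrival times.
    arrivals = np.cumsum(rng.exponential(1.0 / lam, size=n))
    # Random thinning: each order becomes a transaction with probability p.
    trades = arrivals[rng.random(n) < p]
    waits = np.diff(trades)

    # A thinned Poisson process is again Poisson with rate p*lam, so trade
    # waiting times are exponential with mean 1/(p*lam) ≈ 1.667 here.
    print(waits.mean())
    ```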

  8. On improvements of Double Beta Decay using FQTDA Model

    NASA Astrophysics Data System (ADS)

    de Oliveira, L.; Samana, A. R.; Krmpotic, F.; Mariano, A. E.; Barbero, C. A.

    2015-07-01

    The Quasiparticle Tamm-Dancoff Approximation (QTDA) is applied to describe the nuclear double beta decay with two neutrinos. Several serious inconveniences found in the Quasiparticle Random Phase Approximation (QRPA) are not present in the QTDA, such as the ambiguity in treating the intermediate states, the further approximations necessary for evaluating the nuclear matrix elements (NMEs), and the extreme sensitivity of the NMEs to the ratio between the pn and pp + nn pairings. Some years ago, the decay 48Ca → 48Ti was discussed within the particle-hole limit of the QTDA. We found some mismatches in the numerical calculations when the full QTDA was implemented, and a new calculation in the particle-hole limit of the QTDA is required to guarantee the fidelity of the approximation.

  9. Coupled cluster Green function: Model involving single and double excitations

    SciTech Connect

    Bhaskaran-Nair, Kiran; Kowalski, Karol; Shelton, William A.

    2016-04-14

    In this paper we report on the parallel implementation of the coupled-cluster (CC) Green function formulation (GF-CC) employing single and double excitations in the cluster operator (GF-CCSD). A detailed description of the underlying algorithm is provided, including the structure of the ionization-potential- and electron-affinity-type intermediate tensors which make it possible to formulate the GF-CC approach in a computationally feasible form. Several examples, including calculations of ionization potentials and electron affinities for benchmark systems juxtaposed against experimental values, illustrate the accuracies attainable in GF-CCSD simulations. We also discuss the structure of the CCSD self-energies and approximations that are geared to reduce the computational cost while maintaining the pole structure of the full GF-CCSD approach.

  10. Neutrinoless double beta decay in the left-right symmetric models for linear seesaw

    NASA Astrophysics Data System (ADS)

    Gu, Pei-Hong

    2016-09-01

    In a class of left-right symmetric models for linear seesaw, a neutrinoless double beta decay induced by the left- and right-handed charged currents together will only depend on the breaking details of the left-right and electroweak symmetries. This neutrinoless double beta decay can reach the experimental sensitivities if the right-handed charged gauge boson lies below the 100 TeV scale.

  11. Semileptonic decays of double heavy baryons in a relativistic constituent three-quark model

    SciTech Connect

    Faessler, Amand; Gutsche, Thomas; Lyubovitskij, Valery E.; Ivanov, Mikhail A.; Koerner, Juergen G.

    2009-08-01

    We study the semileptonic decays of double-heavy baryons using a manifestly Lorentz covariant constituent three-quark model. We present complete results on transition form factors between double-heavy baryons for finite values of the heavy quark/baryon masses and in the heavy quark symmetry limit, which is valid at and close to zero recoil. Decay rates are calculated and compared to each other in the full theory, keeping masses finite, and also in the heavy quark limit.

  12. Inverse scattering method and soliton double solution family for the general symplectic gravity model

    SciTech Connect

    Gao Yajun

    2008-08-15

    A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward and effective to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.

  13. Convolutional neural network for pottery retrieval

    NASA Astrophysics Data System (ADS)

    Benhabiles, Halim; Tabia, Hedi

    2017-01-01

    The effectiveness of the convolutional neural network (CNN) has already been demonstrated in many challenging tasks of computer vision, such as image retrieval, action recognition, and object classification. This paper specifically exploits CNN to design local descriptors for content-based retrieval of complete or nearly complete three-dimensional (3-D) vessel replicas. Based on vector quantization, the designed descriptors are clustered to form a shape vocabulary. Then, each 3-D object is associated with a set of clusters (words) in that vocabulary. Finally, a weighted vector counting the occurrences of every word is computed. The reported experimental results on the 3-D pottery benchmark show the superior performance of the proposed method.
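
    The vector-quantization step can be sketched in a few lines of numpy: cluster a pool of local descriptors into a vocabulary with plain Lloyd's k-means, then represent each object by its normalised word-occurrence histogram. This is a generic bag-of-words sketch under the assumption of Euclidean descriptors; `kmeans` and `bow_vector` are illustrative names, not the paper's code.

    ```python
    import numpy as np

    def kmeans(X, k, iters=25, seed=0):
        """Plain Lloyd's algorithm; returns the k cluster centres (the vocabulary)."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        for _ in range(iters):
            # Assign each descriptor to its nearest centre (visual word).
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):      # keep old centre if a cluster empties
                    centers[j] = X[labels == j].mean(axis=0)
        return centers

    def bow_vector(descriptors, vocab):
        """Quantize descriptors to their nearest word and count occurrences."""
        d = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
        hist = np.bincount(d.argmin(axis=1), minlength=len(vocab)).astype(float)
        return hist / hist.sum()             # L1-normalised word frequencies
    ```

    Objects are then compared by a distance between their histograms (e.g. chi-squared or cosine), which is what makes the retrieval step cheap.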

  14. Robust smile detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Celona, Luigi; Schettini, Raimondo

    2016-11-01

    We present a fully automated approach for smile detection. Faces are detected using a multiview face detector and aligned and scaled using automatically detected eye locations. Then, we use a convolutional neural network (CNN) to determine whether it is a smiling face or not. To this end, we investigate different shallow CNN architectures that can be trained even when the amount of learning data is limited. We evaluate our complete processing pipeline on the largest publicly available image database for smile detection in an uncontrolled scenario. We investigate the robustness of the method to different kinds of geometric transformations (rotation, translation, and scaling) due to imprecise face localization, and to several kinds of distortions (compression, noise, and blur). To the best of our knowledge, this is the first time that this type of investigation has been performed for smile detection. Experimental results show that our proposal outperforms state-of-the-art methods on both high- and low-quality images.

  15. Modelling the nonlinear behaviour of double walled carbon nanotube based resonator with curvature factors

    NASA Astrophysics Data System (ADS)

    Patel, Ajay M.; Joshi, Anand Y.

    2016-10-01

    This paper deals with the nonlinear vibration analysis of a double-walled carbon nanotube based mass sensor with curvature factor or waviness, which is doubly clamped at a source and a drain. Nonlinear vibrational behaviour of a double-walled carbon nanotube excited harmonically near its primary resonance is considered. The double-walled carbon nanotube is excited by the addition of a harmonic excitation force. The modelling accounts for stretching of the mid-plane and for damping. The equation of motion involves four nonlinear terms for the inner and outer tubes of the DWCNT due to the curved geometry and the stretching of the central plane caused by the boundary conditions. The vibrational behaviour of the double-walled carbon nanotube with different surface deviations along its axis is analyzed in terms of time responses, Poincaré maps, and fast Fourier transform diagrams. Instability and chaos appear in the dynamic response as the curvature factor of the double-walled carbon nanotube is changed, with period doubling and intermittency observed as the pathways to chaos. The regions of periodic, sub-harmonic, and chaotic behaviour are clearly seen to depend on the added mass and the curvature factors of the double-walled carbon nanotube. Poincaré maps and frequency spectra are used to illustrate the variety of the system's behaviour. As the curvature factor increases, the system excitation increases, resulting in a larger vibration amplitude at a reduced excitation frequency.

  16. Dynamic modelling of a double-pendulum gantry crane system incorporating payload

    SciTech Connect

    Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.

    2011-06-20

    The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.

  17. Dynamic Modelling of a Double-Pendulum Gantry Crane System Incorporating Payload

    NASA Astrophysics Data System (ADS)

    Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.

    2011-06-01

    The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.

  18. Neutrinoless Double Beta Nuclear Matrix Elements Around Mass 80 in the Nuclear Shell Model

    NASA Astrophysics Data System (ADS)

    Yoshinaga, Naotaka; Higashiyama, Koji; Taguchi, Daisuke; Teruya, Eri

    The observation of the neutrinoless double-beta decay can determine whether the neutrino is a Majorana particle or not. On the theoretical nuclear side it is particularly important to estimate three types of nuclear matrix elements, namely, Fermi (F), Gamow-Teller (GT), and tensor (T) matrix elements. Shell-model calculations and also pair-truncated shell-model calculations are carried out to check the model dependence of the nuclear matrix elements. In this work the neutrinoless double-beta decay for mass A = 82 nuclei is studied. It is found that the matrix elements are quite sensitive to the ground-state wavefunctions.

  19. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large- and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  20. Molecular modeling of layered double hydroxide intercalated with benzoate, modeling and experiment.

    PubMed

    Kovár, Petr; Pospísil, M; Nocchetti, M; Capková, P; Melánová, Klára

    2007-08-01

    The structure of a Zn4Al2 layered double hydroxide intercalated with benzenecarboxylate (C6H5COO-) was solved using molecular modeling combined with experiment (X-ray powder diffraction, IR spectroscopy, TG measurements). Molecular modeling revealed the arrangement of guest molecules, the layer stacking, and the water content and water location in the interlayer space of the host structure. Molecular modeling using an empirical force field was carried out in the Cerius(2) modeling environment. Results of the modeling were confronted with experiment, that is, the calculated diffraction pattern was compared with the measured one and the calculated water content with the thermogravimetric value. Good agreement has been achieved between the calculated and measured basal spacing: d(calc) = 15.3 Å and d(exp) = 15.5 Å. The number of water molecules per formula unit (6 H2O per Zn4Al2(OH)12) obtained by modeling (i.e., corresponding to the energy minimum) agrees with the water content estimated by thermogravimetry. The long axes of the guest molecules are almost perpendicular to the LDH layers, anchored to the host layers via the COO- groups. The benzoate ring planes in the interlayer space maintain a parquet-like mutual arrangement. Water molecules are roughly arranged in planes adjacent to the host layers, together with the COO- groups.

  1. A test of the double-shearing model of flow for granular materials

    USGS Publications Warehouse

    Savage, J.C.; Lockner, D.A.

    1997-01-01

    The double-shearing model of flow attributes plastic deformation in granular materials to cooperative slip on conjugate Coulomb shears (surfaces upon which the Coulomb yield condition is satisfied). The strict formulation of the double-shearing model then requires that the slip lines in the material coincide with the Coulomb shears. Three different experiments that approximate simple shear deformation in granular media appear to be inconsistent with this strict formulation. For example, the orientation of the principal stress axes in a layer of sand driven in steady, simple shear was measured subject to the assumption that the Coulomb failure criterion was satisfied on some surfaces (orientation unspecified) within the sand layer. The orientation of the inferred principal compressive axis was then compared with the orientations predicted by the double-shearing model. The strict formulation of the model [Spencer, 1982] predicts that the principal stress axes should rotate in a sense opposite to that inferred from the experiments. A less restrictive formulation of the double-shearing model by de Josselin de Jong [1971] does not completely specify the solution but does prescribe limits on the possible orientations of the principal stress axes. The orientations of the principal compression axis inferred from the experiments are probably within those limits. An elastoplastic formulation of the double-shearing model [de Josselin de Jong, 1988] is reasonably consistent with the experiments, although quantitative agreement was not attained. Thus we conclude that the double-shearing model may be a viable law to describe deformation of granular materials, but the macroscopic slip surfaces will not in general coincide with the Coulomb shears.
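
    For context, the Coulomb yield condition referred to above and the classical orientation of the conjugate shears relative to the principal compression axis take the standard soil-mechanics forms:

    ```latex
    % Coulomb yield condition on a surface with normal stress \sigma_n
    % and shear stress \tau (c: cohesion, \varphi: internal friction angle):
    |\tau| = c + \sigma_n \tan\varphi

    % The two conjugate Coulomb shears are inclined to the axis of
    % principal compression \sigma_1 at angles
    \theta = \pm\left(\frac{\pi}{4} - \frac{\varphi}{2}\right).
    ```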

  2. Double and single pion photoproduction within a dynamical coupled-channels model

    SciTech Connect

    Hiroyuki Kamano; Julia-Diaz, Bruno; Lee, T. -S. H.; Matsuyama, Akihiko; Sato, Toru

    2009-12-16

    Within a dynamical coupled-channels model which has already been fixed from analyzing the data of the πN → πN and γN → πN reactions, we present the predicted double pion photoproduction cross sections up to the second resonance region, W < 1.7 GeV. The roles played by the different mechanisms within our model in determining both the single and double pion photoproduction reactions are analyzed, focusing on the effects due to the direct γN → ππN mechanism, the interplay between the resonant and non-resonant amplitudes, and the coupled-channels effects. As a result, the model parameters which can be determined most effectively in the combined studies of both the single and double pion photoproduction data are identified for future studies.

  3. Double and single pion photoproduction within a dynamical coupled-channels model

    DOE PAGES

    Hiroyuki Kamano; Julia-Diaz, Bruno; Lee, T. -S. H.; ...

    2009-12-16

    Within a dynamical coupled-channels model which has already been fixed from analyzing the data of the πN → πN and γN → πN reactions, we present the predicted double pion photoproduction cross sections up to the second resonance region, W < 1.7 GeV. The roles played by the different mechanisms within our model in determining both the single and double pion photoproduction reactions are analyzed, focusing on the effects due to the direct γN → ππN mechanism, the interplay between the resonant and non-resonant amplitudes, and the coupled-channels effects. As a result, the model parameters which can be determined most effectively in the combined studies of both the single and double pion photoproduction data are identified for future studies.

  4. Finite Element Modeling and Exploration of Double Hearing Protection Systems

    DTIC Science & Technology

    2006-02-10

    broad frequency range were determined from this method. The elastomeric rubber material was cut into small wafers of 2 to 5 mm thickness. A mass was... material (being 0.1 for soft elastomeric foams), G and E are the shear and elastic moduli of the material, respectively, D is the diameter of the... and to investigate the behavior of the modeled system. The foam earplug material properties for the finite element model are required in the same shear

  5. A SPICE model of double-sided Si microstrip detectors

    SciTech Connect

    Candelori, A.; Paccagnella, A.; Bonin, F.

    1996-12-31

    We have developed a SPICE model for the ohmic side of AC-coupled Si microstrip detectors with interstrip isolation via field plates. The interstrip isolation has been measured in various conditions by varying the field plate voltage. Simulations have been compared with experimental data in order to determine the values of the model parameters for different voltages applied to the field plates. The model is able to predict correctly the frequency dependence of the coupling between adjacent strips. Furthermore, we have used such model for the study of the signal propagation along the detector when a current signal is injected in a strip. Only electrical coupling is considered here, without any contribution due to charge sharing derived from carrier diffusion. For this purpose, the AC pads of the strips have been connected to a read-out electronics and the current signal has been injected into a DC pad. Good agreement between measurements and simulations has been reached for the central strip and the first neighbors. Experimental tests and computer simulations have been performed for four different strip and field plate layouts, in order to investigate how the detector geometry affects the parameters of the SPICE model and the signal propagation.

  6. Programmable convolution via the chirp Z-transform with CCD's

    NASA Technical Reports Server (NTRS)

    Buss, D. D.

    1977-01-01

    Filtering by convolution in the frequency domain rather than in the time domain presents a possible solution to the problem of programmable transversal filters. The process is accomplished through utilization of the chirp z-transform (CZT) with charge-coupled devices.
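
    The key property exploited here is Bluestein's identity: a DFT can be rewritten as a convolution with a chirp, which is exactly the operation a CCD transversal filter performs in hardware. A numpy sketch of the identity itself (purely illustrative, unrelated to the CCD implementation):

    ```python
    import numpy as np

    def czt_dft(x):
        """DFT of x evaluated as a convolution (Bluestein / chirp z-transform).

        Uses nk = (n^2 + k^2 - (k - n)^2) / 2 to turn the DFT sum into a
        convolution of the chirp-premultiplied signal with a chirp.
        """
        n = len(x)
        k = np.arange(n)
        chirp = np.exp(-1j * np.pi * k**2 / n)   # W^(k^2/2), W = e^(-2πi/n)
        a = x * chirp
        # The convolution needs lags -(n-1)..(n-1); pad to a power of two.
        m = 1
        while m < 2 * n - 1:
            m *= 2
        b = np.zeros(m, dtype=complex)
        b[:n] = np.conj(chirp)
        b[m - n + 1:] = np.conj(chirp[1:][::-1])  # negative lags wrap around
        conv = np.fft.ifft(np.fft.fft(np.append(a, np.zeros(m - n))) * np.fft.fft(b))
        return conv[:n] * chirp
    ```

    Because the chirp length is decoupled from the transform length, the same trick also evaluates DFTs of prime length and zoomed spectra, which is what made the CZT attractive for programmable analog filters.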

  7. Spiral to ferromagnetic transition in a Kondo lattice model with a double-well potential

    NASA Astrophysics Data System (ADS)

    Caro, R. C.; Franco, R.; Silva-Valencia, J.

    2016-02-01

    Using the density matrix renormalization group method, we study a system of 171Yb atoms confined in a one-dimensional optical lattice. The atoms in the 1S0 state move in a double-well potential, whereas the atoms in the 3P0 state are localized. This system is modelled by the Kondo lattice model plus a double-well potential for the free carriers. We obtain phase diagrams composed of ferromagnetic and spiral phases, where the critical points always increase with the interwell tunneling parameter. We conclude that this quantum phase transition can be tuned by the double-well potential parameters as well as by the common parameters: the local coupling and the density.

  8. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738
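
    Of the three metaheuristics named, simulated annealing is the simplest to sketch. The toy below minimises a one-dimensional objective that stands in for a CNN validation-loss surface; the function and parameter names are illustrative, not the paper's implementation.

    ```python
    import math
    import random

    def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.95,
                            iters=500, seed=0):
        """Minimise f starting from x0 with geometric cooling."""
        rng = random.Random(seed)
        x, fx = x0, f(x0)
        best, fbest = x, fx
        t = t0
        for _ in range(iters):
            cand = x + rng.uniform(-step, step)   # random neighbour
            fc = f(cand)
            # Always accept improvements; accept worse moves with
            # Boltzmann probability exp(-Δ/t), which shrinks as t cools.
            if fc < fx or rng.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling                          # geometric cooling schedule
        return best, fbest
    ```

    In the hyperparameter setting, `f` would wrap a short CNN training run and return the validation error, and `x` would be a vector of hyperparameters rather than a scalar.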

  9. Accelerated unsteady flow line integral convolution.

    PubMed

    Liu, Zhanping; Moorhead, Robert J

    2005-01-01

    Unsteady flow line integral convolution (UFLIC) is a texture synthesis technique for visualizing unsteady flows with high temporal-spatial coherence. Unfortunately, UFLIC requires considerable time to generate each frame due to the huge amount of pathline integration that is computed for particle value scattering. This paper presents Accelerated UFLIC (AUFLIC) for near interactive (1 frame/second) visualization with 160,000 particles per frame. AUFLIC reuses pathlines in the value scattering process to reduce computationally expensive pathline integration. A flow-driven seeding strategy is employed to distribute seeds such that only a few of them need pathline integration while most seeds are placed along the pathlines advected at earlier times by other seeds upstream and, therefore, the known pathlines can be reused for fast value scattering. To maintain a dense scattering coverage to convey high temporal-spatial coherence while keeping the expense of pathline integration low, a dynamic seeding controller is designed to decide whether to advect, copy, or reuse a pathline. At a negligible memory cost, AUFLIC is 9 times faster than UFLIC with comparable image quality.

  10. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. A software package to compute these kernels is available at https://github.com/aboucaud/pypher
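
    The Wiener-filtering idea can be sketched in a few lines: divide the target PSF's spectrum by the source PSF's spectrum, with a regularisation term guarding against division by near-zero frequencies. This is a minimal sketch of the generic approach, assuming periodic boundaries and a scalar regulariser, not a reimplementation of pypher.

    ```python
    import numpy as np

    def psf_matching_kernel(psf_source, psf_target, reg=1e-3):
        """Wiener-regularised kernel k such that psf_source * k ≈ psf_target.

        reg plays the role of the tunable regularisation parameter: larger
        values damp frequencies where the source PSF carries little power.
        """
        Fs = np.fft.fft2(psf_source)
        Ft = np.fft.fft2(psf_target)
        K = np.conj(Fs) * Ft / (np.abs(Fs) ** 2 + reg)
        k = np.real(np.fft.ifft2(K))
        return k / k.sum()          # preserve total flux
    ```

    Convolving an image taken with `psf_source` by this kernel homogenises it to `psf_target`, which is only well-posed when the target PSF is broader than the source PSF.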

  11. Macro-modelling of a double-gimballed electrostatic torsional micromirror

    NASA Astrophysics Data System (ADS)

    Zhou, Guangya; Tay, Francis E. H.; Chau, Fook Siong

    2003-09-01

    This paper presents the development of a reduced-order macro-model for the double-gimballed electrostatic torsional micromirror using the hierarchical circuit-based approach. The proposed macro-model permits extremely fast simulation while providing nearly FEM accuracy. The macro-model is coded in the MAST analog hardware description language (AHDL), and the simulations are implemented in the SABER simulator. Both the static and dynamic behaviour of the double-gimballed electrostatic torsional micromirror have been investigated. The dc and frequency analysis results obtained by the proposed macro-model are in good agreement with CoventorWare finite element analysis results. Based on the macro-model we developed, system-level simulation of a closed-loop controlled double-gimballed torsional micromirror is also performed. Decentralized PID controllers are proposed for the control of the micromirror. A sequential-loop-closing method is used for tuning the multiple control loops during the simulation. After tuning, the closed-loop controlled double-gimballed torsional micromirror demonstrates improved transient performance and satisfactory disturbance rejection ability.

  12. Two-dimensional models of threshold voltage and subthreshold current for symmetrical double-material double-gate strained Si MOSFETs

    NASA Astrophysics Data System (ADS)

    Yan-hui, Xin; Sheng, Yuan; Ming-tang, Liu; Hong-xia, Liu; He-cai, Yuan

    2016-03-01

    The two-dimensional models for symmetrical double-material double-gate (DM-DG) strained Si (s-Si) metal-oxide semiconductor field effect transistors (MOSFETs) are presented. The surface potential and the surface electric field expressions have been obtained by solving Poisson's equation. The models of threshold voltage and subthreshold current are obtained based on the surface potential expression. The surface potential and the surface electric field are compared with those of single-material double-gate (SM-DG) MOSFETs. The effects of different device parameters on the threshold voltage and the subthreshold current are demonstrated. The analytical models give deep insight into device parameter design. The analytical results obtained from the proposed models show good agreement with the simulation results using DESSIS. Project supported by the National Natural Science Foundation of China (Grant Nos. 61376099, 11235008, and 61205003).
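
    The standard route in such surface-potential models, sketched here in generic form rather than in the authors' exact notation, is to solve the 2-D Poisson equation in the fully depleted film with a parabolic potential profile across the film thickness:

    ```latex
    % 2-D Poisson equation in the depleted silicon film
    % (q: electron charge, N_A: doping, \varepsilon_{\mathrm{Si}}: permittivity):
    \frac{\partial^2 \phi(x,y)}{\partial x^2}
      + \frac{\partial^2 \phi(x,y)}{\partial y^2}
      = \frac{q N_A}{\varepsilon_{\mathrm{Si}}}

    % Parabolic approximation across the film thickness (y-direction):
    \phi(x,y) = c_0(x) + c_1(x)\,y + c_2(x)\,y^2
    ```

    The coefficients c_0, c_1, c_2 are fixed by the oxide boundary conditions at the two gates, and the threshold voltage follows from the condition that the minimum surface potential reaches twice the Fermi potential.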

  13. Neutrinoless double beta nuclear matrix elements around mass 80 in the nuclear shell-model

    NASA Astrophysics Data System (ADS)

    Yoshinaga, N.; Higashiyama, K.; Taguchi, D.; Teruya, E.

    2015-05-01

    The observation of the neutrinoless double-beta decay can determine whether the neutrino is a Majorana particle or not. For theoretical nuclear physics it is particularly important to estimate three types of matrix elements, namely Fermi (F), Gamow-Teller (GT), and tensor (T) matrix elements. In this paper, we carry out shell-model calculations and also pair-truncated shell-model calculations to check the model dependence in the case of mass A=82 nuclei.

  14. Period-doubling bifurcation and high-order resonances in RR Lyrae hydrodynamical models

    NASA Astrophysics Data System (ADS)

    Kolláth, Z.; Molnár, L.; Szabó, R.

    2011-06-01

    We investigated period doubling, a well-known phenomenon in dynamical systems, for the first time in RR Lyrae models. These studies provide theoretical background for the recent discovery of period doubling in some Blazhko RR Lyrae stars with the Kepler space telescope. Since period doubling has been observed only in Blazhko-modulated stars so far, the phenomenon can help in understanding the modulation as well. Utilizing the Florida-Budapest turbulent convective hydrodynamical code, we have identified the phenomenon in both radiative and convective models. A period-doubling cascade was also followed up to an eight-period solution, confirming that destabilization of the limit cycle is indeed the underlying phenomenon. Floquet stability roots were calculated to investigate the possible causes and occurrences of the phenomenon. A two-dimensional diagnostic diagram was constructed to illustrate the various resonances between the fundamental mode and the different overtones. Combining the two tools, we confirmed that the period-doubling instability is caused by a 9:2 resonance between the ninth overtone and the fundamental mode. Destabilization of the limit cycle by a resonance of a high-order mode is possible because the overtone is a strange mode. The resonance is found to be strong enough to shift the period of the overtone by up to 10 per cent. Our investigations suggest that a more complex interplay of radial (and presumably non-radial) modes could happen in RR Lyrae stars that might have connections with the Blazhko effect as well.

  15. Fuzzy Logic Module of Convolutional Neural Network for Handwritten Digits Recognition

    NASA Astrophysics Data System (ADS)

    Popko, E. A.; Weinstein, I. A.

    2016-08-01

    Optical character recognition is one of the important issues in the field of pattern recognition. This paper presents a method for recognizing handwritten digits based on a convolutional neural network model. An integrated fuzzy logic module based on a structural approach was developed. The system architecture uses this module to adjust the output of the neural network and improve the quality of symbol identification. The proposed algorithm is shown to be flexible, and a high recognition rate of 99.23% was achieved.
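    The conv → ReLU → subsample → dense pipeline that such a network alternates can be sketched in a few lines of numpy. The weights below are random and there is no fuzzy module or training; this is purely an illustration of the forward pass:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    # valid-mode 2D correlation, single channel, single filter
    kh, kw = w.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def max_pool(x, s=2):
    # non-overlapping s x s max subsampling
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

img = rng.random((28, 28))                           # toy 28x28 "digit"
feat = max_pool(np.maximum(conv2d(img, rng.standard_normal((5, 5))), 0.0))
logits = feat.reshape(-1) @ rng.standard_normal((feat.size, 10))
scores = np.exp(logits - logits.max())
scores /= scores.sum()                               # softmax over 10 digit classes
```

    A post-processing module, fuzzy or otherwise, would then operate on the `scores` vector before the final class decision.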

  16. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. P.; Dixon, R. L.; Samei, Ehsan

    2015-03-01

    Among the various metrics that quantify radiation dose in computed tomography (CT), organ dose is one of the most representative quantities reflecting patient-specific radiation burden.1 Accurate estimation of organ dose requires one to effectively model the patient anatomy and the irradiation field. As illustrated in previous studies, the patient anatomy factor can be modeled using a library of computational phantoms with representative body habitus.2 However, the modeling of the irradiation field can be practically challenging, especially for CT exams performed with tube current modulation (TCM). The central challenge is to effectively quantify the scatter irradiation field created by the dynamic change of tube current. In this study, we present a convolution-based technique to effectively quantify the primary and scatter irradiation field for TCM examinations. The organ dose for a given clinical patient can then be rapidly determined using the convolution-based method, a patient-matching technique, and a library of computational phantoms. 58 adult patients were included in this study (age range: 18-70 y.o., weight range: 60-180 kg). One computational phantom was created based on the clinical images of each patient. Each patient was optimally matched against one of the remaining 57 computational phantoms using a leave-one-out strategy. For each computational phantom, the organ dose coefficients (CTDIvol-normalized organ dose) under fixed tube current were simulated using a validated Monte Carlo simulation program. Such organ dose coefficients were multiplied by a scaling factor, (CTDIvol)organ, convolution, that quantifies the regional irradiation field. The convolution-based organ dose was compared with the organ dose simulated from the Monte Carlo program with TCM profiles explicitly modeled on the original phantom created from the patient images. The estimation error was within 10% across all organs and modulation profiles for abdominopelvic examination. This strategy
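    The convolution step can be sketched schematically: a tube-current profile is convolved with a scatter kernel to approximate the regional dose field, averaged over the organ's extent, and scaled by a CTDIvol-normalized coefficient. The profile, kernel, organ extent and the coefficient value below are all invented for illustration:

```python
import numpy as np

z = np.linspace(0.0, 40.0, 401)                    # scan axis (cm)
tube_current = 100.0 + 50.0 * np.sin(z / 5.0)      # assumed TCM profile (mA)
kernel = np.exp(-np.abs(np.linspace(-5, 5, 101)))  # assumed scatter-spread kernel
kernel /= kernel.sum()                             # normalize to unit area
regional_field = np.convolve(tube_current, kernel, mode="same")

organ_mask = (z > 15.0) & (z < 25.0)               # assumed organ z-extent
ctdi_organ_conv = regional_field[organ_mask].mean()  # plays the (CTDIvol)organ,convolution role
h_organ = 1.2                                      # assumed organ dose coefficient
organ_dose = h_organ * ctdi_organ_conv
```

    In the actual method the coefficient comes from Monte Carlo simulation and the matching phantom supplies the organ distribution; here both are placeholders.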

  17. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    PubMed

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.

  18. Creating a Double-Spring Model to Teach Chromosome Movement during Mitosis & Meiosis

    ERIC Educational Resources Information Center

    Luo, Peigao

    2012-01-01

    The comprehension of chromosome movement during mitosis and meiosis is essential for understanding genetic transmission, but students often find this process difficult to grasp in a classroom setting. I propose a "double-spring model" that incorporates a physical demonstration and can be used as a teaching tool to help students understand this…

  19. Double Higgs production in the Two Higgs Doublet Model at the linear collider

    SciTech Connect

    Arhrib, Abdesslam; Benbrik, Rachid; Chiang, C.-W.

    2008-04-21

    We study double Higgs-strahlung production at the future Linear Collider in the framework of the Two Higgs Doublet Models through the following channels: e⁺e⁻ → φᵢφⱼZ, with φᵢ = h⁰, H⁰, A⁰. All these processes are sensitive to triple Higgs couplings. Hence observations of them provide information on the triple Higgs couplings that helps in reconstructing the scalar potential. We also discuss the double Higgs-strahlung e⁺e⁻ → h⁰h⁰Z in the decoupling limit, where h⁰ mimics the SM Higgs boson.

  20. Ergodic Transition in a Simple Model of the Continuous Double Auction

    PubMed Central

    Radivojević, Tijana; Anselmi, Jonatha; Scalas, Enrico

    2014-01-01

    We study a phenomenological model for the continuous double auction, whose aggregate order process is equivalent to two independent queues. The continuous double auction defines a continuous-time random walk for trade prices. The conditions for ergodicity of the auction are derived and, as a consequence, three possible regimes in the behavior of prices and logarithmic returns are observed. In the ergodic regime, prices are unstable and one can observe a heteroskedastic behavior in the logarithmic returns. On the contrary, non-ergodicity triggers stability of prices, even if two different regimes can be seen. PMID:24558377
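    The two-queue picture can be sketched as a minimal simulation (arrival rates are assumptions): buy and sell limit orders arrive independently, and whenever both sides are present a trade executes and the price takes a ±1 tick step, giving a random walk for trade prices:

```python
import random

random.seed(1)
lam_buy, lam_sell = 1.0, 1.2                 # illustrative order arrival rates
buy_q, sell_q = 0, 0
price, prices = 100, []
for _ in range(10000):
    # embedded chain of the two independent Poisson arrival streams
    if random.random() < lam_buy / (lam_buy + lam_sell):
        buy_q += 1                           # a buy order joins its queue
    else:
        sell_q += 1                          # a sell order joins its queue
    if buy_q > 0 and sell_q > 0:             # matching removes one of each side
        buy_q -= 1
        sell_q -= 1
        price += random.choice((-1, 1))      # trade price moves one tick
        prices.append(price)
```

    Whether the longer queue drifts away or keeps returning to zero is controlled by the ratio of the two rates, which is the ergodic/non-ergodic distinction the abstract refers to.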

  1. SCAN-based hybrid and double-hybrid density functionals from models without fitted parameters

    NASA Astrophysics Data System (ADS)

    Hui, Kerwin; Chai, Jeng-Da

    2016-01-01

    By incorporating the nonempirical strongly constrained and appropriately normed (SCAN) semilocal density functional [J. Sun, A. Ruzsinszky, and J. P. Perdew, Phys. Rev. Lett. 115, 036402 (2015)] in the underlying expression of four existing hybrid and double-hybrid models, we propose one hybrid (SCAN0) and three double-hybrid (SCAN0-DH, SCAN-QIDH, and SCAN0-2) density functionals, which are free from any fitted parameters. The SCAN-based double-hybrid functionals consistently outperform their parent SCAN semilocal functional for self-interaction problems and noncovalent interactions. In particular, SCAN0-2, which includes about 79% of Hartree-Fock exchange and 50% of second-order Møller-Plesset correlation, is shown to be reliably accurate for a very diverse range of applications, such as thermochemistry, kinetics, noncovalent interactions, and self-interaction problems.
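    The mixing quoted above (about 79% exact exchange, 50% MP2 correlation) can be written as a simple composition rule. The exact functional form below is an assumption modeled on the PBE0-2-style ansatz, and the component energies passed in are placeholders:

```python
# Schematic SCAN0-2-style energy composition (form assumed, values placeholders)
AX, AC = 0.79, 0.50   # exact-exchange and MP2-correlation fractions from the abstract

def exc_double_hybrid(ex_hf, ex_scan, ec_scan, ec_mp2):
    # hybrid exchange mixing plus double-hybrid correlation mixing
    return AX * ex_hf + (1.0 - AX) * ex_scan + (1.0 - AC) * ec_scan + AC * ec_mp2
```

    The four inputs would come from a Hartree-Fock exchange evaluation, the SCAN semilocal exchange and correlation, and a second-order Møller-Plesset correlation calculation, respectively.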

  2. Modelling and control of double-cone dielectric elastomer actuator

    NASA Astrophysics Data System (ADS)

    Branz, F.; Francesconi, A.

    2016-09-01

    Among various dielectric elastomer devices, cone actuators are of large interest for their multi-degree-of-freedom design. These objects combine the common advantages of dielectric elastomers (i.e. solid-state actuation, self-sensing capability, high conversion efficiency, light weight and low cost) with the possibility to actuate more than one degree of freedom in a single device. The potential applications of this feature in robotics are huge, making cone actuators very attractive. This work focuses on the rotational degrees of freedom to complement the existing literature and improve the understanding of this aspect. Simple tools are presented for the performance prediction of the device: finite element method simulations and interpolating relations have been used to assess the actuator steady-state behaviour in terms of torque and rotation as a function of geometric parameters. Results are interpolated by fit relations accounting for all the relevant parameters. The obtained data are validated through comparison with experimental results: steady-state torque and rotation are determined at a given actuation voltage. In addition, the transient response to step input has been measured and, as a result, the voltage-to-torque and the voltage-to-rotation transfer functions are obtained. Experimental data are collected and used to validate the prediction capability of the transfer function in terms of time response to step input and frequency response. The developed static and dynamic models have been employed to implement a feedback compensator that controls the device motion; the simulated behaviour is compared to experimental data, resulting in a maximum prediction error of 7.5%.

  3. Convolution-based estimation of organ dose in tube current modulated CT

    PubMed Central

    Tian, Xiaoyu; Segars, W Paul; Dixon, Robert L; Samei, Ehsan

    2016-01-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460–7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18–70 years, weight range: 60–180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (hOrgan) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)organ, convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)organ, convolution with the organ dose coefficients (hOrgan). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and dose simulated using TCM Monte Carlo program was quantified. We further compared the

  4. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (hOrgan) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)organ, convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)organ, convolution with the organ dose coefficients (hOrgan). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. The

  5. A diabatic state model for double proton transfer in hydrogen bonded complexes.

    PubMed

    McKenzie, Ross H

    2014-09-14

    Four diabatic states are used to construct a simple model for double proton transfer in hydrogen bonded complexes. Key parameters in the model are the proton donor-acceptor separation R and the ratio, D1/D2, between the proton affinity of a donor with one and two protons. Depending on the values of these two parameters the model describes four qualitatively different ground state potential energy surfaces, having zero, one, two, or four saddle points. Only for the latter are there four stable tautomers. In the limit D2 = D1 the model reduces to two decoupled hydrogen bonds. As R decreases a transition can occur from a synchronous concerted to an asynchronous concerted to a sequential mechanism for double proton transfer.
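    A toy version of the four-state picture can be diagonalized directly. The basis is the four tautomers obtained by placing each of the two protons on its donor or acceptor site; `Delta` couples states that differ by a single proton hop, and `eps` raises the two singly-transferred states, mimicking the D1/D2 asymmetry. Both values are illustrative, not the paper's parametrization:

```python
import numpy as np

eps, Delta = 0.5, 0.2
# basis order: |both on donors>, |proton 1 moved>, |proton 2 moved>, |both moved>
H = np.array([
    [0.0,   Delta, Delta, 0.0  ],
    [Delta, eps,   0.0,   Delta],
    [Delta, 0.0,   eps,   Delta],
    [0.0,   Delta, Delta, 0.0  ],
])
levels = np.linalg.eigvalsh(H)   # adiabatic energies, ascending order
```

    By symmetry the problem reduces to a 2×2 block, so the ground level is (eps − √(eps² + 16·Delta²))/2, lying below zero through level repulsion between the concerted-transfer combination and the singly-transferred states.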

  6. Double Higgs production at LHC, see-saw type-II and Georgi-Machacek model

    SciTech Connect

    Godunov, S. I.; Vysotsky, M. I.; Zhemchugov, E. V.

    2015-03-15

    The double Higgs production in models with isospin-triplet scalars is studied. It is shown that in the see-saw type-II model, the mode with an intermediate heavy scalar, pp → H + X → 2h + X, may have a cross section comparable with that in the Standard Model. In the Georgi-Machacek model, this cross section can be much larger than in the Standard Model because the vacuum expectation value of the triplet can be large.

  7. Delta function convolution method (DFCM) for fluorescence decay experiments

    NASA Astrophysics Data System (ADS)

    Zuker, M.; Szabo, A. G.; Bramall, L.; Krajcarski, D. T.; Selinger, B.

    1985-01-01

    A rigorous and convenient method of correcting for the wavelength variation of the instrument response function in time-correlated photon counting fluorescence decay measurements is described. The method involves convolution of a modified functional form F̃s of the physical model with a reference data set measured under the same conditions as the sample. The method is completely general in that an appropriate functional form may be found for any physical model of the excited-state decay process. The modified function includes a term which is a Dirac delta function and terms which give the correct decay times and preexponential values of interest. None of the data is altered in any way, permitting correct statistical analysis of the fitting. The method is readily adaptable to standard deconvolution procedures. The paper describes the theory and application of the method together with fluorescence decay results obtained from measurements of a number of different samples, including diphenylhexatriene, myoglobin, hemoglobin, 4′,6-diamidino-2-phenylindole (DAPI), and lysine-tryptophan-lysine.
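    The underlying identity can be checked numerically with illustrative lifetimes: a sample decay E(t) ⊛ a·exp(−t/τ) equals the reference measurement E(t) ⊛ exp(−t/τ_r) convolved with the modified function F̃(t) = a·[δ(t) + (1/τ_r − 1/τ)·exp(−t/τ)]:

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 20.0, dt)
irf = np.exp(-0.5 * ((t - 2.0) / 0.1) ** 2)        # instrument response E(t)
tau_r, tau, a = 1.0, 3.0, 1.0                      # illustrative reference/sample lifetimes
conv = lambda f, g: np.convolve(f, g)[: t.size] * dt

sample = conv(irf, a * np.exp(-t / tau))           # directly convolved sample model
ref = conv(irf, np.exp(-t / tau_r))                # simulated reference measurement
f_tilde = a * (1.0 / tau_r - 1.0 / tau) * np.exp(-t / tau)
f_tilde[0] += a / dt                               # discrete stand-in for the Dirac delta
dfcm = conv(ref, f_tilde)                          # DFCM reconstruction of the sample
```

    Up to discretization error, `sample` and `dfcm` agree, which is why fitting F̃ against the reference data recovers the correct decay times and preexponentials without measuring the instrument response separately.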

  8. HEp-2 Cell Image Classification with Deep Convolutional Neural Networks.

    PubMed

    Gao, Zhimin; Wang, Lei; Zhou, Luping; Zhang, Jianjia

    2016-02-08

    Efficient Human Epithelial-2 (HEp-2) cell image classification can facilitate the diagnosis of many autoimmune diseases. This paper proposes an automatic framework for this classification task, by utilizing the deep convolutional neural networks (CNNs) which have recently attracted intensive attention in visual recognition. In addition to describing the proposed classification framework, this paper elaborates several interesting observations and findings obtained by our investigation. They include the important factors that impact network design and training, the role of rotation-based data augmentation for cell images, the effectiveness of cell image masks for classification, and the adaptability of the CNN-based classification system across different datasets. An extensive experimental study is conducted to verify the above findings and to compare the proposed framework with the well-established image classification models in the literature. The results on benchmark datasets demonstrate that i) the proposed framework can effectively outperform existing models by properly applying data augmentation; ii) our CNN-based framework has excellent adaptability across different datasets, which is highly desirable for cell image classification under varying laboratory settings. Our system is ranked high in the cell image classification competition hosted by ICPR 2014.

  9. Extended Holography: Double-Trace Deformation and Brane-Induced Gravity Models

    NASA Astrophysics Data System (ADS)

    Barvinsky, A. O.

    2017-03-01

    We put forward a conjecture that for a special class of models - models of the double-trace deformation and brane-induced gravity types - the principle of holographic duality can be extended beyond conformal invariance and anti-de Sitter (AdS) isometry. Such an extension is based on a special relation between functional determinants of the operators acting in the bulk and on the boundary.

  10. South Asian summer monsoon variability in a model with doubled atmospheric carbon dioxide concentration

    SciTech Connect

    Meehl, G.A.; Washington, W.M.

    1993-05-21

    Doubled atmospheric carbon dioxide concentration in a global coupled ocean-atmosphere climate model produced increased surface temperatures and evaporation and greater mean precipitation in the south Asian summer monsoon region. As a partial consequence, interannual variability of area-averaged monsoon rainfall was enhanced. Consistent with the climate sensitivity results from the model, observations showed a trend of increased interannual variability of Indian monsoon precipitation associated with warmer land and ocean temperatures in the monsoon region. 26 refs., 3 figs., 1 tab.

  11. Parallel double-plate capacitive proximity sensor modelling based on effective theory

    NASA Astrophysics Data System (ADS)

    Li, Nan; Zhu, Haiye; Wang, Wenyu; Gong, Yu

    2014-02-01

    A semi-analytical model for a double-plate capacitive proximity sensor is presented according to the effective theory. Three physical models are established to derive the final equation of the sensor. Measured data are used to determine the coefficients. The final equation is verified using measured data. The average relative error between the calculated and the measured sensor capacitance is less than 7.5%. The equation can be used to provide guidance for the engineering design of proximity sensors.

  12. Using hybrid GPU/CPU kernel splitting to accelerate spherical convolutions

    NASA Astrophysics Data System (ADS)

    Sutter, P. M.; Wandelt, B. D.; Elsner, F.

    2015-06-01

    We present a general method for accelerating by more than an order of magnitude the convolution of pixelated functions on the sphere with a radially symmetric kernel. Our method splits the kernel into a compact real-space component and a compact spherical harmonic space component. These components can then be convolved in parallel using an inexpensive commodity GPU and a CPU. We provide models for the computational cost of both real-space and Fourier space convolutions and an estimate for the approximation error. Using these models we can determine the optimum split that minimizes the wall clock time for the convolution while satisfying the desired error bounds. We apply this technique to the problem of simulating a cosmic microwave background (CMB) anisotropy sky map at the resolution typical of the high-resolution maps produced by the Planck mission. For the main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an acceptable fractional rms error of order 10⁻⁵ in the power spectrum of the output map.
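    The splitting idea can be demonstrated in a 1D circular analogue (signal size, kernel width and cutoff are illustrative): by linearity, a direct real-space sum over the compact near part plus an FFT convolution of the smooth remainder reproduces the full convolution exactly, and the two partial convolutions are independent, so they can run on different devices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, cut = 1024, 15
signal = rng.standard_normal(n)
dist = np.minimum(np.arange(n), n - np.arange(n))   # circular distance from origin
kernel = np.exp(-dist**2 / 200.0)                   # radially symmetric kernel
far = np.where(dist <= cut, 0.0, kernel)            # smooth remainder -> Fourier side

# compact near part: direct real-space sum over the 2*cut+1 nearby samples
offsets = np.arange(-cut, cut + 1)
w = kernel[np.abs(offsets)]
part_near = np.array([np.dot(signal[(i - offsets) % n], w) for i in range(n)])

# remainder and full kernel handled as circular convolutions via the FFT
part_far = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(far)))
full = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))
```

    On the sphere, the near part becomes a pixel-domain sum over a small cap and the far part a spherical harmonic multiplication, but the decomposition works the same way.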

  13. Modeling sorption of divalent metal cations on hydrous manganese oxide using the diffuse double layer model

    USGS Publications Warehouse

    Tonkin, J.W.; Balistrieri, L.S.; Murray, J.W.

    2004-01-01

    Manganese oxides are important scavengers of trace metals and other contaminants in the environment. The inclusion of Mn oxides in predictive models, however, has been difficult due to the lack of a comprehensive set of sorption reactions consistent with a given surface complexation model (SCM), and the discrepancies between published sorption data and predictions using the available models. The authors have compiled a set of surface complexation reactions for synthetic hydrous Mn oxide (HMO) using a two-surface-site model and the diffuse double layer SCM, which complements databases developed for hydrous Fe(III) oxide, goethite and crystalline Al oxide. This compilation encompasses a range of data observed in the literature for the complex HMO surface and provides an error envelope for predictions not well defined by fitting parameters for single or limited data sets. Data describing surface characteristics and cation sorption were compiled from the literature for the synthetic HMO phases birnessite, vernadite and δ-MnO₂. A specific surface area of 746 m² g⁻¹ and a surface site density of 2.1 mmol g⁻¹ were determined from crystallographic data and considered fixed parameters in the model. Potentiometric titration data sets were adjusted to a pHIEP value of 2.2. Two site types (≡XOH and ≡YOH) were used. The fraction of total sites attributed to ≡XOH and the pKa2 values were optimized for each of 7 published potentiometric titration data sets using the computer program FITEQL3.2. pKa2 values of 2.35±0.077 (≡XOH) and 6.06±0.040 (≡YOH) were determined at the 95% confidence level. The calculated average ≡XOH site fraction was 0.64, with high and low values of 1.0 and 0.24, respectively. These pKa2 and site-fraction values and published cation sorption data were used subsequently to determine equilibrium surface complexation constants for Ba²⁺, Ca²⁺, Cd²⁺, Co²⁺, Cu²⁺, Mg²⁺, Mn²⁺, Ni²⁺, Pb²⁺, Sr²⁺ and Zn²⁺. In addition, average model parameters were used to predict additional
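    The fitted constants quoted above (pKa2 = 2.35 for the X sites, 6.06 for the Y sites, X-site fraction 0.64, 2.1 mmol/g total sites) support a back-of-envelope speciation estimate. The diffuse-layer electrostatic correction is omitted here, so this is an intrinsic-constant sketch only:

```python
def deprotonated_fraction(pH, pKa2):
    # XOH <-> XO- + H+ governed by the intrinsic acidity constant Ka2
    return 10.0 ** -pKa2 / (10.0 ** -pKa2 + 10.0 ** -pH)

site_density = 2.1e-3            # mol of surface sites per gram of HMO
f_x = 0.64                       # average fraction of X-type sites

def surface_charge(pH):
    # mol of deprotonated (negatively charged) sites per gram
    xo = f_x * deprotonated_fraction(pH, 2.35)
    yo = (1.0 - f_x) * deprotonated_fraction(pH, 6.06)
    return -(xo + yo) * site_density

for pH in (4.0, 6.0, 8.0):
    print(f"pH {pH}: {surface_charge(pH):.2e} mol(-)/g")
```

    The low pKa2 of the X sites makes HMO negatively charged over nearly the whole environmental pH range, consistent with the very low pHIEP of 2.2.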

  14. Theoretical modeling of the dynamics of a semiconductor laser subject to double-reflector optical feedback

    NASA Astrophysics Data System (ADS)

    Bakry, A.; Abdulrhmann, S.; Ahmed, M.

    2016-06-01

    We theoretically model the dynamics of semiconductor lasers subject to double-reflector feedback. The proposed model is a new modification of the time-delay rate equations of semiconductor lasers under optical feedback that accounts for this type of double-reflector feedback. We examine the influence of adding the second reflector on the dynamical states induced by the single-reflector feedback: periodic oscillations, period doubling, and chaos. Regimes of both short and long external cavities are considered. The present analyses are done using the bifurcation diagram, temporal trajectory, phase portrait, and fast Fourier transform of the laser intensity. We show that adding the second reflector attracts the periodic and period-doubling oscillations and the chaos induced by the first reflector to a route-to-continuous-wave operation. During this operation, the periodic-oscillation frequency increases as the optical feedback is strengthened. We show that the chaos induced by the double-reflector feedback is more irregular than that induced by the single-reflector feedback. The power spectrum of this chaos state does not reflect information on the geometry of the optical system, which then has potential for use in chaotic (secure) optical data encryption.
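    The structure of such time-delay rate equations with two feedback terms can be sketched in dimensionless Lang-Kobayashi-like form (all coefficients below are illustrative, not the paper's), integrated by explicit Euler:

```python
import numpy as np

alpha, T = 3.0, 100.0            # linewidth factor, carrier/photon lifetime ratio
k1, k2 = 0.02, 0.02              # feedback strengths of the two reflectors
tau1, tau2 = 2.0, 4.0            # external round-trip delays of the two reflectors
P = 0.5                          # pump level above threshold
dt, steps = 0.01, 40000

d1, d2 = int(tau1 / dt), int(tau2 / dt)
E = np.full(steps, 0.1 + 0.0j)   # slowly varying complex field (history pre-filled)
N = np.zeros(steps)              # carrier-density deviation from threshold
for i in range(max(d1, d2), steps - 1):
    # the double-reflector feedback enters as a sum of two delayed field terms
    feedback = k1 * E[i - d1] + k2 * E[i - d2]
    E[i + 1] = E[i] + dt * ((1 + 1j * alpha) * N[i] * E[i] + feedback)
    N[i + 1] = N[i] + dt / T * (P - N[i] - (1 + 2 * N[i]) * abs(E[i]) ** 2)
intensity = np.abs(E) ** 2
```

    Sweeping `k1` and `k2` and recording the extrema of `intensity` would produce the kind of bifurcation diagram the analysis describes.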

  15. A pre-clinical model of double versus single unit unrelated cord blood transplantation

    PubMed Central

    Georges, George E.; Lesnikov, Vladimir; Baran, Szczepan W.; Aragon, Anna; Lesnikova, Marina; Jordan, Robert; Yang, Ya-Ju Laura; Yunusov, Murad Y.; Zellmer, Eustacia; Heimfeld, Shelly; Venkataraman, Gopalakrishnan M.; Harkey, Michael A.; Graves, Scott S.; Storb, Rainer; Storer, Barry E.; Nash, Richard A.

    2010-01-01

    Cord blood transplantation (CBT) with units containing total nucleated cell (TNC) dose >2.5×107/kg is associated with improved engraftment and decreased transplant-related mortality. For many adults no single cord blood units are available that meet the cell dose requirements. We developed a dog model of CBT to evaluate approaches to overcome the problem of low cell dose cord blood units. This study primarily compared double- versus single-unit CBT. Unrelated dogs were bred and cord blood units were harvested. We identified unrelated recipients that were dog leukocyte antigen (DLA)-88 (class I) and DLA-DRB1 (class II) allele-matched with cryopreserved units. Each unit contained ≤ 1.7×107 TNC/kg. Recipients were given 9.2 Gy total body irradiation and DLA-matched unrelated cord blood with post-grafting cyclosporine and mycophenolate mofetil. After double-unit CBT, 5 dogs engrafted and 4 survived long term with one dominant engrafting unit and prompt immune reconstitution. In contrast, 0 of 5 dogs given single-unit CBT survived beyond 105 days (p=0.03, log-rank test); neutrophil and platelet recovery was delayed (both p=0.005) and recipients developed fatal infections. This new large animal model showed that outcomes were improved after double-unit compared to single-unit CBT. After double-unit CBT, the non-engrafted unit facilitates engraftment of the dominant unit. PMID:20304085

  16. Cascading failures coupled model of interdependent double layered public transit network

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Fu, Bai-Bai; Li, Shu-Bin

    2016-06-01

    Taking the urban public transit network as the research perspective, this work introduces the influence of adjacent stations into the definition of the station initial load, the transit capacity of connected edges, and the coupled capacity, modifying the traditional load-capacity cascading failures (CFs) model. Furthermore, we consider the coupled effect of the lower-layered public transit network on the CFs of the upper-layered public transit network, and construct a CFs coupled model of the double layered public transit network with an “interdependent relationship”. Finally, taking Jinan city’s public transit network as an example, we present a dynamics simulation analysis of CFs under different control parameters based on the measurement indicators of the station cascading failures ratio (abbreviated as CF) and the scale of time-step cascading failures (abbreviated as TCFl), obtain the influence characteristics of the various control parameters, and verify the feasibility of the CFs coupled model of the double layered public transit network.
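    The single-layer load-capacity mechanism that the paper modifies can be sketched as a Motter-Lai-style cascade on a random network (network size, tolerance parameter and trigger choice are illustrative; the coupled double-layer refinements are not reproduced):

```python
import random

random.seed(4)
n, k_avg, alpha = 200, 4, 0.2
adj = [set() for _ in range(n)]
while sum(len(a) for a in adj) < n * k_avg:        # random graph, mean degree ~4
    u, v = random.randrange(n), random.randrange(n)
    if u != v:
        adj[u].add(v)
        adj[v].add(u)

load = [float(len(adj[i])) for i in range(n)]      # initial load ~ station degree
cap = [(1 + alpha) * l for l in load]              # capacity = (1+alpha) * load

trigger = max(range(n), key=lambda i: len(adj[i])) # fail the busiest station
failed, overloaded = set(), {trigger}
while overloaded:
    failed |= overloaded
    nxt = set()
    for u in overloaded:
        alive = [v for v in adj[u] if v not in failed]
        if not alive:
            continue
        share = load[u] / len(alive)               # redistribute load of failed station
        for v in alive:
            load[v] += share
            if load[v] > cap[v]:                   # neighbor overloaded -> it fails next
                nxt.add(v)
    overloaded = nxt - failed

cf_ratio = len(failed) / n                         # station cascading-failures ratio
```

    The paper's model replaces the degree-based initial load with one influenced by adjacent stations and couples the redistribution across two interdependent layers, but the failure-propagation loop has this same shape.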

  17. Critical and crossover behavior in the double-Gaussian model on a lattice

    NASA Astrophysics Data System (ADS)

    Baker, George A., Jr.; Bishop, A. R.; Fesser, K.; Beale, Paul D.; Krumhansl, J. A.

    1982-09-01

    The double-Gaussian model, as recently introduced by Baker and Bishop, is studied in the context of a lattice-dynamics Hamiltonian belonging to the familiar φ4 class. Advantage is taken of the partition-function factorability (into Ising and Gaussian components) to place bounds on the Ising-class critical temperature for various lattice dimensions and all degrees of displaciveness in the bare Hamiltonian. Further, a simple criterion for a noncritical and nonuniversal crossover from order-disorder to Gaussian behavior is evaluated in numerical detail. In one and two dimensions these critical and crossover properties are compared with predictions based on real-space decimation renormalization-group flows, as previously exploited in the φ4 model by Beale et al. The double-Gaussian model again introduces some unique analytical advantages.

  18. Critical and crossover behavior in the double Gaussian model on a lattice

    SciTech Connect

    Baker, G.A. Jr.; Bishop, A.R.; Fesser, K.; Beale, P.D.; Krumhansl, J.A.

    1982-09-01

    The double-Gaussian model, as recently introduced by Baker and Bishop, is studied in the context of a lattice-dynamics Hamiltonian belonging to the familiar φ4 class. Advantage is taken of the partition-function factorability (into Ising and Gaussian components) to place bounds on the Ising-class critical temperature for various lattice dimensions and all degrees of displaciveness in the bare Hamiltonian. Further, a simple criterion for a noncritical and nonuniversal crossover from order-disorder to Gaussian behavior is evaluated in numerical detail. In one and two dimensions these critical and crossover properties are compared with predictions based on real-space decimation renormalization-group flows, as previously exploited in the φ4 model by Beale et al. The double-Gaussian model again introduces some unique analytical advantages.

  19. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. 
Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance

  20. Study of multispectral convolution scatter correction in high resolution PET

    SciTech Connect

    Yao, R.; Lecomte, R.; Bentourkia, M.

    1996-12-31

    PET images acquired with a high resolution scanner based on arrays of small discrete detectors are obtained at the cost of low sensitivity and increased detector scatter. It has been postulated that these limitations can be overcome by using enlarged discrimination windows to include more low energy events and by developing more efficient energy-dependent methods to correct for scatter. In this work, we investigate one such method based on the frame-by-frame scatter correction of multispectral data. Images acquired in the conventional, broad and multispectral window modes were processed by the stationary and nonstationary consecutive convolution scatter correction methods. Broad and multispectral window acquisition with a low energy threshold of 129 keV improved system sensitivity by up to 75% relative to conventional window with a ~350 keV threshold. The degradation of image quality due to the added scatter events can almost be fully recovered by the subtraction-restoration scatter correction. The multispectral method was found to be more sensitive to the nonstationarity of scatter and its performance was not as good as that of the broad window. It is concluded that new scatter degradation models and correction methods need to be established to fully take advantage of multispectral data.
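    The stationary convolution-subtraction idea underlying such corrections can be sketched in one dimension; the kernel, scatter fraction, and iteration count below are illustrative assumptions, not values from the study:

    ```python
    # 1D sketch of stationary convolution-subtraction scatter correction:
    # scatter is estimated by convolving the current estimate with a broad
    # kernel, scaling by a scatter fraction, and subtracting from the
    # measurement. Kernel and fraction are illustrative assumptions.

    def convolve(signal, kernel):
        half = len(kernel) // 2
        out = []
        for i in range(len(signal)):
            acc = 0.0
            for j, k in enumerate(kernel):
                idx = i + j - half
                if 0 <= idx < len(signal):
                    acc += signal[idx] * k
            out.append(acc)
        return out

    def scatter_correct(measured, kernel, scatter_fraction, iterations=3):
        # Consecutive (iterative) refinement of the unscattered projection.
        estimate = measured[:]
        for _ in range(iterations):
            scatter = [scatter_fraction * s for s in convolve(estimate, kernel)]
            estimate = [m - s for m, s in zip(measured, scatter)]
        return estimate

    measured = [0.0, 1.0, 4.0, 1.0, 0.0]
    kernel = [0.25, 0.5, 0.25]          # broad, normalized scatter kernel
    corrected = scatter_correct(measured, kernel, scatter_fraction=0.2)
    ```

    The correction lowers the central peak by the estimated scatter contribution while leaving the projection length unchanged.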

  1. Fully automated quantitative cephalometry using convolutional neural networks.

    PubMed

    Arık, Sercan Ö; Ibragimov, Bulat; Xing, Lei

    2017-01-01

    Quantitative cephalometry plays an essential role in clinical diagnosis, treatment, and surgery. Development of fully automated techniques for these procedures is important to enable consistently accurate computerized analyses. We study the application of deep convolutional neural networks (CNNs) for fully automated quantitative cephalometry for the first time. The proposed framework utilizes CNNs for detection of landmarks that describe the anatomy of the depicted patient and yield quantitative estimation of pathologies in the jaws and skull base regions. We use a publicly available cephalometric x-ray image dataset to train CNNs for recognition of landmark appearance patterns. CNNs are trained to output probabilistic estimations of different landmark locations, which are combined using a shape-based model. We evaluate the overall framework on the test set and compare with other proposed techniques. We use the estimated landmark locations to assess anatomically relevant measurements and classify them into different anatomical types. Overall, our results demonstrate high anatomical landmark detection accuracy ([Formula: see text] to 2% higher success detection rate for a 2-mm range compared with the top benchmarks in the literature) and high anatomical type classification accuracy ([Formula: see text] average classification accuracy for test set). We demonstrate that CNNs, which merely input raw image patches, are promising for accurate quantitative cephalometry.

  2. A quantum algorithm for Viterbi decoding of classical convolutional codes

    NASA Astrophysics Data System (ADS)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance those with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm with a large state space (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
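    The classical trellis search that the QVA accelerates can be sketched with a minimal hard-decision Viterbi decoder for the textbook rate-1/2, constraint-length-3 code with octal generators (7, 5); this code choice is an illustrative assumption, not one taken from the paper:

    ```python
    # Minimal hard-decision Viterbi decoder for the rate-1/2, constraint-
    # length-3 convolutional code with generators (7, 5) octal -- a standard
    # textbook code, used only to illustrate the trellis search.

    G1, G2 = 0b111, 0b101   # generator taps over (current, prev1, prev2)

    def branch_output(reg):
        # Parity of the tapped register bits gives the two coded bits.
        return [bin(reg & G1).count("1") & 1, bin(reg & G2).count("1") & 1]

    def encode(bits):
        out, state = [], 0   # state packs the two previous input bits
        for b in bits:
            reg = (b << 2) | state
            out += branch_output(reg)
            state = (reg >> 1) & 0b11
        return out

    def viterbi(received):
        metric = {0: 0}      # path metric per 2-bit state
        paths = {0: []}      # survivor input sequence per state
        for t in range(0, len(received), 2):
            r = received[t:t + 2]
            new_metric, new_paths = {}, {}
            for state, m in metric.items():
                for b in (0, 1):
                    reg = (b << 2) | state
                    nxt = (reg >> 1) & 0b11
                    cost = m + sum(x != y for x, y in zip(branch_output(reg), r))
                    if nxt not in new_metric or cost < new_metric[nxt]:
                        new_metric[nxt] = cost
                        new_paths[nxt] = paths[state] + [b]
            metric, paths = new_metric, new_paths
        best = min(metric, key=metric.get)
        return paths[best]

    msg = [1, 0, 1, 1, 0, 0]
    coded = encode(msg)
    coded[3] ^= 1                      # inject a single channel bit error
    decoded = viterbi(coded)
    ```

    With free distance 5, this code lets the survivor search recover the message despite the injected single-bit error.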

  3. Classification of breast cancer cytological specimen using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Żejmo, Michał; Kowal, Marek; Korbicz, Józef; Monczak, Roman

    2017-01-01

    The paper presents a deep learning approach for automatic classification of breast tumors based on fine needle cytology. The main aim of the system is to distinguish benign from malignant cases based on microscopic images. The experiment was carried out on cytological samples derived from 50 patients (25 benign cases + 25 malignant cases) diagnosed in the Regional Hospital in Zielona Góra. To classify microscopic images, we used convolutional neural networks (CNN) of two types: GoogLeNet and AlexNet. Due to the very large size of images of cytological specimens (on average 200000 × 100000 pixels), they were divided into smaller patches of size 256 × 256 pixels. Breast cancer classification is usually based on morphometric features of nuclei. Therefore, training and validation patches were selected using a Support Vector Machine (SVM) so that a suitable amount of cell material was depicted. Neural classifiers were tuned using a GPU-accelerated implementation of the gradient descent algorithm. Training error was defined as a cross-entropy classification loss. Classification accuracy was defined as the percentage ratio of successfully classified validation patches to the total number of validation patches. The best accuracy rate of 83% was obtained by the GoogLeNet model. We observed that more misclassified patches belong to malignant cases.

  4. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    PubMed

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-03-22

    Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch with a fixed size is obtained using the center-proliferation segmentation (CPS) method, and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which sufficiently considers multi-scale contextual information of deep layer maps. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sample method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture.

  5. Convolutional neural network approach for buried target recognition in FL-LWIR imagery

    NASA Astrophysics Data System (ADS)

    Stone, K.; Keller, J. M.

    2014-05-01

    A convolutional neural network (CNN) approach to recognition of buried explosive hazards in forward-looking long-wave infrared (FL-LWIR) imagery is presented. The convolutional filters in the first layer of the network are learned in the frequency domain, making enforcement of zero-phase and zero-DC response characteristics much easier. The spatial-domain representations of the filters are forced to have unit l2 norm, and penalty terms are added to the online gradient descent update to encourage orthonormality among the convolutional filters, as well as smooth first- and second-order derivatives in the spatial domain. The impact of these modifications on the generalization performance of the CNN model is investigated. The CNN approach is compared to a second recognition algorithm utilizing shearlet and log-Gabor decomposition of the image coupled with cell-structured feature extraction and support vector machine classification. Results are presented for multiple FL-LWIR data sets recently collected from US Army test sites. These data sets include vehicle position information, allowing accurate transformation between image and world coordinates and realistic evaluation of detection and false alarm rates.
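    Two of the filter constraints mentioned above, unit l2 norm and an orthonormality penalty, can be sketched as follows (filter values are illustrative; the frequency-domain learning itself is not reproduced):

    ```python
    # Sketch of two filter constraints: projecting convolutional filters onto
    # unit l2 norm, and measuring an orthonormality penalty as the sum of
    # squared pairwise inner products (zero when filters are orthogonal).
    # Filter values are illustrative assumptions.
    import math

    def unit_norm(f):
        n = math.sqrt(sum(x * x for x in f))
        return [x / n for x in f]

    def orthonormality_penalty(filters):
        # Off-diagonal Gram-matrix entries, squared and summed.
        p = 0.0
        for i in range(len(filters)):
            for j in range(i + 1, len(filters)):
                dot = sum(a * b for a, b in zip(filters[i], filters[j]))
                p += dot * dot
        return p

    f1 = unit_norm([1.0, 2.0, 2.0])
    f2 = unit_norm([2.0, 1.0, -2.0])
    penalty = orthonormality_penalty([f1, f2])
    ```

    These two example filters happen to be orthogonal, so the penalty term vanishes; during training such a penalty would be added to the loss alongside the smoothness terms.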

  6. A DNA double-strand break kinetic rejoining model based on the local effect model.

    PubMed

    Tommasino, F; Friedrich, T; Scholz, U; Taucher-Scholz, G; Durante, M; Scholz, M

    2013-11-01

    We report here on a DNA double-strand break (DSB) kinetic rejoining model applicable to a wide range of radiation qualities based on the DNA damage pattern predicted by the local effect model (LEM). In the LEM this pattern is derived from the SSB and DSB yields after photon irradiation in combination with an amorphous track structure approach. Together with the assumption of a giant-loop organization to describe the higher order chromatin structure this allows the definition of two different classes of DSB. These classes are defined by the level of clustering on a micrometer scale, i.e., "isolated DSB" (iDSB) are characterized by a single DSB in a giant loop and "clustered DSB" (cDSB) by two or more DSB in a loop. Clustered DSB are assumed to represent a more difficult challenge for the cell repair machinery compared to isolated DSB, and we thus hypothesize here that the fraction of isolated DSB can be identified with the fast component of rejoining, whereas clustered DSB are identified with the slow component of rejoining. The resulting predicted bi-exponential decay functions nicely reproduce the experimental curves of DSB rejoining over time obtained by means of gel electrophoresis elution techniques as reported by different labs, involving different cell types and a wide spectrum of radiation qualities. New experimental data are also presented aimed at investigating the effects of the same ion species accelerated at different energies. The results presented here further support the relevance of the proposed two classes of DSB as a basis for understanding cell response to ion irradiation. Importantly the density of DSB within DNA giant loops of around 2 Mbp size, i.e., on a micrometer scale, is identified as a key parameter for the description of radiation effectiveness.
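    The identification of the fast rejoining component with isolated DSB and the slow component with clustered DSB leads to a bi-exponential decay of the unrejoined fraction; a sketch with illustrative time constants and fractions (not fitted values from the paper):

    ```python
    # Bi-exponential DSB rejoining sketch: the fraction of unrejoined DSB over
    # time, with the fast component identified with isolated DSB (iDSB) and
    # the slow component with clustered DSB (cDSB). Fractions and time
    # constants below are illustrative assumptions.
    import math

    def unrejoined_fraction(t_hours, f_isolated, tau_fast, tau_slow):
        f_clustered = 1.0 - f_isolated
        return (f_isolated * math.exp(-t_hours / tau_fast)
                + f_clustered * math.exp(-t_hours / tau_slow))

    # e.g. a mostly isolated (fast-rejoining) DSB population, as expected
    # for low-LET radiation in this picture
    curve = [unrejoined_fraction(t, f_isolated=0.8, tau_fast=0.5, tau_slow=8.0)
             for t in (0, 1, 2, 4, 8)]
    ```

    For high-LET ions the clustered fraction would rise, flattening the tail of the curve, which is the qualitative behavior the model uses to explain radiation-quality dependence.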

  7. Inequalities and consequences of new convolutions for the fractional Fourier transform with Hermite weights

    NASA Astrophysics Data System (ADS)

    Anh, P. K.; Castro, L. P.; Thao, P. T.; Tuan, N. M.

    2017-01-01

    This paper presents new convolutions for the fractional Fourier transform which are somehow associated with the Hermite functions. Consequent inequalities and properties are derived for these convolutions, among which we emphasize two new types of Young's convolution inequalities. The results guarantee a general framework where the present convolutions are well-defined, allowing larger possibilities than the known ones for other convolutions. Furthermore, we exemplify the use of our convolutions by providing explicit solutions of some classes of integral equations which appear in engineering problems.

  8. Geodesic acoustic mode in anisotropic plasmas using double adiabatic model and gyro-kinetic equation

    SciTech Connect

    Ren, Haijun; Cao, Jintao

    2014-12-15

    Geodesic acoustic mode in anisotropic tokamak plasmas is theoretically analyzed by using the double adiabatic model and the gyro-kinetic equation. The bi-Maxwellian distribution function for guiding-center ions is assumed to obtain a self-consistent form, yielding pressures satisfying the magnetohydrodynamic (MHD) anisotropic equilibrium condition. The double adiabatic model gives the dispersion relation of the geodesic acoustic mode (GAM), which agrees well with the one derived from the gyro-kinetic equation. The GAM frequency increases with the ratio of pressures, p⊥/p∥, and the Landau damping rate is dramatically decreased by p⊥/p∥. The MHD result shows a low-frequency zonal flow existing for all p⊥/p∥, while according to the kinetic dispersion relation, no low-frequency branch exists for p⊥/p∥ ≳ 2.

  9. High correlation of double Debye model parameters in skin cancer detection.

    PubMed

    Truong, Bao C Q; Tuan, H D; Fitzgerald, Anthony J; Wallace, Vincent P; Nguyen, H T

    2014-01-01

    The double Debye model can be used to capture the dielectric response of human skin in the terahertz regime due to the high water content of the tissue. The increased water proportion is widely considered a biomarker of carcinogenesis, which motivates the use of this model in skin cancer detection. The goal of this paper is therefore to provide a specific analysis of the double Debye parameters in terms of non-melanoma skin cancer classification. Pearson correlation is applied to investigate the sensitivity of these parameters and their combinations to the variation in tumor percentage of skin samples. The most sensitive parameters are then assessed by using the receiver operating characteristic (ROC) plot to confirm their potential for classifying tumor from normal skin. Our positive outcomes support further steps toward clinical application of terahertz imaging in skin cancer delineation.
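    The double Debye dielectric function itself is straightforward to evaluate; a sketch with illustrative parameter values (not the fitted skin-tissue values of the paper):

    ```python
    # Double Debye dielectric model sketch: epsilon(omega) with two relaxation
    # processes (slow and fast). All parameter values below are illustrative
    # assumptions, not fitted skin-tissue values.
    import math

    def double_debye(freq_hz, eps_inf, eps_s, eps_2, tau1_s, tau2_s):
        w = 2.0 * math.pi * freq_hz
        return (eps_inf
                + (eps_s - eps_2) / (1.0 + 1j * w * tau1_s)
                + (eps_2 - eps_inf) / (1.0 + 1j * w * tau2_s))

    # Complex permittivity at 1 THz for illustrative parameters
    eps = double_debye(1.0e12, eps_inf=2.5, eps_s=15.0, eps_2=4.0,
                       tau1_s=10e-12, tau2_s=0.2e-12)
    ```

    Classification studies like the one above then ask which of the five parameters (and which combinations) shift most reliably with the water content of tumor tissue.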

  10. Supernova Type Ia progenitors from merging double white dwarfs. Using a new population synthesis model

    NASA Astrophysics Data System (ADS)

    Toonen, S.; Nelemans, G.; Portegies Zwart, S.

    2012-10-01

    Context. The study of Type Ia supernovae (SNIa) has led to greatly improved insights into many fields in astrophysics, e.g. cosmology, and also into the metal enrichment of the universe. Although a theoretical explanation of the origin of these events is still lacking, there is a general consensus that SNIa are caused by the thermonuclear explosions of carbon/oxygen white dwarfs with masses near the Chandrasekhar mass. Aims: We investigate the potential contribution to the supernova Type Ia rate from the population of merging double carbon-oxygen white dwarfs. We aim to develop a model that fits the observed SNIa progenitors as well as the observed close double white dwarf population. We differentiate between two scenarios for the common envelope (CE) evolution; the α-formalism based on the energy equation and the γ-formalism that is based on the angular momentum equation. In one model we apply the α-formalism throughout. In the second model the γ-formalism is applied, unless the binary contains a compact object or the CE is triggered by a tidal instability for which the α-formalism is used. Methods: The binary population synthesis code SeBa was used to evolve binary systems from the zero-age main sequence to the formation of double white dwarfs and subsequent mergers. SeBa has been thoroughly updated since the last publication of the content of the code. Results: The limited sample of observed double white dwarfs is better represented by the simulated population using the γ-formalism for the first CE phase than the α-formalism. For both CE formalisms, we find that although the morphology of the simulated delay time distribution matches that of the observations within the errors, the normalisation and time-integrated rate per stellar mass are a factor ~7-12 lower than observed. Furthermore, the characteristics of the simulated populations of merging double carbon-oxygen white dwarfs are discussed and put in the context of alternative SNIa models for merging

  11. Double Folding Potential of Different Interaction Models for 16O + 12C Elastic Scattering

    NASA Astrophysics Data System (ADS)

    Hamada, Sh.; Bondok, I.; Abdelmoatmed, M.

    2016-12-01

    The elastic scattering angular distributions for the 16O + 12C nuclear system have been analyzed using double folding potentials based on different interaction models: CDM3Y1, CDM3Y6, DDM3Y1 and BDM3Y1. We have extracted the renormalization factor Nr for each of the concerned interaction models. The potential created by the BDM3Y1 interaction model has the shallowest depth, which reflects the necessity of using a higher renormalization factor. The experimental angular distributions for the 16O + 12C nuclear system in the energy range 115.9-230 MeV exhibited unmistakable refractive features and the rainbow phenomenon.

  12. On the vibration of double-walled carbon nanotubes using molecular structural and cylindrical shell models

    NASA Astrophysics Data System (ADS)

    Ansari, R.; Rouhi, S.; Aryayi, M.

    2016-01-01

    The vibrational behavior of double-walled carbon nanotubes is studied by the use of the molecular structural and cylindrical shell models. The spring elements are employed to model the van der Waals interaction. The effects of different parameters such as geometry, chirality, atomic structure and end constraint on the vibration of nanotubes are investigated. Besides, the results of two aforementioned approaches are compared. It is indicated that by increasing the nanotube side length and radius, the computationally efficient cylindrical shell model gives rational results.

  13. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  14. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  15. Two dimensional convolute integers for machine vision and image recognition

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression generated, integer valued, zero phase shifting, convoluting, frequency sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators show frequency sensitive feature selection scale invariant properties. Such tasks as boundary/edge enhancement and noise or small size pixel disturbance removal can readily be accomplished. For feature selection tight band pass operators are essential. Results from test cases are given.
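    A classic example of such an operator is an integer-valued, zero-phase 3×3 low-pass (surface-fitted, Savitzky-Golay-type) smoother applied as a classical replacement-point convolution; the kernel choice below is illustrative:

    ```python
    # An integer-valued, zero-phase 3x3 low-pass operator of the kind described
    # above: a Savitzky-Golay-style quadratic-surface-fit smoother with integer
    # weights and a common divisor, applied non-recursively as a classical
    # replacement-point convolution. The kernel choice is illustrative.

    KERNEL = [[-1, 2, -1],
              [ 2, 5,  2],
              [-1, 2, -1]]      # integer weights; they sum to the divisor
    DIVISOR = 9

    def convolve2d(image, kernel, divisor):
        h, w = len(image), len(image[0])
        out = [row[:] for row in image]       # border pixels kept unchanged
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                acc = sum(kernel[j][i] * image[y + j - 1][x + i - 1]
                          for j in range(3) for i in range(3))
                out[y][x] = acc / divisor
        return out

    flat = [[5] * 5 for _ in range(5)]
    smoothed = convolve2d(flat, KERNEL, DIVISOR)
    ```

    Because the integer weights sum to the divisor, a constant region passes through unchanged (zero-phase, unity DC gain), which is the property that makes these operators safe for replacement-point filtering.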

  16. Dynamics of a stochastic SIS model with double epidemic diseases driven by Lévy jumps

    NASA Astrophysics Data System (ADS)

    Zhang, Xinhong; Jiang, Daqing; Hayat, Tasawar; Ahmad, Bashir

    2017-04-01

    This paper investigates the dynamics of a stochastic SIS epidemic model with a saturated incidence rate and double epidemic diseases, which makes the analysis more complex. The environmental variability in this study is characterized by white noise and jump noise. Sufficient conditions for the extinction and persistence in the mean of the two epidemic diseases are obtained. It is shown that the two diseases can coexist under appropriate conditions. Finally, numerical simulations are introduced to illustrate the results developed.
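    An Euler-Maruyama sketch of such a two-disease SIS system driven by white noise (the Lévy jump terms of the paper are omitted for brevity; all parameter values are illustrative assumptions):

    ```python
    # Euler-Maruyama sketch of a stochastic SIS model with saturated incidence
    # and two diseases, driven here by white noise only (the paper's Levy jump
    # terms are omitted). All parameter values are illustrative assumptions.
    import math, random

    def simulate(days, dt=0.01, seed=0):
        rng = random.Random(seed)
        S, I1, I2 = 0.8, 0.1, 0.1
        beta1, beta2, gamma1, gamma2, a = 0.6, 0.5, 0.3, 0.25, 1.0
        sigma1, sigma2 = 0.05, 0.05          # white-noise intensities
        for _ in range(int(days / dt)):
            inc1 = beta1 * S * I1 / (1 + a * I1)   # saturated incidence
            inc2 = beta2 * S * I2 / (1 + a * I2)
            dW1 = rng.gauss(0.0, math.sqrt(dt))
            dW2 = rng.gauss(0.0, math.sqrt(dt))
            # SIS: recovered individuals return to the susceptible pool
            S += (-inc1 - inc2 + gamma1 * I1 + gamma2 * I2) * dt
            I1 += (inc1 - gamma1 * I1) * dt + sigma1 * I1 * dW1
            I2 += (inc2 - gamma2 * I2) * dt + sigma2 * I2 * dW2
        return S, I1, I2

    S, I1, I2 = simulate(50)
    ```

    With small noise intensities both infective classes persist over this horizon, illustrating the coexistence regime; larger intensities push the system toward the extinction conditions the paper derives.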

  17. An Inverse Model of Double Diffusive Convection in the Beaufort Sea

    DTIC Science & Technology

    2009-12-01

    Master’s thesis by Jeremiah E. Chaplin. Figure list: double diffusive convection and mixing within the homogeneous layers; the ice-tethered profiler (ITP) system; location of ITPs 1-6; temperature-salinity plot for ITPs 1-6; histogram of data.

  18. Parity retransmission hybrid ARQ using rate 1/2 convolutional codes on a nonstationary channel

    NASA Technical Reports Server (NTRS)

    Lugand, Laurent R.; Costello, Daniel J., Jr.; Deng, Robert H.

    1989-01-01

    A parity retransmission hybrid automatic repeat request (ARQ) scheme is proposed which uses rate 1/2 convolutional codes and Viterbi decoding. A protocol is described which is capable of achieving higher throughputs than previously proposed parity retransmission schemes. The performance analysis is based on a two-state Markov model of a nonstationary channel. This model constitutes a first approximation to a nonstationary channel. The two-state channel model is used to analyze the throughput and undetected error probability of the protocol presented when the receiver has both an infinite and a finite buffer size. It is shown that the throughput improves as the channel becomes more bursty.
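    The two-state Markov (bursty) channel underlying the analysis can be sketched as follows; the transition and frame-error probabilities are illustrative assumptions, and the ARQ protocol itself is not reproduced:

    ```python
    # Two-state Markov ("bursty") channel sketch: a good state with a low
    # frame-error probability and a bad state with a high one. Transition and
    # error probabilities are illustrative assumptions, used only to show how
    # burstiness is modeled; the parity-retransmission protocol is not
    # reproduced here.
    import random

    def simulate(frames, p_gb, p_bg, pe_good, pe_bad, seed=1):
        rng = random.Random(seed)
        state = "good"
        delivered = 0
        for _ in range(frames):
            pe = pe_good if state == "good" else pe_bad
            if rng.random() >= pe:
                delivered += 1            # frame accepted on first try
            # Markov state transition
            if state == "good" and rng.random() < p_gb:
                state = "bad"
            elif state == "bad" and rng.random() < p_bg:
                state = "good"
        return delivered / frames

    throughput = simulate(10000, p_gb=0.01, p_bg=0.2,
                          pe_good=0.001, pe_bad=0.3)
    ```

    The steady-state fraction of time in the bad state is p_gb/(p_gb + p_bg), so burstiness concentrates losses; an ARQ analysis conditions its throughput on this state process rather than on an average error rate.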

  19. Modeling of the double leakage and leakage spillage flows in axial flow compressors

    NASA Astrophysics Data System (ADS)

    Du, Hui; Yu, Xianjun; Liu, Baojie

    2014-04-01

    A model to predict the double leakage and tip leakage leading edge spillage flows was developed. The model combines a TLV trajectory model and a TLV diameter model, and is formulated as a function of one-dimensional compressor design parameters, i.e. the compressor massflow coefficient ϕ and compressor loading coefficient Ψ, and of critical blade geometrical parameters, i.e. blade solidity σ, stagger angle βS, blade chord length C, and blade pitch length S. Using this model, the double leakage and tip leakage leading edge spillage flows can be predicted even during the compressor preliminary design process. Since the leading edge spillage flow usually indicates the inception of spike-type stall, i.e. the compressor is a tip-critical design, the model can also be used as a tool for designers to choose the critical design parameters. Finally, experimental data from the literature were used to validate the model, and the results proved that the model is reliable.

  20. Anomalous transport in discrete arcs and simulation of double layers in a model auroral circuit

    NASA Technical Reports Server (NTRS)

    Smith, Robert A.

    1987-01-01

    The evolution and long-time stability of a double layer in a discrete auroral arc requires that the parallel current in the arc, which may be considered uniform at the source, be diverted within the arc to charge the flanks of the U-shaped double-layer potential structure. A simple model is presented in which this current re-distribution is effected by anomalous transport based on electrostatic lower hybrid waves driven by the flank structure itself. This process provides the limiting constraint on the double-layer potential. The flank charging may be represented as that of a nonlinear transmission line. A simplified model circuit, in which the transmission line is represented by a nonlinear impedance in parallel with a variable resistor, is incorporated in a 1-d simulation model to give the current density at the DL boundaries. Results are presented for the scaling of the DL potential as a function of the width of the arc and the saturation efficiency of the lower hybrid instability mechanism.

  1. Anomalous transport in discrete arcs and simulation of double layers in a model auroral circuit

    NASA Technical Reports Server (NTRS)

    Smith, Robert A.

    1987-01-01

    The evolution and long-time stability of a double layer (DL) in a discrete auroral arc requires that the parallel current in the arc, which may be considered uniform at the source, be diverted within the arc to charge the flanks of the U-shaped double layer potential structure. A simple model is presented in which this current redistribution is effected by anomalous transport based on electrostatic lower hybrid waves driven by the flank structure itself. This process provides the limiting constraint on the double layer potential. The flank charging may be represented as that of a nonlinear transmission line. A simplified model circuit, in which the transmission line is represented by a nonlinear impedance in parallel with a variable resistor, is incorporated in a one-dimensional simulation model to give the current density at the DL boundaries. Results are presented for the scaling of the DL potential as a function of the width of the arc and the saturation efficiency of the lower hybrid instability mechanism.

  2. Internal flow numerical simulation of double-suction centrifugal pump using DES model

    NASA Astrophysics Data System (ADS)

    Zhou, P. J.; Wang, F. J.; Yang, M.

    2012-11-01

    Flow simulation for a double-suction centrifugal pump is a challenging task because wall effects are strong in this type of pump. Detached-eddy simulation (DES), a hybrid RANS-LES approach, has emerged recently as a potential compromise between RANS-based turbulence models and large eddy simulation. In this approach, the unsteady RANS model is employed in the boundary layer, while the LES treatment is applied to the separated region. In this paper, the S-A DES method and the SST k-ω DES method are applied to the numerical simulation of the 3D flow in the whole passage of a double-suction centrifugal pump. The unsteady flow field, including velocity and pressure distributions, is obtained. The head and efficiency of the pump are predicted and compared with experimental results. According to the calculated results, the S-A DES model makes it easy to control the partition of the simulation when using a near-wall grid with the 30 < y+ < 300 control approach. It also performs better in efficiency and accuracy than the SST k-ω DES method, and is therefore more suitable for solving the unsteady flow in a double-suction centrifugal pump. The S-A DES method can capture more flow phenomena than the SST k-ω DES method. In addition, it can accurately predict the power performance under different flow conditions, and can reflect pressure fluctuation characteristics.

  3. Experiments and Modeling of Boric Acid Permeation through Double-Skinned Forward Osmosis Membranes.

    PubMed

    Luo, Lin; Zhou, Zhengzhong; Chung, Tai-Shung; Weber, Martin; Staudt, Claudia; Maletzko, Christian

    2016-07-19

    Boron removal is one of the great challenges in modern wastewater treatment, owing to the unique small size and fast diffusion rate of neutral boric acid molecules. As forward osmosis (FO) membranes with a single selective layer are insufficient to reject boron, double-skinned FO membranes with boron rejection up to 83.9% were specially designed for boron permeation studies. The superior boron rejection properties of double-skinned FO membranes were demonstrated by theoretical calculations, and verified by experiments. The double-skinned FO membrane was fabricated using a sulfonated polyphenylenesulfone (sPPSU) polymer as the hydrophilic substrate and polyamide as the selective layer material via interfacial polymerization on top and bottom surfaces. A strong agreement between experimental data and modeling results validates the membrane design and confirms the success of model prediction. The effects of key parameters on boron rejection, such as boron permeability of both selective layers and structure parameter, were also investigated in-depth with the mathematical modeling. This study may provide insights not only for boron removal from wastewater, but also open up the design of next generation FO membranes to eliminate low-rejection molecules in wider applications.

  4. Dystrophin and dysferlin double mutant mice: a novel model for rhabdomyosarcoma.

    PubMed

    Hosur, Vishnu; Kavirayani, Anoop; Riefler, Jennifer; Carney, Lisa M B; Lyons, Bonnie; Gott, Bruce; Cox, Gregory A; Shultz, Leonard D

    2012-05-01

    Although researchers have yet to establish a link between muscular dystrophy (MD) and sarcomas in human patients, literature suggests that the MD genes dystrophin and dysferlin act as tumor suppressor genes in mouse models of MD. For instance, dystrophin-deficient mdx and dysferlin-deficient A/J mice, models of human Duchenne MD and limb-girdle MD type 2B, respectively, develop mixed sarcomas with variable penetrance and latency. To further establish the correlation between MD and sarcoma development, and to test whether a combined deletion of dystrophin and dysferlin exacerbates MD and augments the incidence of sarcomas, we generated dystrophin and dysferlin double mutant mice (STOCK-Dysf(prmd)Dmd(mdx-5Cv)). Not surprisingly, the double mutant mice develop severe MD symptoms and, moreover, develop rhabdomyosarcoma (RMS) at an average age of 12 months, with an incidence of >90%. Histological and immunohistochemical analyses, using a panel of antibodies against skeletal muscle cell proteins, electron microscopy, cytogenetics, and molecular analysis reveal that the double mutant mice develop RMS. The present finding bolsters the correlation between MD and sarcomas, and provides a model not only to examine the cellular origins but also to identify mechanisms and signal transduction pathways triggering development of RMS.

  5. Two-parameter double-oscillator model of Mathews-Lakshmanan type: Series solutions and supersymmetric partners

    SciTech Connect

    Schulze-Halberg, Axel E-mail: xbataxel@gmail.com; Wang, Jie

    2015-07-15

    We obtain series solutions, the discrete spectrum, and supersymmetric partners for a quantum double-oscillator system. Its potential features a superposition of the one-parameter Mathews-Lakshmanan interaction and a one-parameter harmonic or inverse harmonic oscillator contribution. Furthermore, our results are transferred to a generalized Pöschl-Teller model that is isospectral to the double-oscillator system.
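
For context, the one-parameter Mathews-Lakshmanan oscillator underlying this system is usually written as follows (standard classical form; the quantum version replaces the constant mass by the position-dependent mass m(x) = m0/(1 + λx²), and the paper's two-parameter model superposes a harmonic or inverse-harmonic term on this):

```latex
% Classical Mathews-Lakshmanan oscillator (standard one-parameter form)
% Lagrangian with position-dependent effective mass:
L \;=\; \frac{\dot{x}^{2} - \omega^{2} x^{2}}{2\,(1 + \lambda x^{2})}
% Resulting nonlinear equation of motion:
(1 + \lambda x^{2})\,\ddot{x} \;-\; \lambda x\,\dot{x}^{2} \;+\; \omega^{2} x \;=\; 0
```

This equation admits the exact harmonic solution x(t) = A sin(Ωt) with the amplitude-dependent frequency Ω² = ω²/(1 + λA²).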

  6. Comparison of the accuracy of the calibration model on the double and single integrating sphere systems

    NASA Astrophysics Data System (ADS)

    Singh, A.; Karsten, A.

    2011-06-01

    The accuracy of the calibration model for the single and double integrating sphere systems is compared for a white light system. A calibration model is created from a matrix of samples with known absorption and reduced scattering coefficients. In this instance the samples are made using different concentrations of intralipid and black ink. The total and diffuse transmittance and reflectance are measured on both setups, and the accuracy of each model is compared by evaluating the prediction errors of the calibration model for the different systems. Current results indicate that the single integrating sphere setup is more accurate than the double sphere method, based on the low prediction errors of the model for the single sphere system with a He-Ne laser as well as a white light source. The model still needs to be refined for more absorption factors. Prediction accuracy was then assessed by extracting the optical properties of solid resin-based phantoms on each system. When these phantom properties were used as input to the modeling software, excellent agreement between measured and simulated data was found for the single sphere system.
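
The calibration-model workflow described above can be sketched generically: fit a regression from sphere measurements back to known optical properties, then score it by prediction error. The forward model below is a made-up smooth stand-in for the instrument response (a real system would use measured sphere signals), so only the workflow, not the numbers, reflects the paper.

```python
import numpy as np

# Sketch of a calibration-model workflow with a synthetic instrument model.
rng = np.random.default_rng(0)

# Calibration grid: known absorption mu_a and reduced scattering mu_s'
mu_a = np.linspace(0.01, 0.2, 6)          # 1/mm (ink concentration)
mu_s = np.linspace(0.5, 2.0, 6)           # 1/mm (intralipid concentration)
A, S = np.meshgrid(mu_a, mu_s)
props = np.column_stack([A.ravel(), S.ravel()])

def forward(p):
    """Hypothetical smooth response: (diffuse reflectance, total transmittance)."""
    a, s = p[:, 0], p[:, 1]
    refl = s / (s + 10 * a)               # reflectance grows with scattering
    trans = np.exp(-(a + 0.1 * s) * 5.0)  # transmittance falls with both
    return np.column_stack([refl, trans])

meas = forward(props) + rng.normal(0, 1e-3, (len(props), 2))  # noisy calibration data

def design(m):
    """Quadratic polynomial features of the two measured signals."""
    r, t = m[:, 0], m[:, 1]
    return np.column_stack([np.ones_like(r), r, t, r * t, r**2, t**2])

coef, *_ = np.linalg.lstsq(design(meas), props, rcond=None)   # calibration model

pred = design(forward(props)) @ coef                           # predict properties
rmse = np.sqrt(np.mean((pred - props) ** 2, axis=0))
print("RMSE (mu_a, mu_s'):", rmse)
```

Comparing this RMSE between the two sphere setups is exactly the prediction-error comparison the abstract describes.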

  7. Mechanistic Modelling and Bayesian Inference Elucidates the Variable Dynamics of Double-Strand Break Repair.

    PubMed

    Woods, Mae L; Barnes, Chris P

    2016-10-01

    DNA double-strand breaks are lesions that form during metabolism, DNA replication and exposure to mutagens. When a double-strand break occurs one of a number of repair mechanisms is recruited, all of which have differing propensities for mutational events. Despite DNA repair being of crucial importance, the relative contribution of these mechanisms and their regulatory interactions remain to be fully elucidated. Understanding these mutational processes will have a profound impact on our knowledge of genomic instability, with implications across health, disease and evolution. Here we present a new method to model the combined activation of non-homologous end joining, single strand annealing and alternative end joining, following exposure to ionising radiation. We use Bayesian statistics to integrate eight biological data sets of double-strand break repair curves under varying genetic knockouts and confirm that our model is predictive by re-simulating and comparing to additional data. Analysis of the model suggests that there are at least three disjoint modes of repair, which we assign as fast, slow and intermediate. Our results show that when multiple data sets are combined, the rate for intermediate repair is variable amongst genetic knockouts. Further analysis suggests that the ratio between slow and intermediate repair depends on the presence or absence of DNA-PKcs and Ku70, which implies that non-homologous end joining and alternative end joining are not independent. Finally, we consider the proportion of double-strand breaks within each mechanism as a time series and predict activity as a function of repair rate. We outline how our insights can be directly tested using imaging and sequencing techniques and conclude that there is evidence of variable dynamics in alternative repair pathways. Our approach is an important step towards providing a unifying theoretical framework for the dynamics of DNA repair processes.
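
The three disjoint repair modes identified above imply repair curves that are sums of exponentials; this is the kind of curve the Bayesian machinery fits. A minimal sketch, with purely illustrative rates and fractions (not the paper's inferred values):

```python
import numpy as np

# Sketch: double-strand-break repair as three independent first-order
# processes (fast / intermediate / slow). Rates and fractions are
# hypothetical; the paper infers such parameters from repair-curve data.

rates = {"fast": 2.0, "intermediate": 0.3, "slow": 0.02}      # 1/h
fractions = {"fast": 0.6, "intermediate": 0.3, "slow": 0.1}

def breaks_remaining(t):
    """Fraction of initial DSBs still unrepaired at time t (hours)."""
    return sum(f * np.exp(-rates[m] * t) for m, f in fractions.items())

t = np.linspace(0, 24, 49)
curve = breaks_remaining(t)
print(f"unrepaired at 1 h: {breaks_remaining(1.0):.3f}, "
      f"at 24 h: {breaks_remaining(24.0):.3f}")
```

Knocking out a pathway would correspond to redistributing the fractions, which is how curves under different genetic knockouts constrain the shared rate parameters.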

  8. Mechanistic Modelling and Bayesian Inference Elucidates the Variable Dynamics of Double-Strand Break Repair

    PubMed Central

    2016-01-01

    DNA double-strand breaks are lesions that form during metabolism, DNA replication and exposure to mutagens. When a double-strand break occurs one of a number of repair mechanisms is recruited, all of which have differing propensities for mutational events. Despite DNA repair being of crucial importance, the relative contribution of these mechanisms and their regulatory interactions remain to be fully elucidated. Understanding these mutational processes will have a profound impact on our knowledge of genomic instability, with implications across health, disease and evolution. Here we present a new method to model the combined activation of non-homologous end joining, single strand annealing and alternative end joining, following exposure to ionising radiation. We use Bayesian statistics to integrate eight biological data sets of double-strand break repair curves under varying genetic knockouts and confirm that our model is predictive by re-simulating and comparing to additional data. Analysis of the model suggests that there are at least three disjoint modes of repair, which we assign as fast, slow and intermediate. Our results show that when multiple data sets are combined, the rate for intermediate repair is variable amongst genetic knockouts. Further analysis suggests that the ratio between slow and intermediate repair depends on the presence or absence of DNA-PKcs and Ku70, which implies that non-homologous end joining and alternative end joining are not independent. Finally, we consider the proportion of double-strand breaks within each mechanism as a time series and predict activity as a function of repair rate. We outline how our insights can be directly tested using imaging and sequencing techniques and conclude that there is evidence of variable dynamics in alternative repair pathways. Our approach is an important step towards providing a unifying theoretical framework for the dynamics of DNA repair processes. PMID:27741226

  9. Modeling and experimental results of low-background extrinsic double-injection IR detector response

    NASA Astrophysics Data System (ADS)

    Zaletaev, N. B.; Filachev, A. M.; Ponomarenko, V. P.; Stafeev, V. I.

    2006-05-01

    The bias-dependent response of an extrinsic double-injection IR detector under irradiation in the extrinsic and intrinsic responsivity spectral ranges was obtained analytically and through numerical modeling. The model includes the transient response and generation-recombination noise as well. It is shown that a great increase in current responsivity (by orders of magnitude) without essential change in detectivity can take place in the extrinsic responsivity range for detectors on semiconductor materials with long-lifetime minority charge carriers, if double-injection photodiodes are made on them instead of photoconductive detectors. The field dependence of the lifetimes and mobilities of charge carriers essentially influences detector characteristics, especially in the voltage range where the drift length of majority carriers is greater than the distance between the contacts. The model developed is in good agreement with experimental data obtained for n-Si:Cd, p-Ge:Au, and Ge:Hg diodes, as well as for diamond radiation detectors. A BLIP-detection responsivity of about 2000 A/W (at a wavelength of 10 micrometers) for Ge:Hg diodes has been reached in a frequency range of 500 Hz under a background of 6 × 10^11 cm^-2 s^-1 at a temperature of 20 K. Possibilities for optimization of detector performance are discussed. Extrinsic double-injection photodiodes and other radiation detectors with internal gain based on double injection are attractive for systems subject to strong disturbances, in particular vibrations, because their high responsivity can ensure higher resistance to interference.
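
As a quick order-of-magnitude check of the quoted 2000 A/W figure: for any photodetector with internal gain G and quantum efficiency η, the responsivity is R = (qλ/hc)·η·G, so the quoted value implies a substantial internal gain. The assumption η = 1 below makes the computed gain a lower bound; this is a back-of-envelope check, not the paper's model.

```python
# Back-of-envelope check of 2000 A/W at a 10 um wavelength.
#     R = (q * wavelength / (h * c)) * eta * G   [A/W]

q = 1.602e-19       # elementary charge, C
h = 6.626e-34       # Planck constant, J s
c = 2.998e8         # speed of light, m/s

wavelength = 10e-6  # m
eta = 1.0           # assumed quantum efficiency (upper bound)

r_unity_gain = q * wavelength / (h * c) * eta   # responsivity at G = 1
implied_gain = 2000.0 / r_unity_gain            # gain needed for 2000 A/W
print(f"responsivity at G=1: {r_unity_gain:.2f} A/W, "
      f"implied gain: {implied_gain:.0f}")
```

So a 10 µm photon delivers at most about 8 A/W without gain, and the quoted responsivity corresponds to an internal gain of a few hundred.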

  10. Vibro-acoustic modelling of aircraft double-walls with structural links using Statistical Energy Analysis

    NASA Astrophysics Data System (ADS)

    Campolina, Bruno L.

    The prediction of aircraft interior noise involves the vibroacoustic modelling of the fuselage with noise control treatments. This structure is composed of a stiffened metallic or composite panel, lined with a thermal and acoustic insulation layer (glass wool), and structurally connected via vibration isolators to a commercial lining panel (trim). This work aims at tailoring the noise control treatments, taking design constraints such as weight and space optimization into account. For this purpose, a representative aircraft double-wall is modelled using the Statistical Energy Analysis (SEA) method. Laboratory excitations such as diffuse acoustic field and point force are addressed, and trends are derived for applications under in-flight conditions, considering turbulent boundary layer excitation. The effect of porous layer compression is addressed first. In aeronautical applications, compression can result from the installation of equipment and cables. It is studied analytically and experimentally, using a single panel and a fibrous layer uniformly compressed over 100% of its surface. When compression increases, a degradation of the transmission loss of up to 5 dB for a 50% compression of the porous thickness is observed, mainly in the mid-frequency range (around 800 Hz). However, for realistic cases the effect should be smaller, since the compression rate is lower and compression occurs locally. Then the transmission through structural connections between panels is addressed using a four-pole approach that links the force-velocity pair at each side of the connection. The modelling integrates experimental dynamic stiffness of isolators, derived using an adapted test rig. The structural transmission is then experimentally validated and included in the double-wall SEA model as an equivalent coupling loss factor (CLF) between panels. The tested structures being flat, only axial transmission is addressed. Finally, the dominant sound transmission paths are
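
The SEA framework used above reduces, in its simplest form, to a steady-state power balance between subsystems, with structural links entering as additional (equivalent) coupling loss factors. A minimal two-subsystem sketch with hypothetical loss factors, not the thesis's full double-wall model:

```python
import numpy as np

# Minimal two-subsystem SEA power balance (source panel -> trim panel).
# All loss factors and modal densities are hypothetical. A structural link
# would add to the coupling loss factor eta12, as in the equivalent-CLF
# approach described in the abstract.

omega = 2 * np.pi * 1000.0   # band centre frequency, rad/s
eta1, eta2 = 0.01, 0.02      # damping loss factors of panels 1 and 2
eta12 = 1e-3                 # coupling loss factor 1 -> 2 (airborne + structural)
n1, n2 = 0.5, 0.4            # modal densities (modes per rad/s)
eta21 = eta12 * n1 / n2      # SEA reciprocity relation

p_in = np.array([1.0, 0.0])  # 1 W injected into panel 1 only

# Power balance: omega * [[eta1+eta12, -eta21], [-eta12, eta2+eta21]] @ E = P_in
A = omega * np.array([[eta1 + eta12, -eta21],
                      [-eta12, eta2 + eta21]])
E = np.linalg.solve(A, p_in)
print(f"subsystem energies: E1={E[0]:.4e} J, E2={E[1]:.4e} J")
```

Ranking the power transmitted through each coupling term of such a matrix is what a transmission-path analysis amounts to.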

  11. Double-stranded DNA organization in bacteriophage heads: An alternative toroid-based model

    SciTech Connect

    Hud, N.V.

    1995-10-01

    Studies of the organization of double-stranded DNA within bacteriophage heads during the past four decades have produced a wealth of data. However, despite the presentation of numerous models, the true organization of DNA within phage heads remains unresolved. The observations of toroidal DNA structures in electron micrographs of phage lysates have long been cited as support for the organization of DNA in a spool-like fashion. This particular model, like all other models, has not been found to be consistent with all available data. Recently, the authors proposed that DNA within toroidal condensates produced in vitro is organized in a manner significantly different from that suggested by the spool model. This new toroid model has allowed the development of an alternative model for DNA organization within bacteriophage heads that is consistent with a wide range of biophysical data. Here the authors propose that bacteriophage DNA is packaged in a toroid that is folded into a highly compact structure.

  12. Microsecond kinetics in model single- and double-stranded amylose polymers.

    PubMed

    Sattelle, Benedict M; Almond, Andrew

    2014-05-07

    Amylose, a component of starch with increasing biotechnological significance, is a linear glucose polysaccharide that self-organizes into single- and double-helical assemblies. Starch granule packing, gelation and inclusion-complex formation result from finely balanced macromolecular kinetics that have eluded precise experimental quantification. Here, graphics processing unit (GPU) accelerated multi-microsecond aqueous simulations are employed to explore conformational kinetics in model single- and double-stranded amylose. The all-atom dynamics concur with prior X-ray and NMR data while surprising and previously overlooked microsecond helix-coil, glycosidic linkage and pyranose ring exchange are hypothesized. In a dodecasaccharide, single-helical collapse was correlated with linkages and rings transitioning from their expected syn and (4)C1 chair conformers. The associated microsecond exchange rates were dependent on proximity to the termini and chain length (comparing hexa- and trisaccharides), while kinetic features of dodecasaccharide linkage and ring flexing are proposed to be a good model for polymers. Similar length double-helices were stable on microsecond timescales but the parallel configuration was sturdier than the antiparallel equivalent. In both, tertiary organization restricted local chain dynamics, implying that simulations of single amylose strands cannot be extrapolated to dimers. Unbiased multi-microsecond simulations of amylose are proposed as a valuable route to probing macromolecular kinetics in water, assessing the impact of chemical modifications on helical stability and accelerating the development of new biotechnologies.

  13. Single-trial EEG RSVP classification using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William

    2016-05-01

    Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion where physiological data from healthy subjects is combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision have been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.
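
To make the architecture idea concrete, here is a toy forward pass of a CNN-style classifier on one EEG trial: a temporal convolution spanning all channels, ReLU, global average pooling, and a softmax over target/non-target. The shapes and random weights are illustrative only (the paper's actual architecture is not specified here), and no training is performed.

```python
import numpy as np

# Toy forward pass of a CNN-style classifier on one simulated EEG epoch.
rng = np.random.default_rng(1)
n_channels, n_samples = 8, 128
trial = rng.normal(size=(n_channels, n_samples))       # one EEG epoch

n_filters, k = 4, 9                                    # 4 temporal filters, length 9
w_conv = rng.normal(scale=0.1, size=(n_filters, n_channels, k))
w_dense = rng.normal(scale=0.1, size=(2, n_filters))   # 2 classes

def forward(x):
    t_out = n_samples - k + 1
    feat = np.zeros((n_filters, t_out))
    for f in range(n_filters):                         # valid temporal convolution
        for t in range(t_out):
            feat[f, t] = np.sum(w_conv[f] * x[:, t:t + k])
    feat = np.maximum(feat, 0.0)                       # ReLU
    pooled = feat.mean(axis=1)                         # global average pooling
    logits = w_dense @ pooled
    p = np.exp(logits - logits.max())                  # numerically stable softmax
    return p / p.sum()

probs = forward(trial)
print("class probabilities:", probs)
```

Training such a network across subjects, rather than per subject, is the generalization step the paper evaluates against XDAWN, HDCA, and CSP.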

  14. Double layer effects in a model of proton discharge on charged electrodes

    PubMed Central

    2014-01-01

    Summary We report first results on double layer effects on proton discharge reactions from aqueous solutions to charged platinum electrodes. We have extended a recently developed combined proton transfer/proton discharge model on the basis of empirical valence bond theory to include specifically adsorbed sodium cations and chloride anions. For each of four studied systems 800–1000 trajectories of a discharging proton were integrated by molecular dynamics simulations until discharge occurred. The results show significant influences of ion presence on the average behavior of protons prior to the discharge event. Rationalization of the observed behavior cannot be based solely on the electrochemical potential (or surface charge) but needs to resort to the molecular details of the double layer structure. PMID:25161833

  15. A double layer model for solar X-ray and microwave pulsations

    NASA Technical Reports Server (NTRS)

    Tapping, K. F.

    1986-01-01

    The wide range of wavelengths over which quasi-periodic pulsations have been observed suggests that the mechanism causing them acts upon the supply of high-energy electrons driving the emission processes. A model is described which is based upon the radial shrinkage of a magnetic flux tube. The concentration of the current, along with the reduction in the number of available charge carriers, can lead to a condition where the current demand exceeds the capacity of the thermal electrons. Driven by the large inductance of the external current circuit, an instability takes place in the tube throat, resulting in the formation of a potential double layer, which then accelerates electrons and ions to MeV energies. The double layer can be unstable, collapsing and reforming repeatedly. The resulting pulsed particle beams give rise to pulsating emissions which are observed at radio and X-ray wavelengths.

  16. Kinetic model for an auroral double layer that spans many gravitational scale heights

    SciTech Connect

    Robertson, Scott

    2014-12-15

    The electrostatic potential profile and the particle densities of a simplified auroral double layer are found using a relaxation method to solve Poisson's equation in one dimension. The electron and ion distribution functions for the ionosphere and magnetosphere are specified at the boundaries, and the particle densities are found from a collisionless kinetic model. The ion distribution function includes the gravitational potential energy; hence, the unperturbed ionospheric plasma has a density gradient. The plasma potential at the upper boundary is given a large negative value to accelerate electrons downward. The solutions for a wide range of dimensionless parameters show that the double layer forms just above a critical altitude that occurs approximately where the ionospheric density has fallen to the magnetospheric density. Below this altitude, the ionospheric ions are gravitationally confined and have the expected scale height for quasineutral plasma in gravity.

  17. Simulation of double layers in a model auroral circuit with nonlinear impedance

    NASA Technical Reports Server (NTRS)

    Smith, R. A.

    1986-01-01

    A reduced circuit description of the U-shaped potential structure of a discrete auroral arc, consisting of the flank transmission line plus parallel-electric-field region, is used to provide the boundary condition for one-dimensional simulations of the double-layer evolution. The model yields asymptotic scalings of the double-layer potential as a function of an anomalous transport coefficient alpha and of the perpendicular length scale l(a) of the arc. The arc potential phi(DL) scales approximately linearly with alpha, and for fixed alpha, phi(DL) scales as l(a) to the z power. Using parameters appropriate to the auroral-zone acceleration region, potentials of phi(DL) of about 10 kV scale to projected ionospheric dimensions of about 1 km, with power flows of the order of magnitude of substorm dissipation rates.

  18. Classical mapping for Hubbard operators: Application to the double-Anderson model

    SciTech Connect

    Li, Bin; Miller, William H.; Levy, Tal J.; Rabani, Eran

    2014-05-28

    A classical Cartesian mapping for Hubbard operators is developed to describe the nonequilibrium transport of an open quantum system with many electrons. The mapping of the Hubbard operators representing the many-body Hamiltonian is derived by using analogies from classical mappings of boson creation and annihilation operators vis-à-vis a coherent state representation. The approach provides qualitative results for a double quantum dot array (double Anderson impurity model) coupled to fermionic leads for a range of bias voltages, Coulomb couplings, and hopping terms. While the width and height of the conduction peaks show deviations from the master equation approach considered to be accurate in the limit of weak system-leads couplings and high temperatures, the Hubbard mapping captures all transport channels involving transition between many electron states, some of which are not captured by approximate nonequilibrium Green function closures.

  19. Computer modelling of double doped SrAl2O4 for phosphor applications

    NASA Astrophysics Data System (ADS)

    Jackson, R. A.; Kavanagh, L. A.; Snelgrove, R. A.

    2017-02-01

    This paper describes a modelling study of SrAl2O4, which has applications as a phosphor material when doped with Eu2+ and Dy3+ ions. The procedure for modelling the pure and doped material is described and then results are presented for the single and double doped material. Solution energies are calculated and used to deduce dopant location, and mean field calculations are used to predict the effect of doping on crystal lattice parameter. Possible charge compensation mechanisms for Dy3+ ions substituting at a Sr2+ site are discussed.

  20. Fabrication of double-walled section models of the ITER vacuum vessel

    SciTech Connect

    Koizumi, K.; Kanamori, N.; Nakahira, M.; Itoh, Y.; Horie, M.; Tada, E.; Shimamoto, S.

    1995-12-31

    Trial fabrication of double-walled section models has been performed at the Japan Atomic Energy Research Institute (JAERI) for the construction of the ITER vacuum vessel. By employing TIG (Tungsten-arc Inert Gas) welding and EB (Electron Beam) welding, for each model, two full-scale section models of a 7.5° toroidal sector in the curved section at the bottom of the vacuum vessel have been successfully fabricated with a final dimensional error within ±5 mm of the nominal values. A sufficient technical database on the candidate fabrication procedures, welding distortion and dimensional stability of full-scale models has been obtained through the fabrications. This paper describes the design and fabrication procedures of both full-scale section models and the major results obtained through the fabrication.

  1. A modified double distribution lattice Boltzmann model for axisymmetric thermal flow

    NASA Astrophysics Data System (ADS)

    Wang, Zuo; Liu, Yan; Wang, Heng; Zhang, Jiazhong

    2017-04-01

    In this paper, a double distribution lattice Boltzmann model for axisymmetric thermal flow is proposed. In the model, the flow field is solved by a multi-relaxation-time lattice Boltzmann scheme while the temperature field by a newly proposed lattice-kinetic-based Boltzmann scheme. Chapman-Enskog analysis demonstrates that the axisymmetric energy equation in the cylindrical coordinate system can be recovered by the present lattice-kinetic-based Boltzmann scheme for temperature field. Numerical tests, including the thermal Hagen-Poiseuille flow and natural convection in a vertical annulus, have been carried out, and the results predicted by the present model agree well with the existing numerical data. Furthermore, the present model shows better numerical stability than the existing model.
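
As a bare-bones analogue of the temperature distribution function in a double-distribution model, here is a minimal 1-D lattice Boltzmann diffusion solver (D1Q3, BGK collision). The paper's axisymmetric lattice-kinetic scheme adds cylindrical source terms and couples to a multi-relaxation-time flow solver; none of that is reproduced here.

```python
import numpy as np

# Minimal D1Q3 lattice Boltzmann solver for pure diffusion of temperature T.
# Velocities {0, +1, -1}; thermal diffusivity alpha = c_s^2 * (tau - 1/2)
# with c_s^2 = 1/3 in lattice units.

nx, tau = 200, 0.8
w = np.array([2/3, 1/6, 1/6])           # lattice weights
alpha = (tau - 0.5) / 3.0               # resulting diffusivity

x = np.arange(nx)
T = np.exp(-0.01 * (x - nx / 2) ** 2)   # initial Gaussian temperature pulse
f = w[:, None] * T                      # initialize populations at equilibrium

for _ in range(500):
    feq = w[:, None] * f.sum(axis=0)    # equilibrium: f_i^eq = w_i * T
    f += -(f - feq) / tau               # BGK collision
    f[1] = np.roll(f[1], 1)             # stream velocity +1 (periodic)
    f[2] = np.roll(f[2], -1)            # stream velocity -1 (periodic)

T_final = f.sum(axis=0)
print(f"total heat before/after: {T.sum():.6f} / {T_final.sum():.6f}")
```

Because both collision and streaming conserve the zeroth moment, total heat is preserved while the pulse spreads, which is the basic consistency property a Chapman-Enskog analysis formalizes.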

  2. Ambient modal testing of a double-arch dam: the experimental campaign and model updating

    NASA Astrophysics Data System (ADS)

    García-Palacios, Jaime H.; Soria, José M.; Díaz, Iván M.; Tirado-Andrés, Francisco

    2016-09-01

    A finite element model updating of a double-curvature arch dam (La Tajera, Spain) is carried out here using the modal parameters obtained from an operational modal analysis. That is, the system modal dampings, natural frequencies and mode shapes have been identified using output-only identification techniques under environmental loads (wind, vehicles). A finite element model of the dam-reservoir-foundation system was initially created. Then, a testing campaign was carried out, measuring the most significant test points using high-sensitivity, wirelessly synchronized accelerometers. Afterwards, the initial model was updated using a Monte Carlo based approach in order to match the recorded dynamic behaviour. The updated model may be used within a structural health monitoring system for damage detection or, for instance, for the analysis of the seismic response of the arch dam-reservoir-foundation coupled system.

  3. A Study on Equivalent Circuit Model of High-Power Density Electric Double Layer Capacitor

    NASA Astrophysics Data System (ADS)

    Yamada, Tetsu; Yamashiro, Susumu; Sasaki, Masakazu; Araki, Shuuichi

    Various models for the equivalent circuit of the EDLC (Electric Double Layer Capacitor) have been presented so far. A multi-stage connection of RC circuits is a representative model for simulating the EDLC's charge-discharge characteristics. However, since a high-energy-density EDLC for electric power storage has an electrostatic capacity of thousands of farads, phenomena that are almost negligible in conventional capacitors appear notably in actual measurements. To overcome this difficulty, we develop in this paper an equivalent circuit model using a nonlinear model that considers the voltage dependency of the electrostatic capacity. After various simulations and comparisons with experimental results, we confirmed the effectiveness of the proposed model.
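
The voltage-dependency idea can be sketched with the common linear form C(V) = C0 + k·V (the paper's exact nonlinear model is not specified here; all parameter values below are illustrative, not fitted to any device). Charging at constant current, the voltage-dependent capacitance rises as the cell charges, so the terminal voltage climbs more slowly than a fixed-C model predicts:

```python
# Constant-current charge of an EDLC with voltage-dependent capacitance
# C(V) = C0 + k*V, compared against a fixed capacitance C0.

C0 = 1000.0          # F, capacitance at V = 0
k = 200.0            # F/V, voltage dependency (hypothetical)
I = 10.0             # A, constant charging current
dt, t_end = 0.1, 100.0

v_lin = v_nl = 0.0
for _ in range(int(t_end / dt)):
    v_lin += I / C0 * dt                 # linear model: dV/dt = I / C0
    v_nl += I / (C0 + k * v_nl) * dt     # nonlinear:   dV/dt = I / C(V)

print(f"after {t_end:.0f} s: linear model {v_lin:.3f} V, "
      f"nonlinear model {v_nl:.3f} V")
```

The closed-form check: the delivered charge satisfies C0·V + k·V²/2 = I·t, giving V ≈ 0.916 V at 100 s versus 1.000 V for the fixed-capacitance model, which the Euler integration reproduces.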

  4. Compact model for short-channel symmetric double-gate junctionless transistors

    NASA Astrophysics Data System (ADS)

    Ávila-Herrera, F.; Cerdeira, A.; Paz, B. C.; Estrada, M.; Íñiguez, B.; Pavanello, M. A.

    2015-09-01

    In this work a compact analytical model for short-channel double-gate junctionless transistors is presented, considering variable mobility and the main short-channel effects, such as threshold voltage roll-off, series resistance, drain saturation voltage, channel shortening and saturation velocity. The threshold voltage shift and subthreshold slope variation are determined through the minimum value of the potential in the channel. Only eight model parameters are used. The model is physically based, considers the total charge in the Si layer and covers operation in both depletion and accumulation. The model is validated by 2D simulations in ATLAS for channel lengths from 25 nm to 500 nm and for doping concentrations of 5 × 10^18 and 1 × 10^19 cm^-3, as well as for Si layer thicknesses of 10 and 15 nm, in order to guarantee normally-off operation of the transistors. The model provides an accurate, continuous description of the transistor behavior in all operating regions.

  5. Formal Uncertainty and Dispersion of Single and Double Difference Models for GNSS-Based Attitude Determination.

    PubMed

    Chen, Wen; Yu, Chao; Dong, Danan; Cai, Miaomiao; Zhou, Feng; Wang, Zhiren; Zhang, Lei; Zheng, Zhengqi

    2017-02-20

    With multi-antenna synchronized global navigation satellite system (GNSS) receivers, the single difference (SD) between two antennas is able to eliminate both satellite and receiver clock error, thus it becomes necessary to reconsider the equivalency problem between the SD and double difference (DD) models. In this paper, we quantitatively compared the formal uncertainties and dispersions between multiple SD models and the DD model, and also carried out static and kinematic short baseline experiments. The theoretical and experimental results show that under a non-common clock scheme the SD and DD model are equivalent. Under a common clock scheme, if we estimate stochastic uncalibrated phase delay (UPD) parameters every epoch, this SD model is still equivalent to the DD model, but if we estimate only one UPD parameter for all epochs or take it as a known constant, the SD (here called SD2) and DD models are no longer equivalent. For the vertical component of baseline solutions, the formal uncertainties of the SD2 model are two times smaller than those of the DD model, and the dispersions of the SD2 model are even more than twice smaller than those of the DD model. In addition, to obtain baseline solutions, the SD2 model requires a minimum of three satellites, while the DD model requires a minimum of four satellites, which makes the SD2 more advantageous in attitude determination under sheltered environments.

  6. Formal Uncertainty and Dispersion of Single and Double Difference Models for GNSS-Based Attitude Determination

    PubMed Central

    Chen, Wen; Yu, Chao; Dong, Danan; Cai, Miaomiao; Zhou, Feng; Wang, Zhiren; Zhang, Lei; Zheng, Zhengqi

    2017-01-01

    With multi-antenna synchronized global navigation satellite system (GNSS) receivers, the single difference (SD) between two antennas is able to eliminate both satellite and receiver clock error, thus it becomes necessary to reconsider the equivalency problem between the SD and double difference (DD) models. In this paper, we quantitatively compared the formal uncertainties and dispersions between multiple SD models and the DD model, and also carried out static and kinematic short baseline experiments. The theoretical and experimental results show that under a non-common clock scheme the SD and DD model are equivalent. Under a common clock scheme, if we estimate stochastic uncalibrated phase delay (UPD) parameters every epoch, this SD model is still equivalent to the DD model, but if we estimate only one UPD parameter for all epochs or take it as a known constant, the SD (here called SD2) and DD models are no longer equivalent. For the vertical component of baseline solutions, the formal uncertainties of the SD2 model are two times smaller than those of the DD model, and the dispersions of the SD2 model are even more than twice smaller than those of the DD model. In addition, to obtain baseline solutions, the SD2 model requires a minimum of three satellites, while the DD model requires a minimum of four satellites, which makes the SD2 more advantageous in attitude determination under sheltered environments. PMID:28230753

  7. Die and telescoping punch form convolutions in thin diaphragm

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.

  8. Accuracy assessment of single and double difference models for the single epoch GPS compass

    NASA Astrophysics Data System (ADS)

    Chen, Wantong; Qin, Honglei; Zhang, Yanzhong; Jin, Tian

    2012-02-01

    The single epoch GPS compass is an important field of study, since it is a valuable technique for the orientation estimation of vehicles and it can guarantee a total independence from carrier phase slips in practical applications. To achieve highly accurate angular estimates, the unknown integer ambiguities of the carrier phase observables need to be resolved. Past research has focused on ambiguity resolution for a single epoch; however, accuracy is another significant problem for many challenging applications. In this contribution, the accuracy is evaluated for both the non-common clock scheme and the common clock scheme of the receivers. We focus on three scenarios for either scheme: single difference model vs. double difference model, single frequency model vs. multiple frequency model, and optimal linear combinations vs. traditional triple-frequency least squares. We deduce the short baseline precision for a number of different available models and analyze the difference in accuracy for those models. Compared with the single or double difference model of the non-common clock scheme, the single difference model of the common clock scheme can greatly reduce the vertical component error of the baseline vector, which results in higher elevation accuracy. The least squares estimator can also reduce the error of the fixed baseline vector with the aid of multi-frequency observations, thereby improving the attitude accuracy. In essence, the "accuracy improvement" is attributed to the difference in accuracy for different models, not a real improvement for any specific model. If all noise levels of GPS triple frequency carrier phase are assumed the same in units of cycles, it can be proved that the optimal linear combination approach is equivalent to the traditional triple-frequency least squares, no matter which scheme is utilized. Both simulations and actual experiments have been performed to verify the correctness of the theoretical analysis.

  9. Text-Attentional Convolutional Neural Network for Scene Text Detection.

    PubMed

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (CE-MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.

  10. Text-Attentional Convolutional Neural Networks for Scene Text Detection.

    PubMed

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-03-28

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this work, we present a new system for scene text detection by proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called Contrast-Enhancement Maximally Stable Extremal Regions (CE-MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 dataset, with an F-measure of 0.82, improving the state-of-the-art results substantially.
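
    As a rough illustration of intensity-contrast enhancement (the actual CE-MSER component uses its own text-specific enhancement, not reproduced here), a generic linear contrast stretch can be sketched as:

```python
import numpy as np

def contrast_stretch(img):
    """Rescale intensities to the full [0, 255] range -- a generic
    contrast enhancement, used here only to illustrate the idea of
    increasing intensity contrast before region detection."""
    lo, hi = img.min(), img.max()
    return ((img - lo) * (255.0 / (hi - lo))).astype(np.uint8)

# A low-contrast 2x2 patch: values span only [60, 90] before stretching.
patch = np.array([[60, 70], [80, 90]], dtype=np.float64)
stretched = contrast_stretch(patch)
```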

  11. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.

    PubMed

    Pereira, Sergio; Pinto, Adriano; Alves, Victor; Silva, Carlos A

    2016-05-01

    Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage to improve the quality of life of oncological patients. Magnetic resonance imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated in the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. Also, it obtained the overall first position by the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining the second place, with Dice Similarity Coefficient metric of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.
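
    The abstract's point that small kernels permit deeper architectures with fewer weights can be checked with simple arithmetic; the channel count below is an illustrative assumption, not the paper's architecture.

```python
# Weight count of two stacked 3x3 conv layers vs a single 5x5 layer,
# both mapping C input channels to C output channels and covering the
# same 5x5 receptive field. Bias terms omitted for simplicity.
def conv_weights(kernel, c_in, c_out):
    return kernel * kernel * c_in * c_out

C = 64
stacked_3x3 = 2 * conv_weights(3, C, C)   # two 3x3 layers, deeper
single_5x5 = conv_weights(5, C, C)        # one 5x5 layer, shallower
# The stacked 3x3 design has fewer weights despite being deeper.
```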

  12. Brain Tumor Segmentation using Convolutional Neural Networks in MRI Images.

    PubMed

    Pereira, Sergio; Pinto, Adriano; Alves, Victor; Silva, Carlos A

    2016-03-04

    Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage to improve the quality of life of oncological patients. Magnetic Resonance Imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated in the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. Also, it obtained the overall first position by the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining the second place, with Dice Similarity Coefficient metric of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.

  13. Modelling of side-wall angle for optical proximity correction for self-aligned double patterning

    NASA Astrophysics Data System (ADS)

    Moulis, Sylvain; Farys, Vincent; Belledent, Jérôme; Foucher, Johann

    2012-03-01

    The pursuit of ever smaller transistors has pushed technological innovations in the field of lithography. In order to continue following the path of Moore's law, several solutions were proposed: EUV, e-beam, and double patterning lithography. As EUV and e-beam lithography are still not ready for mass production for the 20nm and 14nm nodes, double patterning lithography will play an important role for these nodes. In this work, we focus on Self-Aligned Double Patterning processes, which consist in depositing a spacer material on each side of a mandrel exposed during a first lithography step, so that the pitch is divided by two after transfer into the substrate, the cutting of unwanted patterns being addressed through a second lithography exposure. In the specific case where spacers are deposited directly on the flanks of the resist, it is crucial to control the resist profile, as deviations could induce final CD errors or even spacer collapse. In this work, we first study with a simple model the influence of the resist profile on the post-etch spacer CD. Then we show that the placement of Sub-Resolution Assist Features (SRAF) can influence the resist profile, and finally we assess how much control of the spacer and inter-spacer CD can be achieved by tuning SRAF placement.

  14. Application of the Convolution Formalism to the Ocean Tide Potential: Results from the Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Desai, S. D.; Yuan, D. -N.

    2006-01-01

    A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
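
    The weighted sum of past, present, and future values of the tide-generating potential is a discrete convolution; a minimal sketch, with hypothetical lag weights, is:

```python
import numpy as np

# Hypothetical admittance weights for past, present, and future samples
# of the tide-generating potential (Munk & Cartwright-style lag weights;
# real weights would be fitted to an ocean tide model).
w = np.array([0.1, 0.7, 0.1])

# Synthetic tide-generating potential sampled at a fixed interval:
# a single "tidal" line with a 24-sample period, arbitrary units.
t = np.arange(200)
potential = np.cos(2 * np.pi * t / 24.0)

# The modeled tide is the discrete convolution of the potential with the
# lag weights; mode="same" keeps the output aligned with the input samples.
tide = np.convolve(potential, w, mode="same")
```

Because the weights act as a smooth frequency response, the convolved series keeps the input's period but scales its amplitude by the response at that frequency.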

  15. Poisson-Helmholtz-Boltzmann model of the electric double layer: analysis of monovalent ionic mixtures.

    PubMed

    Bohinc, Klemen; Shrestha, Ahis; Brumen, Milan; May, Sylvio

    2012-03-01

    In the classical mean-field description of the electric double layer, known as the Poisson-Boltzmann model, ions interact exclusively through their Coulomb potential. Ion specificity can arise through solvent-mediated, nonelectrostatic interactions between ions. We employ the Yukawa pair potential to model the presence of nonelectrostatic interactions. The combination of Yukawa and Coulomb potential on the mean-field level leads to the Poisson-Helmholtz-Boltzmann model, which employs two auxiliary potentials: one electrostatic and the other nonelectrostatic. In the present work we apply the Poisson-Helmholtz-Boltzmann model to ionic mixtures, consisting of monovalent cations and anions that exhibit different Yukawa interaction strengths. As a specific example we consider a single charged surface in contact with a symmetric monovalent electrolyte. From the minimization of the mean-field free energy we derive the Poisson-Boltzmann and Helmholtz-Boltzmann equations. These nonlinear equations can be solved analytically in the weak perturbation limit. This together with numerical solutions in the nonlinear regime suggests an intricate interplay between electrostatic and nonelectrostatic interactions. The structure and free energy of the electric double layer depend sensitively on the Yukawa interaction strengths between the different ion types and on the nonelectrostatic interactions of the mobile ions with the surface.
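
    As a minimal illustration of the weak-perturbation (linear) regime, the classical Debye screening length of the plain Poisson-Boltzmann limit (i.e., without the Yukawa term) can be computed as follows; the salt concentration is an illustrative choice.

```python
import math

# Physical constants (SI)
e = 1.602176634e-19       # elementary charge, C
kB = 1.380649e-23         # Boltzmann constant, J/K
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
NA = 6.02214076e23        # Avogadro number, 1/mol

def debye_length(c_molar, eps_r=78.5, T=298.15):
    """Debye screening length (m) for a symmetric 1:1 electrolyte."""
    n = c_molar * 1000 * NA                          # number density, 1/m^3
    kappa_sq = 2 * n * e**2 / (eps_r * eps0 * kB * T)
    return 1.0 / math.sqrt(kappa_sq)

# In the weak-perturbation limit the mean-field potential decays as
# psi(x) = psi0 * exp(-x / lambda_D) away from the charged surface.
lam = debye_length(0.1)   # 0.1 M salt gives a screening length near 1 nm
```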

  16. Analytical model of LDMOS with a double step buried oxide layer

    NASA Astrophysics Data System (ADS)

    Yuan, Song; Duan, Baoxing; Cao, Zhen; Guo, Haijun; Yang, Yintang

    2016-09-01

    In this paper, a two-dimensional analytical model is established for the Buried Oxide Double Step Silicon On Insulator structure proposed by the authors. Based on the two-dimensional Poisson equation, analytic expressions for the surface electric field and potential distributions of the device are derived. In the BODS (Buried Oxide Double Step Silicon On Insulator) structure, the buried oxide layer thickness changes stepwise along the drift region, and positive charge in the drift region can accumulate at the corner of each step. These accumulated charges function as the space charge in the depleted drift region. At the same time, the electric field in the oxide layer also varies with the drift region thickness. These variations, especially the accumulated charge, modulate the surface electric field distribution through the electric field modulation effect, which makes the surface electric field distribution more uniform. As a result, the breakdown voltage of the device is improved by 30% compared with the conventional SOI structure. To verify the accuracy of the analytical model, the device simulation software ISE TCAD is utilized; the analytical values are in good agreement with the simulation results. This means the established two-dimensional analytical model for the BODS structure is valid, and it also sufficiently illustrates the breakdown voltage enhancement due to the electric field modulation effect. The established analytical models will provide the physical and mathematical basis for further analysis of new power devices with patterned buried oxide layers.

  17. Numerical Well Testing Interpretation Model and Applications in Crossflow Double-Layer Reservoirs by Polymer Flooding

    PubMed Central

    Guo, Hui; He, Youwei; Li, Lei; Du, Song; Cheng, Shiqing

    2014-01-01

    This work presents a numerical well testing interpretation model and analysis techniques to evaluate formations using pressure transient data acquired with logging tools in crossflow double-layer reservoirs under polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effect, and skin factors. The type curves were then developed based on this model, and parameter sensitivity is analyzed. Our research shows that the type curves have five segments with different flow status: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from low permeability layer to high permeability layer), and (V) systematic radial flow section. The polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs under polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparison of the interpreted permeability with initial layered permeability before polymer flooding. Comparison of the analysis of numerical solution based on flow mechanism with observed polymer flooding field test data highlights the potential for the application of this interpretation method in formation evaluation and enhanced oil recovery (EOR). PMID:25302335

  18. Numerical well testing interpretation model and applications in crossflow double-layer reservoirs by polymer flooding.

    PubMed

    Yu, Haiyang; Guo, Hui; He, Youwei; Xu, Hainan; Li, Lei; Zhang, Tiantian; Xian, Bo; Du, Song; Cheng, Shiqing

    2014-01-01

    This work presents a numerical well testing interpretation model and analysis techniques to evaluate formations using pressure transient data acquired with logging tools in crossflow double-layer reservoirs under polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effect, and skin factors. The type curves were then developed based on this model, and parameter sensitivity is analyzed. Our research shows that the type curves have five segments with different flow status: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from low permeability layer to high permeability layer), and (V) systematic radial flow section. The polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs under polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparison of the interpreted permeability with initial layered permeability before polymer flooding. Comparison of the analysis of numerical solution based on flow mechanism with observed polymer flooding field test data highlights the potential for the application of this interpretation method in formation evaluation and enhanced oil recovery (EOR).

  19. Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification

    NASA Astrophysics Data System (ADS)

    Salamon, Justin; Bello, Juan Pablo

    2017-03-01

    The ability of deep convolutional neural networks (CNNs) to learn discriminative spectro-temporal patterns makes them well suited to environmental sound classification. However, the relative scarcity of labeled data has impeded the exploitation of this family of high-capacity models. This study has two primary contributions: first, we propose a deep convolutional neural network architecture for environmental sound classification. Second, we propose the use of audio data augmentation for overcoming the problem of data scarcity and explore the influence of different augmentations on the performance of the proposed CNN architecture. Combined with data augmentation, the proposed model produces state-of-the-art results for environmental sound classification. We show that the improved performance stems from the combination of a deep, high-capacity model and an augmented training set: this combination outperforms both the proposed CNN without augmentation and a "shallow" dictionary learning model with augmentation. Finally, we examine the influence of each augmentation on the model's classification accuracy for each class, and observe that the accuracy for each class is influenced differently by each augmentation, suggesting that the performance of the model could be improved further by applying class-conditional data augmentation.
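
    A minimal sketch of label-preserving audio augmentation (the paper explores richer deformations such as time stretching and pitch shifting; the two transforms below are only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(signal, shift, noise_std):
    """Two simple label-preserving augmentations for a 1-D audio clip:
    a circular time shift and additive Gaussian noise. Parameters are
    illustrative choices, not those tuned in the paper."""
    shifted = np.roll(signal, shift)
    noisy = shifted + rng.normal(0.0, noise_std, size=signal.shape)
    return noisy

# 1 second of a 440 Hz tone at a 16 kHz sampling rate.
x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)
x_aug = augment(x, shift=1600, noise_std=0.005)
```

Each augmented copy is a new training example that shares the original clip's class label, which is how augmentation multiplies the effective size of a scarce labeled set.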

  20. Atomistic simulation of nanoporous layered double hydroxide materials and their properties. I. Structural modeling.

    PubMed

    Kim, Nayong; Kim, Yongman; Tsotsis, Theodore T; Sahimi, Muhammad

    2005-06-01

    An atomistic model of layered double hydroxides, an important class of nanoporous materials, is presented. These materials have wide applications, ranging from adsorbents for gases and liquid ions to nanoporous membranes and catalysts. They consist of two types of metallic cations that are accommodated by a close-packed configuration of OH- and other anions in a positively charged brucitelike layer. Water and various anions are distributed in the interlayer space for charge compensation. A modified form of the consistent-valence force field, together with energy minimization and molecular dynamics simulations, is utilized for developing an atomistic model of the materials. To test the accuracy of the model, we compare the vibrational frequencies, x-ray diffraction patterns, and the basal spacing of the material, computed using the atomistic model, with our experimental data over a wide range of temperature. Good agreement is found between the computed and measured quantities.

  1. Atomistic simulation of nanoporous layered double hydroxide materials and their properties. I. Structural modeling

    NASA Astrophysics Data System (ADS)

    Kim, Nayong; Kim, Yongman; Tsotsis, Theodore T.; Sahimi, Muhammad

    2005-06-01

    An atomistic model of layered double hydroxides, an important class of nanoporous materials, is presented. These materials have wide applications, ranging from adsorbents for gases and liquid ions to nanoporous membranes and catalysts. They consist of two types of metallic cations that are accommodated by a close-packed configuration of OH- and other anions in a positively charged brucitelike layer. Water and various anions are distributed in the interlayer space for charge compensation. A modified form of the consistent-valence force field, together with energy minimization and molecular dynamics simulations, is utilized for developing an atomistic model of the materials. To test the accuracy of the model, we compare the vibrational frequencies, x-ray diffraction patterns, and the basal spacing of the material, computed using the atomistic model, with our experimental data over a wide range of temperature. Good agreement is found between the computed and measured quantities.

  2. Dense Semantic Labeling of Subdecimeter Resolution Images With Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Volpi, Michele; Tuia, Devis

    2017-02-01

    Semantic labeling (or pixel-level land-cover classification) in ultra-high resolution imagery (<10 cm) requires statistical models able to learn high level concepts from spatial data, with large appearance variations. Convolutional Neural Networks (CNNs) achieve this goal by learning discriminatively a hierarchy of representations of increasing abstraction. In this paper we present a CNN-based system relying on a downsample-then-upsample architecture. Specifically, it first learns a rough spatial map of high-level representations by means of convolutions and then learns to upsample them back to the original resolution by deconvolutions. By doing so, the CNN learns to densely label every pixel at the original resolution of the image. This results in many advantages, including i) state-of-the-art numerical accuracy, ii) improved geometric accuracy of predictions and iii) high efficiency at inference time. We test the proposed system on the Vaihingen and Potsdam sub-decimeter resolution datasets, involving semantic labeling of aerial images of 9cm and 5cm resolution, respectively. These datasets are composed of many large and fully annotated tiles, allowing an unbiased evaluation of models making use of spatial information. We do so by comparing two standard CNN architectures to the proposed one: standard patch classification, prediction of local label patches by employing only convolutions and full patch labeling by employing deconvolutions. All the systems compare favorably or outperform a state-of-the-art baseline relying on superpixels and powerful appearance descriptors. The proposed full patch labeling CNN outperforms these models by a large margin, also showing a very appealing inference time.

  3. Double-sigmoid model for fitting fatigue profiles in mouse fast- and slow-twitch muscle.

    PubMed

    Cairns, S P; Robinson, D M; Loiselle, D S

    2008-07-01

    We present a curve-fitting approach that permits quantitative comparisons of fatigue profiles obtained with different stimulation protocols in isolated slow-twitch soleus and fast-twitch extensor digitorum longus (EDL) muscles of mice. Profiles from our usual stimulation protocol (125 Hz for 500 ms, evoked once every second for 100-300 s) could be fitted by single-term functions (sigmoids or exponentials) but not by a double exponential. A clearly superior fit, as confirmed by the Akaike Information Criterion, was achieved using a double-sigmoid function. Fitting accuracy was exceptional; mean square errors were typically <1% and r(2) > 0.9995. The first sigmoid (early fatigue) involved approximately 10% decline of isometric force to an intermediate plateau in both muscle types; the second sigmoid (late fatigue) involved a reduction of force to a final plateau, the decline being 83% of initial force in EDL and 63% of initial force in soleus. The maximal slope of each sigmoid was seven- to eightfold greater in EDL than in soleus. The general applicability of the model was tested by fitting profiles with a severe force loss arising from repeated tetanic stimulation evoked at different frequencies or rest periods, or with excitation via nerve terminals in soleus. Late fatigue, which was absent at 30 Hz, occurred earlier and to a greater extent at 125 than 50 Hz. The model captured small changes in rate of late fatigue for nerve terminal versus sarcolemmal stimulation. We conclude that a double-sigmoid expression is a useful and accurate model to characterize fatigue in isolated muscle preparations.
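
    The double-sigmoid expression can be sketched directly; the parameter values below are illustrative, not the fitted values from the paper.

```python
import numpy as np

def double_sigmoid(t, F0, A1, t1, k1, A2, t2, k2):
    """Force decline modeled as the sum of two sigmoidal drops:
    an early drop of size A1 centered at time t1 and a late drop of
    size A2 centered at time t2 (steepness parameters k1, k2)."""
    s1 = A1 / (1.0 + np.exp(-(t - t1) / k1))
    s2 = A2 / (1.0 + np.exp(-(t - t2) / k2))
    return F0 - s1 - s2

t = np.linspace(0, 300, 601)
# ~10% early drop to an intermediate plateau, then a late drop
# to a final plateau (illustrative parameter values).
force = double_sigmoid(t, F0=1.0, A1=0.10, t1=20, k1=3, A2=0.70, t2=150, k2=10)
```

In practice each profile would be fitted with a nonlinear least-squares routine; the function above is the model being fitted.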

  4. Numerical modeling of Subthreshold region of junctionless double surrounding gate MOSFET (JLDSG)

    NASA Astrophysics Data System (ADS)

    Rewari, Sonam; Haldar, Subhasis; Nath, Vandana; Deswal, S. S.; Gupta, R. S.

    2016-02-01

    In this paper, a numerical model for the electric potential, subthreshold current, and subthreshold swing of the Junctionless Double Surrounding Gate (JLDSG) MOSFET has been developed using the superposition method. The results have also been evaluated for different silicon film thicknesses, oxide film thicknesses, and channel lengths. The numerical results so obtained are in good agreement with the simulated data. The results for the JLDSG MOSFET have also been compared with the conventional Junctionless Surrounding Gate (JLSG) MOSFET, and it is observed that the JLDSG MOSFET has improved drain current, transconductance, output conductance, Transconductance Generation Factor (TGF), and subthreshold slope.
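
    For context, the classical bulk expression for subthreshold swing (not the paper's device-specific derivation for the JLDSG geometry) can be evaluated as:

```python
import math

def subthreshold_swing(cd_over_cox, T=300.0):
    """Textbook MOSFET subthreshold swing in mV/decade:
    SS = ln(10) * (kT/q) * (1 + Cd/Cox).
    Shown only to illustrate the quantity being modeled; the paper
    derives geometry-specific expressions for the JLDSG device."""
    kT_q = 8.617333262e-5 * T * 1000.0  # thermal voltage kT/q, in mV
    return math.log(10.0) * kT_q * (1.0 + cd_over_cox)

# With zero depletion capacitance the swing approaches the ideal
# room-temperature limit of roughly 60 mV/decade.
ss_ideal = subthreshold_swing(0.0)
```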

  5. Experimental investigation of shock wave diffraction over a single- or double-sphere model

    NASA Astrophysics Data System (ADS)

    Zhang, L. T.; Wang, T. H.; Hao, L. N.; Huang, B. Q.; Chen, W. J.; Shi, H. H.

    2017-01-01

    In this study, the unsteady drag produced by the interaction of a shock wave with a single- and a double-sphere model is measured using embedded accelerometers. The shock wave is generated in a horizontal circular shock tube with an inner diameter of 200 mm. The effect of the shock Mach number and the dimensionless distance between spheres is investigated. The time-history of the drag coefficient is obtained based on Fast Fourier Transformation (FFT) band-block filtering and polynomial fitting of the measured acceleration. The measured peak values of the drag coefficient, with the associated uncertainty, are reported.
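
    The FFT band-block filtering step can be sketched as follows; the sampling rate, band edges, and synthetic signal are illustrative assumptions, not the authors' values.

```python
import numpy as np

def band_block(signal, fs, f_lo, f_hi):
    """Zero out Fourier components between f_lo and f_hi (Hz), then
    invert the transform -- a band-block (notch) filter of the kind
    applied to the measured acceleration before polynomial fitting."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs >= f_lo) & (freqs <= f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

fs = 1000.0
t = np.arange(1000) / fs
# A slow 5 Hz "drag" component contaminated by 120 Hz structural ringing.
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
clean = band_block(x, fs, 100.0, 140.0)
```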

  6. Double pendulum model for a tennis stroke including a collision process

    NASA Astrophysics Data System (ADS)

    Youn, Sun-Hyun

    2015-10-01

    By adding a collision process between the ball and racket to the double pendulum model, we analyzed the tennis stroke. The ball and racket system may be accelerated during the collision time; thus, the speed of the rebound ball does not simply depend on the angular velocity of the racket: a higher angular velocity sometimes gives a lower rebound ball speed. We numerically showed that a properly time-lagged racket rotation increased the speed of the rebound ball by 20%. We also showed that the elbow should move in the proper direction in order to add to the angular velocity of the racket.
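
    A minimal 1-D restitution-collision sketch (the paper couples the collision to the full double-pendulum dynamics, which is not reproduced here; masses, speeds, and the restitution coefficient below are illustrative):

```python
def rebound_speed(m_ball, v_ball, m_racket, v_racket, e):
    """Post-collision ball velocity from momentum conservation plus
    the restitution condition v_ball' - v_racket' = -e (v_ball - v_racket)."""
    return ((m_ball - e * m_racket) * v_ball
            + (1 + e) * m_racket * v_racket) / (m_ball + m_racket)

# Racket-point speed from angular velocity omega at radius r.
omega, r = 30.0, 0.5                       # rad/s, m (illustrative)
v_out = rebound_speed(m_ball=0.057, v_ball=-20.0,
                      m_racket=0.3, v_racket=omega * r, e=0.8)
```

Sanity check: for equal masses and e = 1 the velocities simply swap, as in a textbook elastic collision.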

  7. Theoretical model for a background noise limited laser-excited optical filter for doubled Nd lasers

    NASA Astrophysics Data System (ADS)

    Shay, Thomas M.; Garcia, Daniel F.

    1990-06-01

    A simple theoretical model for the calculation of the dependence of filter quantum efficiency versus laser pump power in an atomic Rb vapor laser-excited optical filter is reported. Calculations for Rb filter transitions that can be used to detect the practical and important frequency-doubled Nd lasers are presented. The results of these calculations show the filter's quantum efficiency versus the laser pump power. The required laser pump powers range from 2.4 to 60 mW/sq cm of filter aperture.

  8. Theoretical model for a background noise limited laser-excited optical filter for doubled Nd lasers

    NASA Technical Reports Server (NTRS)

    Shay, Thomas M.; Garcia, Daniel F.

    1990-01-01

    A simple theoretical model for the calculation of the dependence of filter quantum efficiency versus laser pump power in an atomic Rb vapor laser-excited optical filter is reported. Calculations for Rb filter transitions that can be used to detect the practical and important frequency-doubled Nd lasers are presented. The results of these calculations show the filter's quantum efficiency versus the laser pump power. The required laser pump powers range from 2.4 to 60 mW/sq cm of filter aperture.

  9. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.

    PubMed

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned by CDBNs on generic source tasks to the object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.

  10. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    PubMed Central

    Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned by CDBNs on generic source tasks to the object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method. PMID:27847827

  11. Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding

    PubMed Central

    Johnson, Rie; Zhang, Tong

    2016-01-01

    This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks. PMID:27087766

  12. Multi-Scale Rotation-Invariant Convolutional Neural Networks for Lung Texture Classification.

    PubMed

    Wang, Qiangchang; Zheng, Yuanjie; Yang, Gongping; Jin, Weidong; Chen, Xinjian; Yin, Yilong

    2017-03-21

    We propose a new Multi-scale Rotation-invariant Convolutional Neural Network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography (HRCT). MRCNN employs the Gabor-local binary pattern (Gabor-LBP), which introduces a desirable property for image analysis: invariance to image scale and rotation. In addition, we offer an approach to the class-imbalance problem present in most existing works, accomplished by changing the overlap between adjacent patches. Experimental results on a public Interstitial Lung Disease (ILD) database show that the proposed method outperforms the state of the art.
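    The overlap-based rebalancing idea can be illustrated with a simple sliding-window patch extractor: shrinking the stride for under-represented classes multiplies the number of training patches drawn from the same image. This is a generic sketch with assumed patch and stride sizes, not the authors' pipeline.

    ```python
    import numpy as np

    def extract_patches(image, patch=32, stride=32):
        """Slide a window over a 2D image; a smaller stride yields more overlap."""
        h, w = image.shape
        return [image[r:r + patch, c:c + patch]
                for r in range(0, h - patch + 1, stride)
                for c in range(0, w - patch + 1, stride)]

    img = np.zeros((128, 128))
    majority = extract_patches(img, stride=32)  # non-overlapping patches
    minority = extract_patches(img, stride=8)   # heavily overlapping patches
    ```

    On a 128x128 image, the non-overlapping pass yields 16 patches while the stride-8 pass yields 169, roughly a tenfold oversampling of the minority class without synthesizing new data.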

  13. Architectural style classification of Mexican historical buildings using deep convolutional neural networks and sparse features

    NASA Astrophysics Data System (ADS)

    Obeso, Abraham Montoya; Benois-Pineau, Jenny; Acosta, Alejandro Álvaro Ramirez; Vázquez, Mireya Saraí García

    2017-01-01

    We propose a convolutional neural network to classify images of buildings using sparse features at the network's input in conjunction with primary color pixel values. As a result, a trained neural model is obtained that classifies Mexican buildings into three architectural styles, prehispanic, colonial, and modern, with an accuracy of 88.01%. We address the problem of limited training data caused by the unequal availability of cultural material by proposing a data augmentation and oversampling method. The results are encouraging and allow for prefiltering of the content in search tasks.

  14. Predicting polarization signatures for double-detonation and delayed-detonation models of Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Bulla, M.; Sim, S. A.; Kromer, M.; Seitenzahl, I. R.; Fink, M.; Ciaraldi-Schoolmann, F.; Röpke, F. K.; Hillebrandt, W.; Pakmor, R.; Ruiter, A. J.; Taubenberger, S.

    2016-10-01

    Calculations of synthetic spectropolarimetry are one means to test multidimensional explosion models for Type Ia supernovae. In a recent paper, we demonstrated that the violent merger of a 1.1 and 0.9 M⊙ white dwarf binary system is too asymmetric to explain the low polarization levels commonly observed in normal Type Ia supernovae. Here, we present polarization simulations for two alternative scenarios: the sub-Chandrasekhar mass double-detonation and the Chandrasekhar mass delayed-detonation model. Specifically, we study a 2D double-detonation model and a 3D delayed-detonation model, and calculate polarization spectra for multiple observer orientations in both cases. We find modest polarization levels (<1 per cent) for both explosion models. Polarization in the continuum peaks at ˜0.1-0.3 per cent and decreases after maximum light, in excellent agreement with spectropolarimetric data of normal Type Ia supernovae. Higher degrees of polarization are found across individual spectral lines. In particular, the synthetic Si II λ6355 profiles are polarized at levels that match remarkably well the values observed in normal Type Ia supernovae, while the low degrees of polarization predicted across the O I λ7774 region are consistent with the non-detection of this feature in current data. We conclude that our models can reproduce many of the characteristics of both flux and polarization spectra for well-studied Type Ia supernovae, such as SN 2001el and SN 2012fr. However, the two models considered here cannot account for the unusually high level of polarization observed in extreme cases such as SN 2004dt.

  15. Nuclear mean field and double-folding model of the nucleus-nucleus optical potential

    NASA Astrophysics Data System (ADS)

    Khoa, Dao T.; Phuc, Nguyen Hoang; Loan, Doan Thi; Loc, Bui Minh

    2016-09-01

    Realistic density dependent CDM3Yn versions of the M3Y interaction have been used in an extended Hartree-Fock (HF) calculation of nuclear matter (NM), with the nucleon single-particle potential determined from the total NM energy based on the Hugenholtz-van Hove theorem that gives rise naturally to a rearrangement term (RT). Using the RT of the single-nucleon potential obtained exactly at different NM densities, the density and energy dependence of the CDM3Yn interactions was modified to account properly for both the RT and observed energy dependence of the nucleon optical potential. Based on a local density approximation, the double-folding model of the nucleus-nucleus optical potential has been extended to take into account consistently the rearrangement effect and energy dependence of the nuclear mean-field potential, using the modified CDM3Yn interactions. The extended double-folding model was applied to study the elastic 12C+12C and 16O+12C scattering at the refractive energies, where the Airy structure of the nuclear rainbow has been well established. The RT was found to affect significantly the real nucleus-nucleus optical potential at small internuclear distances, giving a potential strength close to that implied by the realistic optical model description of the Airy oscillation.

  16. Simulation of the conformation and dynamics of a double-helical model for DNA.

    PubMed Central

    Huertas, M L; Navarro, S; Lopez Martinez, M C; García de la Torre, J

    1997-01-01

    We propose a partially flexible, double-helical model for describing the conformational and dynamic properties of DNA. In this model, each nucleotide is represented by one element (bead), and the known geometrical features of the double helix are incorporated in the equilibrium conformation. Each bead is connected to a few neighboring beads in both strands by means of stiff springs that maintain the connectivity but still allow for some extent of flexibility and internal motion. We have used Brownian dynamics simulation to sample the conformational space and monitor the overall and internal dynamics of short DNA pieces, with up to 20 basepairs. From Brownian trajectories, we calculate the dimensions of the helix and estimate its persistence length. We obtain the translational diffusion coefficient and various rotational relaxation times, including both overall rotation and internal motion. Although we have not carried out a detailed parameterization of the model, the calculated properties agree rather well with experimental data available for those oligomers. PMID:9414226
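    The kind of bead-spring Brownian dynamics update used in such simulations can be sketched as an overdamped Langevin step: each bead moves under spring forces from its bonded neighbors plus a random displacement. This is a generic illustration in arbitrary units, not the authors' parameterization or helical geometry.

    ```python
    import numpy as np

    def bd_step(pos, bonds, k_spring, r0, D, kT, dt, rng):
        """One overdamped-Langevin (Brownian dynamics) step for a bead-spring chain."""
        forces = np.zeros_like(pos)
        for i, j in bonds:
            d = pos[j] - pos[i]
            r = np.linalg.norm(d)
            f = k_spring * (r - r0) * d / r        # Hookean spring, rest length r0
            forces[i] += f
            forces[j] -= f
        noise = rng.normal(size=pos.shape) * np.sqrt(2.0 * D * dt)
        return pos + (D / kT) * forces * dt + noise

    rng = np.random.default_rng(0)
    pos = np.cumsum(np.full((10, 3), 0.34), axis=0)   # 10 beads along a diagonal
    bonds = [(i, i + 1) for i in range(9)]            # nearest-neighbor springs
    for _ in range(1000):
        pos = bd_step(pos, bonds, k_spring=100.0, r0=0.59,
                      D=0.1, kT=1.0, dt=1e-4, rng=rng)
    ```

    From such trajectories one can accumulate statistics like end-to-end distance or diffusion of the center of mass, the same observables the abstract extracts from its double-helical model.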

  17. Modeling and simulation study of novel Double Gate Ferroelectric Junctionless (DGFJL) transistor

    NASA Astrophysics Data System (ADS)

    Mehta, Hema; Kaur, Harsupreet

    2016-09-01

    In this work we propose an analytical model for the Double Gate Ferroelectric Junctionless (DGFJL) transistor, a novel device which incorporates the advantages of both the junctionless (JL) transistor and the negative capacitance phenomenon. A complete drain current model has been developed by using the Landau-Khalatnikov equation and the parabolic potential approximation to analyze device behavior in different operating regions. It has been demonstrated that the DGFJL transistor acts as a step-up voltage transformer and exhibits subthreshold slope values less than 60 mV/dec. In order to assess the advantages offered by the proposed device, an extensive comparative study has been done with an equivalent Double Gate Junctionless (DGJL) transistor whose gate insulator thickness equals the ferroelectric gate stack thickness of the DGFJL transistor. It is shown that incorporation of a ferroelectric layer can overcome the variability issues observed in JL transistors. The device has been studied over a wide range of parameters and bias conditions to establish design guidelines and to obtain a better insight into the DGFJL as a potential candidate for future technology nodes. The analytical results derived from the model have been verified against simulation results obtained using the ATLAS TCAD simulator, and good agreement has been found.

  18. A bilayer Double Semion model with symmetry-enriched topological order

    NASA Astrophysics Data System (ADS)

    Ortiz, L.; Martin-Delgado, M. A.

    2016-12-01

    We construct a new model of two-dimensional quantum spin systems that combines intrinsic topological orders and a global symmetry called flavour symmetry. It is referred to as the bilayer Doubled Semion (bDS) model and is an instance of symmetry-enriched topological order. A honeycomb bilayer lattice is introduced to combine a Double Semion topological order with a global spin-flavour symmetry to get the fractionalization of its quasiparticles. The bDS model exhibits non-trivial braiding self-statistics of excitations, and its dual model constitutes a symmetry-protected topological order with novel edge states. This dual model gives rise to a bilayer non-trivial paramagnet that is invariant under the flavour symmetry and the well-known spin-flip symmetry.

  19. A DOUBLE-RING ALGORITHM FOR MODELING SOLAR ACTIVE REGIONS: UNIFYING KINEMATIC DYNAMO MODELS AND SURFACE FLUX-TRANSPORT SIMULATIONS

    SciTech Connect

    Munoz-Jaramillo, Andres; Martens, Petrus C. H.; Nandy, Dibyendu; Yeates, Anthony R.

    2010-09-01

    The emergence of tilted bipolar active regions (ARs) and the dispersal of their flux, mediated via processes such as diffusion, differential rotation, and meridional circulation, is believed to be responsible for the reversal of the Sun's polar field. This process (commonly known as the Babcock-Leighton mechanism) is usually modeled as a near-surface, spatially distributed α-effect in kinematic mean-field dynamo models. However, this formulation leads to a relationship between polar field strength and meridional flow speed which is opposite to that suggested by physical insight and predicted by surface flux-transport simulations. With this in mind, we present an improved double-ring algorithm for modeling the Babcock-Leighton mechanism based on AR eruption, within the framework of an axisymmetric dynamo model. Using surface flux-transport simulations, we first show that an axisymmetric formulation, which is usually invoked in kinematic dynamo models, can reasonably approximate the surface flux dynamics. Finally, we demonstrate that our treatment of the Babcock-Leighton mechanism through double-ring eruption leads to an inverse relationship between polar field strength and meridional flow speed as expected, reconciling the discrepancy between surface flux-transport simulations and kinematic dynamo models.

  20. Preliminary results from a four-working space, double-acting piston, Stirling engine controls model

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Lorenzo, C. F.

    1980-01-01

    A four working space, double acting piston, Stirling engine simulation is being developed for controls studies. The development method is to construct two simulations, one for detailed fluid behavior, and a second model with simple fluid behavior but containing the four working space aspects and engine inertias; validate these models separately; then upgrade the four working space model by incorporating the detailed fluid behavior model for all four working spaces. The single working space (SWS) model contains the detailed fluid dynamics. It has seven control volumes in which continuity, energy, and pressure loss effects are simulated. Comparison of the SWS model with experimental data shows reasonable agreement in net power versus speed characteristics for various mean pressure levels in the working space. The four working space (FWS) model was built to observe the behavior of the whole engine. The drive dynamics and vehicle inertia effects are simulated. To reduce calculation time, only three volumes are used in each working space and the gas temperatures are fixed (no energy equation). Comparison of the FWS model predicted power with experimental data shows reasonable agreement. Since all four working spaces are simulated, the unique capabilities of the model are exercised to look at working fluid supply transients, short circuit transients, and piston ring leakage effects.

  1. Double-layer parallelization for hydrological model calibration on HPC systems

    NASA Astrophysics Data System (ADS)

    Zhang, Ang; Li, Tiejian; Si, Yuan; Liu, Ronghua; Shi, Haiyun; Li, Xiang; Li, Jiaye; Wu, Xia

    2016-04-01

    Large-scale problems that demand high precision have remarkably increased the computational time of numerical simulation models. Therefore, the parallelization of models has been widely implemented in recent years. However, computing time remains a major challenge when a large model is calibrated using optimization techniques. To overcome this difficulty, we proposed a double-layer parallel system for hydrological model calibration using high-performance computing (HPC) systems. The lower-layer parallelism is achieved using a hydrological model, the Digital Yellow River Integrated Model, which was parallelized by decomposing river basins. The upper-layer parallelism is achieved by simultaneous hydrological simulations with different parameter combinations in the same generation of the genetic algorithm and is implemented using the job scheduling functions of an HPC system. The proposed system was applied to the upstream of the Qingjian River basin, a sub-basin of the middle Yellow River, to calibrate the model effectively by making full use of the computing resources in the HPC system and to investigate the model's behavior under various parameter combinations. This approach is applicable to most of the existing hydrology models for many applications.
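    The upper-layer parallelism, evaluating all parameter combinations of one genetic-algorithm generation concurrently, can be sketched with a thread pool standing in for the HPC job scheduler. The objective function and GA details below are toy assumptions, not the Digital Yellow River Integrated Model or its calibration setup.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    import random

    def simulate(params):
        """Stand-in for one hydrological model run (hypothetical objective; lower is better)."""
        a, b = params
        return (a - 1.2) ** 2 + (b - 0.4) ** 2

    def calibrate(pop_size=20, generations=10, workers=4, seed=1):
        rng = random.Random(seed)
        pop = [(rng.uniform(0, 3), rng.uniform(0, 1)) for _ in range(pop_size)]
        for _ in range(generations):
            # upper layer: one generation's runs dispatched concurrently
            with ThreadPoolExecutor(max_workers=workers) as pool:
                scores = list(pool.map(simulate, pop))
            ranked = [p for _, p in sorted(zip(scores, pop))]
            elite = ranked[: pop_size // 2]
            # refill the population by mutating the elites
            pop = elite + [(a + rng.gauss(0, 0.1), b + rng.gauss(0, 0.05))
                           for a, b in elite]
        return min(pop, key=simulate)

    best = calibrate()
    ```

    In the paper's setting each `simulate` call is itself parallel (the lower layer, a basin-decomposed model run), so the two layers multiply the usable core count.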

  2. DEVELOPMENT OF ANSYS FINITE ELEMENT MODELS FOR SINGLE SHELL TANK (SST) & DOUBLE SHELL TANK (DST) TANKS

    SciTech Connect

    JULYK, L.J.; MACKEY, T.C.

    2003-06-19

    Summary report of ANSYS finite element models developed for dome load analysis of Hanford 100-series single-shell tanks and double-shell tanks. The document provides a user interface for selecting the proper tank model and changing analysis parameters for tank-specific analyses. Current dome load restrictions for the Hanford Site underground waste storage tanks are based on existing analyses of record (AOR) that evaluated the tanks for a specific set of design load conditions. However, greater flexibility is required in controlling dome loadings applied to the tanks due to day-to-day operations and waste retrieval activities. This requires the development of an analytical model with sufficient detail to evaluate various dome loading conditions not specifically addressed in the AOR.

  3. Emulating the one-dimensional Fermi-Hubbard model by a double chain of qubits

    NASA Astrophysics Data System (ADS)

    Reiner, Jan-Michael; Marthaler, Michael; Braumüller, Jochen; Weides, Martin; Schön, Gerd

    2016-09-01

    The Jordan-Wigner transformation maps a one-dimensional (1D) spin-1/2 system onto a fermionic model without spin degree of freedom. A double chain of quantum bits with XX and ZZ couplings of neighboring qubits along and between the chains, respectively, can be mapped onto a spinful 1D Fermi-Hubbard model. The qubit system can thus be used to emulate the quantum properties of this model. We analyze physical implementations of such analog quantum simulators, including one based on transmon qubits, where the ZZ interaction arises due to an inductive coupling and the XX interaction due to a capacitive interaction. We propose protocols to gain confidence in the results of the simulation through measurements of local operators.
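    The Jordan-Wigner correspondence can be checked numerically on a small open chain: the many-body spectrum of an XX spin chain must equal the subset sums of the single-particle energies of the free-fermion hopping model it maps onto. A small numpy sketch (N = 4, illustrative only, not the paper's double-chain construction):

    ```python
    import numpy as np
    from itertools import combinations

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])

    def kron_chain(ops):
        out = np.array([[1.0]], dtype=complex)
        for op in ops:
            out = np.kron(out, op)
        return out

    N, J = 4, 1.0
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N - 1):            # open XX chain: J (X_i X_{i+1} + Y_i Y_{i+1})
        for P in (X, Y):
            ops = [I2] * N
            ops[i], ops[i + 1] = P, P
            H += J * kron_chain(ops)
    spin_spectrum = np.sort(np.linalg.eigvalsh(H))

    # Jordan-Wigner image: free fermions hopping with amplitude 2J
    hop = np.zeros((N, N))
    for i in range(N - 1):
        hop[i, i + 1] = hop[i + 1, i] = 2 * J
    eps = np.linalg.eigvalsh(hop)
    fermi_spectrum = np.sort([sum(c) for r in range(N + 1)
                              for c in combinations(eps, r)])

    match = np.allclose(spin_spectrum, fermi_spectrum)
    ```

    The two sorted spectra coincide to machine precision, which is exactly the statement that the spin chain and its fermionic image share all energy levels.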

  4. Impact of stray charge on interconnect wire via probability model of double-dot system

    NASA Astrophysics Data System (ADS)

    Xiangye, Chen; Li, Cai; Qiang, Zeng; Xinqiao, Wang

    2016-02-01

    The behavior of quantum cellular automata (QCA) under the influence of a stray charge is quantified. A new time-independent switching paradigm, a probability model of the double-dot system, is developed. The probability model requires far less computation than previous stray-charge analyses utilizing ICHA or full-basis calculation. Simulation results illustrate that there is a 186-nm-wide region surrounding a QCA wire where a stray charge will cause the target cell to switch unsuccessfully. The failure is exhibited by two new states dominating the target cell. Therefore, a bistable saturation model is no longer applicable for stray charge analysis. Project supported by the National Natural Science Foundation of China (No. 61172043) and the Key Program of Shaanxi Provincial Natural Science for Basic Research (No. 2011JZ015).

  5. Frequency analysis of tick quotes on foreign currency markets and the double-threshold agent model

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2006-09-01

    Power spectrum densities for the number of tick quotes per minute (market activity) on three currency markets (USD/JPY, EUR/USD, and JPY/EUR) are analyzed for the period from January 2000 to December 2000. We find some peaks on the power spectrum densities at a few minutes. We develop the double-threshold agent model and confirm that the corresponding periodicity can be observed in the activity of this model even when the common periodic information perceived by market participants is weaker than their decision-making thresholds. The model is simulated numerically and investigated theoretically by utilizing the mean-field approximation. We propose a hypothesis that the periodicities found on the power spectrum densities arise from the nonlinearity and diversity of market participants.
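    The mechanism, many agents with diverse thresholds reacting to common periodic information weaker than any threshold, can be sketched as follows. A net buy/sell imbalance is used here as a simplified stand-in for market activity, and all parameters are illustrative rather than taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_agents, T, period = 200, 4096, 32
    thresholds = rng.uniform(0.5, 2.0, n_agents)  # diverse double thresholds
    t = np.arange(T)
    info = 0.3 * np.sin(2 * np.pi * t / period)   # weak common periodic information

    imbalance = np.zeros(T)
    for th in thresholds:
        x = info + rng.normal(0.0, 1.0, T)        # private noise per agent
        # buy above the upper threshold, sell below the lower one
        imbalance += (x > th).astype(float) - (x < -th)

    spec = np.abs(np.fft.rfft(imbalance - imbalance.mean())) ** 2
    peak_bin = int(np.argmax(spec[1:])) + 1       # dominant spectral bin
    ```

    Although no single agent's noise trace reveals the signal, the aggregate spectrum peaks at the bin corresponding to the common period (T // period), the kind of collective periodicity the abstract attributes to agent diversity.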

  6. Effects of a random porosity model on double diffusive natural convection in a porous medium enclosure

    SciTech Connect

    Fu, W.S.; Ke, W.W.

    2000-01-01

    A double diffusive natural convection in a rectangular enclosure filled with porous medium is investigated numerically. The distribution of porosity is based upon the random porosity model. The Darcy-Brinkman-Forchheimer model is used and the factors of heat flux, mean porosity and standard deviation are taken into consideration. The SIMPLEC method with iterative processes is adopted to solve the governing equations. The effects of the random porosity model on the distributions of local Nusselt number are remarkable and the variations of the local Nusselt number become disordered. The contribution of latent heat transfer to the total heat transfer at high Rayleigh number is larger than that at low Rayleigh number, and the variations of the latent heat transfer are irregular.

  7. Shell-Model Calculations of Two-Nucleon Transfer Related to Double Beta Decay

    NASA Astrophysics Data System (ADS)

    Brown, Alex

    2013-10-01

    I will discuss theoretical results for two-nucleon transfer cross sections for nuclei in the regions of 48Ca, 76Ge and 136Xe of interest for testing the wavefunctions used for the nuclear matrix elements in double-beta decay. Various reaction models are used. A simple cluster transfer model gives relative cross sections. Thompson's code Fresco with direct and sequential transfer is used for absolute cross sections. Wavefunctions are obtained in large-basis proton-neutron coupled model spaces with the code NuShellX with realistic effective Hamiltonians such as those used for the recent results for 136Xe [M. Horoi and B. A. Brown, Phys. Rev. Lett. 110, 222502 (2013)]. I acknowledge support from NSF grant PHY-1068217.

  8. Single-center model for double photoionization of the H{sub 2} molecule

    SciTech Connect

    Kheifets, A.S.

    2005-02-01

    We present a single-center model of double photoionization (DPI) of the H{sub 2} molecule which combines a multiconfiguration expansion of the molecular ground state with the convergent close-coupling description of the two-electron continuum. Because the single-center final-state wave function is only correct in the asymptotic region of large distances, the model cannot predict the magnitude of the DPI cross sections. However, we expect the model to account for the angular correlation in the two-electron continuum and to reproduce correctly the shape of the fully differential DPI cross sections. We test this assumption in the kinematics of recent DPI experiments on the randomly oriented and fixed-in-space hydrogen molecule in the isotopic form of D{sub 2}.

  9. A double hit model for the distribution of time to AIDS onset

    NASA Astrophysics Data System (ADS)

    Chillale, Nagaraja Rao

    2013-09-01

    Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection it is a random variable and is probably the longest one. The probability distribution of incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in the estimation of AIDS onset time and stresses the need for its precise estimation, ii) highlights some of the modelling features of the onset distribution, including the immune failure mechanism, and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.
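    For the independent case, a "double hit" means onset occurs only after both markers have failed, so the onset time is T = max(T1, T2) and its CDF is the product of the marker CDFs. A sketch with assumed exponential marker distributions (the paper considers several candidate distributions), checked against Monte Carlo:

    ```python
    import numpy as np

    def onset_cdf(t, lam1, lam2):
        """Independent double-hit onset CDF: F(t) = F1(t) * F2(t) for T = max(T1, T2)."""
        return (1 - np.exp(-lam1 * t)) * (1 - np.exp(-lam2 * t))

    rng = np.random.default_rng(7)
    lam1, lam2, n = 0.2, 0.1, 200_000
    # simulate both hits and take the later one as the onset time
    T = np.maximum(rng.exponential(1 / lam1, n), rng.exponential(1 / lam2, n))
    empirical = float(np.mean(T <= 10.0))
    analytic = float(onset_cdf(10.0, lam1, lam2))
    ```

    The dependent case in the abstract requires a joint distribution for (T1, T2); the product form above no longer holds there.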

  10. Modeling and interpretation of Q logs in carbonate rock using a double porosity model and well logs

    NASA Astrophysics Data System (ADS)

    Parra, Jorge O.; Hackert, Chris L.

    2006-03-01

    Attenuation data extracted from full waveform sonic logs is sensitive to vuggy and matrix porosities in a carbonate aquifer. This is consistent with the synthetic attenuation (1 / Q) as a function of depth at the borehole-sonic source-peak frequency of 10 kHz. We use velocity and densities versus porosity relationships based on core and well log data to determine the matrix, secondary, and effective bulk moduli. The attenuation model requires the bulk modulus of the primary and secondary porosities. We use a double porosity model that allows us to investigate attenuation at the mesoscopic scale. Thus, the secondary and primary porosities in the aquifer should respond with different changes in fluid pressure. The results show a high permeability region with a Q that varies from 25 to 50 and correlates with the stiffer part of the carbonate formation. This pore structure permits water to flow between the interconnected vugs and the matrix. In this region the double porosity model predicts a decrease in the attenuation at lower frequencies that is associated with fluid flowing from the more compliant high-pressure regions (interconnected vug space) to the relatively stiff, low-pressure regions (matrix). The chalky limestone with a low Q of 17 is formed by a muddy porous matrix with soft pores. This low permeability region correlates with the low matrix bulk modulus. A low Q of 18 characterizes the soft sandy carbonate rock above the vuggy carbonate. This paper demonstrates the use of attenuation logs for discriminating between lithologies and provides information on the pore structure when integrated with cores and other well logs. In addition, the paper demonstrates the practical application of a new double porosity model to interpret the attenuation at sonic frequencies by achieving a good match between measured and modeled attenuation.

  11. Double ITCZ in Coupled Ocean-Atmosphere Models: From CMIP3 to CMIP5

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxiao; Liu, Hailong; Zhang, Minghua

    2015-10-01

    Recent progress in reducing the double Intertropical Convergence Zone bias in coupled climate models is examined based on multimodel ensembles of historical climate simulations from Phase 3 and Phase 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5). Biases common to CMIP3 and CMIP5 models include spurious precipitation maximum in the southeastern Pacific, warmer sea surface temperature (SST), weaker easterly, and stronger meridional wind divergences away from the equator relative to observations. It is found that there is virtually no improvement in all these measures from the CMIP3 ensemble to the CMIP5 ensemble models. The five best models in the two ensembles as measured by the spatial correlations are also assessed. No progress can be identified in the subensembles of the five best models from CMIP3 to CMIP5 even though more models participated in CMIP5; the biases of excessive precipitation and overestimated SST in southeastern Pacific are even worse in the CMIP5 models.
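    The spatial-correlation measure used above to rank models can be sketched as a centered pattern correlation between a simulated field and an observed field on the same grid. This is the generic formula on a toy grid, not the CMIP evaluation code.

    ```python
    import numpy as np

    def pattern_correlation(model, obs):
        """Centered spatial (pattern) correlation between two 2D fields."""
        m = model - model.mean()
        o = obs - obs.mean()
        return float((m * o).sum() / np.sqrt((m * m).sum() * (o * o).sum()))

    field = np.random.default_rng(0).normal(size=(36, 72))  # toy lat-lon grid
    r_self = pattern_correlation(field, field)               # perfect match -> 1.0
    ```

    In practice each grid cell would also be weighted by cos(latitude) so that polar cells do not count as much as tropical ones; that weighting is omitted here for brevity.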

  12. Dynamic Flow Modeling Using Double POD and ANN-ARX System Identification

    NASA Astrophysics Data System (ADS)

    Siegel, Stefan; Seidel, Jürgen; Cohen, Kelly; Aradag, Selin; McLaughlin, Thomas

    2007-11-01

    Double Proper Orthogonal Decomposition (DPOD), a modification of conventional POD, is a powerful tool for modeling transient flow-field spatial features, in particular a 2D cylinder wake at a Reynolds number of 100. To develop a model for control design, the interaction of DPOD mode amplitudes with open-loop control inputs needs to be captured. Traditionally, Galerkin projection onto the Navier-Stokes equations has been used for that purpose. Given the stability problems as well as issues in correctly modeling actuation input, we propose a different approach. We demonstrate that the ARX (Auto Regressive eXternal input) system identification method in connection with an Artificial Neural Network (ANN) nonlinear structure leads to a model that captures the dynamic behavior of the unforced and transient forced open loop data used for model development. Moreover, we also show that the model is valid at different Reynolds numbers, for different open loop forcing parameters, as well as for closed loop flow states with excellent accuracy. Thus, we present with this DPOD-ANN-ARX model a paradigm shift for laminar circular cylinder wake modeling that is proven valid for feedback flow controller development.
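    The linear core of ARX identification reduces to least squares on lagged outputs and inputs: y[k] is regressed on y[k-1], ..., y[k-na] and u[k-1], ..., u[k-nb]. A minimal sketch on a second-order, noise-free toy system (not the wake model, and without the ANN nonlinearity the paper wraps around ARX):

    ```python
    import numpy as np

    def fit_arx(y, u, na=2, nb=2):
        """Least-squares ARX fit: y[k] = sum_i a_i y[k-i] + sum_j b_j u[k-j]."""
        k0 = max(na, nb)
        rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
                for k in range(k0, len(y))]
        Phi = np.array(rows)                       # regressor matrix
        theta, *_ = np.linalg.lstsq(Phi, y[k0:], rcond=None)
        return theta[:na], theta[na:]              # AR and exogenous coefficients

    rng = np.random.default_rng(3)
    u = rng.normal(size=2000)
    y = np.zeros(2000)
    for k in range(2, 2000):   # true system: y[k] = 1.5 y[k-1] - 0.7 y[k-2] + 0.5 u[k-1]
        y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1]
    a, b = fit_arx(y, u)
    ```

    With noise-free data the fit recovers the true coefficients exactly; in the paper's setting the inputs are the open-loop actuation signals and the outputs are DPOD mode amplitudes, with the ANN supplying the nonlinear structure.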

  13. Model of a double-sided surface plasmon resonance fiber-optic sensor

    NASA Astrophysics Data System (ADS)

    Ciprian, Dalibor; Hlubina, Petr

    2014-12-01

    A model of a surface plasmon resonance fiber-optic sensor with a double-sided metallic layer is presented. Most such fiber-optic sensing configurations are based on a symmetric circular metal layer deposited on a bare fiber core used for excitation of surface plasmon waves. To deposit a homogeneous layer, the fiber sample has to be continually rotated during the deposition process, so the deposition chamber has to be equipped with an appropriate positioning device. This difficulty can be avoided when the layer is deposited in two steps without rotation during the deposition (double-sided deposition). The technique is simpler, but in this case the layer is not flat and a radial thickness gradient is imposed. Consequently, the sensor becomes sensitive to the polarization of the excitation light beam. A theoretical model is used to explain the polarization properties of such a sensing configuration. The analysis is carried out within the framework of the optics of layered media. Because a multimode optical fiber with large core diameter is assumed, the eccentricity of the outer metal layer boundary imposed by the thickness gradient is low and the contribution of skew rays in the layer is neglected. The effect of the layer thickness gradient on the performance of the sensor is studied using numerical simulations.

  14. Mathematical modeling of macrosegregation of iron carbon binary alloy: Role of double diffusive convection

    SciTech Connect

    Singh, A.K.; Basu, B.

    1995-10-01

    During alloy solidification, macrosegregation results from long-range transport of solute under the influence of convective flow and leads to nonuniform quality of the solidified material. The present study attempts to understand the role of double diffusive convection, resulting from solutal rejection, in the evolution of macrosegregation in an iron-carbon system. The solidification process of an alloy is governed by conservation of heat, mass, momentum, and species and is accompanied by the evolution of latent heat and the rejection or incorporation of solute at the solid-liquid interface. Using a continuum formulation, the governing equations were solved with the finite volume method. The numerical model was validated by simulating experiments on an ammonium chloride-water system reported in the literature. The model was further used to study the role of double diffusive convection in the evolution of macrosegregation during solidification of an Fe-1 wt pct C alloy in a rectangular cavity. Simulation of this transient process was carried out until complete solidification, and the results, depicting the influence of the flow field on the thermal and solutal fields and vice versa, are shown at various stages of solidification. Under the given set of parameters, it was found that thermal buoyancy affects the macrosegregation field globally, whereas solutal buoyancy has a localized effect.

  15. A dynamic double helical band as a model for cardiac pumping.

    PubMed

    Grosberg, Anna; Gharib, Morteza

    2009-06-01

    We address here, by means of finite-element computational modeling, two features of heart mechanics and, most importantly, their timing relationship: one is the ejected volume and the other is the twist of the heart. The cornerstone of our approach is to take the double helical muscle fiber band as the dominant active macrostructure behind the pumping function. We show that this double helical model easily reproduces a physiological maximal ejection fraction of up to 60% without exceeding the limit on local muscle fiber contraction of 15%. Moreover, a physiological ejection fraction can be achieved independently of the excitation pattern. The left ventricular twist is also largely independent of the type of excitation. However, the physiological relationship between the ejection fraction and twist can only be reproduced with Purkinje-type excitation schemes. Our results indicate that the proper timing coordination between twist and ejection dynamics can be reproduced only if the excitation front originates in the septum region near the apex. This shows that the timing of the excitation is directly related to the productive pumping operation of the heart and illustrates the direction for possible bioinspired pump design.

  16. Nonresonant Double Hopf Bifurcation in Toxic Phytoplankton-Zooplankton Model with Delay

    NASA Astrophysics Data System (ADS)

    Yuan, Rui; Jiang, Weihua; Wang, Yong

    This paper investigates a toxic phytoplankton-zooplankton model with Michaelis-Menten type phytoplankton harvesting. The model has rich dynamical behavior: it undergoes transcritical, saddle-node, fold, Hopf, fold-Hopf and double Hopf bifurcations, and as the parameters pass through certain critical values, the dynamical properties of the system, such as the stability, the equilibrium points and the periodic orbits, change as well. We first study the stability of the equilibria and analyze the critical conditions for the above bifurcations at each equilibrium. In addition, the stability and direction of the local Hopf bifurcations, and the complete bifurcation set obtained by calculating the universal unfoldings near the double Hopf bifurcation point, are given by normal form theory and the center manifold theorem. We find that the stable coexistent equilibrium point and a stable periodic orbit alternate regularly when the digestion time delay is within some finite interval; that is, we derive the pattern for the occurrence and disappearance of a stable periodic orbit. Furthermore, we calculate an approximate expression for the critical bifurcation curve, using the digestion time delay and the harvesting rate as parameters, and determine a large range of harvesting rates for which the phytoplankton and zooplankton can coexist in the long term.

  17. Modeling avian detection probabilities as a function of habitat using double-observer point count data

    USGS Publications Warehouse

    Heglund, P.J.; Nichols, J.D.; Hines, J.E.; Sauer, J.; Fallon, J.; Fallon, F.; Field, Rebecca; Warren, Robert J.; Okarma, Henryk; Sievert, Paul R.

    2001-01-01

    Point counts are a controversial sampling method for bird populations because the counts are not censuses, and the proportion of birds missed during counting generally is not estimated. We applied a double-observer approach to estimate detection rates of birds from point counts in Maryland, USA, and tested whether detection rates differed between point counts conducted in field habitats as opposed to wooded habitats. We conducted 2 analyses. The first analysis was based on 4 clusters of counts (routes) surveyed by a single pair of observers. A series of models was developed with differing assumptions about sources of variation in detection probabilities and fit using program SURVIV. The most appropriate model was selected using Akaike's Information Criterion. The second analysis was based on 13 routes (7 woods and 6 field routes) surveyed by various observers, in which average detection rates were estimated by route and compared using a t-test. In both analyses, little evidence existed for variation in detection probabilities in relation to habitat. Double-observer methods provide a reasonable means of estimating detection probabilities and testing critical assumptions needed for analysis of point counts.
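    The estimation logic can be sketched with the simpler independent double-observer (Lincoln-Petersen) estimator; the counts below are invented illustration values, not data from the study, and the paper's actual analysis fits dependent-observer models in program SURVIV.

```python
# Independent double-observer (Lincoln-Petersen) estimator sketch.
# Counts are invented illustration values, not data from the study.

def double_observer(n_a, n_b, n_both):
    """n_a, n_b: detections by observers A and B; n_both: seen by both."""
    p_a = n_both / n_b          # detection probability of observer A
    p_b = n_both / n_a          # detection probability of observer B
    n_hat = n_a * n_b / n_both  # estimated number of birds present
    return p_a, p_b, n_hat

p_a, p_b, n_hat = double_observer(n_a=40, n_b=30, n_both=24)
# p_a = 0.8, p_b = 0.6, n_hat = 50.0
```

    The overlap between the two observers' detections is what makes the missed fraction estimable, which is exactly the assumption a single-observer point count cannot test.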

  18. Dynamic Characteristics of Mechanical Ventilation System of Double Lungs with Bi-Level Positive Airway Pressure Model

    PubMed Central

    Shen, Dongkai; Zhang, Qian

    2016-01-01

    In recent studies of the dynamic characteristics of ventilation systems, the human subject was assumed to have only a single lung, so the coupling effect of the two lungs on the air flow, which is regarded as vital to the life support of patients, could not be described. In this article, to illustrate the coupling effect of the two lungs on the flow dynamics of a mechanical ventilation system, a mathematical model of a mechanical ventilation system consisting of two lungs and a bi-level positive airway pressure (BIPAP) controlled ventilator is proposed. To verify the mathematical model, a prototype BIPAP system with a double-lung simulator and a BIPAP ventilator was set up for experimental study. Lastly, the influence of the key parameters of the BIPAP system on the dynamic characteristics was studied. This work can serve as a reference for research on BIPAP ventilation treatment and real respiratory diagnostics. PMID:27660646
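    As a hedged illustration of the coupling idea (not the paper's model), two lungs can be sketched as independent resistance-compliance compartments driven by a common BIPAP-style pressure source; all parameter values are invented.

```python
# Two-compartment resistance-compliance lung sketch driven by a BIPAP-style
# pressure switching between IPAP and EPAP.  All parameters are invented.

R = [3.0, 4.0]            # airway resistances, cmH2O*s/L
C = [0.05, 0.04]          # lung compliances, L/cmH2O
IPAP, EPAP = 15.0, 5.0    # ventilator pressure levels, cmH2O
period, duty = 4.0, 0.5   # breath period (s) and inspiratory fraction
dt, t_end = 1e-3, 20.0

V = [0.0, 0.0]            # volume above equilibrium in each lung, L
t = 0.0
while t < t_end:
    P = IPAP if (t % period) < duty * period else EPAP
    for i in range(2):
        flow = (P - V[i] / C[i]) / R[i]   # pressure difference drives flow
        V[i] += flow * dt                 # explicit Euler update
    t += dt
# each lung relaxes toward P*C within a few time constants (R*C ~ 0.15 s)
```

    A genuinely coupled model, as in the paper, would add a shared airway resistance so that flow into one lung changes the pressure seen by the other; here the compartments are deliberately kept independent for brevity.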

  19. A three-dimensional statistical mechanical model of folding double-stranded chain molecules

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbing; Chen, Shi-Jie

    2001-05-01

    Based on a graphical representation of intrachain contacts, we have developed a new three-dimensional model for the statistical mechanics of double-stranded chain molecules. The theory has been tested and validated for cubic lattice chain conformations. The statistical mechanical model can be applied to the equilibrium folding thermodynamics of a large class of chain molecules, including protein β-hairpin conformations and RNA secondary structures. The application of a previously developed two-dimensional model to RNA secondary structure folding thermodynamics generally overestimates the breadth of the melting curves [S.-J. Chen and K. A. Dill, Proc. Natl. Acad. Sci. U.S.A. 97, 646 (2000)], suggesting an underestimation of the sharpness of the conformational transitions. In this work, we show that the new three-dimensional model gives much sharper melting curves than the two-dimensional model. We believe that the new three-dimensional model may give much improved predictions for the thermodynamic properties of RNA conformational changes over the previous two-dimensional model.

  20. Verilog-A implementation of a double-gate junctionless compact model for DC circuit simulations

    NASA Astrophysics Data System (ADS)

    Alvarado, J.; Flores, P.; Romero, S.; Ávila-Herrera, F.; González, V.; Soto-Cruz, B. S.; Cerdeira, A.

    2016-07-01

    A physically based model of the double-gate junctionless transistor, capable of describing the accumulation and depletion regions, is implemented in Verilog-A in order to perform DC circuit simulations. An analytical description of the potential difference between the center and the surface of the silicon layer allows the mobile charge to be determined. Furthermore, mobility degradation, series resistance, threshold voltage roll-off, drain saturation voltage, channel shortening and velocity saturation are also considered. To make the model available to the whole community, it is implemented in Ngspice, a free circuit simulator with an ADMS interface for integrating Verilog-A models. The implementation is validated against 2D numerical simulations of transistors with 1 μm and 40 nm silicon channel lengths, doping concentrations of the silicon layer of 1×10^19 or 5×10^18 cm^-3, and silicon thicknesses of 10 and 15 nm. Good agreement between the numerically simulated behavior and the model implementation is obtained, with only eight model parameters.

  1. Development of kineto-dynamic quarter-car model for synthesis of a double wishbone suspension

    NASA Astrophysics Data System (ADS)

    Balike, K. P.; Rakheja, S.; Stiharu, I.

    2011-02-01

    Linear or nonlinear 2-degree-of-freedom (DOF) quarter-car models have been widely used to study the conflicting dynamic performances of a vehicle suspension, such as ride quality, road holding and rattle space requirements. Such models, however, cannot account for contributions due to suspension kinematics. Considering the proven simplicity and effectiveness of a quarter-car model for such analyses, this article presents the formulation of a comprehensive kineto-dynamic quarter-car model to study the kinematic and dynamic properties of a linkage suspension, and the influences of linkage geometry on selected performance measures. An in-plane 2-DOF model was formulated incorporating the kinematics of a double wishbone suspension comprising an upper control arm, a lower control arm and a strut mounted on the lower control arm. The equivalent suspension and damping rates of the suspension model are derived analytically and could be employed in a conventional quarter-car model. The dynamic responses of the proposed model were evaluated under harmonic and bump/pothole excitations, idealised by positive/negative rounded pulse displacements, and compared with those of the linear quarter-car model to illustrate the contributions due to suspension kinematics. The kineto-dynamic model revealed considerable variations in the wheel and damping rates, camber and wheel-track. Owing to the asymmetric kinematic behaviour of the suspension system, the dynamic responses of the kineto-dynamic model were observed to be considerably asymmetric about the equilibrium. The proposed kineto-dynamic model was subsequently applied to study the influences of link geometry in an attempt to achieve reduced suspension lateral packaging space without compromising the kinematic and dynamic performances.
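    For context, the conventional linear 2-DOF quarter-car baseline that the kineto-dynamic model is compared against can be sketched as follows; the masses and stiffnesses are illustrative values only, not parameters from the paper.

```python
import numpy as np

# Conventional linear 2-DOF quarter-car baseline; parameter values are
# illustrative, not from the paper.
ms, mu = 300.0, 40.0      # sprung and unsprung masses, kg
ks, kt = 20e3, 180e3      # suspension and tire stiffnesses, N/m

# Undamped modes from the generalized eigenproblem K q = w^2 M q
M = np.diag([ms, mu])
K = np.array([[ks, -ks],
              [-ks, ks + kt]])
w2 = np.linalg.eigvals(np.linalg.solve(M, K))
f = np.sort(np.sqrt(w2.real)) / (2.0 * np.pi)
# f[0]: body (ride) mode around 1.2 Hz; f[1]: wheel-hop mode around 11 Hz
```

    The kineto-dynamic formulation of the paper replaces the constant ks with an equivalent suspension rate that varies with wheel travel, which is exactly what this constant-coefficient baseline cannot capture.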

  2. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown-mixture analysis capability as a result of their greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work of Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical to their one-dimensional counterpart, that is, as a weighted nearest-neighbor moving average with zero phase shift, using convolute integer (universal number) weighting coefficients.
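    The construction of two-dimensional convolute-integer weights can be sketched by least-squares fitting a quadratic surface over a 3×3 window; the window size and polynomial basis here are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

# Least-squares construction of 2D convolute-integer smoothing weights:
# fit the quadratic basis {1, x, y, x^2, xy, y^2} over a 3x3 window; the
# pseudoinverse row that evaluates the fit at the window center is the
# convolution mask.  Window size and basis are illustrative choices.
pts = [(x, y) for y in (-1, 0, 1) for x in (-1, 0, 1)]
A = np.array([[1.0, x, y, x * x, x * y, y * y] for x, y in pts])
W = np.linalg.pinv(A)[0].reshape(3, 3)   # 3x3 smoothing weights

# Like its 1D Savitzky-Golay counterpart, the filter reproduces any
# quadratic surface exactly (zero distortion up to the fit order):
f = lambda x, y: x**2 + 2 * x * y - y
patch = np.array([[f(1 + dx, 1 + dy) for dx in (-1, 0, 1)]
                  for dy in (-1, 0, 1)])
center_fit = float((W * patch).sum())    # equals f(1, 1) = 2
```

    Sliding this fixed mask over an image as a weighted nearest-neighbor moving average gives the zero-phase-shift smoothing the abstract describes; high-pass and band-pass variants come from other rows of the same pseudoinverse.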

  3. Spectral density of generalized Wishart matrices and free multiplicative convolution.

    PubMed

    Młotkowski, Wojciech; Nowak, Maciej A; Penson, Karol A; Życzkowski, Karol

    2015-07-01

    We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W=XX(†), where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP(⊠s), which for an integer s yield Fuss-Catalan distributions corresponding to a product of s independent square random matrices, X=X(1)⋯X(s). New formulas for the level densities are derived for s=3 and s=1/3. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of arcsine and MP distributions, is obtained. We also explain the reason for such a curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.
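    A Monte-Carlo sketch of the product-matrix construction: the matrix size and the choice s = 3 are illustrative, and the sample moments should approach the Fuss-Catalan numbers binom((s+1)k, k)/(sk+1).

```python
import numpy as np

# Sketch: eigenvalue moments of W = X X^T for X a product of s independent
# square Ginibre matrices, each scaled by 1/sqrt(N).  N and s are
# illustrative; the k-th limiting moment is binom((s+1)k, k) / (s*k + 1).
rng = np.random.default_rng(1)
N, s = 300, 3
X = np.eye(N)
for _ in range(s):
    X = X @ (rng.standard_normal((N, N)) / np.sqrt(N))
eig = np.linalg.eigvalsh(X @ X.T)   # spectrum of the Wishart-like matrix W

m1 = eig.mean()          # -> 1 for k = 1 (any s)
m2 = (eig**2).mean()     # -> binom(8, 2) / 7 = 4 for s = 3, k = 2
```

    At finite N the sample moments carry O(1/N) corrections, but already at N = 300 they sit close to the Fuss-Catalan limits the paper's closed-form densities describe.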

  4. Spectral density of generalized Wishart matrices and free multiplicative convolution

    NASA Astrophysics Data System (ADS)

    Młotkowski, Wojciech; Nowak, Maciej A.; Penson, Karol A.; Życzkowski, Karol

    2015-07-01

    We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W = XX†, where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP^⊠s, which for an integer s yield Fuss-Catalan distributions corresponding to a product of s independent square random matrices, X = X1⋯Xs. New formulas for the level densities are derived for s = 3 and s = 1/3. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of arcsine and MP distributions, is obtained. We also explain the reason for such a curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.

  5. Self-Taught convolutional neural networks for short text clustering.

    PubMed

    Xu, Jiaming; Xu, Bo; Wang, Peng; Zheng, Suncong; Tian, Guanhua; Zhao, Jun; Xu, Bo

    2017-04-01

    Short text clustering is a challenging problem due to the sparseness of its text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC(2)), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representations in an unsupervised manner. In our framework, the original raw text features are first embedded into compact binary codes using an existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, while the output units are used to fit the pre-trained binary codes during training. Finally, we obtain the optimal clusters by employing K-means to cluster the learned representations. Extensive experimental results demonstrate that the proposed framework is effective and flexible, and outperforms several popular clustering methods when tested on three public short text datasets.

  6. A new computational decoding complexity measure of convolutional codes

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2014-12-01

    This paper presents a computational complexity measure of convolutional codes well suited to software implementations of the Viterbi algorithm (VA) operating with hard decisions. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementing each arithmetic operation is determined in terms of the machine cycles taken by its execution on a typical digital signal processor widely used for low-power telecommunications applications.
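    The operation counting can be illustrated with a minimal hard-decision Viterbi decoder on the conventional trellis of the standard (2,1) code with octal generators (7,5). This is a sketch of the generic algorithm, not the paper's complexity measure; the counter below simply tallies the metric additions of the add-compare-select step.

```python
# Minimal hard-decision Viterbi decoder for the (2,1) convolutional code
# with generators (7,5) octal, constraint length 3.  `ops` tallies the
# metric additions of the add-compare-select step as a stand-in for the
# kind of arithmetic-operation counting the paper formalizes.

G = (0b111, 0b101)   # generator polynomials

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                        # [input, prev, prev-prev]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    ops = 0
    survivors = {0: (0, [])}          # state -> (path metric, decoded bits)
    for i in range(0, len(received), 2):
        r, new = received[i:i + 2], {}
        for state, (m, path) in survivors.items():
            for b in (0, 1):          # extend every survivor by both inputs
                reg = (b << 2) | state
                branch = [bin(reg & g).count("1") & 1 for g in G]
                cost = m + sum(x != y for x, y in zip(branch, r))
                ops += 2              # two metric additions per branch
                ns = reg >> 1
                if ns not in new or cost < new[ns][0]:
                    new[ns] = (cost, path + [b])
        survivors = new
    return min(survivors.values())[1], ops

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[5] ^= 1                            # inject one channel bit error
decoded, ops = viterbi(rx)            # the single error is corrected
```

    Counting operations per trellis section like this is what makes the conventional and minimal trellis modules directly comparable in software.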

  7. BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment.

    PubMed

    Kawahara, Jeremy; Brown, Colin J; Miller, Steven P; Booth, Brian G; Chau, Vann; Grunau, Ruth E; Zwicker, Jill G; Hamarneh, Ghassan

    2017-02-01

    We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.

  8. A double-layer based model of ion confinement in electron cyclotron resonance ion source

    SciTech Connect

    Mascali, D. Neri, L.; Celona, L.; Castro, G.; Gammino, S.; Ciavola, G.; Torrisi, G.; Sorbello, G.

    2014-02-15

    The paper proposes a new model of ion confinement in ECRIS, which can easily be generalized to any magnetic configuration characterized by closed magnetic surfaces. Traditionally, ion confinement in B-min configurations is ascribed to a negative potential dip due to superhot electrons, adiabatically confined by the magneto-static field. However, kinetic simulations including RF heating affected by cavity mode structures indicate that high energy electrons populate just a thin slab overlapping the ECR layer, while their density drops by more than one order of magnitude outside. Ions, instead, diffuse across the electron layer due to their high collisionality. This is the proper physical condition to establish a double-layer (DL) configuration which self-consistently originates a potential barrier; this “barrier” confines the ions inside the plasma core surrounded by the ECR surface. The paper describes a simplified ion confinement model based on plasma density non-homogeneity and DL formation.

  9. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
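    The syndrome idea can be sketched for a rate-1/2 code: over GF(2), r1·g2 + r2·g1 vanishes exactly for valid codewords, so a nonzero result exposes the error polynomial. The generator pair and message below are illustrative, and the Viterbi-like search for the minimum-weight E(D) is omitted.

```python
# Syndrome check for a rate-1/2 convolutional code over GF(2).  Polynomials
# are bit-masks (bit k = coefficient of D^k); generators and message are
# illustrative values, and the minimum-weight error search is not shown.

def gf2_mul(a, b):
    """Carry-less (GF(2)[D]) polynomial product."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a, b = a << 1, b >> 1
    return p

g1, g2 = 0b111, 0b101                 # generator pair, (7,5) in octal
u = 0b1011                            # information polynomial u(D)
r1, r2 = gf2_mul(u, g1), gf2_mul(u, g2)   # transmitted code polynomials

# r1*g2 + r2*g1 = u*g1*g2 + u*g2*g1 = 0 (mod 2) for any valid codeword
syndrome_clean = gf2_mul(r1, g2) ^ gf2_mul(r2, g1)

# an error e(D) = D^2 on the first stream leaves syndrome e(D)*g2(D)
syndrome_err = gf2_mul(r1 ^ 0b100, g2) ^ gf2_mul(r2, g1)
```

    Because the syndrome depends only on the error polynomials, the decoder can search for the minimum-weight E(D) consistent with it, which is the part the letter's recursive algorithm performs.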

  10. The dynamics of double slab subduction from numerical and semi-analytic models

    NASA Astrophysics Data System (ADS)

    Holt, A.; Royden, L.; Becker, T. W.

    2015-12-01

    Regional interactions between multiple subducting slabs have been proposed to explain enigmatic slab kinematics in a number of subduction zones, a pertinent example being the rapid pre-collisional plate convergence of India and Eurasia. However, dynamically consistent 3-D numerical models of double subduction have yet to be explored, and so the physics of such double slab systems remain poorly understood. Here we build on the comparison of a fully numerical finite element model (CitcomCU) and a time-dependent semi-analytic subduction models (FAST) presented for single subduction systems (Royden et. al., 2015 AGU Fall Abstract) to explore how subducting slab kinematics, particularly trench and plate motions, can be affected by the presence of an additional slab, with all of the possible slab dip direction permutations. A second subducting slab gives rise to a more complex dynamic pressure and mantle flow fields, and an additional slab pull force that is transmitted across the subduction zone interface. While the general relationships among plate velocity, trench velocity, asthenospheric pressure drop, and plate coupling modes are similar to those observed for the single slab case, we find that multiple subducting slabs can interact with each other and indeed induce slab kinematics that deviate significantly from those observed for the equivalent single slab models. References Jagoutz, O., Royden, L. H., Holt, A. F. & Becker, T. W., 2015, Nature Geo., 8, 10.1038/NGEO2418. Moresi, L. N. & Gurnis, M., 1996, Earth Planet. Sci. Lett., 138, 15-28. Royden, L. H. & Husson, L., 2006, Geophys. J. Int. 167, 881-905. Zhong, S., 2006, J. Geophys. Res., 111, doi: 10.1029/2005JB003972.

  11. Fast convolution quadrature for the wave equation in three dimensions

    NASA Astrophysics Data System (ADS)

    Banjai, L.; Kachanovska, M.

    2014-12-01

    This work addresses the numerical solution of time-domain boundary integral equations arising from acoustic and electromagnetic scattering in three dimensions. The semidiscretization of the time-domain boundary integral equations by Runge-Kutta convolution quadrature leads to a lower triangular Toeplitz system of size N. This system can be solved recursively in almost linear time (O(N log² N)), but requires the construction of O(N) dense spatial discretizations of the single layer boundary operator for the Helmholtz equation. This work introduces an improvement of this algorithm that makes it possible to solve the scattering problem in almost linear time. The new approach is based on two main ingredients: near-field reuse and the application of data-sparse techniques. Exponential decay of the Runge-Kutta convolution weights w_n^h(d) outside of a neighborhood of d ≈ nh (where h is the time step) makes it possible to avoid constructing the near-field (i.e. singular and near-singular integrals) for most of the discretizations of the single layer boundary operators (near-field reuse). The far field of these matrices is compressed with the help of data-sparse techniques, namely H-matrices and the high-frequency fast multipole method. Numerical experiments indicate the efficiency of the proposed approach compared to the conventional Runge-Kutta convolution quadrature algorithm.
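    The lower triangular Toeplitz structure can be sketched with scalar weights standing in for the dense boundary-operator blocks; the naive forward substitution below costs O(N²), which is the baseline the almost-linear recursive algorithm improves on.

```python
import numpy as np

# Forward substitution for a lower triangular Toeplitz system
# sum_{j<=n} w[n-j] * x[j] = b[n], with scalar convolution weights w
# standing in for the dense boundary-operator blocks of the paper.
# This naive loop costs O(N^2); the recursive algorithm reaches
# almost linear complexity.

def toeplitz_forward_solve(w, b):
    x = np.zeros_like(b)
    for n in range(len(b)):
        hist = sum(w[n - j] * x[j] for j in range(n))  # known history
        x[n] = (b[n] - hist) / w[0]                    # divide by diagonal
    return x

w = np.array([2.0, 0.5, 0.25, 0.1])    # convolution weights w_0..w_3
b = np.array([2.0, 2.5, 2.75, 2.85])   # chosen so the solution is all ones
x = toeplitz_forward_solve(w, b)
```

    The decay of the convolution weights away from the diagonal is what allows most off-diagonal blocks to be reused or compressed in the full algorithm.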

  12. Fine-grained representation learning in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Luo, Chang; Wang, Jie

    2016-03-01

    Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in the preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law can guide CAEs to extract better fine-grained features and performs better in multiclass classification tasks. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representations in other convolutional neural networks.

  13. Automatic localization of vertebrae based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie

    2015-03-01

    Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNN). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. Then the output feature vector from the max-pooling layer is fed into a multilayer perceptron (MLP) classifier with one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and the ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.

  14. Calcium transport in the rabbit superficial proximal convoluted tubule

    SciTech Connect

    Ng, R.C.; Rouse, D.; Suki, W.N.

    1984-09-01

    Calcium transport was studied in isolated S2 segments of rabbit superficial proximal convoluted tubules. ⁴⁵Ca was added to the perfusate for measurement of lumen-to-bath flux (JlbCa), to the bath for bath-to-lumen flux (JblCa), and to both perfusate and bath for net flux (JnetCa). In these studies, the perfusate consisted of an equilibrium solution that was designed to minimize water flux or electrochemical potential differences (PD). Under these conditions, JlbCa (9.1 ± 1.0 peq/mm·min) was not different from JblCa (7.3 ± 1.3 peq/mm·min), and JnetCa was not different from zero, which suggests that calcium transport in the superficial proximal convoluted tubule is due primarily to passive transport. The efflux coefficient was 9.5 ± 1.2 × 10⁻⁵ cm/s, which was not significantly different from the influx coefficient, 7.0 ± 1.3 × 10⁻⁵ cm/s. When the PD was made positive or negative with use of different perfusates, net calcium absorption or secretion was demonstrated, respectively, which supports a major role for passive transport. These results indicate that in the superficial proximal convoluted tubule of the rabbit, passive driving forces are the major determinants of calcium transport.

  15. A predictive model to inform adaptive management of double-crested cormorants and fisheries in Michigan

    USGS Publications Warehouse

    Tsehaye, Iyob; Jones, Michael L.; Irwin, Brian J.; Fielder, David G.; Breck, James E.; Luukkonen, David R.

    2015-01-01

    The proliferation of double-crested cormorants (DCCOs; Phalacrocorax auritus) in North America has raised concerns over their potential negative impacts on game, cultured and forage fishes, island and terrestrial resources, and other colonial water birds, leading to increased public demands to reduce their abundance. By combining fish surplus production and bird functional feeding response models, we developed a deterministic predictive model representing bird–fish interactions to inform an adaptive management process for the control of DCCOs in multiple colonies in Michigan. Comparisons of model predictions with observations of changes in DCCO numbers under management measures implemented from 2004 to 2012 suggested that our relatively simple model was able to accurately reconstruct past DCCO population dynamics. These comparisons helped discriminate among alternative parameterizations of demographic processes that were poorly known, especially site fidelity. Using sensitivity analysis, we also identified remaining critical uncertainties (mainly in the spatial distributions of fish vs. DCCO feeding areas) that can be used to prioritize future research and monitoring needs. Model forecasts suggested that continuation of existing control efforts would be sufficient to achieve long-term DCCO control targets in Michigan and that DCCO control may be necessary to achieve management goals for some DCCO-impacted fisheries in the state. Finally, our model can be extended by accounting for parametric or ecological uncertainty and including more complex assumptions on DCCO–fish interactions as part of the adaptive management process.
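    A toy sketch of the model coupling (not the paper's parameterization): logistic surplus-production fish dynamics minus Holling type-II consumption by a fixed number of cormorants, with all parameter values invented.

```python
# Toy coupling of logistic (surplus-production) fish dynamics with a
# Holling type-II functional response for cormorant consumption.  All
# parameter values are invented for illustration.

r, K = 0.4, 1.0e6      # fish intrinsic growth rate, carrying capacity
a, h = 2e-4, 1e-3      # per-bird attack rate and handling time
birds = 1000.0         # cormorant abundance (held fixed here)

def step(fish):
    consumed = birds * a * fish / (1.0 + a * h * fish)   # type-II response
    return fish + r * fish * (1.0 - fish / K) - consumed

fish = K
for _ in range(200):
    fish = step(fish)
# fish settles near the positive equilibrium (~5.5e5 with these numbers)
```

    In the paper's adaptive-management setting, the bird abundance itself responds to control actions and site fidelity; holding it fixed here isolates the fish-side consequence of a given cormorant level.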

  16. A new hybrid double divisor ratio spectra method for the analysis of ternary mixtures

    NASA Astrophysics Data System (ADS)

    Youssef, Rasha M.; Maher, Hadir M.

    2008-10-01

    A new spectrophotometric method was developed for the simultaneous determination of ternary mixtures, without prior separation steps. This method is based on convolution of the double divisor ratio spectra, obtained by dividing the absorption spectrum of the ternary mixture by a standard spectrum of two of the three compounds in the mixture, using combined trigonometric Fourier functions. The magnitude of the Fourier function coefficients, at either maximum or minimum points, is related to the concentration of each drug in the mixture. The mathematical explanation of the procedure is illustrated. The method was applied for the assay of a model mixture consisting of isoniazid (ISN), rifampicin (RIF) and pyrazinamide (PYZ) in synthetic mixtures, commercial tablets and human urine samples. The developed method was compared with the double divisor ratio spectra derivative method (DDRD) and derivative ratio spectra-zero-crossing method (DRSZ). Linearity, validation, accuracy, precision, limits of detection, limits of quantitation, and other aspects of analytical validation are included in the text.
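    The double-divisor ratio step can be sketched with synthetic Gaussian spectra; a plain derivative stands in here for the method's Fourier-coefficient (convolution) step, and equal concentrations of the two divisor compounds are chosen so that their contribution reduces exactly to a constant.

```python
import numpy as np

# Synthetic three-component spectra (invented Gaussians).  The mixture is
# divided by the summed standard spectra of two components; with equal
# concentrations of those two, their contribution is exactly a constant,
# and a derivative (a stand-in for the Fourier-coefficient step) leaves a
# signal proportional to the third component's concentration.

lam = np.linspace(0.0, 10.0, 501)
band = lambda c, w: np.exp(-((lam - c) / w) ** 2)
A, B, C = band(3.0, 1.0), band(5.0, 1.2), band(7.0, 0.9)
mask = (A + B) > 1e-3            # work where the divisor is non-negligible

def peak_signal(cC):
    mix = A + B + cC * C         # ternary mixture with cA = cB = 1
    ratio = mix / (A + B)        # double-divisor ratio spectrum
    deriv = np.gradient(ratio, lam)
    return np.max(np.abs(deriv[mask]))

p_half, p_one = peak_signal(0.5), peak_signal(1.0)
# doubling the third component's concentration doubles the signal
```

    This linearity in the third component's concentration is what makes calibration at a maximum or minimum of the processed ratio spectrum possible.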

  18. Use of two-dimensional transmission photoelastic models to study stresses in double-lap bolted joints

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Liu, D. H.

    1981-01-01

    The stress distribution in two-hole connectors in a double-lap joint configuration was studied. The following steps are described: (1) fabrication of photoelastic models of double-lap, double-hole joints designed to determine the stresses in the inner lap; (2) assessment of the effects of joint geometry on the stresses in the inner lap; and (3) quantification of differences in the stresses near the two holes. The two holes were on the centerline of the joint and the joints were loaded in tension, parallel to the centerline. Acrylic slip-fit pins through the holes served as fasteners. Two-dimensional transmission photoelastic models were fabricated by using transparent acrylic outer laps and a photoelastic model material for the inner laps. It is concluded that the photoelastic fringe patterns which are visible when the models are loaded are due almost entirely to stresses in the inner lap.

  19. Quantitative cellular uptake of double fluorescent core-shelled model submicronic particles

    NASA Astrophysics Data System (ADS)

    Leclerc, Lara; Boudard, Delphine; Pourchez, Jérémie; Forest, Valérie; Marmuse, Laurence; Louis, Cédric; Bin, Valérie; Palle, Sabine; Grosseau, Philippe; Bernache-Assollant, Didier; Cottier, Michèle

    2012-11-01

    The relationship between particles' physicochemical parameters, their uptake by cells and their degree of biological toxicity represents a crucial issue, especially for the development of new technologies such as the fabrication of micro- and nanoparticles in the promising field of drug delivery systems. This work aimed to develop a proof-of-concept for a novel model of double-fluorescence submicronic particles that could be spotted inside phagolysosomes. Fluorescein isothiocyanate (FITC) particles were synthesized and then conjugated with a fluorescent pHrodo™ probe, whose red fluorescence increases in acidic conditions such as within lysosomes. After validation in acellular conditions by spectral analysis with confocal microscopy and dynamic light scattering, quantification of phagocytosis was conducted on a macrophage cell line in vitro. The biological impact of pHrodo functionalization (cytotoxicity, inflammatory response, and oxidative stress) was also investigated. The results validate the proof-of-concept of double fluorescent particles (FITC + pHrodo), allowing detection of entirely engulfed pHrodo particles (green and red labeling). Moreover, incorporation of pHrodo had no major effects on cytotoxicity compared to particles without pHrodo, making them a powerful tool for micro- and nanotechnologies.

  20. Double point source W-phase inversion: Real-time implementation and automated model selection

    USGS Publications Warehouse

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
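
The AIC-based selection step can be sketched generically for least-squares fits: the two-source model is preferred only when its misfit reduction outweighs the penalty for its extra parameters. The residual and parameter-count values below are illustrative, not taken from the W-phase implementation.

```python
import math

def aic(rss, n, k):
    """AIC for a least-squares fit: n * ln(RSS/n) + 2k, k = free parameters."""
    return n * math.log(rss / n) + 2 * k

def select_model(rss_single, rss_double, n, k_single=10, k_double=20):
    """Prefer the double-source model only if its fit improvement
    outweighs the 2k penalty for the extra parameters."""
    a1 = aic(rss_single, n, k_single)
    a2 = aic(rss_double, n, k_double)
    return ("double" if a2 < a1 else "single"), a1, a2
```

A double-source fit that halves the residual sum of squares is selected; one that improves the fit only marginally is rejected in favor of the simpler single-source solution.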

  3. Hyperpolarization-Activated Current Induces Period-Doubling Cascades and Chaos in a Cold Thermoreceptor Model.

    PubMed

    Xu, Kesheng; Maidana, Jean P; Caviedes, Mauricio; Quero, Daniel; Aguirre, Pablo; Orio, Patricio

    2017-01-01

    In this article, we describe and analyze the chaotic behavior of a conductance-based neuronal bursting model. This is a model with a reduced number of variables, yet it retains biophysical plausibility. Inspired by the activity of cold thermoreceptors, the model contains a persistent sodium current, a calcium-activated potassium current and a hyperpolarization-activated current (Ih) that drive a slow subthreshold oscillation. Driven by this oscillation, a fast subsystem (fast sodium and potassium currents) fires action potentials in a periodic fashion. Depending on the parameters, this model can generate a variety of firing patterns that includes bursting, regular tonic and polymodal firing. Here we show that the transitions between different firing patterns are often accompanied by a range of chaotic firing, as suggested by an irregular, non-periodic firing pattern. To confirm this, we measure the maximum Lyapunov exponent of the voltage trajectories, and the Lyapunov exponent and Lempel-Ziv complexity of the ISI time series. The four-variable slow system (without spiking) also generates chaotic behavior, and bifurcation analysis shows that it often originates from period-doubling cascades. Either with or without spikes, chaos is no longer generated when the Ih is removed from the system. As the model is biologically plausible with biophysically meaningful parameters, we propose it as a useful tool to understand chaotic dynamics in neurons.
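
The Lempel-Ziv complexity measure applied to the ISI series can be sketched as follows: binarize the series about its median and count phrases in the LZ76 parsing; periodic firing yields a small phrase count, chaotic firing a larger one. The binarization rule and test series are illustrative assumptions.

```python
def lz76(s):
    """Number of phrases in the Lempel-Ziv (1976) parsing of string s."""
    i, count, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the phrase while it already occurs earlier in the sequence
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        count += 1
        i += l
    return count

def isi_complexity(isis):
    """Binarize an inter-spike-interval series about its median, then LZ76."""
    med = sorted(isis)[len(isis) // 2]
    return lz76("".join("1" if v >= med else "0" for v in isis))
```

An alternating (periodic) ISI series parses into a handful of phrases, while an aperiodic series of the same length parses into many more.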

  4. Predicting the severity of spurious "double ITCZ" problem in CMIP5 coupled models from AMIP simulations

    NASA Astrophysics Data System (ADS)

    Xiang, Baoqiang; Zhao, Ming; Held, Isaac M.; Golaz, Jean-Christophe

    2017-02-01

    The severity of the double Intertropical Convergence Zone (DI) problem in climate models can be measured by a tropical precipitation asymmetry index (PAI), indicating whether tropical precipitation favors the Northern Hemisphere or the Southern Hemisphere. Examination of 19 Coupled Model Intercomparison Project phase 5 models reveals that the PAI is tightly linked to the tropical sea surface temperature (SST) bias. As one of the factors determining the SST bias, the asymmetry of tropical net surface heat flux in Atmospheric Model Intercomparison Project (AMIP) simulations is identified as a skillful predictor of the PAI change from an AMIP to a coupled simulation, with an intermodel correlation of 0.90. Using tropical top-of-atmosphere (TOA) fluxes, the correlations are lower but still strong. However, the extratropical asymmetries of surface and TOA fluxes in AMIP simulations cannot serve as useful predictors of the PAI change. This study suggests that the largest source of the DI bias is from the tropics and from atmospheric models.
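
The precipitation asymmetry index can be sketched as an area-weighted contrast between tropical Northern and Southern Hemisphere precipitation; the 20°S to 20°N band and the cosine-latitude weighting used here are assumptions, since the paper's exact definition is not reproduced in the abstract.

```python
import math

def pai(lat, precip):
    """Tropical precipitation asymmetry index:
    (NH - SH) / (NH + SH), area-weighted by cos(latitude)."""
    nh = sum(p * math.cos(math.radians(l)) for l, p in zip(lat, precip) if 0 < l <= 20)
    sh = sum(p * math.cos(math.radians(l)) for l, p in zip(lat, precip) if -20 <= l < 0)
    return (nh - sh) / (nh + sh)
```

A hemispherically symmetric double ITCZ gives an index near zero; precipitation favoring the Northern Hemisphere gives a positive index.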

  5. A realistic model of neutrino masses with a large neutrinoless double beta decay rate

    NASA Astrophysics Data System (ADS)

    del Aguila, Francisco; Aparici, Alberto; Bhattacharya, Subhaditya; Santamaria, Arcadi; Wudka, Jose

    2012-05-01

    The minimal Standard Model extension with the Weinberg operator does accommodate the observed neutrino masses and mixing, but predicts a neutrinoless double beta (0νββ) decay rate proportional to the effective electron neutrino mass, which can then be arbitrarily small within present experimental limits. However, in general 0νββ decay can have an independent origin and be near its present experimental bound, whereas neutrino masses are generated radiatively, contributing negligibly to 0νββ decay. We provide a realization of this scenario in a simple, well defined and testable model, with potential LHC effects and calculable neutrino masses, whose two-loop expression we derive exactly. We also discuss the connection of this model to others that have appeared in the literature, and remark on the significant differences that result from various choices of quantum number assignments and symmetry assumptions. In this type of model, lepton flavor violating rates are also preferred to be relatively large, within the reach of foreseen experiments. Interestingly enough, in our model this corresponds to a large third mixing angle, sin²θ₁₃ ≳ 0.008, when μ → eee is required to lie below its present experimental limit.

  6. The doubled CO2 climate and the sensitivity of the modeled hydrologic cycle

    NASA Technical Reports Server (NTRS)

    Rind, D.

    1988-01-01

    Four doubled CO2 experiments with the GISS general circulation model are compared to investigate the consistency of changes in water availability over the United States. The experiments compare the influence of model sensitivity, model resolution, and the sea-surface temperature gradient. The results show that the general mid-latitude drying over land is dependent upon the degree of mid-latitude eddy energy decrease, and thus the degree of high-latitude temperature change amplification. There is a general tendency in the experiments for the northern and western United States to become wetter, while the southern and eastern portions dry. However, there is much variability from run to run, with different regions showing different degrees of sensitivity to the parameters tested. The results for the western United States depend most on model resolution; those for the central United States, on the sea-surface temperature gradient and the degree of mid-latitude ocean warming; and those for the eastern United States, on model sensitivity. The changes in particular seasons depend on changes in other seasons, and will therefore be sensitive to the realism of the ground hydrology parameterization.

  7. Deformed shell model results for neutrinoless double beta decay of nuclei in A = 60 - 90 region

    NASA Astrophysics Data System (ADS)

    Sahu, R.; Kota, V. K. B.

    2015-03-01

    Nuclear transition matrix elements (NTME) for the neutrinoless double beta decay (0νββ or 0νDBD) of 70Zn, 80Se and 82Se nuclei are calculated within the framework of the deformed shell model (DSM) based on Hartree-Fock (HF) states. For 70Zn, the jj44b interaction in the 2p3/2, 1f5/2, 2p1/2 and 1g9/2 space with 56Ni as the core is employed. However, for 80Se and 82Se, a modified Kuo interaction with the above core and model space is employed. Most of our calculations in this region were performed with this effective interaction; however, the jj44b interaction has been found to be better for 70Zn. The above model space was used in many recent shell model (SM) and interacting boson model (IBM) calculations for nuclei in this region. After ensuring that DSM gives a good description of the spectroscopic properties of the low-lying levels in the three nuclei considered, the NTME are calculated. The deduced half-lives with these NTME, assuming a neutrino mass of 1 eV, are 1.1 × 10²⁶, 2.3 × 10²⁷ and 2.2 × 10²⁴ yr for 70Zn, 80Se and 82Se, respectively.

  9. A GENERAL CIRCULATION MODEL FOR GASEOUS EXOPLANETS WITH DOUBLE-GRAY RADIATIVE TRANSFER

    SciTech Connect

    Rauscher, Emily; Menou, Kristen

    2012-05-10

    We present a new version of our code for modeling the atmospheric circulation on gaseous exoplanets, now employing a 'double-gray' radiative transfer scheme, which self-consistently solves for fluxes and heating throughout the atmosphere, including the emerging (observable) infrared flux. We separate the radiation into infrared and optical components, each with its own absorption coefficient, and solve standard two-stream radiative transfer equations. We use a constant optical absorption coefficient, while the infrared coefficient can scale as a power law with pressure; however, for simplicity, the results shown in this paper use a constant infrared coefficient. Here we describe our new code in detail and demonstrate its utility by presenting a generic hot Jupiter model. We discuss issues related to modeling the deepest pressures of the atmosphere and describe our use of the diffusion approximation for radiative fluxes at high optical depths. In addition, we present new models using a simple form for magnetic drag on the atmosphere. We calculate emitted thermal phase curves and find that our drag-free model has the brightest region of the atmosphere offset by ≈12° from the substellar point and a minimum flux that is 17% of the maximum, while the model with the strongest magnetic drag has an offset of only ≈2° and a ratio of 13%. Finally, we calculate rates of numerical loss of kinetic energy at ≈15% for every model except for our strong-drag model, where there is no measurable loss; we speculate that this is due to the much decreased wind speeds in that model.

  10. On the Critical Behavior of Hermitian f-MATRIX Models in the Double Scaling Limit with f ≥ 3

    NASA Astrophysics Data System (ADS)

    Balaska, S.; Maeder, J.; Rühl, W.

    An algorithm for the isolation of any singularity of f-matrix models in the double scaling limit is presented. In particular it is proved by construction that only those universality classes exist that are known from two-matrix models.

  11. Validity of the "thin" and "thick" double-layer assumptions to model streaming currents in porous media

    NASA Astrophysics Data System (ADS)

    Leinov, E.; Jackson, M.

    2012-12-01

    Measurements of the streaming potential component of the spontaneous potential have been used to characterize groundwater flow and subsurface hydraulic properties in numerous studies. Streaming potentials in porous media arise from the electrical double layer which forms at solid-fluid interfaces. The solid surfaces typically become electrically charged, in which case an excess of counter-charge accumulates in the adjacent fluid. If the fluid is induced to flow by an external pressure gradient, then some of the excess charge within the diffuse part of the double layer is transported with the flow, giving rise to a streaming current. Divergence of the streaming current density establishes an electrical potential, termed the streaming potential. Within the diffuse layer, the Poisson-Boltzmann equation is typically used to describe the variation in electrical potential with distance from the solid surface. In many subsurface settings, it is reasonable to assume that the thickness of the diffuse layer is small compared to the pore radius. This is the so-called 'thin double layer assumption', which has been invoked by numerous authors to model streaming potentials in porous media. However, a number of recent papers have proposed a different approach, in which the thickness of the diffuse layer is assumed to be large compared to the pore radius. This is the so-called 'thick double layer assumption' in which the excess charge density within the pore is assumed to be constant and independent of distance from the solid surface. The advantage of both the 'thin' and 'thick' double layer assumptions is that calculation of the streaming current is greatly simplified. However, perhaps surprisingly, the conditions for which these assumptions are valid have not been determined quantitatively, yet they have a significant impact on the interpretation of streaming potential measurements in natural systems. 
We use a simple capillary-tube model to investigate the validity of the thin and thick double-layer assumptions.
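
Whether the 'thin' or 'thick' assumption applies can be sketched by comparing the Debye screening length with the pore radius. The physical constants below are standard; the salinity and pore-size values, and the cut-off ratios, are illustrative assumptions.

```python
import math

def debye_length(ionic_strength_mol_per_L, T=298.15, eps_r=78.5):
    """Debye screening length (m) of a symmetric aqueous electrolyte."""
    e = 1.602176634e-19          # elementary charge (C)
    kB = 1.380649e-23            # Boltzmann constant (J/K)
    eps0 = 8.8541878128e-12      # vacuum permittivity (F/m)
    NA = 6.02214076e23           # Avogadro constant (1/mol)
    I = ionic_strength_mol_per_L * 1000.0    # mol/m^3
    return math.sqrt(eps_r * eps0 * kB * T / (2 * NA * e ** 2 * I))

def regime(salinity, pore_radius):
    """Crude classification; the 0.01 and 1.0 cut-offs are assumptions."""
    ratio = debye_length(salinity) / pore_radius
    if ratio < 0.01:
        return "thin"
    if ratio > 1.0:
        return "thick"
    return "intermediate"
```

At seawater-like salinities the diffuse layer is about a nanometre thick, so micrometre-scale pores are firmly in the thin-double-layer regime; only dilute fluids in very fine pores approach the thick-layer limit.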

  12. Modeling of High-Frequency Noise in III-V Double-Gate HFETs

    NASA Astrophysics Data System (ADS)

    Vasallo, B. G.

    2009-04-01

    In this paper, we present a review of recent results on Monte Carlo modeling of high-frequency noise in III-V four-terminal devices. In particular, a study of the noise behavior of InAlAs/InGaAs Double-Gate High Electron Mobility Transistors (DG-HEMTs), operating in common mode, and Velocity Modulation Transistors (VMT), operating in differential mode, has been performed taking as a reference a similar standard HEMT. In the DG-HEMT, the intrinsic P, R and C parameters show a modest improvement, but the extrinsic minimum noise figure NFmin reveals a significantly better extrinsic noise performance due to the lower resistances of the gate contact and the source and drain accesses. In the VMT, very high values of P are obtained since the transconductance is very small, while the differential-mode operation leads to extremely low values of R.

  13. Critical two-dimensional Ising model with free, fixed ferromagnetic, fixed antiferromagnetic, and double antiferromagnetic boundaries.

    PubMed

    Wu, Xintian; Izmailyan, Nickolay

    2015-01-01

    The critical two-dimensional Ising model is studied with four types of boundary conditions: free, fixed ferromagnetic, fixed antiferromagnetic, and fixed double antiferromagnetic. Using bond propagation algorithms with surface fields, we obtain the free energy, internal energy, and specific heat numerically on square lattices with a square shape and various combinations of the four types of boundary conditions. The calculations are carried out on square lattices of size N×N and 30
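
The effect of fixed boundaries can be sketched with a small Metropolis simulation in which frozen outside spins act as surface fields; for brevity only the uniform (ferromagnetic) fixed boundary is shown, and the lattice size, temperature and sweep count are illustrative, far smaller than in the paper (which uses bond propagation, not Monte Carlo).

```python
import math, random

def magnetization(L=10, T=1.5, sweeps=400, boundary=1, seed=1):
    """Metropolis sweeps on an L x L square lattice whose outside
    neighbours are frozen at `boundary` (uniform fixed walls)."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def neighbour_sum(i, j):
        total = 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            total += s[a][b] if 0 <= a < L and 0 <= b < L else boundary
        return total

    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                dE = 2 * s[i][j] * neighbour_sum(i, j)   # energy cost of a flip
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    s[i][j] = -s[i][j]
    return sum(sum(row) for row in s) / L ** 2
```

Below the critical temperature the fixed walls select the ordered phase, so the bulk magnetization follows the sign of the boundary spins.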

  14. Effective field study of ising model on a double perovskite structure

    NASA Astrophysics Data System (ADS)

    Ngantso, G. Dimitri; El Amraoui, Y.; Benyoussef, A.; El Kenz, A.

    2017-02-01

    By using the effective field theory (EFT), the mixed spin-1/2 and spin-3/2 Ising ferrimagnetic model adapted to a double perovskite structure has been studied. The EFT calculations have been carried out from the Ising Hamiltonian by taking into account first and second nearest-neighbor interactions and the crystal and external magnetic fields. Both first- and second-order phase transitions have been found in the phase diagrams of interest. Depending on crystal-field values, the thermodynamic behavior of the total magnetization indicated the existence of a compensation phenomenon. The hysteresis behaviors are studied by investigating the reduced magnetic field dependence of the total magnetization, and a series of hysteresis loops are shown for different reduced temperatures around the critical one.

  15. A novel double loop control model design for chemical unstable processes.

    PubMed

    Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He

    2014-03-01

    In this manuscript, based on the Smith predictor control scheme for unstable processes in industry, an improved double loop control model is proposed for chemical unstable processes. The inner loop stabilizes the unstable process and transforms it into a stable first-order plus dead-time process. The outer loop enhances the set-point response, and a disturbance controller is designed to improve the disturbance response. The improved control structure is simple, has a clear physical meaning, and yields a characteristic equation that is easy to stabilize. The three controllers in the scheme are designed separately, each against its own closed-loop transfer function, which simplifies the design and gives good performance. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods.
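
The role of the inner loop can be sketched on the canonical unstable first-order process P(s) = K/(Ts − 1): a proportional inner feedback u = k_c(r − x) stabilizes it whenever K·k_c > 1, while the open-loop process diverges. The dead-time element and the outer and disturbance controllers are omitted, and the parameter values are illustrative assumptions.

```python
def simulate(k_c=None, K=2.0, T=1.0, dt=0.01, steps=2000):
    """Forward-Euler simulation of the unstable process P(s) = K/(T s - 1),
    i.e. T x' = x + K u, with unit set point r = 1.  With k_c=None the loop
    is open (u = r) and the state diverges; a proportional inner loop
    u = k_c * (r - x) stabilizes it whenever K * k_c > 1."""
    x, r = 0.0, 1.0
    for _ in range(steps):
        u = k_c * (r - x) if k_c is not None else r
        x += dt * (x + K * u) / T
        if abs(x) > 1e6:          # stop once divergence is evident
            break
    return x
```

With K = 2 and k_c = 1 the stabilized inner loop settles at the steady state K·k_c/(K·k_c − 1) = 2, while the open loop grows without bound.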

  16. Prediction of orbiter RSI tile gap heating ratios from NASA/Ames double wedge model test

    NASA Technical Reports Server (NTRS)

    1978-01-01

    In-depth gap heating ratios for Orbiter RSI tile sidewalls were predicted based on near steady-state temperature measurements obtained from double wedge model tests. An analysis was performed to derive gap heating ratios which would result in the best fit of the test data, provide an assessment of open-gap response, and supply the definition of gap-filler requirements on the Orbiter. A comparison was made of these heating ratios with previously derived ratios in order to verify the extrapolation of the wing glove data to Orbiter flight conditions. The analysis was performed with the Rockwell TPS Multidimensional Heat Conduction Program for a 3-D, 2.0-inch thick flat RSI tile with 255 nodal points. Data from 14 tests were used to correlate with the analysis. The results show that the best-fit heating ratios at the station farthest upstream on the model for most gap depths were less than the extrapolated values of the wing glove model heating ratios. For the station farthest downstream on the model, the baseline heating ratios adequately predicted or over-predicted the test data.

  17. Double Higgs boson production and decay in Randall-Sundrum model at hadron colliders

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Juan; Ma, Wen-Gan; Zhang, Ren-You; Li, Xiao-Zhou; Guo, Lei; Chen, Chong

    2015-12-01

    We investigate the double Higgs production and decay at the 14 TeV LHC and 33 TeV HE-LHC in both the standard model (SM) and the Randall-Sundrum (RS) model. In our calculation we reasonably consider only the contribution of the lightest two Kaluza-Klein (KK) gravitons. We present the integrated cross sections and some kinematic distributions in both models. Our results show that the RS effect in the vicinity of MHH ∼ M1, M2 (the masses of the lightest two KK gravitons) or in the central Higgs rapidity region is quite significant, and can be extracted from the heavy SM background by imposing proper kinematic cuts on the final particles. We also study the dependence of the cross section on the RS model parameters, the first KK graviton mass M1 and the effective coupling c0, and find that the RS effect decreases noticeably as M1 increases or c0 decreases.

  18. A Test of the Double-Strand Break Repair Model for Meiotic Recombination in Saccharomyces Cerevisiae

    PubMed Central

    Gilbertson, L. A.; Stahl, F. W.

    1996-01-01

    We tested predictions of the double-strand break repair (DSBR) model for meiotic recombination by examining the segregation patterns of small palindromic insertions, which frequently escape mismatch repair when in heteroduplex DNA. The palindromes flanked a well characterized DSB site at the ARG4 locus. The "canonical" DSBR model, in which only 5' ends are degraded and resolution of the four-stranded intermediate is by Holliday junction resolvase, predicts that hDNA will frequently occur on both participating chromatids in a single event. Tetrads reflecting this configuration of hDNA were rare. In addition, a class of tetrads not predicted by the canonical DSBR model was identified. This class represented events that produced hDNA in a "trans" configuration, on opposite strands of the same duplex on the two sides of the DSB site. Whereas most classes of convertant tetrads had typical frequencies of associated crossovers, tetrads with trans hDNA were parental for flanking markers. Modified versions of the DSBR model, including one that uses a topoisomerase to resolve the canonical DSBR intermediate, are supported by these data. PMID:8878671

  19. A Double-Canyon Radiation Scheme for Multi-Layer Urban Canopy Models

    NASA Astrophysics Data System (ADS)

    Schubert, Sebastian; Grossman-Clarke, Susanne; Martilli, Alberto

    2012-12-01

    We develop a double-canyon radiation scheme (DCEP) for urban canopy models embedded in mesoscale numerical models based on the Building Effect Parametrization (BEP). The new scheme calculates the incoming and outgoing longwave and shortwave radiation for roof, wall and ground surfaces for an urban street canyon characterized by its street and building width, canyon length, and the building height distribution. The scheme introduces the radiative interaction of two neighbouring urban canyons allowing the full inclusion of roofs into the radiation exchange both inside the canyon and with the sky. In contrast to BEP, we also treat direct and diffuse shortwave radiation from the sky independently, thus allowing calculation of the effective parameters representing the urban diffuse and direct shortwave radiation budget inside the mesoscale model. Furthermore, we close the energy balance of incoming longwave and diffuse shortwave radiation from the sky, so that the new scheme is physically more consistent than the BEP scheme. Sensitivity tests show that these modifications are important for urban regions with a large variety of building heights. The evaluation against data from the Basel Urban Boundary Layer Experiment indicates a good performance of the DCEP when coupled with the regional weather and climate model COSMO-CLM.

  20. Nursing research on a first aid model of double personnel for major burn patients.

    PubMed

    Wu, Weiwei; Shi, Kai; Jin, Zhenghua; Liu, Shuang; Cai, Duo; Zhao, Jingchun; Chi, Cheng; Yu, Jiaao

    2015-03-01

    This study explored the effect of a first aid model employing two nurses on the efficient rescue operation time and the efficient resuscitation time for major burn patients. A two-nurse model of first aid was designed for major burn patients. The model includes a division of labor between the first aid nurses and the re-organization of emergency carts. The clinical effectiveness of the process was examined in a retrospective chart review of 156 cases of major burn patients, experiencing shock and low blood volume, who were admitted to the intensive care unit of the department of burn surgery between November 2009 and June 2013. Of the 156 major burn cases, 87 patients who received first aid using the double personnel model were assigned to the test group and 69 patients who received first aid using the standard first aid model were assigned to the control group. The efficient rescue operation time and the efficient resuscitation time for the patients were compared between the two groups. Student's t-tests were used to compare the mean differences between the groups. Statistically significant differences between the two groups were found on both measures (both Ps < 0.05), with the test group having lower times than the control group. The efficient rescue operation time was 14.90 ± 3.31 min in the test group and 30.42 ± 5.65 min in the control group. The efficient resuscitation time was 7.4 ± 3.2 h in the test group and 9.5 ± 2.7 h in the control group. A two-nurse first aid model based on scientifically validated procedures and a reasonable division of labor can shorten the efficient rescue operation time and the efficient resuscitation time for major burn patients. Given these findings, the model appears to be worthy of clinical application.
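
The reported group comparison can be reproduced from the summary statistics above using Welch's form of the t statistic, which does not assume equal variances (the paper's exact test variant may differ):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and approximate (Welch-Satterthwaite) degrees
    of freedom from group means, standard deviations and sizes."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# rescue operation time: 14.90 +/- 3.31 min (n=87) vs 30.42 +/- 5.65 min (n=69)
t, df = welch_t(14.90, 3.31, 87, 30.42, 5.65, 69)   # |t| ~ 20, df ~ 104
```

A statistic of this magnitude is far beyond any conventional critical value, consistent with the reported significance.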

  1. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
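
    The correlation-based selection over learned weight vectors can be sketched as follows. This is a hypothetical minimal illustration: the greedy keep-first strategy, the `select_uncorrelated` helper, and the 0.9 threshold are assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_uncorrelated(W, threshold=0.9):
    """Greedily keep weight vectors (rows of W) whose absolute correlation
    with every already-kept vector stays below the threshold."""
    corr = np.corrcoef(W)                     # pairwise correlations of hidden-unit weights
    keep = []
    for i in range(W.shape[0]):
        if all(abs(corr[i, j]) < threshold for j in keep):
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 25))           # 4 distinct 5x5 patch filters, flattened
dup = base[0] + 0.01 * rng.standard_normal(25)  # near-duplicate of filter 0
W = np.vstack([base, dup])
kept = select_uncorrelated(W)                 # the redundant fifth filter is dropped
```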

  2. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    PubMed Central

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612
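
    The convolution-then-LSTM pipeline described here can be sketched with plain NumPy. The shapes, filter sizes, and single-layer structure below are illustrative assumptions, not the authors' architecture: a temporal convolution extracts features from a multichannel sensor window, and an LSTM consumes the resulting feature sequence.

```python
import numpy as np

def conv1d(x, w):
    """Valid 1D convolution along time for multichannel input.
    x: (T, C) window of sensor samples; w: (K, C, F) filters."""
    K, C, F = w.shape
    T = x.shape[0]
    out = np.empty((T - K + 1, F))
    for t in range(T - K + 1):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1]))
    return out

def lstm(seq, Wx, Wh, b, H):
    """Run a minimal LSTM over seq (T, D); return the last hidden state."""
    h = np.zeros(H); c = np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in seq:
        z = Wx @ x + Wh @ h + b               # (4H,) gate pre-activations
        i, f, g, o = np.split(z, 4)           # input, forget, cell, output gates
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
    return h

rng = np.random.default_rng(1)
window = rng.standard_normal((24, 3))         # 24 samples from 3 sensor channels
feats = conv1d(window, rng.standard_normal((5, 3, 8)) * 0.1)   # (20, 8) features
h = lstm(feats, rng.standard_normal((32, 8)) * 0.1,
         rng.standard_normal((32, 8)) * 0.1, np.zeros(32), 8)  # (8,) summary state
```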

  3. Performance of convolutional codes on fading channels typical of planetary entry missions

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.; Reale, T. J.

    1974-01-01

    The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding, while for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes the bit error probability performance was investigated as a function of E sub b/N sub o parameterized by the fading channel parameters. For longer constraint length codes the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effect of simple block interleaving in combatting the memory of the channel is explored using both analysis and digital computer simulation.
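
    Block interleaving combats channel memory by spreading a burst of faded symbols across the deinterleaved stream, so the decoder sees isolated rather than consecutive errors. A minimal sketch (the 3x4 block size and the -1 erasure marker are illustrative choices, not from the paper):

```python
import numpy as np

def interleave(bits, rows, cols):
    """Block interleaver: write row-wise, read column-wise."""
    return np.asarray(bits).reshape(rows, cols).T.ravel()

def deinterleave(bits, rows, cols):
    """Inverse of interleave: write column-wise, read row-wise."""
    return np.asarray(bits).reshape(cols, rows).T.ravel()

coded = np.arange(12)            # stand-in for a block of coded symbols
sent = interleave(coded, 3, 4)
sent[4:7] = -1                   # a deep fade wipes out 3 consecutive symbols
recovered = deinterleave(sent, 3, 4)
# After deinterleaving, the three erasures land at non-adjacent positions.
```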

  4. Super-resolution reconstruction algorithm based on adaptive convolution kernel size selection

    NASA Astrophysics Data System (ADS)

    Gao, Hang; Chen, Qian; Sui, Xiubao; Zeng, Junjie; Zhao, Yao

    2016-09-01

    Restricted by detector technology and the optical diffraction limit, the spatial resolution of infrared imaging systems is difficult to improve significantly. Super-resolution (SR) reconstruction is an effective way to address this problem. Among SR methods, algorithms based on multichannel blind deconvolution (MBD) estimate the convolution kernel from the low resolution observation images alone, using appropriate regularization constraints introduced from a priori assumptions, to realize high resolution image restoration. Such algorithms have been shown effective when the channels are coprime. In this paper, we use significant edges to estimate the convolution kernel and introduce an adaptive convolution kernel size selection mechanism to address the uncertainty of the convolution kernel size in MBD processing. To reduce the interference of noise, we amend the convolution kernel in an iterative process, and finally restore a clear image. Experimental results show that the algorithm meets the convergence requirement of the convolution kernel estimation.

  5. The onset of double diffusive convection in a viscoelastic fluid-saturated porous layer with non-equilibrium model.

    PubMed

    Yang, Zhixin; Wang, Shaowei; Zhao, Moli; Li, Shucai; Zhang, Qiangyong

    2013-01-01

    The onset of double diffusive convection in a viscoelastic fluid-saturated porous layer is studied when the fluid and solid phases are not in local thermal equilibrium. The modified Darcy model is used for the momentum equation, and a two-field model is used for the energy equation, with separate equations representing the fluid and solid phases. The effect of thermal non-equilibrium on the onset of double diffusive convection is discussed. The critical Rayleigh number and the corresponding wave number for the exchange of stability and over-stability are obtained, and the onset criterion for stationary and oscillatory convection is derived analytically and discussed numerically.

  6. Resolving the double tension: Toward a new approach to measurement modeling in cross-national research

    NASA Astrophysics Data System (ADS)

    Medina, Tait Runnfeldt

    The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that if not adequately recognized and properly addressed undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three commonly used measurement models in cross-national analysis---summative scale, pooled factor model, and multiple-group factor model with measurement invariance. Using the orienting lens of the double tension I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models---multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014)---are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and

  7. Lagrange-type modeling of continuous dielectric permittivity variation in double-higher-order volume integral equation method

    NASA Astrophysics Data System (ADS)

    Chobanyan, E.; Ilić, M. M.; Notaroš, B. M.

    2015-05-01

    A novel double-higher-order entire-domain volume integral equation (VIE) technique for efficient analysis of electromagnetic structures with continuously inhomogeneous dielectric materials is presented. The technique takes advantage of large curved hexahedral discretization elements—enabled by double-higher-order modeling (higher-order modeling of both the geometry and the current)—in applications involving highly inhomogeneous dielectric bodies. Lagrange-type modeling of an arbitrary continuous variation of the equivalent complex permittivity of the dielectric throughout each VIE geometrical element is implemented, in place of piecewise homogeneous approximate models of the inhomogeneous structures. The technique combines the features of the previous double-higher-order piecewise homogeneous VIE method and continuously inhomogeneous finite element method (FEM). This appears to be the first implementation and demonstration of a VIE method with double-higher-order discretization elements and conformal modeling of inhomogeneous dielectric materials embedded within elements that are also higher (arbitrary) order (with arbitrary material-representation orders within each curved and large VIE element). The new technique is validated and evaluated by comparisons with a continuously inhomogeneous double-higher-order FEM technique, a piecewise homogeneous version of the double-higher-order VIE technique, and a commercial piecewise homogeneous FEM code. The examples include two real-world applications involving continuously inhomogeneous permittivity profiles: scattering from an egg-shaped melting hailstone and near-field analysis of a Luneburg lens, illuminated by a corrugated horn antenna. The results show that the new technique is more efficient and ensures considerable reductions in the number of unknowns and computational time when compared to the three alternative approaches.

  8. Global/Regional Integrated Model System (GRIMs): Double Fourier Series (DFS) Dynamical Core

    NASA Astrophysics Data System (ADS)

    Koo, M.; Hong, S.

    2013-12-01

    A multi-scale atmospheric/oceanic model system with unified physics, the Global/Regional Integrated Model system (GRIMs), has been created for use in numerical weather prediction, seasonal simulations, and climate research projects, from global to regional scales. It includes not only the model code, but also test cases and scripts. The model system is developed and exercised through both operational and research applications. We outline the history of GRIMs, its current applications, and plans for future development, providing a summary useful to present and future users. In addition to the traditional spherical harmonics (SPH) dynamical core, a new spectral method with a double Fourier series (DFS) is available in GRIMs (Table 1). The new DFS dynamical core with full physics is evaluated against the SPH dynamical core in terms of short-range forecast capability for a heavy rainfall event and within a seasonal simulation framework. Comparison of the two dynamical cores demonstrates that the new DFS dynamical core exhibits performance comparable to the SPH core in terms of simulated climatology accuracy and the forecast of a heavy rainfall event. Most importantly, the DFS algorithm guarantees improved computational efficiency on cluster computers as the model resolution increases, consistent with theoretical values computed from the dry primitive equation model framework of Cheong (Fig. 1). The current study shows that, at higher resolutions, the DFS approach can be a competitive dynamical core because it provides the advantages of both the spectral method, for high numerical accuracy, and the grid-point method, for high-performance computing at reduced computational cost.

  9. The double nucleation model for sickle cell haemoglobin polymerization: full integration and comparison with experimental data.

    PubMed

    Medkour, Terkia; Ferrone, Frank; Galactéros, Frédéric; Hannaert, Patrick

    2008-06-01

    Sickle cell haemoglobin (HbS) polymerization reduces erythrocyte deformability, causing deleterious vaso-occlusions. The double-nucleation model states that polymers grow from HbS aggregates, the nuclei, (i) in solution (homogeneous nucleation) and (ii) onto existing polymers (heterogeneous nucleation). When linearized at the initial HbS concentration, this model predicts early polymerization and its characteristic delay-time (Ferrone et al. J Mol Biol 183(4):591-610, 611-631, 1985). To address its relevance for describing complete polymerization, we constructed the full, non-linearized model (Simulink, The MathWorks). Here, we compare the simulated outputs to experimental progress curves (n = 6-8 different [HbS], 3-6 mM range, from Ferrone's group). Within 10% from start, the average root mean square (rms) deviation between simulated and experimental curves is 0.04 +/- 0.01 (25 degrees C, n = 8; mean +/- standard error). Conversely, for complete progress curves, the average rms is 0.48 +/- 0.04. This figure is improved to 0.13 +/- 0.01 by adjusting the heterogeneous pathway parameters (p < 0.01): the nucleus stability (sigma(2)mu(cc): +40%) and the fraction of polymer surface available for nucleation (phi), from 5e-7 (3 mM) to 13 (6 mM). Similar results are obtained at 37 degrees C. We conclude that the physico-chemical description of heterogeneous nucleation warrants refinement in order to capture the whole HbS polymerization process.

  10. Modeling of structure of double-phase low-carbon chromium steels

    NASA Astrophysics Data System (ADS)

    Zolotarevskii, N. Yu.; Titovets, Yu. F.; Samoilov, A. N.; Hribernig, G.; Pichler, A.

    2007-01-01

    A physical model is suggested for determining the relative amounts of the phase components and the size of ferrite grains after the decomposition of austenite during cooling of double-phase steels. The main products of the austenite transformation, i.e., polygonal ferrite, pearlite, bainite, and martensite, are considered. The driving forces of the transformation and the concentration of carbon at the phase boundary are determined using methods of computational thermodynamics. The model is based on equations of the classical theory of nucleation and growth. It accounts for the structural features of the γ → α transformation and contains some empirical parameters. The latter are determined using data from dilatometric measurements of the kinetics of the austenite transformation and metallographic measurements of the size of ferrite grains. The model is used for predicting the kinetics of the transformation under the complex cooling conditions implemented at the VOEST-ALPINE STAHL LINZ GmbH rolling mill, within the computer system for control of the mechanical properties of hot-rolled strip.

  11. Comparison of two models of a double inlet miniature pulse tube refrigerator: Part A thermodynamics

    NASA Astrophysics Data System (ADS)

    Nika, Philippe; Bailly, Yannick

    2002-10-01

    The cooling of electronic components is of great interest for improving their capabilities, especially for CMOS components and infrared sensors. The purpose of this paper is to present the design and optimization of a miniature double inlet pulse tube refrigerator (DIPTR) dedicated to such applications. Special precautions have to be taken in modeling the overall operation of small-scale DIPTR systems and in estimating the net cooling power. In fact, thermal gradients are greater than those observed in normal-scale systems; moreover, because of the small duct diameters, the pulse tube cannot be assumed to be adiabatic, so thermal conduction phenomena must be considered. In addition, dead volumes introduced by junctions and capillaries can no longer be neglected relative to the volume of the tube itself. The hydrodynamic and thermal behaviors of the cooler are predicted by means of two different approaches: a classical thermodynamic model and a model based on an electrical analogy. The results of these analyses are tested and critiqued by comparing them with experimental data obtained on a small commercial pulse tube refrigerator.

  12. Double Cluster Heads Model for Secure and Accurate Data Fusion in Wireless Sensor Networks

    PubMed Central

    Fu, Jun-Song; Liu, Yun

    2015-01-01

    Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called the Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Unlike traditional clustering models in WSNs, two cluster heads are selected for each cluster after clustering, based on the reputation and trust system, and they perform data fusion independently of each other. The results are then sent to the base station, where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds the threshold preset by the users, the cluster heads are added to a blacklist, and the cluster heads must be reelected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which helps identify and delete compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performed very well in data fusion security and accuracy. PMID:25608211
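
    The base station's consistency check between the two cluster heads can be sketched as follows. The relative-difference measure, the threshold value, and the injected-reading scenario are illustrative assumptions, not the paper's exact dissimilarity coefficient.

```python
import numpy as np

def dissimilarity(f1, f2):
    """Relative disagreement between the two cluster heads' fusion results."""
    return np.abs(f1 - f2) / np.maximum(np.abs(f1), np.abs(f2))

readings = np.array([20.1, 19.8, 20.3, 20.0])        # in-cluster sensor readings
head_a = readings.mean()                             # honest fusion (simple mean)
head_b = readings.mean()                             # honest fusion by the second head
tampered = np.append(readings, 45.0).mean()          # a compromised head injects data

THRESH = 0.05
ok = dissimilarity(head_a, head_b) <= THRESH         # honest heads agree
flagged = dissimilarity(head_a, tampered) > THRESH   # tampering exceeds threshold
```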

  13. Bioprinting of 3D Convoluted Renal Proximal Tubules on Perfusable Chips

    PubMed Central

    Homan, Kimberly A.; Kolesky, David B.; Skylar-Scott, Mark A.; Herrmann, Jessica; Obuobi, Humphrey; Moisan, Annie; Lewis, Jennifer A.

    2016-01-01

    Three-dimensional models of kidney tissue that recapitulate human responses are needed for drug screening, disease modeling, and, ultimately, kidney organ engineering. Here, we report a bioprinting method for creating 3D human renal proximal tubules in vitro that are fully embedded within an extracellular matrix and housed in perfusable tissue chips, allowing them to be maintained for greater than two months. Their convoluted tubular architecture is circumscribed by proximal tubule epithelial cells and actively perfused through the open lumen. These engineered 3D proximal tubules on chip exhibit significantly enhanced epithelial morphology and functional properties relative to the same cells grown on 2D controls with or without perfusion. Upon introducing the nephrotoxin, Cyclosporine A, the epithelial barrier is disrupted in a dose-dependent manner. Our bioprinting method provides a new route for programmably fabricating advanced human kidney tissue models on demand. PMID:27725720

  14. Bioprinting of 3D Convoluted Renal Proximal Tubules on Perfusable Chips

    NASA Astrophysics Data System (ADS)

    Homan, Kimberly A.; Kolesky, David B.; Skylar-Scott, Mark A.; Herrmann, Jessica; Obuobi, Humphrey; Moisan, Annie; Lewis, Jennifer A.

    2016-10-01

    Three-dimensional models of kidney tissue that recapitulate human responses are needed for drug screening, disease modeling, and, ultimately, kidney organ engineering. Here, we report a bioprinting method for creating 3D human renal proximal tubules in vitro that are fully embedded within an extracellular matrix and housed in perfusable tissue chips, allowing them to be maintained for greater than two months. Their convoluted tubular architecture is circumscribed by proximal tubule epithelial cells and actively perfused through the open lumen. These engineered 3D proximal tubules on chip exhibit significantly enhanced epithelial morphology and functional properties relative to the same cells grown on 2D controls with or without perfusion. Upon introducing the nephrotoxin, Cyclosporine A, the epithelial barrier is disrupted in a dose-dependent manner. Our bioprinting method provides a new route for programmably fabricating advanced human kidney tissue models on demand.

  15. Analytical model for random dopant fluctuation in double-gate MOSFET in the subthreshold region using macroscopic modeling method

    NASA Astrophysics Data System (ADS)

    Shin, Yong Hyeon; Yun, Ilgu

    2016-12-01

    An analytical model is proposed for the random dopant fluctuation (RDF) in a symmetric double-gate metal-oxide-semiconductor field-effect-transistor (DG MOSFET) in the subthreshold region. Unintended impurity dopants cannot be entirely prevented during device fabrication; hence, it is important to analytically model the fluctuations in the electrical characteristics caused by these impurity dopants. Therefore, a macroscopic modeling method is applied to represent the impurity dopants in DG MOSFETs. With this method, the two-dimensional (2D) Poisson equation is separated into a basic analytical DG MOSFET model with channel doping concentration NA and an impurity-dopant-related term with local doping concentration NRD confined to a specific rectangular area. To solve the second term, the analytically solvable 2D Green's function for DG MOSFETs is used. Through calculation of the channel potential (ϕ(x, y)), the variations in the drive current (IDS) and threshold voltage (Vth) are extracted from the analytical model. All results from the analytical model for an impurity dopant in a DG MOSFET are verified by comparison with commercially available 2D numerical simulation results, with respect to various oxide thicknesses (tox), channel lengths (L), and locations of impurity dopants.

  16. Operational and convolution properties of three-dimensional Fourier transforms in spherical polar coordinates.

    PubMed

    Baddour, Natalie

    2010-10-01

    For functions that are best described with spherical coordinates, the three-dimensional Fourier transform can be written in spherical coordinates as a combination of spherical Hankel transforms and spherical harmonic series. However, to be as useful as its Cartesian counterpart, a spherical version of the Fourier operational toolset is required for the standard operations of shift, multiplication, convolution, etc. This paper derives the spherical version of the standard Fourier operation toolset. In particular, convolution in various forms is discussed in detail as this has important consequences for filtering. It is shown that standard multiplication and convolution rules do apply as long as the correct definition of convolution is applied.
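
    As a reminder of the Cartesian rule that this paper generalizes to spherical coordinates, the standard 1D convolution theorem (pointwise multiplication in the Fourier domain equals circular convolution in the signal domain) can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# Circular convolution computed directly from its definition...
direct = np.array([sum(f[m] * g[(n - m) % 64] for m in range(64))
                   for n in range(64)])

# ...and via the convolution theorem: FFT, multiply, inverse FFT.
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
```

    The two results agree to floating-point precision; the paper's contribution is establishing when and in what form the analogous rule holds for spherical Hankel transforms and spherical harmonic series.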

  17. Slow rise and partial eruption of a double-decker filament. II. A double flux rope model

    SciTech Connect

    Kliem, Bernhard; Török, Tibor; Titov, Viacheslav S.; Lionello, Roberto; Linker, Jon A.; Liu, Rui; Liu, Chang; Wang, Haimin

    2014-09-10

    Force-free equilibria containing two vertically arranged magnetic flux ropes of like chirality and current direction are considered as a model for split filaments/prominences and filament-sigmoid systems. Such equilibria are constructed analytically through an extension of the methods developed in Titov and Démoulin and numerically through an evolutionary sequence including shear flows, flux emergence, and flux cancellation in the photospheric boundary. It is demonstrated that the analytical equilibria are stable if an external toroidal (shear) field component exceeding a threshold value is included. If this component decreases sufficiently, then both flux ropes turn unstable for conditions typical of solar active regions, with the lower rope typically becoming unstable first. Either both flux ropes erupt upward, or only the upper rope erupts while the lower rope reconnects with the ambient flux low in the corona and is destroyed. However, for shear field strengths staying somewhat above the threshold value, the configuration also admits evolutions which lead to partial eruptions with only the upper flux rope becoming unstable and the lower one remaining in place. This can be triggered by a transfer of flux and current from the lower to the upper rope, as suggested by the observations of a split filament in Paper I. It can also result from tether-cutting reconnection with the ambient flux at the X-type structure between the flux ropes, which similarly influences their stability properties in opposite ways. This is demonstrated for the numerically constructed equilibrium.

  18. Convolution Algebra for Fluid Modes with Finite Energy

    DTIC Science & Technology

    1992-04-01

    PHILLIPS LABORATORY, AIR FORCE SYSTEMS COMMAND, UNITED STATES AIR FORCE, HANSCOM AIR FORCE BASE, MASSACHUSETTS 01731-5000. 94-22604. [OCR fragment of technical report abstract] ... with finite spatial and temporal extents. At Boston University, we have developed a full form of wavelet expansion which has the advantage over more ... [garbled wavelet-expansion formula] ... The convolution of two ...

  19. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
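
    The core LIC idea, averaging a noise texture along streamlines of the vector field, can be sketched in a few lines. Fixed unit-step streamline integration, wraparound boundaries, and the kernel length are simplifying assumptions of this sketch, not the authors' algorithm.

```python
import numpy as np

def lic(vx, vy, noise, L=10):
    """Minimal line integral convolution on a regular grid: average the
    noise texture along a short streamline through each pixel."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            acc, cnt = 0.0, 0
            for sign in (1.0, -1.0):          # trace forward, then backward
                x, y = float(j), float(i)
                for _ in range(L):
                    xi, yi = int(round(x)) % w, int(round(y)) % h
                    acc += noise[yi, xi]; cnt += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    n = np.hypot(u, v) or 1.0
                    x += sign * u / n; y += sign * v / n
            out[i, j] = acc / cnt
    return out

rng = np.random.default_rng(3)
tex = rng.random((32, 32))
vx = np.ones((32, 32)); vy = np.zeros((32, 32))   # uniform horizontal flow
img = lic(vx, vy, tex)   # smeared along x: streaks reveal the flow direction
```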

  20. Convolution seal for transition duct in turbine system

    DOEpatents

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-05-26

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.

  1. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector (circumflex)E(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.
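
    The syndrome construction underlying such decoders can be illustrated for a rate-1/2 code: with generators g1(D), g2(D) and received streams r1(D), r2(D), the syndrome s(D) = r1 g2 + r2 g1 (mod 2) vanishes exactly on codewords, since c1 g2 = m g1 g2 = c2 g1. The generators and message below are illustrative, not the paper's example.

```python
import numpy as np

def poly_conv(a, b):
    """GF(2) polynomial multiplication (binary convolution mod 2)."""
    return np.convolve(a, b) % 2

# Rate-1/2 code with generators g1 = 1 + D + D^2, g2 = 1 + D^2.
g1 = np.array([1, 1, 1]); g2 = np.array([1, 0, 1])
msg = np.array([1, 0, 1, 1, 0])
c1, c2 = poly_conv(msg, g1), poly_conv(msg, g2)   # the two coded streams

def syndrome(r1, r2):
    """s(D) = r1 g2 + r2 g1 (mod 2); zero iff (r1, r2) is a codeword."""
    return (poly_conv(r1, g2) + poly_conv(r2, g1)) % 2

clean = syndrome(c1, c2)          # all zeros: no channel errors
r1 = c1.copy(); r1[3] ^= 1        # flip one received bit
dirty = syndrome(r1, c2)          # nonzero: the error is detected
```

    The decoder's job, as in the algorithm above, is then to search the coset of solutions for the minimum-weight error vector consistent with this syndrome.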

  2. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector (circumflex)E(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC.

  3. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector (circumflex)E(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC. Previously announced in STAR as N83-34964

  4. Convolution seal for transition duct in turbine system

    DOEpatents

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-03-10

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.

  5. Convolutional neural networks for synthetic aperture radar classification

    NASA Astrophysics Data System (ADS)

    Profeta, Andrew; Rodriguez, Andres; Clouse, H. Scott

    2016-05-01

    For electro-optical object recognition, convolutional neural networks (CNNs) are the state-of-the-art. For large datasets, CNNs are able to learn meaningful features used for classification. However, their application to synthetic aperture radar (SAR) has been limited. In this work we experimented with various CNN architectures on the MSTAR SAR dataset. As the input to the CNN we used the magnitude and phase (2 channels) of the SAR imagery. We used the deep learning toolboxes CAFFE and Torch7. Our results show that we can achieve 93% accuracy on the MSTAR dataset using CNNs.
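
    Forming the two-channel magnitude/phase input described here from complex-valued SAR imagery can be sketched as follows; the random complex pixels stand in for actual MSTAR chips, and the chip size is an arbitrary choice.

```python
import numpy as np

# Stand-in for a complex-valued SAR image chip (e.g., an MSTAR target chip).
rng = np.random.default_rng(4)
slc = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

# Stack magnitude and phase as the 2-channel CNN input, shape (2, H, W).
x = np.stack([np.abs(slc), np.angle(slc)])
```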

  6. Repairing a double-strand chromosome break by homologous recombination: revisiting Robin Holliday's model.

    PubMed Central

    Haber, James E; Ira, Grzegorz; Malkova, Anna; Sugawara, Neal

    2004-01-01

    Since the pioneering model for homologous recombination proposed by Robin Holliday in 1964, there has been great progress in understanding how recombination occurs at a molecular level. In the budding yeast Saccharomyces cerevisiae, one can follow recombination by physically monitoring DNA after the synchronous induction of a double-strand break (DSB) in both wild-type and mutant cells. A particularly well-studied system has been the switching of yeast mating-type (MAT) genes, where a DSB can be induced synchronously by expression of the site-specific HO endonuclease. Similar studies can be performed in meiotic cells, where DSBs are created by the Spo11 nuclease. There appear to be at least two competing mechanisms of homologous recombination: a synthesis-dependent strand annealing pathway leading to noncrossovers and a two-end strand invasion mechanism leading to formation and resolution of Holliday junctions (HJs), leading to crossovers. The establishment of a modified replication fork during DSB repair links gene conversion to another important repair process, break-induced replication. Despite recent revelations, almost 40 years after Holliday's model was published, the essential ideas he proposed of strand invasion and heteroduplex DNA formation, the formation and resolution of HJs, and mismatch repair, remain the basis of our thinking. PMID:15065659

  7. Dynamic modeling and simulation of an integral bipropellant propulsion double-valve combined test system

    NASA Astrophysics Data System (ADS)

    Chen, Yang; Wang, Huasheng; Xia, Jixia; Cai, Guobiao; Zhang, Zhenpeng

    2017-04-01

    For the pressure reducing regulator and check valve double-valve combined test system in an integral bipropellant propulsion system, a system model is established from modular models of typical components. A simulation is conducted of the whole working process of a 9 MPa working-condition experiment, from startup to rated working condition and finally to shutdown. Comparison of the simulation results with test data shows the following: five working conditions (standby, startup, rated pressurization, shutdown, and halt) and nine stages of the combined test system are comprehensively characterized; the valve-spool opening and closing details of the regulator and the two check valves are accurately revealed; and the simulation clarifies two phenomena that the test data alone cannot, namely the critical opening state, in which the check-valve spools alternately open slightly and close again at their fully closed positions, and the appreciable flow-field temperature drop and rise in the pipeline network as helium flows through it. Moreover, simulation results that account for component wall heat transfer are closer to the test data than those obtained under an adiabatic-wall assumption, and better reveal the dynamic characteristics of the system in the various working stages.

  8. The Double Layer Methodology and the Validation of Eigenbehavior Techniques Applied to Lifestyle Modeling

    PubMed Central

    Lamichhane, Bishal

    2017-01-01

    A novel methodology, the double layer methodology (DLM), for modeling an individual's lifestyle and its relationships with health indicators is presented. The DLM is applied to model behavioral routines emerging from self-reports of daily diet and activities, annotated by 21 healthy subjects over 2 weeks. Unsupervised clustering on the first layer of the DLM separated our population into two groups. Using eigendecomposition techniques on the second layer of the DLM, we could find activity and diet routines, predict behaviors in a portion of the day (with an accuracy of 88% for diet and 66% for activity), determine between-day and between-individual similarities, and detect an individual's membership in a group based on behavior (with an accuracy of up to 64%). We found that clustering based on health indicators mapped back onto activity behaviors, but not onto diet behaviors. In addition, we showed the limitations of eigendecomposition for lifestyle applications, in particular when applied to noisy and sparse behavioral data such as dietary information. Finally, we proposed the use of the DLM for supporting adaptive and personalized recommender systems for stimulating behavior change. PMID:28133607
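    The eigendecomposition step on the DLM's second layer follows the standard "eigenbehavior" construction: treat each day as a row vector of encoded behaviors and extract the top right singular vectors of the mean-centered matrix. A hedged sketch (the behavior encoding is hypothetical, not the paper's):

```python
import numpy as np

def eigenbehaviors(days, k):
    """days: (n_days, n_slots) behavior matrix.
    Returns (top-k components, per-day weights, mean routine)."""
    mean = days.mean(axis=0)
    centered = days - mean
    # right singular vectors of the centered matrix are the eigenbehaviors
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]
    weights = centered @ components.T   # each day summarized by k weights
    return components, weights, mean
```

Each day is then approximated as `mean + weights @ components`, which is what makes between-day and between-individual comparisons in a low-dimensional space possible.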

  9. Repairing a double-strand chromosome break by homologous recombination: revisiting Robin Holliday's model.

    PubMed

    Haber, James E; Ira, Grzegorz; Malkova, Anna; Sugawara, Neal

    2004-01-29

    Since the pioneering model for homologous recombination proposed by Robin Holliday in 1964, there has been great progress in understanding how recombination occurs at a molecular level. In the budding yeast Saccharomyces cerevisiae, one can follow recombination by physically monitoring DNA after the synchronous induction of a double-strand break (DSB) in both wild-type and mutant cells. A particularly well-studied system has been the switching of yeast mating-type (MAT) genes, where a DSB can be induced synchronously by expression of the site-specific HO endonuclease. Similar studies can be performed in meiotic cells, where DSBs are created by the Spo11 nuclease. There appear to be at least two competing mechanisms of homologous recombination: a synthesis-dependent strand annealing pathway leading to noncrossovers and a two-end strand invasion mechanism leading to formation and resolution of Holliday junctions (HJs), leading to crossovers. The establishment of a modified replication fork during DSB repair links gene conversion to another important repair process, break-induced replication. Despite recent revelations, almost 40 years after Holliday's model was published, the essential ideas he proposed of strand invasion and heteroduplex DNA formation, the formation and resolution of HJs, and mismatch repair, remain the basis of our thinking.

  10. A model intercomparison of the tropical precipitation response to a CO2 doubling in aquaplanet simulations

    NASA Astrophysics Data System (ADS)

    Seo, Jeongbin; Kang, Sarah M.; Merlis, Timothy M.

    2017-01-01

    In the present-day climate, the mean Intertropical Convergence Zone (ITCZ) is north of the equator. We investigate changes in the ITCZ latitude under global warming, using multiple atmospheric models coupled to an aquaplanet slab ocean. The reference climate, with a warmer north from prescribed ocean heating, is perturbed by doubling CO2. Most models exhibit a northward ITCZ shift, but the shift cannot be accounted for by the response of the energy flux equator, where the atmospheric energy transport (FA) vanishes. The energetics of the simulated circulation shifts are subtle: changes in the efficiency with which the Hadley circulation transports energy, the total gross moist stability (Δm), dominate over mass-flux changes in determining δFA. Even when δFA ≈ 0, the ITCZ can shift significantly due to changes in Δm, which have often been neglected in previous work. The dependence of the ITCZ response on δΔm calls for improved understanding of the physics determining the tropical Δm.
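    The subtlety noted in this abstract can be written out. If the Hadley cell's energy transport is the product of a mass flux ψ and the total gross moist stability Δm, a first-order perturbation (a standard linearization, not quoted from the paper) gives:

```latex
F_A = \Delta m\,\psi
\qquad\Rightarrow\qquad
\delta F_A \approx \Delta m\,\delta\psi + \psi\,\delta(\Delta m),
```

    so even with δFA ≈ 0 the circulation, and hence the ITCZ, can shift, with δψ ≈ -(ψ/Δm) δ(Δm).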

  11. Bayesian Evolution Models for Jupiter with Helium Rain and Double-diffusive Convection

    NASA Astrophysics Data System (ADS)

    Mankovich, Christopher; Fortney, Jonathan J.; Moore, Kevin L.

    2016-12-01

    Hydrogen and helium demix when sufficiently cool, and this bears on the evolution of all giant planets at large separations at or below roughly a Jupiter mass. We model the thermal evolution of Jupiter, including its evolving helium distribution following results of ab initio simulations for helium immiscibility in metallic hydrogen. After 4 Gyr of homogeneous evolution, differentiation establishes a thin helium gradient below 1 Mbar that dynamically stabilizes the fluid to convection. The region undergoes overstable double-diffusive convection (ODDC), whose weak heat transport maintains a superadiabatic temperature gradient. With a generic parameterization for the ODDC efficiency, the models can reconcile Jupiter’s intrinsic flux, atmospheric helium content, and radius at the age of the solar system if the Lorenzen et al. H-He phase diagram is translated to lower temperatures. We cast the evolutionary models in an MCMC framework to explore tens of thousands of evolutionary sequences, retrieving probability distributions for the total heavy-element mass, the superadiabaticity of the temperature gradient due to ODDC, and the phase diagram perturbation. The adopted SCvH-I equation of state (EOS) favors inefficient ODDC such that a thermal boundary layer is formed, allowing the molecular envelope to cool rapidly while the deeper interior actually heats up over time. If the overall cooling time is modulated with an additional free parameter to imitate the effect of a colder or warmer EOS, the models favor those that are colder than SCvH-I. In this case the superadiabaticity is modest, and models with warming and with cooling deep interiors are equally likely.
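    The retrieval machinery named in the abstract, sampling a posterior over evolution parameters, can be illustrated with a minimal random-walk Metropolis sampler. This is a generic sketch, not the authors' code; the 1-D target and tuning values are illustrative.

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step_size, seed=0):
    """Random-walk Metropolis: draws n_steps samples from exp(log_post)."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    lp = log_post(x)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        proposal = x + step_size * rng.normal()
        lp_prop = log_post(proposal)
        # accept with probability min(1, exp(lp_prop - lp))
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = proposal, lp_prop
        chain[i] = x
    return chain
```

In the paper's setting each "log_post" evaluation is a full thermal-evolution sequence scored against Jupiter's observed flux, helium content, and radius, and the chain explores parameters such as the heavy-element mass and the phase-diagram offset.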

  12. Generating double knockout mice to model genetic intervention for diabetic cardiomyopathy in humans.

    PubMed

    Chavali, Vishalakshi; Nandi, Shyam Sundar; Singh, Shree Ram; Mishra, Paras Kumar

    2014-01-01

    Diabetes is a rapidly increasing disease that raises the risk of heart failure twofold to fourfold (compared to age- and sex-matched nondiabetics) and has become a leading cause of morbidity and mortality. There are two broad classifications of diabetes: type 1 diabetes (T1D) and type 2 diabetes (T2D). Several mouse models mimic both T1D and T2D in humans. However, genetic intervention to ameliorate diabetic cardiomyopathy in these mice often requires creating a double knockout (DKO). In order to assess the therapeutic potential of a gene, that specific gene is either overexpressed (transgenic expression) or abrogated (knockout) in the diabetic mice. If a genetic mouse model of diabetes is used, it is necessary to create a DKO with transgenic expression or knockout of the target gene to investigate the specific role of that gene in pathological cardiac remodeling in diabetes. One of the important genes involved in extracellular matrix (ECM) remodeling in diabetes is matrix metalloproteinase-9 (Mmp9). Mmp9 is a collagenase that remains latent in healthy hearts but is induced in diabetic hearts. Activated Mmp9 degrades the ECM and increases matrix turnover, causing cardiac fibrosis that leads to heart failure. The insulin 2 mutant (Ins2+/-) Akita is a genetic model of T1D that becomes diabetic spontaneously at the age of 3-4 weeks and shows robust hyperglycemia at the age of 10-12 weeks; it is a chronic model of T1D. In Ins2+/- Akita, Mmp9 is induced. To investigate the specific role of Mmp9 in diabetic hearts, it is necessary to create diabetic mice in which the Mmp9 gene is deleted. Here, we describe the method to generate Ins2+/-/Mmp9-/- (DKO) mice to determine whether the abrogation of Mmp9 ameliorates diabetic cardiomyopathy.

  13. Radiative decays of double heavy baryons in a relativistic constituent three-quark model including hyperfine mixing effects

    SciTech Connect

    Branz, Tanja; Faessler, Amand; Gutsche, Thomas; Lyubovitskij, Valery E.; Oexl, Bettina; Ivanov, Mikhail A.; Koerner, Juergen G.

    2010-06-01

    We study flavor-conserving radiative decays of double-heavy baryons using a manifestly Lorentz covariant constituent three-quark model. Decay rates are calculated and compared to each other in the full theory, keeping masses finite, and also in the heavy quark limit. We discuss in some detail hyperfine mixing effects.

  14. Object class segmentation of RGB-D video using recurrent convolutional neural networks.

    PubMed

    Pavel, Mircea Serban; Schulz, Hannes; Behnke, Sven

    2017-04-01

    Object class segmentation is a computer vision task which requires labeling each pixel of an image with the class of the object it belongs to. Deep convolutional neural networks (CNNs) are able to learn and take advantage of the local spatial correlations required for this task. They are, however, restricted by their small, fixed-size filters, which limits their ability to learn long-range dependencies. Recurrent neural networks (RNNs), on the other hand, do not suffer from this restriction. Their iterative interpretation allows them to model long-range dependencies by propagating activity. This property is especially useful when labeling video sequences, where both spatial and temporal long-range dependencies occur. In this work, a novel RNN architecture for object class segmentation is presented. We investigate several ways to train such a network. We evaluate our models on the challenging NYU Depth v2 dataset for object class segmentation and obtain competitive results.

  15. A Shortest Dependency Path Based Convolutional Neural Network for Protein-Protein Relation Extraction

    PubMed Central

    Quan, Chanqin

    2016-01-01

    The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performance strongly depends on handcrafted features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) takes only the sdp and word embeddings as input and (2) avoids bias from feature selection by using a CNN. We performed experiments on the standard AIMed and BioInfer datasets, and the experimental results demonstrated that our approach outperforms state-of-the-art kernel-based methods. In particular, by tracking the sdpCNN model, we find that sdpCNN can extract key features automatically, and we verify that pretrained word embeddings are crucial to the PPI task. PMID:27493967
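    The preprocessing step this model is named after, extracting the shortest dependency path (sdp) between two entity mentions, amounts to a breadth-first search over an undirected view of the dependency parse. A hedged sketch (the toy edge list is illustrative; a real system would take parser output):

```python
from collections import deque

def shortest_dependency_path(edges, source, target):
    """edges: (head, dependent) pairs; returns the token path or None."""
    adj = {}
    for head, dep in edges:                 # treat the parse as undirected
        adj.setdefault(head, []).append(dep)
        adj.setdefault(dep, []).append(head)
    prev = {source: None}
    queue = deque([source])
    while queue:                            # plain BFS: first arrival is shortest
        node = queue.popleft()
        if node == target:
            break
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    if target not in prev:
        return None
    path, node = [], target
    while node is not None:                 # walk the predecessor chain back
        path.append(node)
        node = prev[node]
    return path[::-1]
```

The token sequence along this path (mapped to word embeddings) is what the sdpCNN takes as its only input.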

  16. Nuclear magnetic resonance and molecular modeling study of exocyclic carbon-carbon double bond polarization in benzylidene barbiturates

    NASA Astrophysics Data System (ADS)

    Figueroa-Villar, J. Daniel; Vieira, Andreia A.

    2013-02-01

    Benzylidene barbiturates are important materials for the synthesis of heterocyclic compounds with potential for the development of new drugs. The reactivity of benzylidene barbiturates is mainly controlled by their exocyclic carbon-carbon double bond. In this work, the exocyclic double bond polarization was estimated experimentally by NMR and correlated with the Hammett σ values of the aromatic ring substituents and the molecular modeling calculated atomic charge difference. It is demonstrated that carbon chemical shift differences and NBO charge differences can be used to predict their reactivity.

  17. Convolutional neural network features based change detection in satellite images

    NASA Astrophysics Data System (ADS)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, a great deal of research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture mid- and high-level representations, recent developments in machine learning (deep learning) sidestep this problem by learning hierarchical representations in an unsupervised manner, directly from data and without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel change detection method for HR satellite images based on deep Convolutional Neural Network (CNN) features is proposed. The main idea is to produce a change detection map directly from two images using a pretrained CNN, thereby avoiding the limited performance of hand-crafted features. Firstly, CNN features are extracted from different convolutional layers. Then, after a normalization step, they are concatenated into a single higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
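    The final distance step of the pipeline above is easy to sketch. Assuming two (C, H, W) feature maps already extracted from a pretrained CNN (hypothetical inputs here), normalize each pixel's feature vector and take the pixel-wise Euclidean distance:

```python
import numpy as np

def change_map(feat_a, feat_b, eps=1e-12):
    """feat_a, feat_b: (C, H, W) feature maps -> (H, W) change map."""
    def unit(f):
        # L2-normalize each pixel's C-dimensional feature vector
        norm = np.linalg.norm(f, axis=0, keepdims=True)
        return f / np.maximum(norm, eps)
    return np.linalg.norm(unit(feat_a) - unit(feat_b), axis=0)
```

Thresholding the resulting map (or clustering its values) yields the binary change mask; that post-processing choice is not specified here.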

  18. A Mathematical Motivation for Complex-Valued Convolutional Networks.

    PubMed

    Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur

    2016-05-01

    A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
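    The three operations listed in this abstract can be rendered literally for a 1-D signal: (1) convolve with a complex-valued filter, (2) take the entrywise absolute value, (3) average locally. The filter and window sizes below are illustrative, not from the paper.

```python
import numpy as np

def complex_convnet_layer(x, filt, pool=2):
    """One layer of the composition: complex conv -> |.| -> local average."""
    y = np.convolve(x, filt, mode="valid")   # (1) convolution with a complex filter
    y = np.abs(y)                            # (2) entrywise absolute value
    n = (len(y) // pool) * pool              # (3) mean over non-overlapping windows
    return y[:n].reshape(-1, pool).mean(axis=1)
```

With a windowed complex exponential as `filt`, one layer computes exactly a windowed absolute spectrum at that frequency, which is the correspondence the abstract emphasizes.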

  19. Fluence-convolution broad-beam (FCBB) dose calculation.

    PubMed

    Lu, Weiguo; Chen, Mingli

    2010-12-07

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with a small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and the lateral spread function (LSF), and taking the beam's eye view (BEV), we established a non-voxel- and non-beamlet-based dose calculation formula. Both the LSF and the CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with the LSF, followed by ray tracing based on the CAX lookup table with radiological distance and divergence corrections, resulting in O(N^3) complexity both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited for calculating the iteration dose during IMRT optimization.
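    The first FCBB step, a 2D convolution of the fluence map with the LSF, can be sketched directly. The naive loop below only illustrates the operation with a 'same'-size output; the paper's engine, and any production code, would use a faster scheme (FFT or separable kernels), and the odd-sized LSF here is a hypothetical stand-in for a commissioned one.

```python
import numpy as np

def fluence_convolve(fluence, lsf):
    """'Same'-size 2-D convolution of a fluence map with an odd-sized LSF kernel."""
    kh, kw = lsf.shape
    padded = np.pad(fluence, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    flipped = lsf[::-1, ::-1]            # true convolution, not correlation
    out = np.empty_like(fluence, dtype=float)
    for i in range(fluence.shape[0]):
        for j in range(fluence.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out
```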

  20. Single-Cell Phenotype Classification Using Deep Convolutional Neural Networks.

    PubMed

    Dürr, Oliver; Sick, Beate

    2016-10-01

    Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods to high-content screening-based phenotype classification. We trained a deep learning classifier in the form of convolutional neural networks with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.

  1. Enhancing Neutron Beam Production with a Convoluted Moderator

    SciTech Connect

    Iverson, Erik B; Baxter, David V; Muhrer, Guenter; Ansell, Stuart; Gallmeier, Franz X; Dalgliesh, Robert; Lu, Wei; Kaiser, Helmut

    2014-10-01

    We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.

  2. An optimal nonorthogonal separation of the anisotropic Gaussian convolution filter.

    PubMed

    Lampert, Christoph H; Wirjadi, Oliver

    2006-11-01

    We give an analytical and geometrical treatment of what it means to separate a Gaussian kernel along arbitrary axes in R^n, and we present a separation scheme that allows us to efficiently implement anisotropic Gaussian convolution filters for data of arbitrary dimensionality. Based on our previous analysis, we show that this scheme is optimal with regard to the number of memory accesses and interpolation operations needed. The proposed method relies on nonorthogonal convolution axes and works completely in image space. Thus, it avoids the need for a fast Fourier transform (FFT) subroutine. Depending on the accuracy and speed requirements, different interpolation schemes and methods to implement the one-dimensional Gaussian (finite impulse response and infinite impulse response) can be integrated. Special emphasis is put on analyzing the performance and accuracy of the new method. In particular, we show that without any special optimization of the source code, it can perform anisotropic Gaussian filtering faster than methods relying on the FFT.
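    The paper's nonorthogonal separation scheme is not reproduced here; as a baseline for comparison, the conventional separable Gaussian filter along the two orthogonal image axes can be sketched as follows (the isotropic case, where orthogonal axes suffice).

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian FIR kernel truncated at ~3 sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def separable_gaussian(image, sigma):
    """2-D Gaussian smoothing as two orthogonal 1-D convolutions."""
    k = gaussian_kernel(sigma)
    smooth_cols = np.apply_along_axis(np.convolve, 0, image, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, smooth_cols, k, mode="same")
```

For an anisotropic, rotated Gaussian this orthogonal factorization no longer applies exactly, which is precisely the gap the paper's nonorthogonal-axis scheme closes.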

  3. Enhanced Climatic Warming in the Tibetan Plateau Due to Double CO2: A Model Study

    NASA Technical Reports Server (NTRS)

    Chen, Baode; Chao, Winston C.; Liu, Xiao-Dong; Lau, William K. M. (Technical Monitor)

    2001-01-01

    The NCAR (National Center for Atmospheric Research) regional climate model (RegCM2), with time-dependent lateral meteorological fields provided by a 130-year transient increasing-CO2 simulation of the NCAR Climate System Model (CSM), has been used to investigate the mechanism of enhanced ground temperature warming over the TP (Tibetan Plateau). In our model results, a remarkable tendency of warming to increase with elevation is found for the winter season, whereas no clear elevation dependency of warming is seen in the summer season. This simulated elevation dependency of ground temperature is consistent with observations. Based on an analysis of the surface energy budget, the shortwave solar radiation absorbed at the surface plus the downward longwave flux reaching the surface shows a strong elevation dependency and is mostly responsible for the enhanced surface warming over the TP. At lower elevations, precipitation forced by topography is enhanced due to an increase in water vapor supply resulting from atmospheric warming induced by the CO2 doubling. This precipitation enhancement must be associated with an increase in clouds, which reduces the solar flux reaching the surface. At higher elevations, large snow depletion is detected in the 2xCO2 run. It leads to a decrease in albedo, so more solar flux is absorbed at the surface. On the other hand, a much more uniform increase in the downward longwave flux reaching the surface is found. The combination of these effects (i.e., decreased solar flux at lower elevations, increased solar flux at higher elevations, and a more uniform increase in downward longwave flux) results in the elevation dependency of enhanced ground temperature warming over the TP.

  4. Three-dimensional inspiratory flow in a double bifurcation airway model

    NASA Astrophysics Data System (ADS)

    Jalal, Sahar; Nemes, Andras; Van de Moortele, Tristan; Schmitter, Sebastian; Coletti, Filippo

    2016-09-01

    The flow in an idealized airway model is investigated for the steady inhalation case. The geometry consists of a symmetric planar double bifurcation that reflects the anatomical proportions of the human bronchial tree, and a wide range of physiologically relevant Reynolds numbers (Re = 100-5000) is considered. Using magnetic resonance velocimetry, we analyze the three-dimensional fields of velocity and vorticity, along with flow descriptors that characterize the longitudinal and lateral dispersion. In agreement with previous studies, the symmetry of the flow partitioning is broken even at the lower Reynolds numbers, and at the second bifurcation, the fluid favors the medial branches over the lateral ones. This trend reaches a plateau around Re = 2000, above which the turbulent inflow results in smoothed mean velocity gradients. This also reduces the streamwise momentum flux, which is a measure of the longitudinal dispersion by the mean flow. The classic Dean-type counter-rotating vortices are observed in the first-generation daughter branches as a result of the local curvature. In the granddaughter branches, however, the secondary flows are determined by the local curvature only for the lower flow regimes (Re ≤ 250), in which case the classic Dean mechanism prevails. At higher flow regimes, the field is instead dominated by streamwise vortices extending from the daughter into the medial granddaughter branches, where they rotate in the opposite direction with respect to Dean vortices. Circulation and secondary flow intensity show a similar trend as the momentum flux, increasing with Reynolds number up to Re = 2000 and then dropping due to turbulent dissipation of vorticity. The streamwise vortices interact both with each other and with the airway walls, and for Re > 500 they can become stronger in the medial granddaughter than in the upstream daughter branches. With respect to realistic airway models, the idealized geometry produces weaker secondary flows

  5. COMPUTATIONAL FLUID DYNAMICS MODELING OF SCALED HANFORD DOUBLE SHELL TANK MIXING - CFD MODELING SENSITIVITY STUDY RESULTS

    SciTech Connect

    JACKSON VL

    2011-08-31

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is still evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

  6. Bayesian thermal evolution models for giant planets: Helium rain and double-diffusive convection in Jupiter

    NASA Astrophysics Data System (ADS)

    Mankovich, Christopher; Fortney, Jonathan J.; Nettelmann, Nadine; Moore, Kevin

    2016-10-01

    Hydrogen and helium unmix when sufficiently cool, and this bears on the thermal evolution of all cool giant planets at or below one Jupiter mass. Over the past few years, ab initio simulations have put us in the era of quantitative predictions for this H-He immiscibility at megabar pressures. We present models for the thermal evolution of Jupiter, including its evolving helium distribution following one such ab initio H-He phase diagram. After 4 Gyr of homogeneous evolution, differentiation establishes a helium gradient between 1 and 2 Mbar that dynamically stabilizes the fluid to overturning convection. The result is a region undergoing overstable double-diffusive convection (ODDC), whose relatively weak vertical heat transport maintains a superadiabatic temperature gradient. With a general parameterization for the ODDC efficiency, the models can reconcile Jupiter's intrinsic flux, atmospheric helium content, and mean radius at the age of the solar system if the H-He phase diagram is translated to cooler temperatures. We cast our nonadiabatic thermal evolution models in a Markov chain Monte Carlo parameter estimation framework, retrieving the total heavy element mass, the superadiabaticity of the deep temperature gradient, and the phase diagram temperature offset. Models using the interpolated Saumon, Chabrier and van Horn (1995) equation of state (SCvH-I) favor very inefficient ODDC such that the deep temperature gradient is strongly superadiabatic, forming a thermal boundary layer that allows the molecular envelope to cool quickly while the deeper interior (most of the planet's mass) actually heats up over time. If we modulate the overall cooling time with an additional free parameter, mimicking the effect of a colder or warmer EOS, the models favor those that are colder than SCvH-I; this class of EOS is also favored by shock experiments. The models in this scenario have more modest deep superadiabaticities such that the envelope cools more gradually and the deep

  7. Vocal fold and ventricular fold vibration in period-doubling phonation: physiological description and aerodynamic modeling.

    PubMed

    Bailly, Lucie; Henrich, Nathalie; Pelorson, Xavier

    2010-05-01

    Occurrences of period-doubling are found in human phonation, in particular in pathological phonation and in some singing styles such as the Sardinian A Tenore Bassu vocal performance. The combined vibration of the vocal folds and the ventricular folds has been observed during the production of such low-pitch bass-type sound. The present study aims to characterize the physiological correlates of this acoustical production and to provide a better understanding of the physical interaction between ventricular fold vibration and vocal fold self-sustained oscillation. The vibratory properties of the vocal folds and the ventricular folds during phonation produced by a professional singer are analyzed by means of acoustical and electroglottographic signals and by synchronized glottal images obtained by high-speed cinematography. The periodic variation in glottal cycle duration and the effect of ventricular fold closing on glottal closing time are demonstrated. Using the detected glottal and ventricular areas, the aerodynamic behavior of the laryngeal system is simulated using a simplified physical model previously validated in vitro on a larynx replica. An estimate of the ventricular aperture extracted from the in vivo data allows a theoretical prediction of the glottal aperture. The in vivo measurements of the glottal aperture are then compared to the simulated estimations.

  8. MULTI-DIMENSIONAL MODELS FOR DOUBLE DETONATION IN SUB-CHANDRASEKHAR MASS WHITE DWARFS

    SciTech Connect

    Moll, R.; Woosley, S. E.

    2013-09-10

    Using two-dimensional and three-dimensional simulations, we study the "robustness" of the double detonation scenario for Type Ia supernovae, in which a detonation in the helium shell of a carbon-oxygen white dwarf induces a secondary detonation in the underlying core. We find that a helium detonation cannot easily descend into the core unless it commences (artificially) well above the hottest layer calculated for the helium shell in current presupernova models. Compressional waves induced by the sliding helium detonation, however, robustly generate hot spots which trigger a detonation in the core. Our simulations show that this is true even for non-axisymmetric initial conditions. If the helium is ignited at multiple points, then the internal waves can pass through one another or be reflected, but this added complexity does not defeat the generation of the hot spot. The ignition of very low-mass helium shells depends on whether a thermonuclear runaway can simultaneously commence in a sufficiently large region.

  9. Modeling and Control of a Double-effect Absorption Refrigerating Machine

    NASA Astrophysics Data System (ADS)

    Hihara, Eiji; Yamamoto, Yuuji; Saito, Takamoto; Nagaoka, Yoshikazu; Nishiyama, Noriyuki

    For the purpose of improving the response to cooling load variations and the part load characteristics, the optimal operation of a double-effect absorption refrigerating machine was investigated. The test machine was designed to be able to control energy input and weak solution flow rate continuously. It is composed of a gas-fired high-temperature generator, a separator, a low-temperature generator, an absorber, a condenser, an evaporator, and high- and low-temperature heat exchangers. The working fluid is a lithium bromide-water solution. The standard output is 80 kW. Based on the experimental data, a simulation model of the static characteristics was developed. The experiments and simulation analysis indicate that there is an optimal weak solution flow rate which maximizes the coefficient of performance under any given cooling load condition. The optimal condition is closely related to the refrigerant steam flow rate flowing from the separator to the high temperature heat exchanger with the medium solution. The heat transfer performance of the heat exchangers in the components influences the COP. A change in the overall heat transfer coefficient of the absorber has a larger effect on the COP than changes in the other components.

  10. Double Feedforward Control System Based on Precise Disturbance Modeling for Optical Disk

    NASA Astrophysics Data System (ADS)

    Sakimura, Naohide; Nakazaki, Tatsuya; Ohishi, Kiyoshi; Miyazaki, Toshimasa; Koide, Daiichi; Tokumaru, Haruki; Takano, Yoshimichi

    2013-09-01

    Optical disk drive systems must realize high-precision tracking control for their proper operation. For this purpose, we previously proposed a tracking control system that is composed of a high-gain servo controller (HGSC) and a feedforward controller with an equivalent-perfect tracking control (E-PTC) system. However, it is difficult to design the control parameter for actual multi-harmonic disturbances. In this paper, we propose a precise disturbance model of an actual optical disk using the experimental spectrum data of a feedback controller and describe the design of a fine tracking control system. In addition, we propose a double feedforward control (DFFC) system for further high-precision control. The proposed DFFC system is constructed using two zero phase error tracking (ZPET) control systems based on error prediction and trajectory command prediction. Our experimental results confirm that the proposed system effectively suppresses the tracking error at 6000 rpm, which is the disk rotation speed of Digital Versatile Disk Recordable (DVD+R).

  11. Double Roles of Macrophages in Human Neuroimmune Diseases and Their Animal Models

    PubMed Central

    Fan, Xueli; Zhang, Hongliang; Cheng, Yun; Jiang, Xinmei; Zhu, Jie

    2016-01-01

    Macrophages are important immune cells of the innate immune system that are involved in organ-specific homeostasis and contribute to both pathology and resolution of diseases including infections, cancer, obesity, atherosclerosis, and autoimmune disorders. Multiple lines of evidence point to macrophages as a remarkably heterogeneous cell type. Different phenotypes of macrophages exert either proinflammatory or anti-inflammatory roles depending on the cytokines and other mediators that they are exposed to in the local microenvironment. Proinflammatory macrophages secrete detrimental molecules to induce disease development, while anti-inflammatory macrophages produce beneficial mediators to promote disease recovery. The conversion of the phenotypes of macrophages can regulate the initiation, development, and recovery of autoimmune diseases. Human neuroimmune diseases mainly include multiple sclerosis (MS), neuromyelitis optica (NMO), myasthenia gravis (MG), and Guillain-Barré syndrome (GBS), and macrophages contribute to the pathogenesis of these neuroimmune diseases. In this review, we summarize the double roles of macrophages in neuroimmune diseases and their animal models to further explore the mechanisms of macrophages involved in the pathogenesis of these disorders, which may provide a potential therapeutic approach for these disorders in the future. PMID:27034594

  12. "Squishy capacitor" model for electrical double layers and the stability of charged interfaces.

    PubMed

    Partenskii, Michael B; Jordan, Peter C

    2009-07-01

    Negative capacitance (NC), predicted by various electrical double layer (EDL) theories, is critically reviewed. Physically possible for individual components of the EDL, the compact or diffuse layer, it is strictly prohibited for the whole EDL or for an electrochemical cell with two electrodes. However, NC is allowed for the artificial conditions of sigma control, where an EDL is described by the equilibrium electric response of electrolyte to a field of fixed, and typically uniform, surface charge-density distributions, sigma. The contradiction is only apparent; in fact local sigma cannot be set independently, but is established by the equilibrium response to physically controllable variables, i.e., applied voltage phi (phi control) or total surface charge q (q control). NC predictions in studies based on sigma control signify potential instabilities and phase transitions for physically realizable conditions. Building on our previous study of phi control [M. B. Partenskii and P. C. Jordan, Phys. Rev. E 77, 061117 (2008)], here we analyze critical behavior under q control, clarifying the basic picture using an exactly solvable "squishy capacitor" toy model. We find that phi can change discontinuously in the presence of a lateral transition, specify stability conditions for an electrochemical cell, analyze the origin of the EDL's critical point in terms of compact and diffuse serial contributions, and discuss perspectives and challenges for theoretical studies not limited by sigma control.
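The compact/diffuse series decomposition mentioned above can be illustrated with a toy calculation: a negative capacitance in one series component is compatible with a positive (stable) total capacitance, provided its magnitude exceeds the other component's. The numbers are arbitrary and carry no physical units.

```python
def series_capacitance(c_compact, c_diffuse):
    """Total EDL capacitance of compact and diffuse layers in series:
    1/C = 1/C_compact + 1/C_diffuse. Toy illustration of the abstract's
    point about component-level negative capacitance, not the paper's
    "squishy capacitor" model itself."""
    return 1.0 / (1.0 / c_compact + 1.0 / c_diffuse)

# Diffuse-layer NC with |C_diffuse| > C_compact still gives C_total > 0
c_total = series_capacitance(1.0, -3.0)
```

With c_diffuse = -3 and c_compact = 1, the total is 1.5 > 0, so the cell as a whole remains stable even though one component's capacitance is negative.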

  13. Deconstruction of Lignin Model Compounds and Biomass-Derived Lignin using Layered Double Hydroxide Catalysts

    SciTech Connect

    Chmely, S. C.; McKinney, K. A.; Lawrence, K. R.; Sturgeon, M.; Katahira, R.; Beckham, G. T.

    2013-01-01

    Lignin is an underutilized value stream in current biomass conversion technologies because no economical and technically feasible routes exist for lignin depolymerization and upgrading. Base-catalyzed deconstruction (BCD) has been applied for lignin depolymerization (e.g., the Kraft process) in the pulp and paper industry for more than a century using aqueous-phase media. However, these efforts require treatment to neutralize the resulting streams, which adds significantly to the cost of lignin deconstruction. To circumvent the need for downstream treatment, here we report recent advances in the synthesis of layered double hydroxide and metal oxide catalysts to be applied to the BCD of lignin. These catalysts may prove more cost-effective than liquid-phase, non-recyclable base, and their use obviates downstream processing steps such as neutralization. Synthetic procedures for various transition-metal containing catalysts, detailed kinetics measurements using lignin model compounds, and results of the application of these catalysts to biomass-derived lignin will be presented.

  14. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.

    PubMed

    Shin, Hoo-Chang; Roth, Holger R; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel; Summers, Ronald M

    2016-05-01

    Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques for applying CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important but previously understudied factors in applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in number of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet models (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.

  15. There is no MacWilliams identity for convolutional codes. [transmission gain comparison]

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.

  16. Using convolutional decoding to improve time delay and phase estimation in digital communications

    DOEpatents

    Ormesher, Richard C.; Mason, John J.

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.
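A hard-decision Viterbi decoder for a standard rate-1/2, constraint-length-3 convolutional code (generators 7, 5 in octal) sketches the kind of convolutional decoding involved. The patent's scheme, which additionally folds despreading and delay/phase estimation into the decoding operation, is not reproduced here.

```python
G0, G1 = 0b111, 0b101  # generator polynomials (7, 5 octal)

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    """Append 2 flush zeros and emit two coded bits per input bit."""
    state = 0  # two most recent input bits
    out = []
    for u in list(bits) + [0, 0]:
        reg = (u << 2) | state
        out += [parity(reg & G0), parity(reg & G1)]
        state = reg >> 1
    return out

def viterbi_decode(received, n_msg):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    metrics = [0] + [INF] * 3            # path metric per state
    paths = [[] for _ in range(4)]       # survivor input bits per state
    for i in range(0, len(received), 2):
        r0, r1 = received[i], received[i + 1]
        new_metrics = [INF] * 4
        new_paths = [None] * 4
        for state in range(4):
            if metrics[state] == INF:
                continue
            for u in (0, 1):
                reg = (u << 2) | state
                cost = (parity(reg & G0) != r0) + (parity(reg & G1) != r1)
                nxt = reg >> 1
                m = metrics[state] + cost
                if m < new_metrics[nxt]:
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [u]
        metrics, paths = new_metrics, new_paths
    best = min(range(4), key=lambda s: metrics[s])
    return paths[best][:n_msg]           # drop the flush bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                            # inject a single channel bit error
decoded = viterbi_decode(coded, len(msg))
```

This code has free distance 5, so the single injected bit error is corrected and the message is recovered exactly.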

  17. Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model.

    PubMed

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-01-01

    The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.
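The double-Gaussian lateral profile has a simple closed form: a narrow core Gaussian plus a wide, low-weight halo Gaussian. The sketch below, with made-up widths and halo weight rather than fitted beam parameters, checks that the radially integrated profile is normalized.

```python
import math

def double_gaussian(r, sigma1, sigma2, w):
    """Radially symmetric double-Gaussian lateral dose profile: a narrow
    primary core plus a wide, low-amplitude halo. Parameter values used
    below are illustrative, not a clinical beam model."""
    g1 = math.exp(-r * r / (2 * sigma1 ** 2)) / (2 * math.pi * sigma1 ** 2)
    g2 = math.exp(-r * r / (2 * sigma2 ** 2)) / (2 * math.pi * sigma2 ** 2)
    return (1 - w) * g1 + w * g2

# Each 2-D Gaussian is normalized, so integrating over the plane gives 1
sigma1, sigma2, w = 0.5, 5.0, 0.1   # core width, halo width (cm), halo weight
dr = 0.001
total = sum(2 * math.pi * r * double_gaussian(r, sigma1, sigma2, w) * dr
            for r in (i * dr for i in range(1, 50000)))
```

Because both Gaussians are individually normalized, the halo weight w redistributes dose into the tails without changing the integral, which is why adding the second Gaussian mainly costs evaluation time rather than renormalization effort.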

  18. Longitudinal Analysis of Discussion Topics in an Online Breast Cancer Community using Convolutional Neural Networks.

    PubMed

    Zhang, Shaodian; Grave, Edouard; Sklar, Elizabeth; Elhadad, Noémie

    2017-03-18

    Identifying topics of discussions in online health communities (OHC) is critical to various information extraction applications, but can be difficult because topics of OHC content are usually heterogeneous and domain-dependent. In this paper, we provide a multi-class schema, an annotated dataset, and supervised classifiers based on convolutional neural network (CNN) and other models for the task of classifying discussion topics. We apply the CNN classifier to the most popular breast cancer online community, and carry out cross-sectional and longitudinal analyses to show topic distributions and topic dynamics throughout members' participation. Our experimental results suggest that the CNN outperforms the other classifiers at topic classification, and our analyses identify several patterns and trajectories. For example, although members discuss mainly disease-related topics, their interests may change over time and vary with disease severity.

  19. Kinetic Energy of Hydrocarbons as a Function of Electron Density and Convolutional Neural Networks.

    PubMed

    Yao, Kun; Parkhill, John

    2016-03-08

    We demonstrate a convolutional neural network trained to reproduce the Kohn-Sham kinetic energy of hydrocarbons from an input electron density. The output of the network is used as a nonlocal correction to conventional local and semilocal kinetic functionals. We show that this approximation qualitatively reproduces Kohn-Sham potential energy surfaces when used with conventional exchange correlation functionals. The density which minimizes the total energy given by the functional is examined in detail. We identify several avenues to improve on this exploratory work, by reducing numerical noise and changing the structure of our functional. Finally we examine the features in the density learned by the neural network to anticipate the prospects of generalizing these models.
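As context, the local baseline that such a learned nonlocal correction augments can be as simple as the Thomas-Fermi kinetic energy functional. The sketch below evaluates it on a uniform density, where the result is analytic; the CNN itself is not reproduced here, and the grid setup is an illustrative assumption.

```python
import math

# Thomas-Fermi constant (atomic units): C_F = (3/10) * (3*pi^2)^(2/3)
C_F = 0.3 * (3.0 * math.pi ** 2) ** (2.0 / 3.0)

def t_thomas_fermi(rho, volume_element):
    """Local Thomas-Fermi kinetic energy T_TF = C_F * integral of rho^(5/3),
    evaluated on a real-space grid. This is the kind of local functional a
    learned nonlocal term would correct."""
    return C_F * sum(r ** (5.0 / 3.0) for r in rho) * volume_element

# Uniform density check: rho = N/V everywhere gives C_F * (N/V)^(5/3) * V
n_elec, volume, n_grid = 2.0, 8.0, 1000
rho = [n_elec / volume] * n_grid
t = t_thomas_fermi(rho, volume / n_grid)
```

For a uniform density the grid sum reproduces the analytic value exactly, which makes this a convenient sanity check before feeding real densities through a learned correction.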

  20. Segmenting delaminations in carbon fiber reinforced polymer composite CT using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Sammons, Daniel; Winfree, William P.; Burke, Eric; Ji, Shuiwang

    2016-02-01

    Nondestructive evaluation (NDE) utilizes a variety of techniques to inspect various materials for defects without causing changes to the material. X-ray computed tomography (CT) produces large volumes of three dimensional image data. Using the task of identifying delaminations in carbon fiber reinforced polymer (CFRP) composite CT, this work shows that it is possible to automate the analysis of these large volumes of CT data using a machine learning model known as a convolutional neural network (CNN). Further, tests on simulated data sets show that with a robust set of experimental data, it may be possible to go beyond just identification and instead accurately characterize the size and shape of the delaminations with CNNs.

  1. New population synthesis model: Preliminary results for close double white dwarf populations

    NASA Astrophysics Data System (ADS)

    Toonen, Silvia; Nelemans, Gijs; Portegies Zwart, Simon F.

    2010-11-01

    An update is presented to the software package SeBa (Portegies Zwart and Verbunt [1], Nelemans et al. [2]) for simulating single star and binary evolution in which new stellar evolution tracks (Hurley et al. [3]) have been implemented. SeBa is applied to study the population of close double white dwarfs and the delay time distribution of double white dwarf mergers that may lead to Type Ia supernovae.

  3. The effect of whitening transformation on pooling operations in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua

    2015-12-01

    Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the pre-processing step, whitening transformation has widely been adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation to reduce the resolution of feature maps and achieve spatial invariance in convolutional neural networks. In most previous work, pooling methods have been chosen empirically. Our main purpose is therefore to study the relationship between whitening processing and pooling operations in convolutional autoencoders for image classification. We propose an adaptive pooling approach based on the concepts of information entropy to test the effect of whitening on pooling in different conditions. Experimental results on benchmark datasets indicate that the performance of pooling strategies is associated with the distribution of feature activations, which can be affected by whitening processing. This provides guidance for the selection of pooling methods in convolutional autoencoders and other convolutional neural networks.
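An entropy-guided choice between max and average pooling, in the spirit of the paper's adaptive approach, might look like the following sketch. The histogram binning and the entropy threshold are illustrative choices, not the authors' actual criterion.

```python
import math

def window_entropy(window, n_bins=8):
    """Shannon entropy (bits) of a histogram of activations in a window."""
    lo, hi = min(window), max(window)
    if hi == lo:
        return 0.0
    counts = [0] * n_bins
    for v in window:
        b = min(int((v - lo) / (hi - lo) * n_bins), n_bins - 1)
        counts[b] += 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def adaptive_pool(window, threshold=1.5):
    """Toy entropy-guided pooling: near-uniform (high-entropy) windows are
    averaged, peaked (low-entropy) windows are max-pooled. The threshold
    is an illustrative assumption."""
    if window_entropy(window) >= threshold:
        return sum(window) / len(window)
    return max(window)

peaked = [0.0, 0.0, 0.1, 5.0]   # one strong activation -> max pooling
spread = [1.0, 2.0, 3.0, 4.0]   # evenly spread activations -> averaging
```

The intuition matches the paper's finding: whitening changes the distribution of activations within windows, which in turn changes which pooling operator is appropriate.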

  4. Model simulation for periodic double-peaked outbursts in blazar OJ 287: binary black hole plus lighthouse effect

    NASA Astrophysics Data System (ADS)

    Qian, Shan-Jie

    2015-05-01

    The formation mechanism of the double-peaked optical outbursts observed in blazar OJ 287 is studied. It is shown that they could be explained in terms of a lighthouse effect for superluminal optical knots ejected from the center of the galaxy that move along helical magnetic fields. It is assumed that the orbital motion of the secondary black hole in the supermassive binary black hole system induces the 12-year quasi-periodicity in major optical outbursts by the interaction with the disk around the primary black hole. This interaction between the secondary black hole and the disk of the primary black hole (e.g. tidal effects or magnetic coupling) excites or injects plasmons (or relativistic plasmas plus magnetic field) into the jet which form superluminal knots. These knots are assumed to move along helical magnetic field lines to produce the optical double-peaked outbursts by the lighthouse effect. The four double-peaked outbursts observed in 1972, 1983, 1995 and 2005 are simulated using this model. It is shown that such lighthouse models are quite plausible and can fit the double-flaring behavior of the outbursts. The main requirement may be that in OJ 287 there exists a rather long (~40-60 pc) highly collimated zone, where the lighthouse effect occurs.
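The lighthouse effect can be caricatured in a few lines: as a knot moves along a helix its viewing angle oscillates, and the Doppler-boosted flux (scaling roughly as D^(3+alpha)) is periodically modulated. The Lorentz factor, helix geometry, and spectral index below are illustrative assumptions, not values fitted to OJ 287.

```python
import math

def doppler_factor(gamma, theta):
    """Relativistic Doppler factor for a knot moving at angle theta (rad)
    to the line of sight with Lorentz factor gamma."""
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

def helical_flux(gamma, pitch, psi0, phase, alpha=1.0):
    """Toy lighthouse effect: the viewing angle oscillates as the knot
    rides the helix, and the observed flux scales as D^(3+alpha).
    Geometry is illustrative only."""
    theta = abs(psi0 + pitch * math.cos(phase))  # viewing angle along helix
    return doppler_factor(gamma, theta) ** (3.0 + alpha)

# Flux peaks when the helix carries the knot closest to the line of sight
f_on = helical_flux(10.0, math.radians(2.0), math.radians(1.0), math.pi)
f_off = helical_flux(10.0, math.radians(2.0), math.radians(1.0), 0.0)
```

Even a one-degree swing in viewing angle changes the boosted flux by a large factor when the angle is comparable to 1/gamma, which is what lets a helical trajectory turn smooth knot motion into sharp flares.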

  5. Double Stimulation in the Waiting Experiment with Collectives: Testing a Vygotskian Model of the Emergence of Volitional Action.

    PubMed

    Sannino, Annalisa

    2016-03-01

    This study explores what human conduct looks like when research embraces uncertainty and distances itself from the dominant methodological demands of control and predictability. The context is the waiting experiment originally designed in Kurt Lewin's research group, discussed by Vygotsky as an instance among a range of experiments related to his notion of double stimulation. Little attention has been paid to this experiment, despite its great heuristic potential for charting the terrain of uncertainty and agency in experimental settings. Behind the notion of double stimulation lies Vygotsky's distinctive view of human beings' ability to intentionally shape their actions. Accordingly, human beings in situations of uncertainty and cognitive incongruity can rely on artifacts which serve the function of auxiliary motives and which help them undertake volitional actions. A double stimulation model depicting how such actions emerge is tested in a waiting experiment conducted with collectives, in contrast with a previous waiting experiment conducted with individuals. The model, validated in the waiting experiment with individual participants, applies only to a limited extent to the collectives. The analysis shows the extent to which double stimulation takes place in the waiting experiment with collectives, the differences between the two experiments, and what implications can be drawn for an expanded view on experiments.

  6. Adjustment in mothers of children with Asperger syndrome: an application of the double ABCX model of family adjustment.

    PubMed

    Pakenham, Kenneth I; Samios, Christina; Sofronoff, Kate

    2005-05-01

    The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between 10 and 12 years. Results of correlations showed that each of the model components was related to one or more domains of maternal adjustment in the direction predicted, with the exception of problem-focused coping. Hierarchical regression analyses demonstrated that, after controlling for the effects of relevant demographics, stressor severity, pile-up of demands and coping were related to adjustment. Findings indicate the utility of the double ABCX model in guiding research into parental adjustment when caring for a child with Asperger syndrome. Limitations of the study and clinical implications are discussed.

  7. Double screening

    SciTech Connect

    Gratia, Pierre; Hu, Wayne; Joyce, Austin; Ribeiro, Raquel H.

    2016-06-15

    Attempts to modify gravity in the infrared typically require a screening mechanism to ensure consistency with local tests of gravity. These screening mechanisms fit into three broad classes; we investigate theories which are capable of exhibiting more than one type of screening. Specifically, we focus on a simple model which exhibits both Vainshtein and kinetic screening. We point out that due to the two characteristic length scales in the problem, the type of screening that dominates depends on the mass of the sourcing object, allowing for different phenomenology at different scales. We consider embedding this double screening phenomenology in a broader cosmological scenario and show that the simplest examples that exhibit double screening are radiatively stable.

  8. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    PubMed Central

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829

  10. Neurodegeneration in a Drosophila model of adrenoleukodystrophy: the roles of the Bubblegum and Double bubble acyl-CoA synthetases

    PubMed Central

    Sivachenko, Anna; Gordon, Hannah B.; Kimball, Suzanne S.; Gavin, Erin J.; Bonkowsky, Joshua L.; Letsou, Anthea

    2016-01-01

    Debilitating neurodegenerative conditions with metabolic origins affect millions of individuals worldwide. Still, for most of these neurometabolic disorders there are neither cures nor disease-modifying therapies, and novel animal models are needed for elucidation of disease pathology and identification of potential therapeutic agents. To date, metabolic neurodegenerative disease has been modeled in animals with only limited success, in part because existing models constitute analyses of single mutants and have thus overlooked potential redundancy within metabolic gene pathways associated with disease. Here, we present the first analysis of a very-long-chain acyl-CoA synthetase (ACS) double mutant. We show that the Drosophila bubblegum (bgm) and double bubble (dbb) genes have overlapping functions, and that the consequences of double knockout of both bubblegum and double bubble in the fly brain are profound, affecting behavior and brain morphology, and providing the best paradigm to date for an animal model of adrenoleukodystrophy (ALD), a fatal childhood neurodegenerative disease associated with the accumulation of very-long-chain fatty acids. Using this more fully penetrant model of disease to interrogate brain morphology at the level of electron microscopy, we show that dysregulation of fatty acid metabolism via disruption of ACS function in vivo is causal of neurodegenerative pathologies that are evident in both neuronal cells and their supporting cell populations, and leads ultimately to lytic cell death in affected areas of the brain. Finally, in an extension of our model system to the study of human disease, we describe our identification of an individual with leukodystrophy who harbors a rare mutation in SLC27a6 (encoding a very-long-chain ACS), a human homolog of bgm and dbb. PMID:26893370

  11. Low-dose CT via convolutional neural network

    PubMed Central

    Chen, Hu; Zhang, Yi; Zhang, Weihua; Liao, Peixi; Li, Ke; Zhou, Jiliu; Wang, Ge

    2017-01-01

    In order to reduce the potential radiation risk, low-dose CT has attracted increasing attention. However, simply lowering the radiation dose significantly degrades the image quality. In this paper, we propose a new noise reduction method for low-dose CT via deep learning without accessing original projection data. A deep convolutional neural network is used to map low-dose CT images to their corresponding normal-dose counterparts in a patch-by-patch fashion. Qualitative results demonstrate the great potential of the proposed method for artifact reduction and structure preservation. In terms of the quantitative metrics, the proposed method has shown a substantial improvement in PSNR, RMSE and SSIM over the competing state-of-the-art methods. Furthermore, the speed of our method is one order of magnitude faster than the iterative reconstruction and patch-based image denoising methods. PMID:28270976
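The patch-by-patch scheme can be sketched independently of the network: extract overlapping patches, apply a patch-wise mapping (a trained denoising CNN in the paper; the identity mapping here, so the result is checkable), and average the overlapping outputs back into an image.

```python
def extract_patches(img, k):
    """All overlapping k-by-k patches of a 2-D image (list of lists)."""
    h, w = len(img), len(img[0])
    return [[row[x:x + k] for row in img[y:y + k]]
            for y in range(h - k + 1) for x in range(w - k + 1)]

def reassemble(patches, h, w, k, f):
    """Apply a patch-wise mapping f (a denoising CNN would go here) and
    average the overlapping outputs back into an h-by-w image. Sketch of
    the patch-by-patch scheme, not the paper's network."""
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    i = 0
    for y in range(h - k + 1):
        for x in range(w - k + 1):
            out = f(patches[i]); i += 1
            for dy in range(k):
                for dx in range(k):
                    acc[y + dy][x + dx] += out[dy][dx]
                    cnt[y + dy][x + dx] += 1
    return [[acc[y][x] / cnt[y][x] for x in range(w)] for y in range(h)]

img = [[float(x + 4 * y) for x in range(4)] for y in range(4)]
patches = extract_patches(img, 2)
# With the identity mapping, averaging the overlaps reproduces the input
out = reassemble(patches, 4, 4, 2, lambda p: p)
```

Averaging the overlaps is what suppresses blocking artifacts at patch boundaries when the mapping is an actual denoiser rather than the identity.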

  12. Drug-Drug Interaction Extraction via Convolutional Neural Networks

    PubMed Central

    Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong

    2016-01-01

    Drug-drug interaction (DDI) extraction, a typical relation extraction task in natural language processing (NLP), has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method that requires almost no manually defined features, have exhibited great potential for many NLP tasks. It is therefore worth employing CNN for DDI extraction, which had not previously been investigated. We proposed a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best performing method by 2.75%. PMID:26941831

  13. Convolution quadrature for the wave equation with impedance boundary conditions

    NASA Astrophysics Data System (ADS)

    Sauter, S. A.; Schanz, M.

    2017-04-01

    We consider the numerical solution of the wave equation with impedance boundary conditions and start from a boundary integral formulation for its discretization. We develop the generalized convolution quadrature (gCQ) to solve the arising acoustic retarded potential integral equation for this impedance problem. For the special case of scattering from a spherical object, we derive representations of analytic solutions which allow us to investigate the effect of the impedance coefficient on the acoustic pressure analytically. We performed systematic numerical experiments to study the convergence rates as well as the sensitivity of the acoustic pressure to the impedance coefficient. Finally, we apply this method to simulate the acoustic pressure in a building with a fairly complicated geometry and to study the influence of the impedance coefficient in this setting as well.
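For readers unfamiliar with the technique, classical fixed-step (Lubich) convolution quadrature, of which the gCQ used here is a variable-step generalization, can be sketched as follows; K(s) denotes the Laplace transform of the kernel k, h the time step, and δ(ζ) the generating function of a BDF method (a background sketch, not the paper's derivation):

```latex
(k * g)(t_n) = \int_0^{t_n} k(t_n - \tau)\, g(\tau)\, \mathrm{d}\tau
  \;\approx\; \sum_{j=0}^{n} \omega_{n-j}(h)\, g(jh),
\qquad
K\!\left(\frac{\delta(\zeta)}{h}\right) = \sum_{n=0}^{\infty} \omega_n(h)\, \zeta^n,
\qquad
\delta(\zeta) = \sum_{k=1}^{p} \frac{(1-\zeta)^k}{k} \quad \text{(BDF-}p\text{)}.
```

The quadrature weights ω_n(h) are thus read off from a power series in ζ, so the time discretization inherits the stability of the underlying multistep method.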

  14. Discovering characteristic landmarks on ancient coins using convolutional networks

    NASA Astrophysics Data System (ADS)

    Kim, Jongpil; Pavlovic, Vladimir

    2017-01-01

    We propose a method to find characteristic landmarks and recognize ancient Roman imperial coins using deep convolutional neural networks (CNNs) combined with expert-designed domain hierarchies. We first propose a framework to recognize Roman coins that exploits the hierarchical knowledge structure embedded in the coin domain, which we combine with the CNN-based category classifiers. We next formulate an optimization problem to discover class-specific salient coin regions. Analysis of discovered salient regions confirms that they are largely consistent with human expert annotations. Experimental results show that the proposed framework is able to effectively recognize ancient Roman coins as well as successfully identify landmarks on the coins. For this research, we have collected a Roman coin dataset where all coins are annotated and consist of obverse (head) and reverse (tail) images.

  15. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.

  16. Finding the complete path and weight enumerators of convolutional codes

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I.

    1990-01-01

    A method for obtaining the complete path enumerator T(D, L, I) of a convolutional code is described. A system of algebraic equations is solved, using a new algorithm for computing determinants, to obtain T(D, L, I) for the (7,1/2) NASA standard code. Generating functions, derived from T(D, L, I) are used to upper bound Viterbi decoder error rates. This technique is currently feasible for constraint length K less than 10 codes. A practical, fast algorithm is presented for computing the leading nonzero coefficients of the generating functions used to bound the performance of constraint length K less than 20 codes. Code profiles with about 50 nonzero coefficients are obtained with this algorithm for the experimental K = 15, rate 1/4, code in the Galileo mission and for the proposed K = 15, rate 1/6, 2-dB code.

  17. Enhanced Line Integral Convolution with Flow Feature Detection

    NASA Technical Reports Server (NTRS)

    Lane, David; Okada, Arthur

    1996-01-01

    The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.
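The blurring-along-streamlines idea underlying LIC can be sketched as follows; this is a minimal, unoptimized NumPy version (the function name, unit-step streamline tracing, and periodic wrap-around are my own simplifications, not the enhanced method of the paper):

```python
import numpy as np

def lic(noise, vx, vy, L=10):
    """Minimal line integral convolution: average the noise texture along
    streamlines traced forward and backward from every pixel."""
    H, W = noise.shape
    out = np.zeros_like(noise)
    for i in range(H):
        for j in range(W):
            acc, cnt = 0.0, 0
            for sign in (+1, -1):
                y, x = float(i), float(j)
                for _ in range(L):
                    yi, xi = int(round(y)) % H, int(round(x)) % W
                    acc += noise[yi, xi]
                    cnt += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    n = np.hypot(u, v) or 1.0  # guard zero-length vectors
                    x += sign * u / n
                    y += sign * v / n
            out[i, j] = acc / cnt
    return out

rng = np.random.default_rng(0)
noise = rng.random((32, 32))
vx, vy = np.ones((32, 32)), np.zeros((32, 32))  # uniform horizontal flow
tex = lic(noise, vx, vy, L=8)
```

For the uniform horizontal field, the output varies slowly along rows and retains the noise's variation across rows, which is exactly the streak pattern LIC is meant to produce.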

  18. Modifying real convolutional codes for protecting digital filtering systems

    NASA Technical Reports Server (NTRS)

    Redinbo, G. R.; Zagar, Bernhard

    1993-01-01

    A novel method is proposed for protecting digital filters from temporary and permanent failures that are not easily detected by conventional fault-tolerant computer design principles, on the basis of the error-detecting properties of real convolutional codes. Erroneous behavior is detected by externally comparing the calculated and regenerated parity samples. Great simplifications are obtainable by modifying the code structure to yield simplified parity channels with finite impulse response structures. A matrix equation involving the original parity values of the code and the polynomial of the digital filter's transfer function is formed, and row manipulations separate this equation into a set of homogeneous equations constraining the modifying scaling coefficients and another set which defines the code parity values' implementation.

  19. Stability Training for Convolutional Neural Nets in LArTPC

    NASA Astrophysics Data System (ADS)

    Lindsay, Matt; Wongjirad, Taritree

    2017-01-01

    Convolutional Neural Nets (CNNs) are the state of the art for many problems in computer vision and are a promising method for classifying interactions in Liquid Argon Time Projection Chambers (LArTPCs) used in neutrino oscillation experiments. Despite their good performance, CNNs are not without drawbacks, chief among them vulnerability to noise and small perturbations of the input. One solution to this problem is a modification of the learning process called Stability Training, developed by Zheng et al. We verify existing work, demonstrating the volatility caused by simple Gaussian noise and showing that this volatility can be nearly eliminated with Stability Training. We then go further and show that a traditional CNN is also vulnerable to realistic experimental noise, whereas a stability-trained CNN remains accurate despite the noise. This further adds to the optimism for CNNs for work in LArTPCs and other applications.

  20. Convolutional Neural Networks for patient-specific ECG classification.

    PubMed

    Kiranyaz, Serkan; Ince, Turker; Hamila, Ridha; Gabbouj, Moncef

    2015-01-01

    We propose a fast and accurate patient-specific electrocardiogram (ECG) classification and monitoring system using an adaptive implementation of 1D Convolutional Neural Networks (CNNs) that can fuse feature extraction and classification into a unified learner. In this way, a dedicated CNN will be trained for each patient by using relatively small common and patient-specific training data and thus it can also be used to classify long ECG records such as Holter registers in a fast and accurate manner. Alternatively, such a solution can conveniently be used for real-time ECG monitoring and early alert system on a light-weight wearable device. The experimental results demonstrate that the proposed system achieves a superior classification performance for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB).

  1. $\\mathtt {Deepr}$: A Convolutional Net for Medical Records.

    PubMed

    Nguyen, Phuoc; Tran, Truyen; Wickramasinghe, Nilmini; Venkatesh, Svetha

    2017-01-01

    Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive regular clinical motifs from irregular episodic records. We present Deepr (short for Deep record), a new end-to-end deep learning system that learns to extract features from medical records and predicts future risk automatically. Deepr transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. Deepr permits transparent inspection and visualization of its inner working. We validate Deepr on hospital data to predict unplanned readmission after discharge. Deepr achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space.

  2. Fast convolution with free-space Green's functions

    NASA Astrophysics Data System (ADS)

    Vico, Felipe; Greengard, Leslie; Ferrando, Miguel

    2016-10-01

    We introduce a fast algorithm for computing volume potentials - that is, the convolution of a translation invariant, free-space Green's function with a compactly supported source distribution defined on a uniform grid. The algorithm relies on regularizing the Fourier transform of the Green's function by cutting off the interaction in physical space beyond the domain of interest. This permits the straightforward application of trapezoidal quadrature and the standard FFT, with superalgebraic convergence for smooth data. Moreover, the method can be interpreted as employing a Nystrom discretization of the corresponding integral operator, with matrix entries which can be obtained explicitly and rapidly. This is of use in the design of preconditioners or fast direct solvers for a variety of volume integral equations. The method proposed permits the computation of any derivative of the potential, at the cost of an additional FFT.
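The grid-convolution-by-FFT core of such a method can be sketched in one dimension as follows. This minimal NumPy illustration periodizes the kernel on a doubled grid so that the circular FFT product reproduces the aperiodic convolution sum exactly; the paper's Fourier-space regularization of the Green's function is not reproduced, and the Gaussian kernel is purely illustrative:

```python
import numpy as np

def freespace_conv(source, green, h):
    """Convolve a compactly supported source on a uniform 1-D grid of
    spacing h with a translation-invariant kernel green(x), i.e. compute
    u_i = h * sum_j green((i - j) h) * source_j via one FFT of size 2n-1."""
    n = source.size
    m = 2 * n - 1
    idx = np.arange(m)
    # entry p of the periodised kernel holds green(p*h) for p < n
    # and green((p - m)*h) for p >= n (the negative offsets)
    offsets = np.where(idx < n, idx, idx - m) * h
    k = green(offsets)
    u = np.fft.ifft(np.fft.fft(k) * np.fft.fft(source, m)).real * h
    return u[:n]

# check against the direct O(n^2) sum with a smooth illustrative kernel
rng = np.random.default_rng(1)
s, h = rng.random(16), 0.1
g = lambda x: np.exp(-x ** 2)
u_fft = freespace_conv(s, g, h)
u_direct = np.array([h * sum(g((i - j) * h) * s[j] for j in range(16))
                     for i in range(16)])
```

The same doubled-grid construction carries over dimension by dimension, which is why the method needs only standard FFTs once the kernel's Fourier transform has been regularized.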

  3. Rapid Exact Signal Scanning With Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Thom, Markus; Gritschneder, Franz

    2017-03-01

    A rigorous formulation of the dynamics of a signal processing scheme aimed at dense signal scanning without any loss in accuracy is introduced and analyzed. Related methods proposed in the recent past lack a satisfactory analysis of whether they actually fulfill any exactness constraints. This is improved through an exact characterization of the requirements for a sound sliding window approach. The tools developed in this paper are especially beneficial if Convolutional Neural Networks are employed, but can also be used as a more general framework to validate related approaches to signal scanning. The proposed theory helps to eliminate redundant computations and renders special case treatment unnecessary, resulting in a dramatic boost in efficiency particularly on massively parallel processors. This is demonstrated both theoretically in a computational complexity analysis and empirically on modern parallel processors.

  4. Plane-wave decomposition by spherical-convolution microphone array

    NASA Astrophysics Data System (ADS)

    Rafaely, Boaz; Park, Munhum

    2004-05-01

    Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.

  5. Truncation Depth Rule-of-Thumb for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
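The new rule is a one-liner; the sketch below (function name mine) also checks that at rate 1/2 it reduces to the classic five-memory-lengths rule:

```python
def truncation_depth(m, r):
    """Rule-of-thumb truncation depth for a convolutional code of
    memory length m and rate r: 2.5 * m / (1 - r) trellis stages."""
    return 2.5 * m / (1.0 - r)

# at rate 1/2 this reduces to the classic "five times the memory" rule
d_half = truncation_depth(6, 0.5)            # -> 30.0
# higher-rate codes need correspondingly deeper truncation
d_three_quarters = truncation_depth(6, 0.75) # -> 60.0
```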

  6. Radio frequency interference mitigation using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Akeret, J.; Chang, C.; Lucchi, A.; Refregier, A.

    2017-01-01

    We propose a novel approach for mitigating radio frequency interference (RFI) signals in radio data using the latest advances in deep learning. We employ a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. We train and assess the performance of this network using the HIDE & SEEK radio data simulation and processing packages, as well as early Science Verification data acquired with the 7m single-dish telescope at the Bleien Observatory. We find that our U-Net implementation shows accuracy competitive with classical RFI mitigation algorithms such as SEEK's SUMTHRESHOLD implementation. We publish our U-Net software package on GitHub under the GPLv3 license.

  7. Star-galaxy classification using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Kim, Edward J.; Brunner, Robert J.

    2017-02-01

    Most existing star-galaxy classifiers use the reduced summary information from catalogues, requiring careful feature extraction and selection. The latest advances in machine learning that use deep convolutional neural networks (ConvNets) allow a machine to automatically learn the features directly from the data, minimizing the need for input from human experts. We present a star-galaxy classification framework that uses deep ConvNets directly on the reduced, calibrated pixel values. Using data from the Sloan Digital Sky Survey and the Canada-France-Hawaii Telescope Lensing Survey, we demonstrate that ConvNets are able to produce accurate and well-calibrated probabilistic classifications that are competitive with conventional machine learning techniques. Future advances in deep learning may bring more success with current and forthcoming photometric surveys, such as the Dark Energy Survey and the Large Synoptic Survey Telescope, because deep neural networks require very little manual feature engineering.

  8. Digital Elevation Models Aid the Analysis of Double Layered Ejecta (DLE) Impact Craters on Mars

    NASA Astrophysics Data System (ADS)

    Mouginis-Mark, P. J.; Boyce, J. M.; Garbeil, H.

    2014-12-01

    Considerable debate has recently taken place concerning the origin of the inner and outer ejecta layers of double layered ejecta (DLE) craters on Mars. For craters in the diameter range ~10 to ~25 km, the inner ejecta layer of DLE craters displays characteristic grooves extending from the rim crest, which have led investigators to propose three hypotheses for their formation: (1) deposition of the primary ejecta and subsequent surface scouring by either atmospheric vortices or a base surge; (2) emplacement through a landslide of the near-rim-crest ejecta; and (3) instabilities (similar to Görtler vortices) generated by high flow rates and high granular temperatures. Critical to discriminating between these models is the topographic expression of both the ejecta layer and the groove geometry. To address this problem, we have made several digital elevation models (DEMs) from CTX and HiRISE stereo pairs using the Ames Stereo Pipeline at scales of 24 m/pixel and 1 m/pixel, respectively. These DEMs allow several key observations to be made that bear directly upon the origin of the grooves associated with DLE craters: (1) Grooves formed on the sloping ejecta layer surfaces right up to the preserved crater rim; (2) There is clear evidence that grooves traverse the topographic boundary between the inner and outer ejecta layers; and (3) There are at least two different sets of radial grooves, with smaller grooves imprinted upon the larger grooves. There are "deep-wide" grooves that have a width of ~200 m and a depth of ~10 m, and there are "shallow-narrow" grooves with a width of <50 m and a depth of <5 m. These two scales of grooves are not consistent with formation by a process analogous to a landslide. Two different sets of grooves would imply that two different flow depths would have to exist simultaneously if the grooves were formed by shear within the flow, something that is not physically possible.
All three observations can only be consistent with a model of groove formation

  9. Sensitivity analysis of modelled responses of vegetation dynamics on the Tibetan Plateau to doubled CO2 and associated climate change

    NASA Astrophysics Data System (ADS)

    Qiu, Linjing; Liu, Xiaodong

    2016-04-01

    Increases in the atmospheric CO2 concentration affect both the global climate and plant metabolism, particularly for high-altitude ecosystems. Because of the limitations of field experiments, it is difficult to evaluate the responses of vegetation to CO2 increases and separate the effects of CO2 and associated climate change using direct observations at a regional scale. Here, we used the Community Earth System Model (CESM, version 1.0.4) to examine these effects. Initiated from bare ground, we simulated the vegetation composition and productivity under two CO2 concentrations (367 and 734 ppm) and associated climate conditions to separate the comparative contributions of doubled CO2 and CO2-induced climate change to the vegetation dynamics on the Tibetan Plateau (TP). The results revealed whether the individual effect of doubled CO2 and its induced climate change or their combined effects caused a decrease in the foliage projective cover (FPC) of C3 arctic grass on the TP. Both doubled CO2 and climate change had a positive effect on the FPC of the temperate and tropical tree plant functional types (PFTs) on the TP, but doubled CO2 led to FPC decreases of C4 grass and broadleaf deciduous shrubs, whereas the climate change resulted in FPC decrease in C3 non-arctic grass and boreal needleleaf evergreen trees. Although the combination of the doubled CO2 and associated climate change increased the area-averaged leaf area index (LAI), the effect of doubled CO2 on the LAI increase (95 %) was larger than the effect of CO2-induced climate change (5 %). Similarly, the simulated gross primary productivity (GPP) and net primary productivity (NPP) were primarily sensitive to the doubled CO2, compared with the CO2-induced climate change, which alone increased the regional GPP and NPP by 251.22 and 87.79 g C m-2 year-1, respectively. Regionally, the vegetation response was most noticeable in the south-eastern TP. Although both doubled CO2 and associated climate change had a

  10. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    SciTech Connect

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; Limbrick, Daniel B.; Black, Jeffrey D.

    2015-08-07

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Likewise, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
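The dual-source waveform can be sketched as follows; function names and all parameter values are illustrative placeholders (the paper extracts per-cell parameters from circuit simulation, which is not reproduced here):

```python
import numpy as np

def double_exp(t, i0, tau_rise, tau_fall):
    """Classic double-exponential current pulse (zero before t = 0)."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0.0,
                    i0 * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise)),
                    0.0)

def dual_double_exp(t, prompt, tail):
    """Two double-exponential sources in parallel: a fast prompt component
    plus a slower tail, each parameterised as (scale, rise tau, fall tau)."""
    return double_exp(t, *prompt) + double_exp(t, *tail)

# purely illustrative parameters (amps, seconds) -- not extracted values
t = np.linspace(0.0, 2e-9, 400)
i_set = dual_double_exp(t,
                        (1.0e-3, 5e-12, 50e-12),    # prompt component
                        (0.2e-3, 20e-12, 500e-12))  # slow tail
```

Superposing the two pulses is what lets the waveform capture both the prompt charge collection and the longer restoring-current-limited tail that a single double-exponential source misses.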

  13. Introducing single-crystal scattering and optical potentials into MCNPX: Predicting neutron emission from a convoluted moderator

    SciTech Connect

    Gallmeier, F. X.; Iverson, E. B.; Lu, W.; Baxter, D. V.; Muhrer, G.; Ansell, S.

    2016-01-08

    Neutron transport simulation codes are an indispensable tool used for the design and construction of modern neutron scattering facilities and instrumentation. It has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modelled by the existing codes. Particularly, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4 and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential ingredients for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. Furthermore, we have also generated silicon scattering kernels for single crystals of definable orientation with respect to an incoming neutron beam. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cut-off at locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon/void layers. Finally the convoluted moderator experiments described by Iverson et al. were simulated and we find satisfactory agreement between the measurement and the results of

  14. Mesh Convolutional Restricted Boltzmann Machines for Unsupervised Learning of Features With Structure Preservation on 3-D Meshes.

    PubMed

    Han, Zhizhong; Liu, Zhenbao; Han, Junwei; Vong, Chi-Man; Bu, Shuhui; Chen, Chun Long Philip

    2016-06-30

    Discriminative features of 3-D meshes are significant to many 3-D shape analysis tasks. However, handcrafted descriptors and traditional unsupervised 3-D feature learning methods suffer from several significant weaknesses: 1) the extensive human intervention is involved; 2) the local and global structure information of 3-D meshes cannot be preserved, which is in fact an important source of discriminability; 3) the irregular vertex topology and arbitrary resolution of 3-D meshes do not allow the direct application of the popular deep learning models; 4) the orientation is ambiguous on the mesh surface; and 5) the effect of rigid and nonrigid transformations on 3-D meshes cannot be eliminated. As a remedy, we propose a deep learning model with a novel irregular model structure, called mesh convolutional restricted Boltzmann machines (MCRBMs). MCRBM aims to simultaneously learn structure-preserving local and global features from a novel raw representation, local function energy distribution. In addition, multiple MCRBMs can be stacked into a deeper model, called mesh convolutional deep belief networks (MCDBNs). MCDBN employs a novel local structure preserving convolution (LSPC) strategy to convolve the geometry and the local structure learned by the lower MCRBM to the upper MCRBM. LSPC facilitates resolving the challenging issue of the orientation ambiguity on the mesh surface in MCDBN. Experiments using the proposed MCRBM and MCDBN were conducted on three common aspects: global shape retrieval, partial shape retrieval, and shape correspondence. Results show that the features learned by the proposed methods outperform the other state-of-the-art 3-D shape features.

  15. Introducing single-crystal scattering and optical potentials into MCNPX: Predicting neutron emission from a convoluted moderator

    NASA Astrophysics Data System (ADS)

    Gallmeier, F. X.; Iverson, E. B.; Lu, W.; Baxter, D. V.; Muhrer, G.; Ansell, S.

    2016-04-01

    Neutron transport simulation codes are indispensable tools for the design and construction of modern neutron scattering facilities and instrumentation. Recently, it has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modeled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4, and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential phenomena for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. We have also generated silicon scattering kernels for single crystals of definable orientation. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cut-off from locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon and void layers. Finally we simulated the convoluted moderator experiments described by Iverson et al. and found satisfactory agreement between the measurements and the simulations performed with the tools we have developed.

  16. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.

    PubMed

    He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian

    2015-09-01

    Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.
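
    The adaptive binning behind spatial pyramid pooling can be sketched in a few lines. This is an illustrative single-channel version: the pyramid levels and max pooling follow the paper, everything else (plain Python lists, no channels) is simplified:

```python
import math

def spp_max_pool(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid pooling of a single-channel 2-D feature map.

    Max-pools the map into level x level grids of adaptively sized bins
    and concatenates the results, so the output length (sum of level**2,
    here 21) is fixed regardless of the input height and width.
    """
    h, w = len(feature_map), len(feature_map[0])
    pooled = []
    for n in levels:
        for i in range(n):
            # Adaptive bin boundaries: floor/ceil so the bins tile the map.
            r0, r1 = math.floor(i * h / n), math.ceil((i + 1) * h / n)
            for j in range(n):
                c0, c1 = math.floor(j * w / n), math.ceil((j + 1) * w / n)
                pooled.append(max(feature_map[r][c]
                                  for r in range(r0, r1)
                                  for c in range(c0, c1)))
    return pooled
```

    Feeding maps of different sizes through the same function yields vectors of identical length, which is exactly what lets SPP-net accept arbitrary image sizes ahead of its fully-connected layers.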

  17. Human Umbilical Cord Blood Mononuclear Cells in a Double-Hit Model of Bronchopulmonary Dysplasia in Neonatal Mice

    PubMed Central

    Mildau, Céline; Shen, Jie; Kasoha, Mariz; Laschke, Matthias W.; Roolfs, Torge; Schmiedl, Andreas; Tschernig, Thomas; Bieback, Karen; Gortner, Ludwig

    2013-01-01

    Background Bronchopulmonary dysplasia (BPD) is a major complication of very preterm birth, and treatment options are still limited. Stem cells from different sources have been used successfully in experimental BPD induced by postnatal hyperoxia. Objectives We investigated the effect of umbilical cord blood mononuclear cells (MNCs) in a new double-hit mouse model of BPD. Methods For the double hit, date-mated mice were subjected to hypoxia, and the offspring were thereafter exposed to hyperoxia. Human umbilical cord blood MNCs were given intraperitoneally on day P7. The outcome variables were physical development (auxology), lung structure (histomorphometry), and the expression of markers for lung maturation and inflammation at the mRNA and protein levels. Pre- and postnatal normoxic pups and sham-treated double-hit pups served as control groups. Results Compared to normoxic controls, sham-treated double-hit animals showed impaired physical and lung development, with reduced alveolarization and increased septal thickness. Electron microscopy revealed a reduced volume density of lamellar bodies. Pulmonary mRNA expression of surfactant proteins B and C, Mtor and Crabp1 was reduced. Expression of Igf1 was increased. Treatment with umbilical cord blood MNCs normalized septal thickness and Mtor mRNA expression to the levels of normoxic controls. Tgfb3 mRNA expression and the pro-inflammatory IL-1β protein concentration were decreased. Conclusion The results of our study demonstrate the therapeutic potential of umbilical cord blood MNCs in a new double-hit model of BPD in newborn mice. We found improved lung structure and effects at the molecular level. Further studies are needed to address the role of systemic administration of MNCs in experimental BPD. PMID:24069341

  18. Building dynamical models from data and prior knowledge: The case of the first period-doubling bifurcation

    NASA Astrophysics Data System (ADS)

    Aguirre, Luis Antonio; Furtado, Edgar Campos

    2007-10-01

    This paper reviews some aspects of nonlinear model building from data with (gray box) and without (black box) prior knowledge. The model class is very important because it determines two aspects of the final model, namely (i) the type of nonlinearity that can be accurately approximated and (ii) the type of prior knowledge that can be taken into account. Such features are usually in conflict when it comes to choosing the model class. The problem of model structure selection is also reviewed. It is argued that such a problem is philosophically different depending on the model class, and it is suggested that the choice of model class should be based on the type of a priori knowledge available. A procedure is proposed to build polynomial models from data on a Poincaré section and prior knowledge about the first period-doubling bifurcation, for which the normal form is also polynomial. The final models approximate dynamical data in a least-squares sense and, by design, present the first period-doubling bifurcation at a specified value of the parameters. The procedure is illustrated by means of simulated examples.
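
    As a toy version of the black-box part of this procedure, the sketch below fits a polynomial map to a scalar time series in a least-squares sense by solving the normal equations directly. The paper's bifurcation constraint is omitted, and the logistic-map data and quadratic structure are illustrative assumptions:

```python
def fit_quadratic_map(xs):
    """Least-squares fit of x[n+1] = a + b*x[n] + c*x[n]**2 to a scalar
    time series, solving the 3x3 normal equations by Gaussian elimination."""
    X = [[1.0, x, x * x] for x in xs[:-1]]
    y = xs[1:]
    # Normal equations A * theta = rhs with A = X^T X, rhs = X^T y.
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    rhs = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        rhs[k], rhs[p] = rhs[p], rhs[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            rhs[r] -= f * rhs[k]
    theta = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):  # back-substitution
        theta[k] = (rhs[k] - sum(A[k][c] * theta[c]
                                 for c in range(k + 1, 3))) / A[k][k]
    return theta  # (a, b, c)
```

    Fitting data generated by the chaotic logistic map x[n+1] = 3.9 x[n](1 - x[n]) recovers a ≈ 0, b ≈ 3.9, c ≈ -3.9; the paper's gray-box step would additionally constrain such coefficients so the fitted map bifurcates at a prescribed parameter value.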

  19. A compact quantum correction model for symmetric double gate metal-oxide-semiconductor field-effect transistor

    SciTech Connect

    Cho, Edward Namkyu; Shin, Yong Hyeon; Yun, Ilgu

    2014-11-07

    A compact quantum correction model for a symmetric double gate (DG) metal-oxide-semiconductor field-effect transistor (MOSFET) is investigated. The compact quantum correction model is proposed from the concepts of the threshold voltage shift (ΔV{sub TH}{sup QM}) and the gate capacitance (C{sub g}) degradation. First of all, ΔV{sub TH}{sup QM} induced by quantum mechanical (QM) effects is modeled. The C{sub g} degradation is then modeled by introducing the inversion layer centroid. With ΔV{sub TH}{sup QM} and the C{sub g} degradation, the QM effects are implemented in a previously reported classical model, and a comparison between the proposed quantum correction model and numerical simulation results is presented. Based on the results, the proposed quantum correction model is applicable to the compact model of the DG MOSFET.

  20. Kicked-Harper model versus on-resonance double-kicked rotor model: from spectral difference to topological equivalence.

    PubMed

    Wang, Hailong; Ho, Derek Y H; Lawton, Wayne; Wang, Jiao; Gong, Jiangbin

    2013-11-01

    Recent studies have established that, in addition to the well-known kicked-Harper model (KHM), an on-resonance double-kicked rotor (ORDKR) model also has Hofstadter's butterfly Floquet spectrum, with strong resemblance to the standard Hofstadter spectrum that is a paradigm in studies of the integer quantum Hall effect. Earlier it was shown that the quasienergy spectra of these two dynamical models (i) can exactly overlap with each other if an effective Planck constant takes irrational multiples of 2π and (ii) will be different if the same parameter takes rational multiples of 2π. This work makes detailed comparisons between these two models, with an effective Planck constant given by 2πM/N, where M and N are coprime and odd integers. It is found that the ORDKR spectrum (with two periodic kicking sequences having the same kick strength) has one flat band and N-1 nonflat bands with the largest bandwidth decaying in a power law as ~K^(N+2), where K is a kick strength parameter. The existence of a flat band is strictly proven and the power-law scaling, numerically checked for a number of cases, is also analytically proven for a three-band case. By contrast, the KHM does not have any flat band and its bandwidths scale linearly with K. This is shown to result in dramatic differences in dynamical behavior, such as transient (but extremely long) dynamical localization in ORDKR, which is absent in the KHM. Finally, we show that despite these differences, there exist simple extensions of the KHM and ORDKR model (upon introducing an additional periodic phase parameter) such that the resulting extended KHM and ORDKR model are actually topologically equivalent, i.e., they yield exactly the same Floquet-band Chern numbers and display topological phase transitions at the same kick strengths. A theoretical derivation of this topological equivalence is provided. These results are also of interest to our current understanding of quantum-classical correspondence considering that

  1. Kicked-Harper model versus on-resonance double-kicked rotor model: From spectral difference to topological equivalence

    NASA Astrophysics Data System (ADS)

    Wang, Hailong; Ho, Derek Y. H.; Lawton, Wayne; Wang, Jiao; Gong, Jiangbin

    2013-11-01

    Recent studies have established that, in addition to the well-known kicked-Harper model (KHM), an on-resonance double-kicked rotor (ORDKR) model also has Hofstadter's butterfly Floquet spectrum, with strong resemblance to the standard Hofstadter spectrum that is a paradigm in studies of the integer quantum Hall effect. Earlier it was shown that the quasienergy spectra of these two dynamical models (i) can exactly overlap with each other if an effective Planck constant takes irrational multiples of 2π and (ii) will be different if the same parameter takes rational multiples of 2π. This work makes detailed comparisons between these two models, with an effective Planck constant given by 2πM/N, where M and N are coprime and odd integers. It is found that the ORDKR spectrum (with two periodic kicking sequences having the same kick strength) has one flat band and N-1 nonflat bands with the largest bandwidth decaying in a power law as ~K^(N+2), where K is a kick strength parameter. The existence of a flat band is strictly proven and the power-law scaling, numerically checked for a number of cases, is also analytically proven for a three-band case. By contrast, the KHM does not have any flat band and its bandwidths scale linearly with K. This is shown to result in dramatic differences in dynamical behavior, such as transient (but extremely long) dynamical localization in ORDKR, which is absent in the KHM. Finally, we show that despite these differences, there exist simple extensions of the KHM and ORDKR model (upon introducing an additional periodic phase parameter) such that the resulting extended KHM and ORDKR model are actually topologically equivalent, i.e., they yield exactly the same Floquet-band Chern numbers and display topological phase transitions at the same kick strengths. A theoretical derivation of this topological equivalence is provided. These results are also of interest to our current understanding of quantum-classical correspondence considering that the

  2. Phase field modelling on the growth dynamics of double voids of different sizes during czochralski silicon crystal growth

    NASA Astrophysics Data System (ADS)

    Guan, X. J.; Wang, J.

    2017-02-01

    To investigate their dynamics and interaction mechanisms, the growth process of two voids of different sizes during Czochralski silicon crystal growth was simulated using an established phase field model and its corresponding program code. On the basis of several phase field numerical simulation cases, the following evolution laws of the double voids were obtained: the phase field model is capable of simulating the growth process of double voids of different sizes; the voids grow in one of two modes, either mutual integration or competitive growth; and the exact moment of their fusion can also be captured, occurring at τ = 7.078 (simulation time step 14156) for an initial vacancy concentration of 0.02 and an initial spacing between the two void centers of 44Δx.

  3. Control of crystallite and particle size in the synthesis of layered double hydroxides: Macromolecular insights and a complementary modeling tool.

    PubMed

    Galvão, Tiago L P; Neves, Cristina S; Caetano, Ana P F; Maia, Frederico; Mata, Diogo; Malheiro, Eliana; Ferreira, Maria J; Bastos, Alexandre C; Salak, Andrei N; Gomes, José R B; Tedim, João; Ferreira, Mário G S

    2016-04-15

    Zinc-aluminum layered double hydroxide with intercalated nitrate (Zn(n)Al-NO3, n = Zn/Al) is an intermediate material for the intercalation of different functional molecules used in a wide range of industrial applications. The synthesis of Zn(2)Al-NO3 was investigated with respect to the time and temperature of hydrothermal treatment. By examining the crystallite size in two different directions, the hydrodynamic particle size, morphology, crystal structure and chemical species in solution, it was possible to understand the crystallization and dissolution processes involved in the mechanisms of crystallite and particle growth. In addition, hydrogeochemical modeling provided insights into the speciation of the different metal cations in solution. This tool can therefore be a promising means to model and optimize the synthesis of layered double hydroxide-based materials for industrial applications.

  4. The novel double-folded structure of d(GCATGCATGC): a possible model for triplet-repeat sequences.

    PubMed

    Thirugnanasambandam, Arunachalam; Karthik, Selvam; Mandal, Pradeep Kumar; Gautham, Namasivayam

    2015-10-01

    The structure of the decadeoxyribonucleotide d(GCATGCATGC) is presented at a resolution of 1.8 Å. The decamer adopts a novel double-folded structure in which the direction of progression of the backbone changes at the two thymine residues. Intra-strand stacking interactions (including an interaction between the endocyclic O atom of a ribose moiety and the adjacent purine base), hydrogen bonds and cobalt-ion interactions stabilize the double-folded structure of the single strand. Two such double-folded strands come together in the crystal to form a dimer. Inter-strand Watson-Crick hydrogen bonds form four base pairs. This portion of the decamer structure is similar to that observed in other previously reported oligonucleotide structures and has been dubbed a `bi-loop'. Both the double-folded single-strand structure, as well as the dimeric bi-loop structure, serve as starting points to construct models for triplet-repeat DNA sequences, which have been implicated in many human diseases.

  5. FULLY CONVOLUTIONAL NETWORKS FOR MULTI-MODALITY ISOINTENSE INFANT BRAIN IMAGE SEGMENTATION.

    PubMed

    Nie, Dong; Wang, Li; Gao, Yaozong; Shen, Dinggang

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making the tissue segmentation very challenging. The existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on single T1, T2 or fractional anisotropy (FA) modality or their simply-stacked combinations without fully exploring the multi-modality information. To address the challenge, in this paper, we propose to use fully convolutional networks (FCNs) for the segmentation of isointense phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image, and then fuse their high-layer features together for final segmentation. Specifically, we run a separate convolution-pooling stream for each of the T1, T2, and FA images, and then combine them in a high layer to generate the final segmentation maps as the outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to performance improvement.

  6. FULLY CONVOLUTIONAL NETWORKS FOR MULTI-MODALITY ISOINTENSE INFANT BRAIN IMAGE SEGMENTATION

    PubMed Central

    Nie, Dong; Wang, Li; Gao, Yaozong; Shen, Dinggang

    2016-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making the tissue segmentation very challenging. The existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on single T1, T2 or fractional anisotropy (FA) modality or their simply-stacked combinations without fully exploring the multi-modality information. To address the challenge, in this paper, we propose to use fully convolutional networks (FCNs) for the segmentation of isointense phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image, and then fuse their high-layer features together for final segmentation. Specifically, we run a separate convolution-pooling stream for each of the T1, T2, and FA images, and then combine them in a high layer to generate the final segmentation maps as the outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to performance improvement. PMID:27668065
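
    The fusion strategy described above (one stream per modality, with features combined only at a high layer) can be caricatured in one dimension. The tiny convolution-plus-ReLU streams and kernels below are illustrative stand-ins, not the paper's architecture:

```python
def conv1d_valid(x, k):
    """Valid 1-D convolution (cross-correlation) of signal x with kernel k."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n))
            for i in range(len(x) - n + 1)]

def relu(v):
    return [max(0.0, t) for t in v]

def late_fusion_features(modalities, kernels):
    """One small conv+ReLU stream per modality; the high-level features are
    concatenated (fused) only after the per-modality streams, mirroring the
    late-fusion idea rather than stacking raw inputs."""
    fused = []
    for x, k in zip(modalities, kernels):
        fused.extend(relu(conv1d_valid(x, k)))
    return fused
```

    The fused vector would then feed a shared classification layer; in the paper this happens inside a fully convolutional network rather than on 1-D toy signals.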

  7. Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks

    PubMed Central

    Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni

    2015-01-01

    Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient’s response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a “radiomics” approach whereby a large number of quantitative features are automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models. PMID:26355298

  8. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.

    PubMed

    Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John

    2017-02-01

    The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type, and is a feasible, rapid method for high-throughput annotation.
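
    The Youden Index used above to set the binary cutoff maximizes J = sensitivity + specificity - 1 over candidate thresholds. A minimal sketch, with made-up scores and labels (not the study's data):

```python
def youden_threshold(scores, labels):
    """Pick the binary cutoff that maximizes Youden's J statistic,
    J = sensitivity + specificity - 1.

    scores: classifier outputs (higher = more likely positive); labels: 0/1.
    Candidate thresholds are midpoints between consecutive distinct scores,
    plus one below and one above all scores. Returns (threshold, J).
    """
    pos = sum(labels)
    neg = len(labels) - pos
    distinct = sorted(set(scores))
    cuts = ([distinct[0] - 1.0]
            + [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]
            + [distinct[-1] + 1.0])
    best_t, best_j = cuts[0], -1.0
    for t in cuts:
        tp = sum(1 for s, y in zip(scores, labels) if s > t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s <= t and y == 0)
        j = tp / pos + tn / neg - 1.0  # sensitivity + specificity - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

    For a perfectly separating classifier J reaches 1.0; in practice the chosen threshold trades off the two error rates symmetrically.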

  9. Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks.

    PubMed

    Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni

    2015-01-01

    Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient's response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a "radiomics" approach whereby a large number of quantitative features are automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models.
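
    For context, the hand-crafted descriptors that the CNN is compared against include simple first-order statistics of intra-tumor uptake values. The small illustrative subset below is not the paper's feature set; the binning scheme and feature names are assumptions:

```python
import math

def _hist_probs(values, bins=8):
    """Equal-width histogram probabilities over the value range."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against constant input
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    return [c / len(values) for c in counts]

def first_order_features(suv):
    """A few first-order radiomic descriptors of intra-tumor SUV values."""
    n = len(suv)
    mean = sum(suv) / n
    return {
        "SUVmax": max(suv),
        "SUVmean": mean,
        "variance": sum((v - mean) ** 2 for v in suv) / n,
        "entropy": -sum(p * math.log2(p) for p in _hist_probs(suv) if p > 0),
    }
```

    Texture (second-order) features such as co-occurrence statistics extend the same idea to spatial relationships between voxels.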

  10. Electron-electron double resonance in electron spin echo: Model biradical systems and the sensitized photolysis of decalin

    NASA Astrophysics Data System (ADS)

    Milov, A. D.; Ponomarev, A. B.; Tsvetkov, Yu. D.

    1984-09-01

    Model systems, comprising frozen glassy solutions of stabilized radicals and biradicals of the nitroxyl type, have been used to test the applicability of electron-electron double resonance in electron spin echo (ELDOR ESE) to studies of the spatial distributions of free radicals arranged in groups in solids. The method was used to investigate the spatial distribution of alkyl radicals generated by the sensitized photolysis of glassy naphthalene solutions in decalin at 77 K, and radical pairs were detected.

  11. Coarse-grained free energy functions for studying protein conformational changes: a double-well network model.

    PubMed

    Chu, Jhih-Wei; Voth, Gregory A

    2007-12-01

    In this work, a double-well network model (DWNM) is presented for generating a coarse-grained free energy function that can be used to study the transition between reference conformational states of a protein molecule. Compared to earlier work that uses a single, multidimensional double-well potential to connect two conformational states, the DWNM uses a set of interconnected double-well potentials for this purpose. The DWNM free energy function has multiple intermediate states and saddle points, and is hence a "rough" free energy landscape. In this implementation of the DWNM, the free energy function is reduced to an elastic-network model representation near the two reference states. The effects of free energy function roughness on the reaction pathways of protein conformational change is demonstrated by applying the DWNM to the conformational changes of two protein systems: the coil-to-helix transition of the DB-loop in G-actin and the open-to-closed transition of adenylate kinase. In both systems, the rough free energy function of the DWNM leads to the identification of distinct minimum free energy paths connecting two conformational states. These results indicate that while the elastic-network model captures the low-frequency vibrational motions of a protein, the roughness in the free energy function introduced by the DWNM can be used to characterize the transition mechanism between protein conformations.
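
    A one-dimensional caricature of the idea: joining two harmonic basins (each playing the role of an elastic-network well around a reference state) through Boltzmann-weighted mixing yields a double-well surface with a barrier between the states. The functional form and parameters below are illustrative, not those of the DWNM itself:

```python
import math

def double_well(x, x1=0.0, x2=2.0, k1=1.0, k2=1.0, beta=5.0):
    """Smoothly join two harmonic basins centered on reference states x1, x2
    via exponential (Boltzmann-weighted) mixing, a 1-D caricature of a
    double-well surface connecting two protein conformations:
        F(x) = -(1/beta) * ln(exp(-beta*V1) + exp(-beta*V2)).
    """
    v1 = 0.5 * k1 * (x - x1) ** 2
    v2 = 0.5 * k2 * (x - x2) ** 2
    return -math.log(math.exp(-beta * v1) + math.exp(-beta * v2)) / beta
```

    The DWNM replaces this single scalar coordinate with a network of interconnected double-well pair potentials, which is what produces the intermediate minima and saddle points ("roughness") discussed above.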

  12. Aquifer response to stream-stage and recharge variations. II. Convolution method and applications

    USGS Publications Warehouse

    Barlow, P.M.; DeSimone, L.A.; Moench, A.F.

    2000-01-01

    In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to streamstage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of water table and associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site. The streambank-leakance parameter may be considered to be a general (or lumped
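
    The discrete form of the convolution integral used here superposes a unit step-response function on each increment of stream stage. A minimal sketch; the geometric step response below is illustrative, not one of the paper's analytical functions:

```python
def head_response(stage, step_response):
    """Discrete convolution of stage increments with a unit step response:
        h[t] = sum_k (stage[k] - stage[k-1]) * U[t - k],
    with the initial stage measured relative to a zero datum. This is the
    discrete analogue of the convolution integral for aquifer heads driven
    by stream-stage fluctuations."""
    ds = [stage[0]] + [b - a for a, b in zip(stage, stage[1:])]
    n = len(stage)
    return [sum(ds[k] * step_response[t - k] for k in range(t + 1))
            for t in range(n)]
```

    Linearity is what makes this work: a single sustained unit step simply reproduces U itself, and any stage history is a sum of delayed, scaled steps.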

  13. DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences.

    PubMed

    Quang, Daniel; Xie, Xiaohui

    2016-06-20

    Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ.
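
    The motif-capturing role of the convolution layer can be illustrated by scanning a one-hot-encoded sequence with a position-weight filter: each output score measures how well the motif matches at that offset. The sequence, motif, and scoring below are toy assumptions, not DanQ's trained filters:

```python
def one_hot(seq):
    """One-hot encode a DNA string as a list of length-4 vectors (A,C,G,T)."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    return [[1.0 if idx[b] == i else 0.0 for i in range(4)] for b in seq]

def scan_motif(seq, motif_weights):
    """Slide a motif filter over a one-hot sequence, as the convolution
    layer does in sequence models: motif_weights is a list of length-4
    weight vectors, one per motif position."""
    x = one_hot(seq)
    w = len(motif_weights)
    return [sum(motif_weights[j][c] * x[i + j][c]
                for j in range(w) for c in range(4))
            for i in range(len(seq) - w + 1)]
```

    In DanQ, many such learned filters run in parallel, and the recurrent layer then models dependencies between the resulting motif activations along the sequence.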

  14. Application of the Double-Tangent Construction of Coexisting Phases to Any Type of Phase Equilibrium for Binary Systems Modeled with the Gamma-Phi Approach

    ERIC Educational Resources Information Center

    Jaubert, Jean-Noël; Privat, Romain

    2014-01-01

    The double-tangent construction of coexisting phases is an elegant approach to visualize all the multiphase binary systems that satisfy the equality of chemical potentials and to select the stable state. In this paper, we show how to perform the double-tangent construction of coexisting phases for binary systems modeled with the gamma-phi…
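
    Numerically, the double tangent is the pair of compositions at which both molar Gibbs energy curves share the slope of their connecting chord. A brute-force sketch; the two quadratic curves are illustrative stand-ins for gamma-phi model output, not real thermodynamic data:

```python
def common_tangent(g1, g2, xs1, xs2, h=1e-6):
    """Locate the double tangent of two free-energy curves by brute force:
    find the pair (x1, x2), x1 < x2, where the slopes of g1 at x1 and g2
    at x2 both match the chord slope. Returns the pair with the smallest
    tangency residual."""
    def d(g, x):  # central-difference slope
        return (g(x + h) - g(x - h)) / (2 * h)
    best, best_res = None, float("inf")
    for x1 in xs1:
        for x2 in xs2:
            if x2 <= x1:
                continue
            chord = (g2(x2) - g1(x1)) / (x2 - x1)
            res = abs(d(g1, x1) - chord) + abs(d(g2, x2) - chord)
            if res < best_res:
                best, best_res = (x1, x2), res
    return best
```

    Between the two tangent compositions the common tangent lies below both curves, so a split into the two coexisting phases is the stable state, which is exactly what the graphical construction visualizes.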

  15. Single vs. double layer suturing method repair of the urethral plate in the rabbit model of hypospadias

    PubMed Central

    Shirazi, Mehdi; Rahimi, Mohammad

    2016-01-01

    Introduction There are different methods of urethroplasty in hypospadias. The present study aimed to compare repair of the urethral plate by single vs. double layer suturing. Material and methods Fifteen male rabbits were assigned to the control, single layer, and double layer urethral plate suturing groups (n = 5). Experimental hypospadias was induced in the second and third groups and the urethral plates were sutured. After two weeks, the penis was dissected out and underwent histopathological processing. Stereological studies were applied to obtain quantitative histological data regarding the structure of the urethra and the related part of the corpus spongiosum. Results Volume density of the urethral epithelium (the fraction of unit volume of the urethra occupied by its epithelium) was higher in the single layer suturing group when compared to the double layer or control groups (p <0.01). Additionally, the volume density of the urethral lumen (the fraction of the corpus spongiosum that is occupied by the urethral lumen) was 2.4-fold and 2-fold higher in the single and double layer suturing groups, respectively, than in the control group (p <0.01). Moreover, the volume density of the lumen was significantly higher in the single layer suturing group when compared to the double layer suturing group (p <0.01). However, no significant difference was observed among the study groups regarding the volume density of collagen and vessels at the incised site of the penis, i.e., the fraction of the urethra and surrounding corpus spongiosum occupied by collagen and vessels. Conclusions Urethral plate repair by the single layer suturing method could be accompanied by higher epithelialization and a wider lumen in the rabbit model of hypospadias. PMID:28127462

  16. A statistical model for QTL mapping in polysomic autotetraploids underlying double reduction

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical Abstract: As a group of economically important species, linkage mapping of polysomic autotetraploids, including potato, sugarcane and rose, is difficult to conduct due to their unique meiotic property of double reduction that allows sister chromatids to enter into the same gamete. We desc...

  17. Optimal convolution SOR acceleration of waveform relaxation with application to semiconductor device simulation

    NASA Technical Reports Server (NTRS)

    Reichelt, Mark

    1993-01-01

    In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
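    The convolution SOR algorithm generalizes the classic successive overrelaxation update to waveform iterates. As background, here is a minimal sketch of plain SOR for a small linear system; the matrix, right-hand side, and relaxation parameter are illustrative choices, not values from the paper:

    ```python
    import numpy as np

    def sor(A, b, omega=1.5, iters=200):
        # Classic successive overrelaxation: sweep through the unknowns,
        # blending each Gauss-Seidel update with the previous value.
        n = len(b)
        x = np.zeros(n)
        for _ in range(iters):
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        return x

    # A small symmetric positive-definite system (SOR converges for 0 < omega < 2).
    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([2.0, 4.0, 10.0])
    x = sor(A, b)
    ```

    The convolution SOR of the paper replaces the scalar parameter omega by a convolution kernel acting on the waveform iterates, which is what the optimality theorem addresses.
    
    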

  18. X-ray diffraction line profile analysis of nanostructured nickel oxide: Shape factor and convolution of crystallite size and microstrain contributions

    NASA Astrophysics Data System (ADS)

    Maniammal, K.; Madhu, G.; Biju, V.

    2017-01-01

    Nanostructured nickel oxide is synthesized through a chemical route and annealed at different temperatures. Contributions of crystallite size and microstrain to X-ray diffraction line broadening are analyzed by Williamson-Hall analysis using isotropic and anisotropic models. None of the models performs well for samples with smaller average crystallite sizes. For the sample with a crystallite size of 3 nm, all models show a negative slope, which is physically meaningless. Analysis of the shape factor shows that the line profiles are more Gaussian-like. The size-strain plot method, which assumes a different convolution of the crystallite size and microstrain contributions, is found to be the most suitable. The study highlights that the convolution of crystallite size and microstrain contributions may differ between samples and should be taken into account when analyzing the observed line broadening. Microstrain values show a regular decrease with increasing annealing temperature.
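    The Williamson-Hall analysis used here fits beta*cos(theta) against 4*sin(theta), reading crystallite size from the intercept and microstrain from the slope. A sketch on synthetic, noise-free data; the shape factor, wavelength, peak positions, and true values are all illustrative assumptions, not the paper's measurements:

    ```python
    import numpy as np

    # Williamson-Hall model: beta*cos(theta) = K*lam/D + 4*eps*sin(theta)
    K_shape, lam = 0.9, 0.15406        # shape factor and Cu K-alpha wavelength (nm), assumed
    D_true, eps_true = 10.0, 2e-3      # synthetic crystallite size (nm) and microstrain

    two_theta = np.array([37.2, 43.3, 62.9, 75.4, 79.4])  # hypothetical peak positions (deg)
    theta = np.deg2rad(two_theta / 2)
    beta = (K_shape * lam / D_true + 4 * eps_true * np.sin(theta)) / np.cos(theta)

    # Isotropic W-H plot: linear fit; intercept -> size, slope -> strain.
    slope, intercept = np.polyfit(4 * np.sin(theta), beta * np.cos(theta), 1)
    D_fit, eps_fit = K_shape * lam / intercept, slope
    ```

    On real data the fit is done on measured integral breadths; a negative fitted slope, as reported for the 3 nm sample, signals that this additive size-strain model does not hold.
    
    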

  19. Particle-in-cell modeling of spacecraft-plasma interaction effects on double-probe electric field measurements

    NASA Astrophysics Data System (ADS)

    Miyake, Y.; Usui, H.

    2016-12-01

    The double-probe technique, commonly used for electric field measurements in magnetospheric plasmas, is susceptible to environmental perturbations caused by spacecraft-plasma interactions. To better model these interactions, we have extended the existing particle-in-cell simulation technique so that it accepts very small spacecraft structures, such as thin wire booms, by incorporating an accurate potential field solution calculated with the boundary element method. This immersed boundary element approach is effective for quantifying the impact of geometrically small but electrically large spacecraft elements on the formation of sheaths or wakes. The developed model is applied to the wake environment near a Cluster satellite for three distinctive plasma conditions: the solar wind, the tail lobe, and just outside the plasmapause. The simulations predict the magnitudes and waveforms of wake-derived spurious electric fields, and these are in good agreement with in situ observations. The results also reveal the detailed structure of the potential around the double probes, showing that the probes hardly ever experience a negative wake potential along their orbit; instead, they experience an unbalanced drop rate of the large potential hill created by the spacecraft and boom bodies. As a by-product of the simulations, we also found a photoelectron short-circuiting effect analogous to the well-known short-circuiting effect due to the booms of a double-probe instrument. The effect is sustained by asymmetric photoelectron distributions that cancel out the external electric field.

  20. Chemically assembled double-dot single-electron transistor analyzed by the orthodox model considering offset charge

    SciTech Connect

    Kano, Shinya; Maeda, Kosuke; Majima, Yutaka; Tanaka, Daisuke; Sakamoto, Masanori; Teranishi, Toshiharu

    2015-10-07

    We present an analysis of chemically assembled double-dot single-electron transistors using the orthodox model considering offset charges. First, we fabricate chemically assembled single-electron transistors (SETs) consisting of two Au nanoparticles between electroless Au-plated nanogap electrodes. Then, extraordinarily stable Coulomb diamonds in the double-dot SETs are analyzed using the orthodox model, by considering offset charges on the respective quantum dots. We determine the equivalent circuit parameters from the Coulomb diamonds and the drain current vs. drain voltage curves of the SETs. The accuracies of the capacitances and offset charges on the quantum dots are within ±10% and ±0.04e (where e is the elementary charge), respectively. The parameters can be explained by the geometrical structures of the SETs observed in scanning electron microscopy images. Using this approach, we are able to understand the spatial characteristics of the double quantum dots, such as the relative distance from the gate electrode and the conditions for adsorption between the nanogap electrodes.

  1. Single and Double ITCZ in Aqua-Planet Models with Globally Uniform Sea Surface Temperature and Solar Insolation: An Interpretation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Chen, Baode; Einaudi, Franco (Technical Monitor)

    2001-01-01

    It has been known for more than a decade that an aqua-planet model with globally uniform sea surface temperature and solar insolation angle can generate an ITCZ (intertropical convergence zone). Previous studies have shown that, under such model settings, the ITCZ can be switched between a single ITCZ over the equator and a double ITCZ straddling the equator through one of several measures. These measures include switching to a different cumulus parameterization scheme, changes within the cumulus parameterization scheme, and changes in other aspects of the model design such as horizontal resolution. In this paper an interpretation of these findings is offered. The ITCZ settles at the latitude where two types of attraction on it, both due to the earth's rotation, balance. The first type is equatorward, directly related to the earth's rotation, and thus not sensitive to model design changes. The second type is poleward, related to the convective circulation, and thus sensitive to model design changes. Owing to the shape of the attractors, the balance of the two attractions is reached either at the equator or more than 10 degrees away from it. The former case results in a single ITCZ over the equator and the latter in a double ITCZ straddling the equator.

  2. Resistivity due to weak double layers - A model for auroral arc thickness

    NASA Technical Reports Server (NTRS)

    Prakash, Manju; Lysak, Robert L.

    1992-01-01

    We have calculated the resistivity due to a sequence of fluctuating weak double layers (WDLs) aligned parallel to the ambient magnetic field line. The average response of an electron drifting through a 1D randomly oriented array of WDLs is studied using a test particle approach. The average is taken over the randomly fluctuating values of the electric field associated with the double layers. Based on our calculations, we estimate that, at a 350 eV electron energy, the thickness of the visual auroral arc is about 2.5 km and that of the auroral fine structure about 250 m when mapped down to the ionosphere. The significance of our calculations is discussed in the context of magnetosphere-ionosphere coupling.

  3. Novel tubular switched reluctance motor with double excitation windings: Design, modeling, and experiments.

    PubMed

    Yan, Liang; Li, Wei; Jiao, Zongxia; Chen, I-Ming

    2015-12-01

    The space utilization of linear switched reluctance machines is relatively low, which unavoidably constrains the improvement of system output performance. The objective of this paper is to propose a novel tubular linear switched reluctance motor with double excitation windings. The employment of double excitation helps to increase the electromagnetic force of the system. Furthermore, installing windings on both the stator and the mover makes the structure more compact and increases the system's force density. The design concept and operating principle are presented. Following that, the major structural parameters of the system are determined. Subsequently, electromagnetic force and reluctance are formulated analytically based on equivalent magnetic circuits, and the results are validated with numerical computation. A research prototype is then developed, and experiments are conducted on the system's output performance. They show that the proposed design of electric linear machine achieves higher thrust force compared with conventional linear switched reluctance machines.

  4. Modeling double pulsing of ion beams for HEDP target heating experiments

    NASA Astrophysics Data System (ADS)

    Veitzer, Seth; Barnard, John; Stoltz, Peter; Henestroza, Enrique

    2008-04-01

    Recent research on direct drive targets using heavy ion beams suggests optimal coupling will occur when the energy of the ions increases over the course of the pulse. In order to experimentally explore issues involving the interaction of the beam with the outflowing blowoff from the target, double pulse experiments have been proposed whereby a first pulse heats a planar target producing an outflow of material, and a second pulse (~10 ns later) of higher ion energy (and hence larger projected range) interacts with this outflow before reaching and further heating the target. We report here results for simulations of double pulsing experiments using HYDRA for beam and target parameters relevant to the proposed Neutralized Drift Compression Experiment (NDCX) II at LBNL.

  5. Thermodynamic modelling of a double-effect LiBr-H2O absorption refrigeration cycle

    NASA Astrophysics Data System (ADS)

    Iranmanesh, A.; Mehrabian, M. A.

    2012-12-01

    The goal of this paper is to estimate the conductance of components required to achieve the approach temperatures, and gain insights into a double-effect absorption chiller using LiBr-H2O solution as the working fluid. An in-house computer program is developed to simulate the cycle. Conductance of all components is evaluated based on the approach temperatures assumed as input parameters. The effect of input data on the cycle performance and the exergetic efficiency are investigated.

  6. Operational and convolution properties of two-dimensional Fourier transforms in polar coordinates.

    PubMed

    Baddour, Natalie

    2009-08-01

    For functions that are best described in terms of polar coordinates, the two-dimensional Fourier transform can be written in polar coordinates as a combination of Hankel transforms and Fourier series, even if the function does not possess circular symmetry. However, to be as useful as its Cartesian counterpart, a polar version of the Fourier operational toolset is required for the standard operations of shift, multiplication, convolution, etc. This paper derives the requisite polar versions of the standard Fourier operations. In particular, convolution (two-dimensional, circular, and radial one-dimensional) is discussed in detail. It is shown that the standard multiplication/convolution rules do apply as long as the correct definition of convolution is used.
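    The multiplication/convolution rule has a familiar Cartesian counterpart that is easy to verify numerically: the pointwise product of 2-D DFTs corresponds to circular convolution. The sketch below checks that identity on random data (the paper's polar version, built on Hankel transforms, is not implemented here):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 8
    f = rng.random((N, N))
    g = rng.random((N, N))

    # Circular 2-D convolution via the DFT: pointwise product in the frequency domain.
    conv_fft = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)))

    # The same convolution computed directly from its definition, with wrap-around indices.
    conv_direct = np.zeros((N, N))
    for m in range(N):
        for n in range(N):
            for p in range(N):
                for q in range(N):
                    conv_direct[m, n] += f[p, q] * g[(m - p) % N, (n - q) % N]
    ```

    The agreement of the two results is exactly the "correct definition of convolution" point: the DFT product yields the circular, not the linear, convolution.
    
    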

  7. Convoluted nozzle design for the RL10 derivative 2B engine

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The convoluted nozzle is a conventional refractory metal nozzle extension formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump-cooled metal nozzle and radiation-cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.

  8. Directional Radiometry and Radiative Transfer: the Convoluted Path From Centuries-old Phenomenology to Physical Optics

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.

    2014-01-01

    This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.

  9. Age, double porosity, and simple reaction modifications for the MOC3D ground-water transport model

    USGS Publications Warehouse

    Goode, Daniel J.

    1999-01-01

    This report documents modifications for the MOC3D ground-water transport model to simulate (a) ground-water age transport; (b) double-porosity exchange; and (c) simple but flexible retardation, decay, and zero-order growth reactions. These modifications are incorporated in MOC3D version 3.0. MOC3D simulates the transport of a single solute using the method-of-characteristics numerical procedure. The age of ground water, that is the time since recharge to the saturated zone, can be simulated using the transport model with an additional source term of unit strength, corresponding to the rate of aging. The output concentrations of the model are in this case the ages at all locations in the model. Double porosity generally refers to a separate immobile-water phase within the aquifer that does not contribute to ground-water flow but can affect solute transport through diffusive exchange. The solute mass exchange rate between the flowing water in the aquifer and the immobile-water phase is the product of the concentration difference between the two phases and a linear exchange coefficient. Conceptually, double porosity can approximate the effects of dead-end pores in a granular porous media, or matrix diffusion in a fractured-rock aquifer. Options are provided for decay and zero-order growth reactions within the immobile-water phase. The simple reaction terms here extend the original model, which included decay and retardation. With these extensions, (a) the retardation factor can vary spatially within each model layer, (b) the decay rate coefficient can vary spatially within each model layer and can be different for the dissolved and sorbed phases, and (c) a zero-order growth reaction is added that can vary spatially and can be different in the dissolved and sorbed phases. The decay and growth reaction terms also can change in time to account for changing geochemical conditions during transport. The report includes a description of the theoretical basis of the model, a
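    The linear exchange law described above, with rate equal to the exchange coefficient times the concentration difference between the mobile and immobile phases, has a simple closed-form solution when the mobile-phase concentration is held fixed. A sketch with illustrative parameter values, not taken from the report:

    ```python
    import numpy as np

    # dCim/dt = k * (C - Cim): first-order mobile/immobile exchange with
    # the mobile concentration C held constant and Cim(0) = 0.
    k = 0.5          # linear exchange coefficient (1/day), illustrative
    C = 1.0          # fixed mobile-phase concentration, illustrative

    t = np.linspace(0.0, 10.0, 101)
    Cim = C * (1.0 - np.exp(-k * t))   # analytic solution
    ```

    The immobile phase relaxes exponentially toward the mobile concentration; in MOC3D the mobile concentration evolves too, so the exchange appears as a source/sink term in the transport equation rather than this closed form.
    
    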

  10. Scene Text Detection and Segmentation based on Cascaded Convolution Neural Networks.

    PubMed

    Tang, Youbao; Wu, Xiangqian

    2017-01-20

    Scene text detection and segmentation are two important and challenging research problems in the field of computer vision. This paper proposes a novel method for scene text detection and segmentation based on cascaded convolution neural networks (CNNs). In this method, a CNN based text-aware candidate text region (CTR) extraction model (named detection network, DNet) is designed and trained using both the edges and the whole regions of text, with which coarse CTRs are detected. A CNN based CTR refinement model (named segmentation network, SNet) is then constructed to precisely segment the coarse CTRs into text to get the refined CTRs. With DNet and SNet, much fewer CTRs are extracted than with traditional approaches while more true text regions are kept. The refined CTRs are finally classified using a CNN based CTR classification model (named classification network, CNet) to get the final text regions. All of these CNN based models are modified from VGGNet-16. Extensive experiments on three benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance and greatly outperforms other scene text detection and segmentation approaches.
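    All three networks (DNet, SNet, and CNet) are built from convolutional layers, whose core operation is a 2-D cross-correlation over the input. A minimal valid-mode sketch with a toy image and filter, not the paper's VGGNet-16-derived models:

    ```python
    import numpy as np

    def conv2d(x, k):
        # Valid-mode 2-D cross-correlation: slide the kernel over the input
        # and take the sum of elementwise products at each position.
        H, W = x.shape
        h, w = k.shape
        out = np.empty((H - h + 1, W - w + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
        return out

    img = np.arange(16.0).reshape(4, 4)   # toy 4x4 "image"
    edge = np.array([[1.0, -1.0]])        # horizontal difference filter (assumed example)
    feat = conv2d(img, edge)              # every entry is -1 for this ramp image
    ```

    Real CNN layers batch this over many input channels and filters and add a bias and nonlinearity, but the sliding-window product above is the operation the "convolutional" in CNN refers to.
    
    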

  11. Remote Sensing Image Fusion with Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Zhong, Jinying; Yang, Bin; Huang, Guoyu; Zhong, Fei; Chen, Zhongze

    2016-12-01

    Remote sensing image fusion (RSIF) refers to restoring a high-resolution multispectral image from its corresponding low-resolution multispectral (LMS) image, aided by the panchromatic (PAN) image. Most RSIF methods assume that the missing spatial details of the LMS image can be obtained from the high-resolution PAN image. However, distortions can be produced by the large difference between the structural components of the LMS and PAN images. In fact, the LMS image can exploit its own spatial details to improve resolution. In this paper, a novel two-stage RSIF algorithm is proposed that makes full use of both the spatial details and the spectral information of the LMS image itself. In the first stage, convolutional neural network based super-resolution is used to increase the spatial resolution of the LMS image. In the second stage, the Gram-Schmidt transform is employed to fuse the enhanced MS and PAN images to further improve the resolution of the MS image. Because of the spatial resolution enhancement in the first stage, spectral distortions in the fused image are clearly decreased. Moreover, the spatial details are preserved in the fused images. QuickBird satellite source images are used to test the performance of the proposed method. The experimental results demonstrate that the proposed method achieves better spatial detail and spectral information simultaneously compared with other well-known methods.

  12. Visualizing Flow Over Parametric Surfaces Using Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Forssell, Lisa; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    Line Integral Convolution (LIC) is a powerful technique for imaging and animating vector fields. We extend the LIC paradigm in three ways: (1) The existing technique is limited to vector fields over a regular Cartesian grid. We extend it to vector fields over parametric surfaces, such as those found in curvilinear grids, used in computational fluid dynamics simulations; (2) Periodic motion filters can be used to animate the flow visualization. When the flow lies on a parametric surface, however, the motion appears misleading. We explain why this problem arises and show how to adjust the LIC algorithm to handle it; (3) We introduce a technique to visualize vector magnitudes as well as vector direction. Cabral and Leedom have suggested a method for variable-speed animation, which is based on varying the frequency of the filter function. We develop a different technique based on kernel phase shifts which we have found to show substantially better results. Our implementation of these algorithms utilizes texture-mapping hardware to run in real time, which allows them to be included in interactive applications.
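    The core LIC operation, averaging a noise texture along local streamlines of the vector field, can be sketched in software with a box kernel on a periodic grid. Step size and streamline length below are arbitrary choices, and the paper's motion filters, kernel phase shifts, and texture-hardware implementation are not reproduced:

    ```python
    import numpy as np

    def lic(vx, vy, noise, L=10, h=0.5):
        # Minimal line integral convolution: for each pixel, trace a short
        # streamline forward and backward and average the noise along it.
        H, W = noise.shape
        out = np.zeros_like(noise)
        for i in range(H):
            for j in range(W):
                acc, cnt = 0.0, 0
                for sgn in (1.0, -1.0):       # both directions (seed counted twice)
                    x, y = float(j), float(i)
                    for _ in range(L):
                        ix, iy = int(round(x)) % W, int(round(y)) % H  # periodic wrap
                        acc += noise[iy, ix]
                        cnt += 1
                        u, v = vx[iy, ix], vy[iy, ix]
                        n = np.hypot(u, v) or 1.0   # normalize; guard zero vectors
                        x += sgn * h * u / n
                        y += sgn * h * v / n
                out[i, j] = acc / cnt
        return out

    rng = np.random.default_rng(1)
    tex = rng.random((32, 32))
    vx = np.ones((32, 32))      # uniform horizontal flow as a toy field
    vy = np.zeros((32, 32))
    img = lic(vx, vy, tex)      # noise smeared along the flow direction
    ```

    For the uniform horizontal field the output is the input noise smoothed along rows, which is exactly the streaking effect LIC images exploit.
    
    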

  13. Interleaved convolutional coding for the turbulent atmospheric optical communication channel

    NASA Astrophysics Data System (ADS)

    Davidson, Frederic M.; Koh, Yutai T.

    1988-09-01

    The coding gain of a constraint-length-three, rate one-half convolutional code over a long clear-air atmospheric direct-detection optical communication channel using binary pulse-position modulation signaling was directly measured as a function of interleaving delay for both hard- and soft-decision Viterbi decoding. Maximum coding gains theoretically possible for this code with perfect interleaving and physically unrealizable perfect-measurement decoding were about 7 dB under conditions of weak clear-air turbulence, and 11 dB at moderate turbulence levels. The time scale of the fading (memory) of the channel was directly measured to be tens to hundreds of milliseconds, depending on turbulence levels. Interleaving delays of 5 ms between transmission of the first and second channel bits output by the encoder yield coding gains within 1.5 dB of theoretical limits with soft-decision Viterbi decoding. Coding gains of 4-5 dB were observed with only 100 microseconds of interleaving delay. Soft-decision Viterbi decoding always yielded 1-2 dB more coding gain than hard-decision Viterbi decoding.
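    A constraint-length-three, rate one-half convolutional encoder like the one measured here can be sketched in a few lines. The generator polynomials (7, 5 in octal) are the textbook choice for this code and are an assumption, since the abstract does not specify them:

    ```python
    def conv_encode(bits, g=(0b111, 0b101)):
        # Rate-1/2, constraint-length-3 convolutional encoder: shift each
        # message bit into a 3-bit register and emit one parity bit per generator.
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & 0b111
            for gen in g:
                out.append(bin(state & gen).count('1') % 2)  # parity of tapped bits
        return out

    coded = conv_encode([1, 0, 1, 1])   # -> [1, 1, 1, 0, 0, 0, 0, 1]
    ```

    Each message bit produces two channel bits; it is these channel-bit pairs that would be interleaved before transmission to decorrelate the slow atmospheric fading described above.
    
    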

  14. Multi-resolution Convolution Methodology for ICP Waveform Morphology Analysis.

    PubMed

    Shaw, Martin; Piper, Ian; Hawthorne, Christopher

    2016-01-01

    Intracranial pressure (ICP) monitoring is a key clinical tool in the assessment and treatment of patients in neurointensive care. ICP morphology analysis can be useful in the classification of waveform features. A methodology for the decomposition of an ICP signal into clinically relevant dimensions has been devised that allows the identification of important ICP waveform types. It has three main components. First, multi-resolution convolution analysis is used for the main signal decomposition. Then, an impulse function is created, with multiple parameters, that can represent any form in the signal under analysis. Finally, a simple, localised optimisation technique is used to find morphologies of interest in the decomposed data. A pilot application of this methodology using a simple signal has been performed. It shows that the technique works, with receiver operating characteristic area-under-the-curve values of 0.936, 0.694, 0.676 and 0.698 for the plateau wave, B wave, and high and low compliance states, respectively. This is a novel technique that showed promise during the pilot analysis; however, it requires further optimisation to become a usable clinical tool for the automated analysis of ICP signals.
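    The first component, multi-resolution convolution analysis, can be loosely illustrated by convolving a signal with smoothing kernels at several widths. Gaussian kernels and all parameter values below are assumptions for illustration, not the paper's impulse-function method:

    ```python
    import numpy as np

    def multires(signal, widths=(2, 4, 8)):
        # Convolve the signal with normalized Gaussian kernels of increasing
        # width, giving one smoothed copy per resolution level.
        rows = []
        for w in widths:
            x = np.arange(-3 * w, 3 * w + 1)
            kern = np.exp(-0.5 * (x / w) ** 2)
            kern /= kern.sum()
            rows.append(np.convolve(signal, kern, mode='same'))
        return np.array(rows)

    sig = np.zeros(64)
    sig[32] = 1.0               # unit impulse as a toy stand-in for an ICP waveform
    bank = multires(sig)        # one row per resolution level
    ```

    Slow morphologies such as plateau waves would dominate the coarse levels, and pulse-scale features the fine levels, which is the intuition behind decomposing the signal before matching waveform templates.
    
    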

  15. Toward an optimal convolutional neural network for traffic sign recognition

    NASA Astrophysics Data System (ADS)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    Convolutional Neural Networks (CNNs) beat human performance in the German Traffic Sign Benchmark competition. Both the winner and the runner-up teams trained CNNs to recognize 43 traffic signs. However, neither network is computationally efficient, since both have many free parameters and use computationally expensive activation functions. In this paper, we propose a new architecture that reduces the number of parameters by 27% and 22% compared with the two networks. Furthermore, our network uses Leaky Rectified Linear Units (ReLU) as the activation function, which needs only a few operations to produce its result. Specifically, compared with the hyperbolic tangent and rectified sigmoid activation functions utilized in the two networks, Leaky ReLU needs only one multiplication operation, which makes it computationally much more efficient than the other two functions. Our experiments on the German Traffic Sign Benchmark dataset show a 0.6% improvement over the best reported classification accuracy while reducing the overall number of parameters by 85% compared with the winning network in the competition.

  16. Cell volume regulation in the proximal convoluted tubule.

    PubMed

    Gagnon, J; Ouimet, D; Nguyen, H; Laprade, R; Le Grimellec, C; Carrière, S; Cardinal, J

    1982-10-01

    To evaluate the effect of hyper- and hypotonicity on proximal convoluted tubule (PCT) cell volume, nonperfused PCT were studied in vitro with hypertonic solutions containing sodium chloride, urea, or mannitol (450 mosmol/kg H2O) and with hypotonic low sodium chloride solutions (160 mosmol/kg H2O). When the tubules were subjected to hypertonic peritubular solutions containing NaCl, cell volume immediately decreased by 15.5% and remained constant throughout the experimental period (60 min). With mannitol, the initial decrease was identical to that with NaCl (17.7%), but the PCT volume increased slightly during the experimental period. With urea, the decrease in cell volume was smaller (7%) and transient. In hypotonicity, the PCT swelled rapidly, but this swelling was followed by a rapid regulatory phase in which PCT volume nearly returned to control values after less than 10 min. With a potassium-free peritubular medium or 10^-3 M ouabain, the regulatory phase of hypotonicity completely disappeared, whereas the cells did not maintain their reduced volume in NaCl-induced hypertonicity. These results suggest that Na-K-ATPase plays an important role in the maintenance of a reduced cellular volume in hypertonicity and in the regulatory phase of hypotonicity, probably by an active extrusion of sodium and water from the cell.

  17. Convolutional networks for fast, energy-efficient neuromorphic computing

    PubMed Central

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  18. Method for Viterbi decoding of large constraint length convolutional codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)

    1988-01-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a number and K is the constraint length. The selected path at the end of each NK interval is then taken from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, and read out the stored branch metrics of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message in order to select the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
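    For contrast with the block-wise selection described above, conventional full-message trace-back can be sketched for a small code. The rate one-half, constraint-length-3 code with generators 7 and 5 (octal) is assumed for illustration; hard decisions and a zero initial register are also assumptions:

    ```python
    def viterbi_decode(coded, g=(0b111, 0b101)):
        # Hard-decision Viterbi decoding with full trace-back at message end.
        # States are the last two input bits; branches append one new bit.
        n_states = 4
        INF = float('inf')
        metrics = [0.0] + [INF] * (n_states - 1)   # start in the all-zero state
        history = []
        pairs = [coded[i:i + 2] for i in range(0, len(coded), 2)]
        for r in pairs:
            new = [INF] * n_states
            back = [None] * n_states
            for s in range(n_states):
                if metrics[s] == INF:
                    continue
                for b in (0, 1):
                    reg = (s << 1) | b                               # 3-bit register
                    out = [bin(reg & gen).count('1') % 2 for gen in g]
                    ns = reg & 0b011                                 # next state
                    m = metrics[s] + sum(o != x for o, x in zip(out, r))
                    if m < new[ns]:                                  # keep survivor
                        new[ns] = m
                        back[ns] = (s, b)
            metrics = new
            history.append(back)
        # Trace back from the best final state to recover the message bits.
        s = min(range(n_states), key=lambda i: metrics[i])
        bits = []
        for back in reversed(history):
            s, b = back[s]
            bits.append(b)
        return bits[::-1]

    coded = [1, 1, 1, 0, 0, 0, 0, 1]   # noiseless encoding of [1, 0, 1, 1] on this code
    decoded = viterbi_decode(coded)
    ```

    Storing the whole `history` array is exactly the memory cost the patented block-wise method avoids by tracing back after every NK time units instead.
    
    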

  19. Deep convolutional neural networks for classifying GPR B-scans

    NASA Astrophysics Data System (ADS)

    Besaw, Lance E.; Stimac, Philip J.

    2015-05-01

    Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.

  20. A deep convolutional neural network for recognizing foods

    NASA Astrophysics Data System (ADS)

    Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec

    2015-12-01

    Controlling food intake is an efficient way for each person to tackle the obesity problem affecting countries worldwide. This is achievable by developing a smartphone application that can recognize foods and compute their calories. State-of-the-art methods are chiefly based on hand-crafted feature extraction methods such as HOG and Gabor. Recent advances in large-scale object recognition datasets such as ImageNet have revealed that deep Convolutional Neural Networks (CNNs) possess more representational power than hand-crafted features. The main challenge with CNNs is to find the appropriate architecture for each problem. In this paper, we propose a deep CNN consisting of 769,988 parameters. Our experiments show that the proposed CNN outperforms the state-of-the-art methods and improves on the best result of traditional methods by 17%. Moreover, using an ensemble of two CNNs trained at two different times, we are able to improve the classification performance by 21.5%.

  1. Convolutional networks for fast, energy-efficient neuromorphic computing.

    PubMed

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  2. Error Analysis of Padding Schemes for DFT’s of Convolutions and Derivatives

    DTIC Science & Technology

    2012-01-31

    Geodaetica, 18,263-279. Oppenheim AV, Schafer RW (1975) Digital Signal Processing . Prentice-Hall, Inc., Englewood Cliffs, New Jersey. Schwarz KP... Oppenheim and Schäfer (1975). Many numerical tests have been done to show that this so-called zero padding improves the computation of Stokes...and (19) relate linear convolutions to corresponding cyclic convolutions. Equation (19) is the justification, originating in Oppenheim and Schäfer
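
    The zero-padding point from the snippet above can be checked directly: padding both sequences to the length of their linear convolution makes the DFT-based (cyclic) convolution coincide with the linear one. A minimal NumPy sketch:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Linear convolution has length len(a) + len(b) - 1 = 5.
linear = np.convolve(a, b)

# Cyclic convolution of the unpadded length-3 sequences wraps around
# and does NOT equal the linear result.
cyclic = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Zero-padding to length N >= len(a) + len(b) - 1 removes the
# wrap-around, so the cyclic result reproduces the linear one exactly.
N = len(a) + len(b) - 1
padded = np.real(np.fft.ifft(np.fft.fft(a, N) * np.fft.fft(b, N)))

print(np.allclose(padded, linear))   # True
```

    The `n` argument of `np.fft.fft` performs the zero padding; without it, the wrap-around terms are exactly the error the padding schemes in this report are designed to control.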

  3. Homology Requirements for Double-Strand Break-Mediated Recombination in a Phage λ-Td Intron Model System

    PubMed Central

    Parker, M. M.; Court, D. A.; Preiter, K.; Belfort, M.

    1996-01-01

    Many group I introns encode endonucleases that promote intron homing by initiating a double-strand break-mediated homologous recombination event. A td intron-phage λ model system was developed to analyze exon homology effects on intron homing and determine the role of the λ 5'-3' exonuclease complex (Redαβ) in the repair event. Efficient intron homing depended on exon lengths in the 35- to 50-bp range, although homing levels remained significantly elevated above nonbreak-mediated recombination with as little as 10 bp of flanking homology. Although precise intron insertion was demonstrated with extremely limiting exon homology, the complete absence of one exon produced illegitimate events on the side of heterology. Interestingly, intron inheritance was unaffected by the presence of extensive heterology at the double-strand break in wild-type λ, provided that sufficient homology between donor and recipient was present distal to the heterologous sequences. However, these events involving heterologous ends were absolutely dependent on an intact Red exonuclease system. Together these results indicate that heterologous sequences can participate in double-strand break-mediated repair and imply that intron transposition to heteroallelic sites might occur at break sites within regions of limited or no homology. PMID:8807281

  4. Period doubling cascades of prey-predator model with nonlinear harvesting and control of over exploitation through taxation

    NASA Astrophysics Data System (ADS)

    Gupta, R. P.; Banerjee, Malay; Chandra, Peeyush

    2014-07-01

    The present study investigates a prey-predator model for conservation of ecological resources through taxation with nonlinear harvesting. The model uses the harvesting function proposed by Agnew (1979) [1], which accounts for the handling time of the catch and also for the competition between standard vessels used for harvesting the resource. In this paper we consider a three-dimensional dynamic-effort prey-predator model with Holling type-II functional response. The conditions for uniform persistence of the model have been derived. The existence and stability of a bifurcating periodic solution through Hopf bifurcation have been examined for a particular set of parameter values. Using numerical examples it is shown that the system admits periodic, quasi-periodic and chaotic solutions. It is observed that the system exhibits a period-doubling route to chaos with respect to the tax. Many forms of complexity, such as chaotic bands (including periodic windows, period-doubling bifurcations, period-halving bifurcations and attractor crises) and chaotic attractors, have been observed. Sensitivity analysis is carried out and it is observed that the solutions are highly dependent on the initial conditions. Pontryagin's Maximum Principle has been used to obtain an optimal tax policy that maximizes the monetary social benefit as well as conservation of the ecosystem.
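
    The period-doubling route to chaos described here can be illustrated on a far simpler system than the authors' three-dimensional harvesting model; the sketch below uses the one-dimensional logistic map as a stand-in (our choice, not the paper's model) and detects the attractor's period as the control parameter increases:

```python
import numpy as np

def attractor_period(r, n_transient=2000, n_sample=16, tol=1e-6):
    """Iterate the logistic map x -> r*x*(1-x), discard transients, then
    estimate the attractor period by looking for a repeat of the orbit."""
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    orbit = np.empty(n_sample)
    for i in range(n_sample):
        x = r * x * (1.0 - x)
        orbit[i] = x
    for p in (1, 2, 4, 8):
        if abs(orbit[p] - orbit[0]) < tol and abs(orbit[p + 1] - orbit[1]) < tol:
            return p
    return None  # higher period or chaos

print(attractor_period(2.8))   # 1  (stable equilibrium)
print(attractor_period(3.2))   # 2  (first period doubling)
print(attractor_period(3.5))   # 4  (second doubling)
```

    Sweeping `r` further produces the familiar cascade 1, 2, 4, 8, ... into chaos, the same qualitative route the paper reports with respect to the tax parameter.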

  5. An inexact double-sided chance-constrained model for air quality management in Nanshan District, Shenzhen, China

    NASA Astrophysics Data System (ADS)

    Shao, Liguo; Xu, Ye; Huang, Guohe

    2014-12-01

    In this study, an inexact double-sided fuzzy-random-chance-constrained programming (IDSFRCCP) model was developed for supporting air quality management of the Nanshan District of Shenzhen, China, under uncertainty. IDSFRCCP is an integrated model incorporating interval linear programming and double-sided fuzzy-random-chance-constrained programming models. It can express uncertain information as both fuzzy random variables and discrete intervals. The proposed model was solved based on the stochastic and fuzzy chance-constrained programming techniques and an interactive two-step algorithm. The air quality management system of Nanshan District, including one pollutant, six emission sources, six treatment technologies and four receptor sites, was used to demonstrate the applicability of the proposed method. The results indicated that the IDSFRCCP was capable of helping decision makers to analyse trade-offs between system cost and risk of constraint violation. The mid-range solutions tending to lower bounds with moderate α_h and q_i values were recommended as decision alternatives owing to their robust characteristics.

  6. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    SciTech Connect

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-02-15

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a ''pearl-necklace'' encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
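
    As the abstract explains, the minimal-memory computation reduces to the weight of the longest path in a weighted directed acyclic graph. A generic longest-path sketch via dynamic programming over a topological order (the four-node graph is hypothetical, not one of the paper's encoders):

```python
from collections import defaultdict

def longest_path_weight(edges, n):
    """Weight of the longest path in a DAG with nodes 0..n-1.
    edges: list of (u, v, w) tuples. DP over a topological order."""
    graph = defaultdict(list)
    indeg = [0] * n
    for u, v, w in edges:
        graph[u].append((v, w))
        indeg[v] += 1
    # Kahn's algorithm for a topological order.
    order, stack = [], [u for u in range(n) if indeg[u] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    dist = [0] * n
    for u in order:
        for v, w in graph[u]:
            dist[v] = max(dist[v], dist[u] + w)
    return max(dist)

# Hypothetical noncommutativity graph: nodes stand for gate strings and
# edge weights for memory carried along noncommutative paths.
edges = [(0, 1, 2), (0, 2, 1), (1, 3, 3), (2, 3, 1)]
print(longest_path_weight(edges, 4))   # 5
```

    In the paper's setting, the returned weight would be the minimal number of memory qubits needed to realize the pearl-necklace encoder; since the graph is acyclic, the whole computation is linear in the number of edges, consistent with the claimed polynomial-time complexity.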

  7. SU-E-T-607: An Experimental Validation of Gamma Knife Based Convolution Algorithm On Solid Acrylic Anthropomorphic Phantom

    SciTech Connect

    Gopishankar, N; Bisht, R K

    2014-06-01

    Purpose: To perform a dosimetric evaluation of the convolution algorithm in Gamma Knife (Perfexion Model) using a solid acrylic anthropomorphic phantom. Methods: An in-house developed acrylic phantom with an ion chamber insert was used for this purpose. The middle insert was designed to accept the ion chamber from both the top (head) and the bottom (neck) of the phantom, so measurements could be made at two different positions. A Leksell frame fixed to the phantom simulated patient treatment. Prior to the dosimetric study, the Hounsfield units and electron density of the acrylic material were incorporated into the TPS calibration curve for the convolution algorithm calculation. A CT scan of the phantom with the ion chamber (PTW Freiburg, 0.125cc) was obtained with the following scanning parameters: tube voltage 110kV, slice thickness 1mm and FOV 240mm. Three separate single-shot plans were generated in the LGP TPS (Version 10.1) with 16mm, 8mm and 4mm collimators for both ion chamber positions. Both TMR10 and convolution-algorithm-based planning (CABP) were used for dose calculation. A dose of 6Gy at 100% isodose was prescribed at the centre of the ion chamber visible in the CT scan. The phantom with the ion chamber was positioned in the treatment couch for dose delivery. Results: The ion chamber measured 5.98Gy for the 16mm collimator shot plan, a deviation of less than 1% for the convolution algorithm, whereas the TMR10 measured dose was 5.6Gy. For the 8mm and 4mm collimator plans, only 3.86Gy and 2.18Gy, respectively, were delivered at the TPS-calculated time for CABP. Conclusion: CABP is expected to predict the delivery time accurately for all collimators, but significant variation in measured dose was observed for the 8mm and 4mm collimators, which may be due to a collimator size effect. Metal artifacts caused by the pins and frame on the CT scan may also play a role in misinterpreting CABP. These findings require further investigation.

  8. Computationally efficient approach for solving time dependent diffusion equation with discrete temporal convolution applied to granular particles of battery electrodes

    NASA Astrophysics Data System (ADS)

    Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž

    2015-03-01

    The paper presents a computationally efficient method for solving the time-dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called the Discrete Temporal Convolution (DTC) method, is based on a discrete temporal convolution of the analytical solution of the step-function boundary value problem. This approach enables modelling the concentration distribution in the granular particles for arbitrary time-dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and the Padé approximation at the same accuracy. It is also demonstrated that all three addressed methods feature higher accuracy than the quasi-steady polynomial approaches when applied to simulate the current density variations typical of mobile/automotive applications. The proposed approach can thus be considered one of the key innovative methods enabling real-time capability of multi-particle electrochemical battery models featuring spatially and temporally resolved particle concentration profiles.
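
    The superposition idea behind DTC can be sketched on a first-order linear surrogate with the known step response s(t) = 1 - exp(-t), standing in for the analytical diffusion solution (our simplification, not the paper's granule model): the response to an arbitrary piecewise-constant flux is the discrete convolution of the input increments with s.

```python
import numpy as np

# Surrogate system dy/dt = -y + u with analytical step response
# s(t) = 1 - exp(-t); it plays the role of the step-function boundary
# value solution in the DTC method.
def step_response(t):
    return np.where(t >= 0.0, 1.0 - np.exp(-t), 0.0)

dt = 0.1
t = np.arange(0.0, 10.0, dt)
u = np.where(t < 5.0, 1.0, 0.3)          # arbitrary piecewise-constant flux

# Discrete temporal convolution: superpose step responses weighted by
# the increments of the input.
du = np.diff(np.concatenate(([0.0], u)))
y_dtc = np.zeros_like(t)
for k, d in enumerate(du):
    y_dtc += d * step_response(t - t[k])

# Reference: fine-step explicit Euler integration of the same ODE.
sub = 100
y = 0.0
y_ref = np.zeros_like(t)
for i in range(len(t)):
    y_ref[i] = y
    for _ in range(sub):
        y += (dt / sub) * (-y + u[i])

print(np.max(np.abs(y_dtc - y_ref)))     # small: DTC matches the ODE
```

    Because the system is linear and time-invariant, the convolution of step responses is exact for piecewise-constant inputs; the only discrepancy here is the Euler reference's own discretization error, which mirrors why DTC can beat finite-difference schemes at equal accuracy.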

  9. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    PubMed Central

    Huo, Guanying

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using the hierarchical structure inspired by mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher in comparison with the other four methods in most cases. PMID:28316614

  10. DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection.

    PubMed

    Ouyang, Wanli; Zeng, Xingyu; Wang, Xiaogang; Qiu, Shi; Luo, Ping; Tian, Yonglong; Li, Hongsheng; Yang, Shuo; Wang, Zhe; Li, Hongyang; Loy, Chen Change; Wang, Kun; Yan, Junjie; Tang, Xiaoou

    2016-07-07

    In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean averaged precision obtained by RCNN [16], which was the state-of-the-art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provides a global view for people to understand the deep learning object detection pipeline.

  11. Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification.

    PubMed

    Hou, Le; Samaras, Dimitris; Kurc, Tahsin M; Gao, Yi; Davis, James E; Saltz, Joel H

    2016-01-01

    Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN.
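
    The fusion step can be illustrated with a confidence-weighted average of per-patch class probabilities. This plain weighted average is only a baseline sketch of our own; the paper trains a decision fusion model and uses EM to locate discriminative patches:

```python
import numpy as np

# Toy per-patch class probabilities from a patch-level classifier.
rng = np.random.default_rng(1)
n_patches, n_classes = 12, 3
patch_probs = rng.dirichlet(np.ones(n_classes), size=n_patches)

# Weight patches by their confidence (max probability), so that
# non-discriminative, near-uniform patches contribute less.
weights = patch_probs.max(axis=1)
slide_probs = (weights[:, None] * patch_probs).sum(axis=0) / weights.sum()
slide_label = int(np.argmax(slide_probs))

print(round(slide_probs.sum(), 6))   # 1.0 (still a probability distribution)
```

    Replacing the fixed confidence weights with learned ones, and iterating between weighting and relabeling, is essentially the direction the paper's EM-based discriminative-patch selection takes.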

  12. Epithelium-stroma classification via convolutional neural networks and unsupervised domain adaptation in histopathological images.

    PubMed

    Huang, Yue; Zheng, Han; Liu, Chi; Ding, Xinghao; Rohde, Gustavo

    2017-04-06

    Epithelium-stroma classification is a necessary preprocessing step in histopathological image analysis. Current deep learning based recognition methods for histology data require the collection of large volumes of labeled data in order to train a new neural network whenever the image acquisition procedure changes. However, it is extremely expensive for pathologists to manually label sufficient volumes of data for each pathology study in a professional manner, which limits real-world applications. A very simple but effective deep learning method, which introduces the concept of unsupervised domain adaptation to a simple convolutional neural network (CNN), is proposed in this paper. Inspired by transfer learning, our work assumes that the training data and testing data follow different distributions, and applies an adaptation operation to more accurately estimate the kernels of the CNN during feature extraction, enhancing performance by transferring knowledge from labeled data in the source domain to unlabeled data in the target domain. The model has been evaluated using three independent public epithelium-stroma datasets by cross-dataset validation. The experimental results demonstrate that for epithelium-stroma classification, the proposed framework outperforms the state-of-the-art deep neural network model, and it also achieves better performance than other existing deep domain adaptation methods. The proposed model can be considered a better option for real-world applications in histopathological image analysis, since it no longer requires large-scale labeled data in each specified domain.

  14. Convolutional neural networks for P300 detection with application to brain-computer interfaces.

    PubMed

    Cecotti, Hubert; Gräser, Axel

    2011-03-01

    A Brain-Computer Interface (BCI) is a specific type of human-computer interface that enables direct communication between humans and computers by analyzing brain measurements. Oddball paradigms are used in BCI to generate event-related potentials (ERPs), like the P300 wave, on targets selected by the user. A P300 speller is based on this principle, where the detection of P300 waves allows the user to write characters. The P300 speller is composed of two classification problems. The first classification is to detect the presence of a P300 in the electroencephalogram (EEG). The second corresponds to the combination of different P300 responses for determining the right character to spell. A new method for the detection of P300 waves is presented. This model is based on a convolutional neural network (CNN). The topology of the network is adapted to the detection of P300 waves in the time domain. Seven classifiers based on the CNN are proposed: four single classifiers with different feature sets and three multiclassifiers. These models are tested and compared on Data set II of the third BCI competition. The best result is obtained with a multiclassifier solution, with a recognition rate of 95.5 percent without channel selection before classification. The proposed approach also provides a new way of analyzing brain activity, owing to the receptive fields of the CNN models.

  15. Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection

    NASA Astrophysics Data System (ADS)

    Cabrera-Vives, Guillermo; Reyes, Ignacio; Förster, Francisco; Estévez, Pablo A.; Maureira, Juan-Carlos

    2017-02-01

    We introduce Deep-HiTS, a rotation-invariant convolutional neural network (CNN) model for classifying images of transient candidates into artifacts or real sources for the High cadence Transient Survey (HiTS). CNNs have the advantage of learning the features automatically from the data while achieving high performance. We compare our CNN model against a feature engineering approach using random forests (RFs). We show that our CNN significantly outperforms the RF model, reducing the error by almost half. Furthermore, for a fixed number of approximately 2000 allowed false transient candidates per night, we are able to reduce the misclassified real transients by approximately one-fifth. To the best of our knowledge, this is the first time CNNs have been used to detect astronomical transient events. Our approach will be very useful when processing images from next generation instruments such as the Large Synoptic Survey Telescope. We have made all our code and data available to the community for the sake of allowing further developments and comparisons at https://github.com/guille-c/Deep-HiTS. Deep-HiTS is licensed under the terms of the GNU General Public License v3.0.

  16. A white-box model of S-shaped and double S-shaped single-species population growth

    PubMed Central

    Kalmykov, Lev V.

    2015-01-01

    Complex systems may be mechanistically modelled by white-box modeling using logical deterministic individual-based cellular automata. Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on first principles) and grey-box (mixtures of phenomenological and mechanistic models). Most basic ecological models are of the black-box type, including the Malthusian, Verhulst and Lotka–Volterra models. In black-box models, the individual-based (mechanistic) mechanisms of population dynamics remain hidden. Here we mechanistically model the S-shaped and double S-shaped population growth of vegetatively propagated rhizomatous lawn grasses. Using purely logical deterministic individual-based cellular automata, we create a white-box model. From a general physical standpoint, the vegetative propagation of plants is an analogue of excitation propagation in excitable media. Using the Monte Carlo method, we investigate the role of different initial positionings of an individual in the habitat. We have investigated mechanisms of single-species population growth limited by habitat size, intraspecific competition, regeneration time and fecundity of individuals, under two types of boundary conditions and at two levels of fecundity. We have also compared the S-shaped and J-shaped population growth. We consider this white-box modeling approach a method of artificial intelligence which works as automatic hyper-logical inference from the first principles of the studied subject. This approach is promising for direct mechanistic insight into the nature of any complex system. PMID:26038717
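
    The S-shaped, habitat-limited growth in such a white-box model can be reproduced with a minimal deterministic cellular automaton (a toy analogue of our own, not the authors' model): a single founder colonizes von Neumann neighbours each step until the habitat is full.

```python
import numpy as np

def grow(grid):
    """One time step: an empty cell becomes occupied if any of its four
    von Neumann neighbours is occupied (closed boundary)."""
    occ = grid.astype(bool)
    nb = np.zeros_like(occ)
    nb[1:, :] |= occ[:-1, :]
    nb[:-1, :] |= occ[1:, :]
    nb[:, 1:] |= occ[:, :-1]
    nb[:, :-1] |= occ[:, 1:]
    return (occ | nb).astype(grid.dtype)

n = 21
grid = np.zeros((n, n), dtype=int)
grid[n // 2, n // 2] = 1                 # single founder at the habitat centre
counts = [int(grid.sum())]
while counts[-1] < n * n:
    grid = grow(grid)
    counts.append(int(grid.sum()))

print(counts[:5], counts[-1])            # growth accelerates, then saturates
```

    The population curve accelerates while the colonization front expands freely and flattens once the front reaches the habitat boundary, giving the S shape; the double S shape in the paper arises from additional mechanisms (regeneration time, fecundity) not modelled in this toy.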

  17. Testing of and model development for double-walled thermal tubular

    SciTech Connect

    Satchwell, R.M.; Johnson, L.A. Jr.

    1992-08-01

    Insulated tubular products have become essential for use in steam injection projects. In a steam injection project, steam is created at the surface by either steam boilers or generators. During this process, steam travels from a boiler through surface lines to the wellhead, down the wellbore to the sandface, and into the reservoir. For some projects to be an economic success, costs must be reduced and oil recoveries increased by reducing heat losses in the wellbore. With reduced heat losses, steam generation costs are lowered and higher-quality steam can be injected into the formation. To address this need, work under this project consisted of the design and construction of a thermal flow loop, testing a double-walled tubular product manufactured by Inter-Mountain Pipe Company, and the development and verification of a thermal hydraulic numerical simulator for steam injection. Four different experimental configurations of the double-walled pipe were tested: (1) bare pipe, (2) bare pipe with an applied annular vacuum, (3) insulated annular pipe, and (4) insulated annular pipe with an applied annular vacuum. Both the pipe body and coupling were tested in each configuration. The results of the experimental tests showed that the Inter-Mountain Pipe Company double-walled pipe body achieved a 98 percent reduction in heat loss when insulation was applied to the annular portion of the pipe. The application of insulation to the annular portion of the coupling reduced the heat losses by only 6 percent. In tests that specified the use of a vacuum in the annular portion of the pipe, leaks were detected and the vacuum could not be held.

  18. Second-order perturbation corrections to singles and doubles coupled-cluster methods: General theory and application to the valence optimized doubles model

    SciTech Connect

    Gwaltney, Steven R.; Sherrill, C. David; Head-Gordon, Martin; Krylov, Anna I.

    2000-09-01

    We present a general perturbative method for correcting a singles and doubles coupled-cluster energy. The coupled-cluster wave function is used to define a similarity-transformed Hamiltonian, which is partitioned into a zeroth-order part that the reference problem solves exactly plus a first-order perturbation. Standard perturbation theory through second-order provides the leading correction. Applied to the valence optimized doubles (VOD) approximation to the full-valence complete active space self-consistent field method, the second-order correction, which we call (2), captures dynamical correlation effects through external single, double, and semi-internal triple and quadruple substitutions. A factorization approximation reduces the cost of the quadruple substitutions to only sixth order in the size of the molecule. A series of numerical tests are presented showing that VOD(2) is stable and well-behaved provided that the VOD reference is also stable. The second-order correction is also general to standard unwindowed coupled-cluster energies such as the coupled-cluster singles and doubles (CCSD) method itself, and the equations presented here fully define the corresponding CCSD(2) energy. (c) 2000 American Institute of Physics.

  19. Generalization of the Prandtl solution to the case of axisymmetric deformation of materials obeying the double shear model

    NASA Astrophysics Data System (ADS)

    Aleksandrov, S. E.; Goldstein, R. V.

    2012-11-01

    A semianalytic solution of the problem on the compression of an annular layer of a plastic material obeying the double shear model on a cylindrical mandrel is obtained. The approximate statement of boundary conditions, which cannot be satisfied exactly in the framework of the constructed solution, is based on the same assumptions as the statement of the classical plasticity problem of compression of a material layer between rough plates (Prandtl's problem). It is assumed that the maximum friction law is satisfied on the inner surface of the layer. The solution is singular near this surface. The strain rate intensity factor is calculated, and its dependence on the process and material parameters is shown.

  20. Slow molecular motion of different spin probes in a model glycerol—water matrix studied by double modulation ESR

    NASA Astrophysics Data System (ADS)

    Valić, S.; Rakvin, B.; Veksli, Z.; Pečar, S.

    1992-11-01

    The slow molecular motion of several deuterated and undeuterated spin probes differing in size and shape, embedded in a model glycerol—water matrix, has been studied by double-modulated electron spin resonance (DMESR). The DMESR spectra as a function of temperature reveal two motional regions. From the experimental linewidths of both deuterated and undeuterated spin probes in the lower-temperature region, together with simulated data based on the variation of T1 relaxation, two different dynamics of the -CH3 groups attached to the piperidine ring were resolved. Our results indicate that the onset of whole-spin-probe motion depends on the type of probe and the matrix density.

  1. Double-blind evaluation of the DKL LifeGuard Model 2

    SciTech Connect

    Murray, D.W.; Spencer, F.W.; Spencer, D.D.

    1998-05-01

    On March 20, 1998, Sandia National Laboratories performed a double-blind test of the DKL LifeGuard human presence detector and tracker. The test was designed to allow the device to search for individuals well within the product's published operational parameters. The Test Operator of the DKL LifeGuard was provided by the manufacturer and was a high-ranking member of DKL management. The test was developed and implemented to verify the performance of the device as specified by the manufacturer. The device failed to meet its published specifications and it performed no better than random chance.

  2. A generalization of the double-corner-frequency source spectral model and its use in the SCEC BBP validation exercise

    USGS Publications Warehouse

    Boore, David M.; Di Alessandro, Carola; Abrahamson, Norman A.

    2014-01-01

    The stochastic method of simulating ground motions requires the specification of the shape and scaling with magnitude of the source spectrum. The spectral models commonly used are either single-corner-frequency or double-corner-frequency models, but the latter have no flexibility to vary the high-frequency spectral levels for a specified seismic moment. Two generalized double-corner-frequency ω2 source spectral models are introduced, one in which two spectra are multiplied together, and another where they are added. Both models have a low-frequency dependence controlled by the seismic moment, and a high-frequency spectral level controlled by the seismic moment and a stress parameter. A wide range of spectral shapes can be obtained from these generalized spectral models, which makes them suitable for inversions of data to obtain spectral models that can be used in ground-motion simulations in situations where adequate data are not available for purely empirical determinations of ground motions, as in stable continental regions. As an example of the use of the generalized source spectral models, data from up to 40 stations from seven events, plus response spectra at two distances and two magnitudes from recent ground-motion prediction equations, were inverted to obtain the parameters controlling the spectral shapes, as well as a finite-fault factor that is used in point-source, stochastic-method simulations of ground motion. The fits to the data are comparable to or even better than those from finite-fault simulations, even for sites close to large earthquakes.
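
    A multiplicative double-corner ω² shape of the kind described can be written down in a few lines; the parametrization below is a simplified stand-in of our own, not the paper's exact model, but it shows the two defining properties: a low-frequency level set by the seismic moment and a combined f⁻² high-frequency fall-off.

```python
import numpy as np

def double_corner_spectrum(f, M0, fa, fb):
    """Multiplicative double-corner omega-squared displacement source
    spectrum: flat at M0 below the first corner fa, combined f^-2
    fall-off above the second corner fb. (Illustrative shape only.)"""
    return M0 / np.sqrt((1.0 + (f / fa) ** 2) * (1.0 + (f / fb) ** 2))

# Low-frequency level is controlled by the moment ...
low = double_corner_spectrum(np.array([1e-3]), M0=1.0, fa=0.1, fb=2.0)[0]

# ... and the high-frequency asymptote falls off as f^-2.
hi = double_corner_spectrum(np.array([20.0, 100.0]), 1.0, 0.1, 2.0)
slope = np.log(hi[1] / hi[0]) / np.log(100.0 / 20.0)

print(round(low, 4), round(slope, 2))   # 1.0 -2.0
```

    Tying the high-frequency level to a stress parameter, as the generalized models do, amounts to letting the corner frequencies vary with the stress parameter and moment rather than fixing them as here.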

  3. Seismic body wave separation in volcano-tectonic activity inferred by the Convolutive Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Capuano, Paolo; De Lauro, Enza; De Martino, Salvatore; Falanga, Mariarosaria; Petrosino, Simona

    2015-04-01

    One of the main challenges in the volcano-seismological literature is to locate and characterize the source of volcano-tectonic seismic activity. This requires identifying at least the onset of the main phases, i.e. the body waves. Many efforts have been made to solve the problem of a clear separation of P and S phases, both from a theoretical point of view and by developing numerical algorithms suitable for specific cases (see, e.g., Küperkoch et al., 2012). Recently, a robust automatic procedure has been implemented for extracting the prominent seismic waveforms from continuously recorded signals and thus allowing for picking the main phases. The intuitive notion of maximum non-gaussianity is achieved by adopting techniques that involve higher-order statistics in the frequency domain, i.e., Convolutive Independent Component Analysis (CICA). This technique is successful in the case of blind source separation of convolutive mixtures. In the seismological framework, indeed, seismic signals are thought of as the convolution of a source function with the path, site, and instrument responses. In addition, time-delayed versions of the same source exist, due to multipath propagation typically caused by reverberations from some obstacle. In this work, we focus on the Volcano-Tectonic (VT) activity at Campi Flegrei Caldera (Italy) during the 2006 ground uplift (Ciaramella et al., 2011). The activity was characterized by approximately 300 low-magnitude VT earthquakes (Md < 2; for the definition of duration magnitude, see Petrosino et al. 2008). Most of them were concentrated in distinct seismic sequences with hypocenters mainly clustered beneath the Solfatara-Accademia area, at depths ranging between 1 and 4 km b.s.l. The obtained results show the clear separation of P and S phases: the technique not only allows the identification of the S-P time delay, giving the timing of both phases, but also provides the independent waveforms of the P and S phases. This is an enormous
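
To make the convolutive mixing model concrete, here is a minimal numpy sketch (with invented filters and sources, not the Campi Flegrei data) showing that a convolutive mixture in time becomes, bin by bin, an instantaneous mixture in the frequency domain, which is the property frequency-domain CICA exploits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent, non-Gaussian sources (stand-ins for distinct seismic phases).
T, L = 1024, 8                         # samples, FIR filter length
s = rng.laplace(size=(2, T))

# Convolutive mixing: a[m][n] is the impulse response from source n to
# sensor m (a toy stand-in for path + site + instrument response).
a = [[rng.normal(size=L) for _ in range(2)] for _ in range(2)]
x = np.zeros((2, T + L - 1))
for m in range(2):
    for n in range(2):
        x[m] += np.convolve(s[n], a[m][n])

# In the frequency domain the same mixture is instantaneous per bin:
# X(f) = A(f) S(f), so ICA can be applied bin by bin (the CICA idea).
nfft = T + L - 1
Sf = np.fft.rfft(s, n=nfft, axis=1)
Af = np.array([[np.fft.rfft(a[m][n], n=nfft) for n in range(2)] for m in range(2)])
Xf = np.einsum('mnf,nf->mf', Af, Sf)
x_freq = np.fft.irfft(Xf, n=nfft, axis=1)  # matches the time-domain mixture
```

A full CICA implementation additionally has to resolve the per-bin permutation and scaling ambiguities before transforming back to the time domain; the sketch stops at the mixing model itself.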

  4. Detailed investigation of Long-Period activity at Campi Flegrei by Convolutive Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Capuano, P.; De Lauro, E.; De Martino, S.; Falanga, M.

    2016-04-01

    This work is devoted to the analysis of seismic signals continuously recorded at Campi Flegrei Caldera (Italy) during the entire year 2006. The radiation pattern associated with the Long-Period energy release is investigated. We adopt an innovative Independent Component Analysis algorithm for convolutive seismic series, adapted and improved to give automatic procedures for detecting seismic events often buried in high-level ambient noise. The extracted waveforms, characterized by an improved signal-to-noise ratio, allow the recognition of Long-Period precursors, evidencing that the seismic activity accompanying the 2006 mini-uplift crisis, which climaxed in the three days from 26 to 28 October, had already started at the beginning of October and lasted until mid-November. A more complete seismic catalog is thus provided, which can be used to properly quantify the seismic energy release. To better ground our results, we first check the robustness of the method by comparing it with other blind source separation methods based on higher-order statistics; second, we reconstruct the radiation patterns of the extracted Long-Period events in order to link the identified signals directly to their sources. We take advantage of Convolutive Independent Component Analysis, which provides basic signals along the three directions of motion, so that a direct polarization analysis can be performed without additional filtering procedures. We show that the extracted signals are mainly composed of P waves with radial polarization pointing to the seismic source of the main LP swarm, i.e., a small area in the Solfatara, including for the small events that both precede and follow the main activity. From a dynamical point of view, they can be described by two degrees of freedom, indicating a low level of complexity associated with the vibrations from a superficial hydrothermal system.
Our results allow us to move towards a full description of the complexity of

  5. Optimization of strongly pumped Yb-doped double-clad fiber lasers using a wide-scope approximate analytical model

    NASA Astrophysics Data System (ADS)

    Mohammed, Ziad; Saghafifar, Hossein

    2014-02-01

    An analytical model based on the rate equations of strongly pumped Yb-doped double-clad fiber lasers (DCFLs) is presented. The output power and the power distribution along the whole fiber have been found. In this paper, most parameters affecting the laser performance have been considered. The influences of scattering losses, pump reflection, output reflectivity, doping concentration and fiber length have been studied. It is shown that for wide ranges of these parameters and large variations of the input powers for all types of pumping (forward, backward and two-end), the maximum relative error of the output power is less than 2.72% when the results are compared with the numerical model. Based on this analytical model, a simple optimization method has been illustrated for high-power laser oscillators.

  6. Vibratory response of a precision double-multi-layer monochromator positioning system using a generic modeling program with experimental verification.

    SciTech Connect

    Barraza, J.

    1998-07-29

    A generic vibratory response-modeling program has been developed as a tool for designing high-precision optical positioning systems. The systems are modeled as rigid-body structures connected by linear non-rigid elements such as complex actuators and bearings. The full dynamic properties of each non-rigid element are determined experimentally or theoretically, then integrated into the program as inertial and stiffness matrices. Thus, it is possible to have a suite of standardized structural elements for modeling many different positioning systems that use standardized components. This paper presents the application of this program to a double-multi-layer monochromator positioning system that utilizes standardized components. Calculated results are compared to experimental modal analysis results.
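
The rigid-bodies-joined-by-non-rigid-elements idea can be sketched with a toy two-stage system. The masses and stiffnesses below are invented for illustration; the generic program described above would assemble much larger inertial and stiffness matrices from measured element properties.

```python
import numpy as np

# Toy stand-in for a positioning system: two rigid stages (masses) connected
# in series by linear elastic elements. All numerical values are assumptions.
m1, m2 = 50.0, 20.0            # kg, stage masses
k1, k2 = 2.0e6, 8.0e5          # N/m, actuator/bearing stiffnesses

M = np.diag([m1, m2])                      # inertial matrix
K = np.array([[k1 + k2, -k2],              # stiffness matrix assembled from
              [-k2,      k2]])             # the element connectivity

# Undamped modal analysis: K*phi = w^2 * M*phi, solved via M^-1 K.
A = np.linalg.solve(M, K)
w2 = np.sort(np.linalg.eigvals(A).real)    # squared angular frequencies
freqs_hz = np.sqrt(w2) / (2.0 * np.pi)     # natural frequencies of the stages
```

The computed natural frequencies are what one would compare against experimental modal analysis, as the paper does for the monochromator system.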

  7. A novel platelet-rich arterial thrombosis model in rabbits. Simple, reproducible, and dynamic real-time measurement by using double-opposing inverted-sutures model.

    PubMed

    Shieh, S J; Chiu, H Y; Shi, G Y; Wu, C M; Wang, J C; Chen, C H; Wu, H L

    2001-09-01

    Though numerous animal thrombosis models have been introduced, an easy, reliable, and reproducible arterial thrombosis model remains a continuing challenge for thrombolytic studies. In an effort to evaluate the efficiency of various recombinant thrombolytic agents with specific affinity to activated platelets in vivo, we developed a novel double-opposing inverted-sutures model to create a platelet-rich thrombus in the femoral artery of rabbits. The arteriotomy was done semicircumferentially, and variously sized microsurgical sutures were introduced intraluminally in a double-opposing inverted manner. The animals were divided into three groups according to the double-opposing inverted-sutures used: Group 1 with 10-0 nylon (n=6), Group 2 with 9-0 nylon (n=6), and Group 3 with 8-0 nylon (n=22). The superficial epigastric branch was cannulated with a thin polyethylene (PE) tube for intraarterial administration of the studied thrombolytic agent. The blood flow was continuously measured with a real-time ultrasonic flow meter. Within 2 h of installation of the sutures, there was no thrombus formation in either Group 1 or 2. In Group 3, the thrombosis rate was 91% (20 of 22) under a steady baseline flow (averaging 12.23+/-2.40 ml/min), a highly statistically significant result (P=.0000743, Fisher's Exact Test). The average time to thrombosis was 21.8+/-9.8 min. The dynamic real-time measurement of blood flow by the ultrasonic flow meter served as a guideline for thrombus formation or dissolution, which correlated with the morphological findings of the stenotic status of the vessel detected by Doppler sonography. The components of the thrombus were proven to be predominantly platelet-rich by histological examination via hematoxylin and eosin (H&E) staining and transmission electron microscopy (TEM). To confirm that the double-opposing inverted-sutures model would be useful for a study of thrombolytic agents, we evaluated the effects of

  8. Ammonium Removal from Aqueous Solutions by Clinoptilolite: Determination of Isotherm and Thermodynamic Parameters and Comparison of Kinetics by the Double Exponential Model and Conventional Kinetic Models

    PubMed Central

    Tosun, İsmail

    2012-01-01

    The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R2) of above 0.989 with an average relative error lower than 5%. The Double Exponential Model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients. PMID:22690177

  9. Ammonium removal from aqueous solutions by clinoptilolite: determination of isotherm and thermodynamic parameters and comparison of kinetics by the double exponential model and conventional kinetic models.

    PubMed

    Tosun, Ismail

    2012-03-01

    The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R2) of above 0.989 with an average relative error lower than 5%. The Double Exponential Model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients.
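
The double exponential model's two-stage (rapid plus slow) uptake can be sketched as a nonlinear curve fit. The capacities and rate constants below are invented synthetic values for illustration, not the clinoptilolite results reported above.

```python
import numpy as np
from scipy.optimize import curve_fit

def dem(t, qe, a1, k1, a2, k2):
    """Double exponential model: uptake q(t) approaches the equilibrium
    capacity qe through a rapid phase (k1) and a slow phase (k2)."""
    return qe - a1 * np.exp(-k1 * t) - a2 * np.exp(-k2 * t)

# Synthetic batch-test data (mg/g vs. min) with small measurement noise;
# all parameter values are illustrative assumptions.
t = np.linspace(0.0, 300.0, 60)
rng = np.random.default_rng(1)
q_obs = dem(t, 18.0, 10.0, 0.15, 8.0, 0.01) + rng.normal(0.0, 0.05, t.size)

p0 = [15.0, 8.0, 0.1, 8.0, 0.02]               # rough initial guesses
popt, _ = curve_fit(dem, t, q_obs, p0=p0, maxfev=10000)
qe_fit, a1_fit, k1_fit, a2_fit, k2_fit = popt
```

The separation between the fitted fast and slow rate constants is what identifies the two adsorption stages that the DEM analysis in the abstract refers to.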

  10. Modelling evolution on design-by-contract predicts an origin of Life through an abiotic double-stranded RNA world

    PubMed Central

    de Roos, Albert DG

    2007-01-01

    Background It is generally believed that life first evolved from single-stranded RNA (ssRNA) that both stored genetic information and catalyzed the reactions required for self-replication. Presentation of the hypothesis By modeling early genome evolution on the engineering paradigm design-by-contract, an alternative scenario is presented in which life started with the appearance of double-stranded RNA (dsRNA) as an informational storage molecule while catalytic single-stranded RNA was derived from this dsRNA template later in evolution. Testing the hypothesis It was investigated whether this scenario could be implemented mechanistically by starting with abiotic processes. Double-stranded RNA could be formed abiotically by hybridization of oligoribonucleotides that are subsequently non-enzymatically ligated into a double-stranded chain. Thermal cycling driven by the diurnal temperature cycles could then replicate this dsRNA when strands of dsRNA separate and later rehybridize and ligate to reform dsRNA. A temperature-dependent partial replication of specific regions of dsRNA could produce the first template-based generation of catalytic ssRNA, similar to the developmental gene transcription process. Replacement of these abiotic processes by enzymatic processes would guarantee functional continuity. Further transition from a dsRNA to a dsDNA world could be based on minor mutations in template and substrate recognition sites of an RNA polymerase and would leave all existing processes intact. Implications of the hypothesis Modeling evolution on a design pattern, the 'dsRNA first' hypothesis can provide an alternative mechanistic evolutionary scenario for the origin of our genome that preserves functional continuity. Reviewers This article was reviewed by Anthony Poole, Eugene Koonin and Eugene Shakhnovich. PMID:17466073

  11. Gross-Pitaevskii equation for Bose particles in a double-well potential: Two-mode models and beyond

    SciTech Connect

    Ananikian, D.; Bergeman, T.

    2006-01-15

    In this work, our primary goal has been to explore the range of validity of two-mode models for Bose-Einstein condensates in double-well potentials. Our derivation, like others, uses symmetric and antisymmetric condensate basis functions for the Gross-Pitaevskii equation. In what we call an 'improved two-mode model' (I2M), the tunneling coupling energy explicitly includes a nonlinear interaction term, which has been given previously in the literature but not widely appreciated. We show that when the atom number (and hence the extent of the wave function) in each well vary appreciably with time, the nonlinear interaction term produces a temporal change in the tunneling energy or rate, which has not previously been considered to our knowledge. In addition, we obtain a parameter, labeled 'interaction tunneling', that produces a decrease of the tunneling energy when the wave functions in the two wells overlap to some extent. Especially for larger values of the nonlinear interaction term, results from this model produce better agreement with numerical solutions of the time-dependent Gross-Pitaevskii equation in one and three dimensions, as compared with models that have no interaction term in the tunneling energy. The usefulness of this model is demonstrated by good agreement with recent experimental results for the tunneling oscillation frequency [Albiez et al., Phys. Rev. Lett. 95, 010402 (2005)]. We also present equations and results for a multimode approach, and use the I2M model to obtain modified equations for the second-quantized version of the Bose-Einstein double-well problem.
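
Schematically, two-mode models of this kind evolve the condensate amplitudes in the two wells. The form below is the generic two-mode ansatz, not the paper's exact I2M expressions; the I2M refinement is that the coupling K itself acquires a nonlinear, overlap-dependent contribution.

```latex
i\hbar\,\dot{c}_1 = \left(E_1 + U_1 N_1\right) c_1 - K\,c_2, \qquad
i\hbar\,\dot{c}_2 = \left(E_2 + U_2 N_2\right) c_2 - K\,c_1
% Schematic I2M modification: the tunneling energy is not constant but
% includes an interaction term, K \to K_0 + K_{\mathrm{int}}(N_1, N_2),
% so K varies in time as the well populations N_{1,2} = |c_{1,2}|^2 vary.
```

Here E_{1,2} are on-site energies, U_{1,2} on-site interaction strengths, and N_{1,2} the well populations; the time dependence of K through N_{1,2} is the effect the abstract describes.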

  12. Wind-driven, double-gyre, ocean circulation in a reduced-gravity, 2.5-layer, lattice Boltzmann model

    NASA Astrophysics Data System (ADS)

    Zhong, L. H.; Feng, S. D.; Luo, D. H.; Gao, S. T.

    2006-07-01

    A coupled lattice Boltzmann (LB) model with second-order accuracy is applied to the reduced-gravity, shallow water, 2.5-layer model for wind-driven double-gyre ocean circulation. By introducing the second-order integral approximation for the collision operator, the model becomes fully explicit. The Coriolis force and other external forces are included in the model with second-order accuracy, which is consistent with the discretization accuracy of the LB equation. The feature of multiple equilibria solutions is found in the numerical experiments under different Reynolds numbers based on this LB scheme. With the Reynolds number increasing from 3000 to 4000, the solution of this model is destabilized from the antisymmetric double-gyre solution to the subtropical gyre solution and then to the subpolar gyre solution. Transitions between these equilibrium states are also found in some parameter ranges. The time-dependent variability of the circulation based on this LB simulation is also discussed for varying viscosity regimes. The flow of this model exhibits oscillations with different timescales varying from subannual to interannual. The corresponding statistical oscillation modes are obtained by spectral analysis. By analyzing the spatio-temporal structures of these modes, it is found that the subannual oscillation with a 9-month period originates from the barotropic Rossby basin mode, and the interannual oscillations with periods ranging from 1.5 years to 4.6 years originate from the recirculation gyre modes, which include the barotropic and the baroclinic recirculation gyre modes.

  13. Deep convolutional networks for pancreas segmentation in CT imaging

    NASA Astrophysics Data System (ADS)

    Roth, Holger R.; Farag, Amal; Lu, Le; Turkbey, Evrim B.; Summers, Ronald M.

    2015-03-01

    Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving high accuracies when compared to state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that the classification features used are trained directly from the imaging data. We present a fully automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated, using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger than 0.5 are retained. These retained superpixels serve as a highly sensitive initial input of the pancreas and its surroundings to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve maximum Dice scores averaging 68% +/- 10% (range, 43-80%) in testing. This shows promise for accurate pancreas segmentation using a deep learning approach, and compares favorably to state-of-the-art methods.

  14. A method for medulloblastoma tumor differentiation based on convolutional neural networks and transfer learning

    NASA Astrophysics Data System (ADS)

    Cruz-Roa, Angel; Arévalo, John; Judkins, Alexander; Madabhushi, Anant; González, Fabio

    2015-12-01

    Convolutional neural networks (CNN) have been very successful at addressing different computer vision tasks thanks to their ability to learn image representations directly from large amounts of labeled data. Features learned from one dataset can be used to represent images from a different dataset via an approach called transfer learning. In this paper we apply transfer learning to the challenging task of medulloblastoma tumor differentiation. We compare two different CNN models which were previously trained in two different domains (natural and histopathology images). The first CNN is a state-of-the-art approach in computer vision, a large and deep 16-layer CNN from the Visual Geometry Group (VGG). The second (IBCa-CNN) is a 2-layer CNN trained for invasive breast cancer tumor classification. Both CNNs are used as visual feature extractors of histopathology image regions of anaplastic and non-anaplastic medulloblastoma tumor from digitized whole-slide images. The features from the two models are used, separately, to train a softmax classifier to discriminate between anaplastic and non-anaplastic medulloblastoma image regions. Experimental results show that the transfer learning approach produces competitive results in comparison with state-of-the-art approaches. Results also show that features extracted from the IBCa-CNN have better performance than features extracted from the VGG-CNN: the former obtains 89.8% average accuracy while the latter obtains 76.6%.

  15. Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data

    NASA Astrophysics Data System (ADS)

    Anirudh, Rushil; Thiagarajan, Jayaraman J.; Bremer, Timo; Kim, Hyojin

    2016-03-01

    Early detection of lung nodules is currently one of the most effective ways to predict and treat lung cancer. As a result, the past decade has seen a lot of focus on computer-aided diagnosis (CAD) of lung nodules, whose goal is to efficiently detect and segment lung nodules and classify them as benign or malignant. Effective detection of such nodules remains a challenge due to their arbitrariness in shape, size and texture. In this paper, we propose to employ 3D convolutional neural networks (CNN) to learn highly discriminative features for nodule detection in lieu of hand-engineered ones such as geometric shape or texture. While 3D CNNs are promising tools to model the spatio-temporal statistics of data, they are limited by their need for detailed 3D labels, which can be prohibitively expensive to obtain compared to 2D labels. Existing CAD methods rely on obtaining detailed nodule labels to train models, which is unrealistic and time-consuming. To alleviate this challenge, we propose a solution wherein the expert needs to provide only a point label, i.e., the central pixel of the nodule, and its largest expected size. We use unsupervised segmentation to grow out a 3D region, which is used to train the CNN. Using experiments on the SPIE-LUNGx dataset, we show that the network trained using these weak labels can produce reasonably low false positive rates with a high sensitivity, even in the absence of accurate 3D labels.
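
The weak-label idea of growing a 3D region from a single seed point can be sketched with a simple intensity-based flood fill. The threshold rule, toy volume, and radius cap below are illustrative assumptions, not the paper's actual unsupervised segmentation method.

```python
import numpy as np
from collections import deque

def grow_region(vol, seed, max_radius, tol=0.2):
    """Flood-fill the 6-connected voxels around `seed` whose intensity is
    within `tol` of the seed intensity, limited to a ball of `max_radius`
    (the expert-provided largest expected nodule size)."""
    mask = np.zeros(vol.shape, dtype=bool)
    s_val = vol[seed]
    q = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) and not mask[n]:
                dist2 = sum((n[i] - seed[i]) ** 2 for i in range(3))
                if dist2 <= max_radius ** 2 and abs(vol[n] - s_val) <= tol:
                    mask[n] = True
                    q.append(n)
    return mask

# Toy volume: a bright 'nodule' blob around (8, 8, 8) on a dark background.
vol = np.zeros((16, 16, 16))
zz, yy, xx = np.mgrid[:16, :16, :16]
vol[(zz - 8) ** 2 + (yy - 8) ** 2 + (xx - 8) ** 2 <= 9] = 1.0
mask = grow_region(vol, seed=(8, 8, 8), max_radius=6)
```

The grown mask would then stand in for a detailed 3D annotation when training the CNN, which is the labeling shortcut the abstract proposes.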

  16. A unified analytical drain current model for Double-Gate Junctionless Field-Effect Transistors including short channel effects

    NASA Astrophysics Data System (ADS)

    Raksharam; Dutta, Aloke K.

    2017-04-01

    In this paper, a unified analytical model for the drain current of a symmetric Double-Gate Junctionless Field-Effect Transistor (DG-JLFET) is presented. The operation of the device has been classified into four modes: subthreshold, semi-depleted, accumulation, and hybrid; the main focus of this work is on the accumulation mode, which has not been dealt with in detail so far in the literature. A physics-based model, using a simplified one-dimensional approach, has been developed for this mode, and it has been successfully integrated with the model for the hybrid mode. It also includes the effect of carrier mobility degradation due to the transverse electric field, which was hitherto missing in earlier models reported in the literature. The piece-wise models have been unified using suitable interpolation functions. In addition, the model includes the two most important short-channel effects pertaining to DG-JLFETs, namely Drain Induced Barrier Lowering (DIBL) and Subthreshold Swing (SS) degradation. The model is completely analytical, and is thus computationally highly efficient. The results of our model show an excellent match with those obtained from TCAD simulations for both long- and short-channel devices, as well as with the experimental data reported in the literature.
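
The piecewise-models-unified-by-interpolation idea can be sketched generically. The regime expressions and the logistic weight below are invented stand-ins for illustration, not the paper's actual DG-JLFET equations.

```python
import numpy as np

# Hypothetical stand-ins for two operating-regime currents: an exponential
# subthreshold branch and a quadratic above-threshold branch.
def i_sub(vg):
    return 1e-12 * np.exp(vg / 0.060)            # exponential in gate voltage

def i_acc(vg, vt=0.4):
    return 1e-4 * np.maximum(vg - vt, 0.0) ** 2  # quadratic in overdrive

def i_unified(vg, vt=0.4, sharp=0.02):
    # A logistic interpolation function hands over smoothly between the
    # piecewise branches around the threshold voltage vt.
    w = 1.0 / (1.0 + np.exp(-(vg - vt) / sharp))
    return (1.0 - w) * i_sub(vg) + w * i_acc(vg, vt)

vg = np.linspace(0.0, 1.0, 201)
i = i_unified(vg)   # single analytical expression valid across both regimes
```

Far from the transition the unified expression reduces to the appropriate regime model, which is exactly the behavior an interpolation-based unification is chosen to guarantee.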

  17. Oxidative stress accelerates amyloid deposition and memory impairment in a double-transgenic mouse model of Alzheimer's disease.

    PubMed

    Kanamaru, Takuya; Kamimura, Naomi; Yokota, Takashi; Iuchi, Katsuya; Nishimaki, Kiyomi; Takami, Shinya; Akashiba, Hiroki; Shitaka, Yoshitsugu; Katsura, Ken-Ichiro; Kimura, Kazumi; Ohta, Shigeo

    2015-02-05

    Oxidative stress is known to play a prominent role in the onset and early stage progression of Alzheimer's disease (AD). For example, protein oxidation and lipid peroxidation levels are increased in patients with mild cognitive impairment. Here, we created a double-transgenic mouse model of AD to explore the pathological and behavioral effects of oxidative stress. Double transgenic (APP/DAL) mice were constructed by crossing Tg2576 (APP) mice, which express a mutant form of human amyloid precursor protein (APP), with DAL mice expressing a dominant-negative mutant of mitochondrial aldehyde dehydrogenase 2 (ALDH2), in which oxidative stress is enhanced. Y-maze and object recognition tests were performed at 3 and 6 months of age to evaluate learning and memory. The accumulation of amyloid plaques, deposition of phosphorylated-tau protein, and number of astrocytes in the brain were assessed histopathologically at 3, 6, 9, and 12-15 months of age. The life span of APP/DAL mice was significantly shorter than that of APP or DAL mice. In addition, they showed accelerated amyloid deposition, tau phosphorylation, and gliosis. Furthermore, these mice showed impaired performance on Y-maze and object recognition tests at 3 months of age. These data suggest that oxidative stress accelerates cognitive dysfunction and pathological insults in the brain. APP/DAL mice could be a useful model for exploring new approaches to AD treatment.

  18. The Quasi-Biennial Oscillation Based on Double Gaussian Distributional Parameterization of Inertial Gravity Waves in WACCM Model

    NASA Astrophysics Data System (ADS)

    Yu, C.; Xue, X.; Dou, X.; Wu, J.

    2015-12-01

    The adjustment of gravity wave parameterization associated with model convection has made possible the spontaneous generation of the quasi-biennial oscillation (QBO) in the Whole Atmosphere Community Climate Model (WACCM 4.0), although there are some mismatches when compared with observations. The parameterization is based on Lindzen's linear saturation theory, which can better describe inertia-gravity waves (IGW) by taking the Coriolis effects into consideration. In this work we improve the parameterization by importing a more realistic double Gaussian distribution IGW spectrum, calculated from tropical radiosonde observations. A series of WACCM simulations are performed to determine the relationship between the period and amplitude of equatorial zonal wind oscillations and the features of the parameterized IGW. All of these simulations are capable of generating equatorial wind oscillations in the stratosphere using the standard spatial resolution settings. The period of the oscillation is inversely associated with the strength of the IGW forcing, but the central values of the double Gaussian distribution influence both the magnitude and the period of the oscillation. In fact, the eastward and westward IGWs affect the corresponding amplitudes of the QBO wind, and the strength of the IGW forcing determines the acceleration rate of the QBO wind. Furthermore, stronger IGW forcing can lead to deeper propagation of the QBO phase, which can extend the lowest altitude of the constant zonal wind amplitudes to about 100 hPa.
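
A double-Gaussian IGW source spectrum of the sort described can be sketched as momentum-flux density versus zonal phase speed, with an eastward and a westward lobe. The lobe centers, widths, and amplitudes below are invented illustrative numbers, not the radiosonde-derived values.

```python
import numpy as np

def double_gaussian_flux(c, b_e=0.004, b_w=0.004, c_e=25.0, c_w=-25.0, sigma=10.0):
    """Momentum-flux density vs. zonal phase speed c (m/s): an eastward
    Gaussian lobe centered at c_e plus a westward (negative) lobe at c_w.
    The lobe centers and amplitudes are the knobs that, per the abstract,
    set the QBO wind amplitude; the overall strength sets its period."""
    east = b_e * np.exp(-((c - c_e) / sigma) ** 2)
    west = -b_w * np.exp(-((c - c_w) / sigma) ** 2)
    return east + west

c = np.linspace(-80.0, 80.0, 321)          # phase-speed grid, m/s
flux = double_gaussian_flux(c)
net = flux.sum() * (c[1] - c[0])           # symmetric lobes -> near-zero net flux
```

With symmetric lobes the net launched flux is nearly zero, so eastward and westward waves alternately dominate the forcing as the background wind filters them, which is the mechanism behind the oscillation.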

  19. Use of two-dimensional transmission photoelastic models to study stresses in double-lap bolted joints: Load transfer and stresses in the inner lap

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.

    1980-01-01

    The determination of the stress distribution in the inner lap of double-lap, double-bolt joints using photoelastic models of the joint is discussed. The principal idea is to fabricate the inner lap of a photoelastic material and to use a photoelastically insensitive material for the two outer laps. With this setup, polarized light transmitted through the stressed model responds principally to the stressed inner lap. The model geometry, the procedures for making and testing the model, and test results are described.

  20. 1-D seismic velocity model and hypocenter relocation using double difference method around West Papua region

    SciTech Connect

    Sabtaji, Agung E-mail: agung.sabtaji@bmkg.go.id; Nugraha, Andri Dian

    2015-04-24

    The West Papua region has fairly high seismicity due to its tectonic setting and many inland faults. In addition, the region has unique and complex tectonic conditions, and this situation leads to high seismic hazard potential in the region. Precise earthquake hypocenter locations are very important, as they can provide high-quality earthquake parameter information and insight into the subsurface structure of this region to society. We constructed a 1-D P-wave velocity model using the BMKG earthquake catalog from April 2009 up to March 2014 around the West Papua region. The obtained 1-D seismic velocity model was then used as input for improving hypocenter locations using the double-difference method. The relocated hypocenter locations show fairly clearly the pattern of intraslab earthquakes beneath the New Guinea Trench (NGT). The relocated hypocenters related to the inland faults are also observed to be more focused in location around the faults.
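
The double-difference method referenced above minimizes residuals of differential travel times between pairs of nearby events; schematically, following the standard hypoDD formulation:

```latex
dr_k^{ij} \;=\; \left(t_k^i - t_k^j\right)^{\mathrm{obs}}
          \;-\; \left(t_k^i - t_k^j\right)^{\mathrm{cal}}
% where t_k^i is the travel time from event i to station k. Because paired
% events lie close together, common path and velocity-model errors largely
% cancel in the difference, sharpening the relative hypocenter locations.
```

Minimizing these double-difference residuals for all event pairs and stations is what tightens the relocated hypocenters into the intraslab and fault-related clusters described in the abstract.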