Science.gov

Sample records for double convolution model

  1. A digital model for streamflow routing by convolution methods

    USGS Publications Warehouse

    Doyle, W.H., Jr.; Shearman, H.O.; Stiltner, G.J.; Krug, W.O.

    1984-01-01

    A U.S. Geological Survey computer model, CONROUT, for routing streamflow by unit-response convolution flow-routing techniques from an upstream channel location to a downstream channel location has been developed and documented. Calibration and verification of the flow-routing model and subsequent use of the model for simulation are also documented. Three hypothetical examples and two field applications are presented to illustrate basic flow-routing concepts. Most of the discussion is limited to daily flow routing since, to date, all completed and current studies of this nature have involved daily flow routing. However, the model is programmed to accept hourly flow-routing data. (USGS)
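
    The unit-response convolution at the heart of this kind of routing can be illustrated in a few lines: the downstream hydrograph is the upstream hydrograph convolved with a unit-response function. A minimal sketch with hypothetical unit-response ordinates (not taken from the CONROUT documentation):

      import numpy as np

      # Hypothetical daily unit-response ordinates (sum to ~1 to conserve volume)
      unit_response = np.array([0.05, 0.30, 0.35, 0.20, 0.10])

      # Upstream daily flows (m^3/s), an arbitrary example hydrograph
      upstream = np.array([10.0, 12.0, 40.0, 80.0, 55.0, 30.0, 20.0, 15.0, 12.0, 10.0])

      # Unit-response (discrete) convolution: each downstream ordinate is a
      # weighted sum of current and antecedent upstream flows.
      downstream = np.convolve(upstream, unit_response)[:len(upstream)]

      for day, (qin, qout) in enumerate(zip(upstream, downstream), start=1):
          print(f"day {day:2d}: upstream {qin:6.1f}  routed {qout:6.1f}")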

  2. A convolution model of rock bed thermal storage units

    NASA Astrophysics Data System (ADS)

    Sowell, E. F.; Curry, R. L.

    1980-01-01

    A method is presented whereby a packed-bed thermal storage unit is dynamically modeled for bi-directional flow and arbitrary input flow stream temperature variations. The method is based on the principle of calculating the output temperature as the sum of earlier input temperatures, each multiplied by a predetermined 'response factor', i.e., discrete convolution. A computer implementation of the scheme, in the form of a subroutine for a widely used solar simulation program (TRNSYS), is described and numerical results are compared with those of other models. Also, a method for efficient computation of the required response factors is described; the solution given is for a triangular input pulse, previously unreported, although the solution method is also applicable to other input functions. This solution requires a single integration of a known function, which is easily carried out numerically to the required precision.
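
    The response-factor scheme described here is a discrete convolution: the outlet temperature is a weighted sum of the current and earlier inlet temperatures. A minimal sketch with made-up response factors (the paper derives them for a triangular input pulse; the values below are illustrative only):

      import numpy as np

      # Illustrative response factors r_j (not the paper's values); they sum to ~1
      # so that a constant inlet temperature is reproduced at the outlet.
      r = np.array([0.02, 0.10, 0.25, 0.30, 0.20, 0.10, 0.03])

      # Inlet flow-stream temperature history T_in[k] (deg C), arbitrary example
      T_in = np.array([20, 22, 30, 45, 60, 58, 50, 40, 32, 26, 22, 20], dtype=float)

      # Discrete convolution: T_out[k] = sum_j r[j] * T_in[k - j]
      T_out = np.convolve(T_in, r)[:len(T_in)]
      print(np.round(T_out, 1))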

  3. Convolution and non-convolution Perfectly Matched Layer techniques optimized at grazing incidence for high-order wave propagation modelling

    NASA Astrophysics Data System (ADS)

    Martin, Roland; Komatitsch, Dimitri; Bruthiaux, Emilien; Gedney, Stephen D.

    2010-05-01

    We present and discuss here two different unsplit formulations of the frequency-shift PML based on convolution or non-convolution integrations of auxiliary memory variables. The Perfectly Matched Layer absorbing boundary condition has proven to be very efficient from a numerical point of view for the elastic wave equation, absorbing both body waves with non-grazing incidence and surface waves. However, at grazing incidence the classical discrete Perfectly Matched Layer method suffers from large spurious reflections that make it less efficient, for instance in the case of very thin mesh slices, sources located very close to the edge of the mesh, and/or receivers located at very large offset. In [1] we improved the Perfectly Matched Layer at grazing incidence for the seismic wave equation based on an unsplit convolution technique. This improved PML has a cost that is similar in terms of memory storage to that of the classical PML. We illustrate the efficiency of this improved Convolutional Perfectly Matched Layer with numerical benchmarks using a staggered finite-difference method on a very thin mesh slice for an isotropic material and show that results are significantly improved compared with the classical Perfectly Matched Layer technique. We also show that, like the classical PML, the technique is intrinsically unstable for some anisotropic materials. In this case, retaining an idea of [2], it has been stabilized by adding correction terms along each coordinate axis [3]. More specifically, this has been applied to the spectral-element method based on a hybrid first/second-order time integration scheme in which the Newmark time-marching scheme allows us to match perfectly, at the base of the absorbing layer, a velocity-stress formulation in the PML and a second-order displacement formulation in the inner computational domain. Our CPML unsplit formulation has the advantage of reducing the memory storage of the CPML.

  4. Forecasting natural aquifer discharge using a numerical model and convolution.

    PubMed

    Boggs, Kevin G; Johnson, Gary S; Van Kirk, Rob; Fairley, Jerry P

    2014-01-01

    If the nature of groundwater sources and sinks can be determined or predicted, the data can be used to forecast natural aquifer discharge. We present a procedure to forecast the relative contribution of individual aquifer sources and sinks to natural aquifer discharge. Using these individual aquifer recharge components, along with observed aquifer heads for each January, we generate a 1-year, monthly spring discharge forecast for the upcoming year with an existing numerical model and convolution. The results indicate that a forecast of natural aquifer discharge can be developed using only the dominant aquifer recharge sources combined with the effects of aquifer heads (initial conditions) at the time the forecast is generated. We also estimate how our forecast will perform in the future using a jackknife procedure, which indicates that the future performance of the forecast is good (Nash-Sutcliffe efficiency of 0.81). We develop a forecast and demonstrate important features of the procedure by presenting an application to the Eastern Snake Plain Aquifer in southern Idaho. PMID:23914881

  5. A staggered-grid convolutional differentiator for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first-derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution and as efficient as the finite-difference (FD) method. The selection of the window function influences the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for CDs of different orders by minimizing the spectral error of the derivative, and compare them with the usual Hanning window for tapering the CD operators. We find that the optimal Gaussian window is similar to the Hanning window for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th-order staggered-grid CD operator can achieve the same accuracy as a 16th-order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagation.
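
    One way to realize the construction described above, as a sketch of my reading rather than the authors' code: inverse Fourier transforming the band-limited spectrum of the first-derivative operator at half-integer grid offsets gives closed-form staggered coefficients c_m = (-1)^(m+1) * 4 / (pi * (2m-1)^2), which are then tapered by a window (Hanning here, standing in for the optimized Gaussian windows of the paper) and applied as a short convolution stencil:

      import numpy as np

      def staggered_cd_coeffs(half_len, window="hanning"):
          """Band-limited staggered-grid first-derivative coefficients, tapered."""
          m = np.arange(1, half_len + 1)
          c = (-1.0) ** (m + 1) * 4.0 / (np.pi * (2 * m - 1) ** 2)  # analytical CD weights
          if window == "hanning":
              c *= 0.5 * (1.0 + np.cos(np.pi * (m - 0.5) / half_len))  # taper window
          return c

      dx = 0.1
      x = np.arange(0, 10, dx)
      f = np.sin(2 * np.pi * 0.3 * x)              # field sampled at integer grid nodes
      c = staggered_cd_coeffs(half_len=4)

      # Derivative evaluated on the staggered (half-node) positions x[j] - dx/2
      df = np.zeros(len(x) - 2 * len(c))
      for i in range(len(df)):
          j = i + len(c)
          df[i] = sum(c[m] * (f[j + m] - f[j - m - 1]) for m in range(len(c))) / dx

      x_half = x[len(c):len(c) + len(df)] - dx / 2
      exact = 2 * np.pi * 0.3 * np.cos(2 * np.pi * 0.3 * x_half)
      print("max abs error:", np.max(np.abs(df - exact)))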

  6. A convolution model for computing the far-field directivity of a parametric loudspeaker array.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2015-02-01

    This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity with Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. The directivity of a PLA computed with the proposed convolution model shows significantly improved agreement with measured directivity at negligible computational cost. PMID:25698012

  7. Gamma convolution models for self-diffusion coefficient distributions in PGSE NMR

    NASA Astrophysics Data System (ADS)

    Röding, Magnus; Williamson, Nathan H.; Nydén, Magnus

    2015-12-01

    We introduce a closed-form signal attenuation model for pulsed-field gradient spin echo (PGSE) NMR based on self-diffusion coefficient distributions that are convolutions of n gamma distributions, n ⩾ 1. Gamma convolutions provide a general class of unimodal distributions that includes the gamma distribution as a special case for n = 1 and the lognormal distribution, among others, as limit cases when n approaches infinity. We demonstrate the usefulness of the gamma convolution model by simulations and experimental data from samples of poly(vinyl alcohol) and polystyrene, showing that this model provides goodness of fit superior to both the gamma and lognormal distributions and comparable to the commonly used inverse Laplace transform.
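
    On the usual Stejskal-Tanner reading, E(b) is the Laplace transform of the diffusivity distribution, E(b) = integral of exp(-bD) f(D) dD, so a distribution f that is the convolution of n gamma densities gives the product form E(b) = prod_i (1 + theta_i*b)^(-k_i). This is my reconstruction of the closed form, not code from the paper, and the parameters below are made up rather than fitted to the PVA or polystyrene data:

      import numpy as np

      def gamma_convolution_attenuation(b, shapes, scales):
          """PGSE attenuation E(b) when the diffusivity distribution is a
          convolution (sum) of independent gamma components:
          E(b) = prod_i (1 + theta_i * b) ** (-k_i)."""
          b = np.asarray(b, dtype=float)
          E = np.ones_like(b)
          for k, theta in zip(shapes, scales):
              E *= (1.0 + theta * b) ** (-k)
          return E

      # Illustrative parameters only (n = 2 gamma components)
      b = np.linspace(0, 5e9, 6)        # s/m^2
      shapes = [2.0, 3.0]               # k_1, k_2
      scales = [5e-11, 1e-10]           # theta_1, theta_2 (m^2/s)
      print(gamma_convolution_attenuation(b, shapes, scales))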

  8. Digital Tomosynthesis System Geometry Analysis Using Convolution-Based Blur-and-Add (BAA) Model.

    PubMed

    Wu, Meng; Yoon, Sungwon; Solomon, Edward G; Star-Lack, Josh; Pelc, Norbert; Fahrig, Rebecca

    2016-01-01

    Digital tomosynthesis is a three-dimensional imaging technique with a lower radiation dose than computed tomography (CT). Due to the missing data in tomosynthesis systems, out-of-plane structures in the depth direction cannot be completely removed by the reconstruction algorithms. In this work, we analyzed the impulse responses of common tomosynthesis systems on a plane-to-plane basis and proposed a fast and accurate convolution-based blur-and-add (BAA) model to simulate the backprojected images. In addition, the analysis formalism describing the impulse response of out-of-plane structures can be generalized to both rotating and parallel gantries. We implemented a ray tracing forward projection and backprojection (ray-based model) algorithm and the convolution-based BAA model to simulate the shift-and-add (backproject) tomosynthesis reconstructions. The convolution-based BAA model with proper geometry distortion correction provides reasonably accurate estimates of the tomosynthesis reconstruction. A numerical comparison indicates that the simulated images using the two models differ by less than 6% in terms of the root-mean-squared error. This convolution-based BAA model can be used in efficient system geometry analysis, reconstruction algorithm design, out-of-plane artifacts suppression, and CT-tomosynthesis registration. PMID:26208308

  9. Convolution modeling of two-domain, nonlinear water-level responses in karst aquifers (Invited)

    NASA Astrophysics Data System (ADS)

    Long, A. J.

    2009-12-01

    Convolution modeling is a useful method for simulating the hydraulic response of water levels to sinking streamflow or precipitation infiltration at the macro scale. This approach is particularly useful in karst aquifers, where the complex geometry of the conduit and pore network is not well characterized but can be represented approximately by a parametric impulse-response function (IRF) with very few parameters. For many applications, one-dimensional convolution models can be equally effective as complex two- or three-dimensional models for analyzing water-level responses to recharge. Moreover, convolution models are well suited for identifying and characterizing the distinct domains of quick flow and slow flow (e.g., conduit flow and diffuse flow). Two superposed lognormal functions were used in the IRF to approximate the impulses of the two flow domains. Nonlinear response characteristics of the flow domains were assessed by observing temporal changes in the IRFs. Precipitation infiltration was simulated by filtering the daily rainfall record with a backward-in-time exponential function that weights each day’s rainfall with the rainfall of previous days and thus accounts for the effects of soil moisture on aquifer infiltration. The model was applied to the Edwards aquifer in Texas and the Madison aquifer in South Dakota. Simulations of both aquifers showed similar characteristics, including a separation on the order of years between the quick-flow and slow-flow IRF peaks and temporal changes in the IRF shapes when water levels increased and empty pore spaces became saturated.
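
    A minimal sketch of the two-domain convolution idea with illustrative parameters only: daily rainfall is first filtered with a backward-in-time exponential to mimic the soil-moisture effect, then convolved with an impulse-response function built from two superposed lognormal functions representing quick flow and slow flow:

      import numpy as np

      rng = np.random.default_rng(0)
      n_days = 2000
      rain = rng.gamma(shape=0.3, scale=5.0, size=n_days)   # synthetic daily rainfall

      # Backward-in-time exponential filter (soil-moisture memory, tau in days)
      tau = 30.0
      w = np.exp(-np.arange(0, 5 * int(tau)) / tau)
      infiltration = np.convolve(rain, w / w.sum())[:n_days]

      # Impulse-response function: two superposed lognormals (quick + slow flow)
      t = np.arange(1, 1500, dtype=float)                   # days
      def lognorm_pdf(t, mu, sigma):
          return np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (t * sigma * np.sqrt(2 * np.pi))

      irf = 0.6 * lognorm_pdf(t, mu=np.log(20), sigma=0.8) \
          + 0.4 * lognorm_pdf(t, mu=np.log(700), sigma=0.5)

      # Water-level (or spring-flow) response = infiltration convolved with the IRF
      response = np.convolve(infiltration, irf)[:n_days]
      print(response[-5:])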

  10. Fully 3D Particle-in-Cell Simulation of Double Post-Hole Convolute on PTS Facility

    NASA Astrophysics Data System (ADS)

    Zhao, Hailong; Dong, Ye; Zhou, Haijing; Zou, Wenkang; Institute of Fluid Physics Collaboration; Institute of Applied Physics; Computational Mathematics Collaboration

    2015-11-01

    In order to gain a better understanding of the energy transformation and convergence processes in High Energy Density Physics (HEDP) experiments, the fully 3D particle-in-cell (PIC) simulation code NEPTUNE3D was used to provide numerical estimates of parameters that could hardly be acquired through diagnostics. A cubic region (34cm × 34cm × 18cm) including the double post-hole convolute (DPHC) of the primary test stand (PTS) facility was chosen for a series of fully 3D PIC simulations; the computing capability of the code was tested and preliminary simulation results for the DPHC on the PTS facility are discussed. Taking advantage of the 3D simulation code and large-scale parallel computation, massive data (~ 250GB) could be acquired in less than 5 hours, and the process of current transfer and electron emission in the DPHC was demonstrated clearly with the help of visualization tools. Cold-chamber tests were performed in which only cathode electron emission was considered, without temperature rise or ion emission; the current loss efficiency was estimated to be 0.46% ~ 0.48% by comparing output magnetic field profiles with and without electron emission. Project supported by the National Natural Science Foundation of China (Grant Nos. 11205145, 11305015, 11475155).

  11. The Luminous Convolution Model for Galaxy Rotation Curves

    NASA Astrophysics Data System (ADS)

    Rubin, Shanon; Mucci, Maria; Sophia Cisneros Collaboration; Kennard Chng Collaboration; Meagan Crowley Collaboration

    2016-03-01

    The LCM takes as input only the observed luminous matter profile of a galaxy and allows us to confirm these observed data by considering frame-dependent effects from the luminous mass profile of the Milky Way. The LCM is useful when looking at galaxies that have similar total enclosed mass but varying distributions. For example, variations in luminous matter profiles for a diffuse galaxy correlate with the LCM's five different Milky Way models equally well, but LCM fits for a centrally condensed galaxy distinguish between Milky Way models. In this presentation, we show how the rotation curve data of such galaxies can be used to constrain the Milky Way luminous mass modeling, using the physical characteristics of each galaxy to interpret the fitting. Current investigations will be presented showing how the convolved parameters of Keplerian predictions with rotation curve observations can be extracted with respect to the crossing location of the relative curvature versus the assumption of the luminous mass profiles from photometry. Since there currently exists no direct constraint on photometric estimates of the luminous mass in these systems, the LCM gives the first constraint based on the orthogonal measurement of Doppler-shifted spectra from characteristic emitters.

  12. Vehicle detection based on visual saliency and deep sparse convolution hierarchical model

    NASA Astrophysics Data System (ADS)

    Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long

    2016-06-01

    Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted features for classifier training in vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct detection rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, which outperforms existing state-of-the-art algorithms. More importantly, highly discriminative multi-scale features are generated by the deep sparse convolution network, which has broad application prospects for target recognition in the field of intelligent vehicles.

  13. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high-resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings, a key element of city structure, for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of building roof models such as flat, gable, hip, and pyramid-hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that help the convolutional neural network localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network based on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach in detecting and extracting the pattern of building roofs automatically, considering the complementary nature of height and RGB information.

  14. Revision of the theory of tracer transport and the convolution model of dynamic contrast enhanced magnetic resonance imaging

    PubMed Central

    Bammer, Roland; Stollberger, Rudolf

    2012-01-01

    Counterexamples are used to motivate the revision of the established theory of tracer transport. Then dynamic contrast enhanced magnetic resonance imaging in particular is conceptualized in terms of a fully distributed convection–diffusion model from which a widely used convolution model is derived using, alternatively, compartmental discretizations or semigroup theory. On this basis, applications and limitations of the convolution model are identified. For instance, it is proved that perfusion and tissue exchange states cannot be identified on the basis of a single convolution equation alone. Yet under certain assumptions, particularly that flux is purely convective at the boundary of a tissue region, physiological parameters such as mean transit time, effective volume fraction, and volumetric flow rate per unit tissue volume can be deduced from the kernel. PMID:17429633
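
    The convolution model referred to above is commonly written as C_t(t) = F * (C_a ⊗ R)(t): tissue concentration as the arterial input function convolved with a residue-function kernel and scaled by the volumetric flow rate per unit tissue volume. A generic numerical sketch (the arterial input function and residue function below are illustrative choices, not taken from the paper):

      import numpy as np

      dt = 0.5                                   # s, sampling interval
      t = np.arange(0, 120, dt)

      # Illustrative arterial input function (gamma-variate bolus)
      Ca = (t / 10.0) ** 2 * np.exp(-t / 10.0)

      # Illustrative residue function R(t) and volumetric flow rate F per unit volume
      F = 0.01                                   # 1/s
      R = np.exp(-t / 25.0)                      # mono-exponential residue function

      # Convolution model: Ct(t) = F * integral of Ca(tau) R(t - tau) dtau
      Ct = F * np.convolve(Ca, R)[:len(t)] * dt
      print(Ct.max())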

  15. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    SciTech Connect

    Jacques, Robert; Wong, John; Taylor, Russell; McNutt, Todd

    2011-01-15

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high-resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material-centric approach, which increased performance by over 80%. Superposition performance was improved by ≈24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144× over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³

  16. SU-E-T-08: A Convolution Model for Head Scatter Fluence in the Intensity Modulated Field

    SciTech Connect

    Chen, M; Mo, X; Chen, Y; Parnell, D; Key, S; Olivera, G; Galmarini, W; Lu, W

    2014-06-01

    Purpose: To efficiently calculate the head scatter fluence for an arbitrary intensity-modulated field with any source distribution using the source occlusion model. Method: The source occlusion model with focal and extra-focal radiation (Jaffray et al., 1993) can be used to account for LINAC head scatter. In the model, the fluence map of any field shape at any point can be calculated via integration of the source distribution within the visible range, as confined by each segment, using the detector's eye view. A 2D integration would be required for each segment and each fluence plane point, which is time-consuming, as an intensity-modulated field typically contains tens to hundreds of segments. In this work, we prove that the superposition of the segmental integrations is equivalent to a simple convolution regardless of the source distribution. In fact, for each point, the detector's eye view of the field shape can be represented as a function with the origin defined at the point's pinhole reflection through the center of the collimator plane. We were thus able to reduce hundreds of source plane integrations to one convolution. We calculated the fluence map for various 3D and IMRT beams and various extra-focal source distributions using both the segmental integration approach and the convolution approach and compared the computation time and fluence map results of both approaches. Results: The fluence maps calculated using the convolution approach were the same as those calculated using the segmental approach, except for rounding errors (<0.1%). While it took considerably longer to calculate all segmental integrations, the fluence map calculation using the convolution approach took only ∼1/3 of the time for typical IMRT fields with ∼100 segments. Conclusions: The convolution approach for head scatter fluence calculation is fast and accurate and can be used to enhance the online process.
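
    The claimed equivalence can be checked on a toy 1D geometry in which magnification and the pinhole reflection of coordinates are ignored: integrating an assumed source distribution segment by segment and superposing gives exactly the same fluence as a single convolution of the source with the summed aperture map. A rough sketch under those simplifying assumptions (not the paper's geometry):

      import numpy as np

      n = 256
      u = np.arange(n)

      # Illustrative extra-focal source distribution (Gaussian blob)
      source = np.exp(-0.5 * ((u - n / 2) / 15.0) ** 2)

      # A few rectangular "segments" (aperture indicator functions), as in an IMRT field
      segments = [np.where((u > a) & (u < a + w), 1.0, 0.0)
                  for a, w in [(60, 30), (100, 50), (170, 20)]]

      # Approach 1: integrate the source over each segment's aperture, then superpose
      fluence_segmental = sum(np.convolve(source, seg, mode="same") for seg in segments)

      # Approach 2: one convolution with the summed aperture (intensity) map
      fluence_convolved = np.convolve(source, sum(segments), mode="same")

      print(np.allclose(fluence_segmental, fluence_convolved))   # True, by linearity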

  17. Dose convolution filter: Incorporating spatial dose information into tissue response modeling

    SciTech Connect

    Huang Yimei; Joiner, Michael; Zhao Bo; Liao Yixiang; Burmeister, Jay

    2010-03-15

    Purpose: A model is introduced to integrate biological factors such as cell migration and bystander effects into physical dose distributions, and to incorporate spatial dose information in plan analysis and optimization. Methods: The model consists of a dose convolution filter (DCF) with a single parameter σ. Tissue response is calculated by an existing NTCP model with the DCF-applied dose distribution as input. The authors determined σ of rat spinal cord from published data. The authors also simulated the GRID technique, in which an open field is collimated into many pencil beams. Results: After applying the DCF, the NTCP model successfully fits the rat spinal cord data with a predicted value of σ = 2.6 ± 0.5 mm, consistent with the 2 mm migration distances of remyelinating cells. Moreover, it enables the appropriate prediction of a high relative seriality for spinal cord. The model also predicts the sparing of normal tissues by the GRID technique when the size of each pencil beam becomes comparable to σ. Conclusions: The DCF model incorporates spatial dose information and offers an improved way to estimate tissue response from complex radiotherapy dose distributions. It does not alter the prediction of tissue response in large homogenous fields, but successfully predicts increased tissue tolerance in small or highly nonuniform fields.
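
    In essence, the DCF smooths the physical dose with a kernel of width σ before the NTCP calculation. A minimal sketch of that preprocessing step, assuming a Gaussian kernel; the σ value is the one reported in the abstract, while the dose grid and GRID-like pattern are made up:

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      # 1D dose profile along the cord axis (Gy), made-up GRID-like pencil-beam pattern
      dx = 0.5                                    # mm per voxel
      dose = np.zeros(400)
      dose[40:360:40] = 60.0                      # narrow pencil beams, 20 mm apart
      dose = gaussian_filter1d(dose, sigma=2.0)   # a little physical penumbra

      # Dose convolution filter with sigma = 2.6 mm (value reported in the abstract)
      sigma_mm = 2.6
      dcf_dose = gaussian_filter1d(dose, sigma=sigma_mm / dx)

      # The filtered dose, not the raw dose, would then feed the NTCP model
      print(dose.max(), dcf_dose.max())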

  18. The Luminous Convolution Model-The light side of dark matter

    NASA Astrophysics Data System (ADS)

    Cisneros, Sophia; Oblath, Noah; Formaggio, Joe; Goedecke, George; Chester, David; Ott, Richard; Ashley, Aaron; Rodriguez, Adrianna

    2014-03-01

    We present a heuristic model for predicting the rotation curves of spiral galaxies. The Luminous Convolution Model (LCM) utilizes Lorentz-type transformations of very small changes in the photon's frequencies from curved space-times to construct a dynamic mass model of galaxies. These frequency changes are derived using the exact solution to the exterior Kerr wave equation, as opposed to a linearized treatment. The LCM Lorentz-type transformations map between the emitter and the receiver rotating galactic frames, and then to the associated flat frames in each galaxy where the photons are emitted and received. This treatment necessarily rests upon estimates of the luminous matter in both the emitter and the receiver galaxies. The LCM is tested on a sample of 22 randomly chosen galaxies, represented in 33 different data sets. LCM fits are compared to the Navarro, Frenk & White (NFW) dark matter model and to the Modified Newtonian Dynamics (MOND) model when possible. The high degree of sensitivity of the LCM to the initially assumed luminous mass-to-light ratio (M/L) of a given galaxy is demonstrated. We demonstrate that the LCM is successful across a wide range of spiral galaxies in predicting the observed rotation curves. Through the generous support of the MIT Dr. Martin Luther King Jr. Fellowship program.

  19. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer.

    PubMed

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-01-01

    This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and the convolutional perfectly matched layer (CPML). In the process of strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; the conclusion was that this dispersion was positively related to the real stretch and was little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively revealed the dispersion behavior through wave field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for the CPML based on the established criteria. Given its physical nature of simply warping space-time, the CPML method was predicted to be a promising approach to achieve ideal absorption, although it was still difficult to entirely remove the dispersion. PMID:27585538

  20. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer

    PubMed Central

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-01-01

    This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and the convolutional perfectly matched layer (CPML). In the process of strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; the conclusion was that this dispersion was positively related to the real stretch and was little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively revealed the dispersion behavior through wave field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for the CPML based on the established criteria. Given its physical nature of simply warping space-time, the CPML method was predicted to be a promising approach to achieve ideal absorption, although it was still difficult to entirely remove the dispersion. PMID:27585538

  1. Embedded Analytical Solutions Improve Accuracy in Convolution-Based Particle Tracking Models using Python

    NASA Astrophysics Data System (ADS)

    Starn, J. J.

    2013-12-01

    Particle tracking often is used to generate particle-age distributions that are used as impulse-response functions in convolution. A typical application is to produce groundwater solute breakthrough curves (BTC) at endpoint receptors such as pumping wells or streams. The commonly used semi-analytical particle-tracking algorithm based on the assumption of linear velocity gradients between opposing cell faces is computationally very fast when used in combination with finite-difference models. However, large gradients near pumping wells in regional-scale groundwater-flow models often are not well represented because of cell-size limitations. This leads to inaccurate velocity fields, especially at weak sinks. Accurate analytical solutions for velocity near a pumping well are available, and various boundary conditions can be imposed using image-well theory. Python can be used to embed these solutions into existing semi-analytical particle-tracking codes, thereby maintaining the integrity and quality-assurance of the existing code. Python (and associated scientific computational packages NumPy, SciPy, and Matplotlib) is an effective tool because of its wide ranging capability. Python text processing allows complex and database-like manipulation of model input and output files, including binary and HDF5 files. High-level functions in the language include ODE solvers to solve first-order particle-location ODEs, Gaussian kernel density estimation to compute smooth particle-age distributions, and convolution. The highly vectorized nature of NumPy arrays and functions minimizes the need for computationally expensive loops. A modular Python code base has been developed to compute BTCs using embedded analytical solutions at pumping wells based on an existing well-documented finite-difference groundwater-flow simulation code (MODFLOW) and a semi-analytical particle-tracking code (MODPATH). The Python code base is tested by comparing BTCs with highly discretized synthetic steady
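
    The workflow sketched in the abstract (particle ages from MODPATH, a Gaussian kernel density estimate of the age distribution, then convolution with an input history) can be illustrated compactly. The particle ages below are synthetic stand-ins, not MODPATH output:

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(42)

      # Synthetic particle travel times (years) standing in for MODPATH endpoint output
      ages = rng.lognormal(mean=np.log(15.0), sigma=0.6, size=5000)

      # Smooth particle-age distribution (impulse-response function) via Gaussian KDE
      t = np.arange(0, 100, 0.5)
      dt = t[1] - t[0]
      irf = gaussian_kde(ages)(t)
      irf /= irf.sum() * dt                      # normalize to unit area

      # Solute input history at the water table (arbitrary step input, mg/L)
      c_in = np.where((t > 10) & (t < 40), 1.0, 0.0)

      # Breakthrough curve at the receptor = convolution of input with the IRF
      btc = np.convolve(c_in, irf)[:len(t)] * dt
      print(btc.max())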

  2. Convolution-Based Forced Detection Monte Carlo Simulation Incorporating Septal Penetration Modeling

    PubMed Central

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2010-01-01

    In SPECT imaging, photon transport effects such as scatter, attenuation and septal penetration can negatively affect the quality of the reconstructed image and the accuracy of quantitation estimation. As such, it is useful to model these effects as carefully as possible during the image reconstruction process. Many of these effects can be included in Monte Carlo (MC) based image reconstruction using convolution-based forced detection (CFD). With CFD Monte Carlo (CFD-MC), often only the geometric response of the collimator is modeled, thereby making the assumption that the collimator materials are thick enough to completely absorb photons. However, in order to retain high collimator sensitivity and high spatial resolution, it is required that the septa be as thin as possible, thus resulting in a significant amount of septal penetration for high-energy radionuclides. A method for modeling the effects of both collimator septal penetration and geometric response using ray tracing (RT) techniques has been developed and included in a CFD-MC program. Two look-up tables are pre-calculated based on the specific collimator parameters and radionuclides, and subsequently incorporated into the SIMIND MC program. One table consists of the cumulative septal thickness between any point on the collimator and the center location of the collimator. The other table presents the resultant collimator response for a point source at different distances from the collimator and for various energies. A series of RT simulations was compared to experimental data for different radionuclides and collimators. Results of the RT technique match experimental data of collimator response very well, producing correlation coefficients higher than 0.995. Reasonable values of the parameters in the look-up tables and computation speed are discussed in order to achieve high accuracy while using minimal storage space for the look-up tables. In order to achieve noise-free projection images from MC, it

  3. Age-distribution estimation for karst groundwater: Issues of parameterization and complexity in inverse modeling by convolution

    NASA Astrophysics Data System (ADS)

    Long, Andrew J.; Putnam, Larry D.

    2009-10-01

    Convolution modeling is useful for investigating the temporal distribution of groundwater age based on environmental tracers. The framework of a quasi-transient convolution model that is applicable to two-domain flow in karst aquifers is presented. The model was designed to provide an acceptable level of statistical confidence in parameter estimates when only chlorofluorocarbon (CFC) and tritium (3H) data are available. We show how inverse modeling and uncertainty assessment can be used to constrain model parameterization to a level warranted by available data while allowing major aspects of the flow system to be examined. As an example, the model was applied to water from a pumped well open to the Madison aquifer in central USA with input functions of CFC-11, CFC-12, CFC-113, and 3H, and was calibrated to several samples collected during a 16-year period. A bimodal age distribution was modeled to represent quick and slow flow less than 50 years old. The effects of pumping and hydraulic head on the relative volumetric fractions of these domains were found to be influential factors for transient flow. Quick flow and slow flow were estimated to be distributed mainly within the age ranges of 0-2 and 26-41 years, respectively. The fraction of long-term flow (>50 years) was estimated but was not dateable. The different tracers had different degrees of influence on parameter estimation and uncertainty assessments, where 3H was the most critical, and CFC-113 was least influential.

  4. Age-distribution estimation for karst groundwater: Issues of parameterization and complexity in inverse modeling by convolution

    USGS Publications Warehouse

    Long, A.J.; Putnam, L.D.

    2009-01-01

    Convolution modeling is useful for investigating the temporal distribution of groundwater age based on environmental tracers. The framework of a quasi-transient convolution model that is applicable to two-domain flow in karst aquifers is presented. The model was designed to provide an acceptable level of statistical confidence in parameter estimates when only chlorofluorocarbon (CFC) and tritium (3H) data are available. We show how inverse modeling and uncertainty assessment can be used to constrain model parameterization to a level warranted by available data while allowing major aspects of the flow system to be examined. As an example, the model was applied to water from a pumped well open to the Madison aquifer in central USA with input functions of CFC-11, CFC-12, CFC-113, and 3H, and was calibrated to several samples collected during a 16-year period. A bimodal age distribution was modeled to represent quick and slow flow less than 50 years old. The effects of pumping and hydraulic head on the relative volumetric fractions of these domains were found to be influential factors for transient flow. Quick flow and slow flow were estimated to be distributed mainly within the age ranges of 0-2 and 26-41 years, respectively. The fraction of long-term flow (>50 years) was estimated but was not dateable. The different tracers had different degrees of influence on parameter estimation and uncertainty assessments, where 3H was the most critical, and CFC-113 was least influential.

  5. Experimental validation of a convolution- based ultrasound image formation model using a planar arrangement of micrometer-scale scatterers.

    PubMed

    Gyöngy, Miklós; Makra, Ákos

    2015-06-01

    The shift-invariant convolution model of ultrasound is widely used in the literature, for instance to generate fast simulations of ultrasound images. However, comparison of the resulting simulations with experiments is either qualitative or based on aggregate descriptors such as envelope statistics or spectral components. In the current work, a planar arrangement of 49-μm polystyrene microspheres was imaged using macrophotography and a 4.7-MHz ultrasound linear array. The macrophotograph allowed estimation of the scattering function (SF) necessary for simulations. Using the coefficient of determination R² between real and simulated ultrasound images, different estimates of the SF and point spread function (PSF) were tested. All estimates of the SF performed similarly, whereas the best estimate of the PSF was obtained by Hanning-windowing the deconvolution of the real ultrasound image with the SF: this yielded R² = 0.43 for the raw simulated image and R² = 0.65 for the envelope-detected ultrasound image. R² was highly dependent on microsphere concentration, with values of up to 0.99 for regions with scatterers. The results validate the use of the shift-invariant convolution model for the realistic simulation of ultrasound images. However, care needs to be taken in experiments to reduce the relative effects of other sources of scattering such as from multiple reflections, either by increasing the concentration of imaged scatterers or by more careful experimental design. PMID:26067054
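
    The model being validated is, in its simplest form, rf ≈ PSF ⊗ SF, with the B-mode image given by the envelope of the result. A toy 2D sketch with a synthetic Gaussian-modulated PSF and random point scatterers (not the PSF and SF estimated in the experiment):

      import numpy as np
      from scipy.signal import fftconvolve, hilbert

      rng = np.random.default_rng(3)

      # Scattering function: sparse random point scatterers on a 2D grid
      sf = np.zeros((256, 128))
      idx = rng.integers(0, sf.size, size=300)
      sf.flat[idx] = rng.normal(size=idx.size)

      # Point spread function: Gaussian-windowed pulse along the axial (row) direction
      z = np.arange(-24, 25)[:, None]
      x = np.arange(-8, 9)[None, :]
      psf = np.exp(-z**2 / 60.0 - x**2 / 8.0) * np.cos(2 * np.pi * z / 6.0)

      # Shift-invariant convolution model: simulated RF image, then envelope detection
      rf = fftconvolve(sf, psf, mode="same")
      bmode = np.abs(hilbert(rf, axis=0))        # envelope along the axial direction
      print(rf.shape, bmode.max())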

  6. Distal Convoluted Tubule

    PubMed Central

    Ellison, David H.

    2014-01-01

    The distal convoluted tubule is the nephron segment that lies immediately downstream of the macula densa. Although short in length, the distal convoluted tubule plays a critical role in sodium, potassium, and divalent cation homeostasis. Recent genetic and physiologic studies have greatly expanded our understanding of how the distal convoluted tubule regulates these processes at the molecular level. This article provides an update on the distal convoluted tubule, highlighting concepts and pathophysiology relevant to clinical practice. PMID:24855283

  7. A Novel Method for Integrating MEG and BOLD fMRI Signals With the Linear Convolution Model in Human Primary Somatosensory Cortex

    PubMed Central

    Nangini, Cathy; Tam, Fred; Graham, Simon J.

    2016-01-01

    Characterizing the neurovascular coupling between hemodynamic signals and their neural origins is crucial to functional neuroimaging research, even more so as new methods become available for integrating results from different functional neuroimaging modalities. We present a novel method to relate magnetoencephalography (MEG) and BOLD fMRI data from primary somatosensory cortex within the context of the linear convolution model. This model, which relates neural activity to BOLD signal change, has been widely used to predict BOLD signals but typically lacks experimentally derived measurements of neural activity. In this study, an fMRI experiment is performed using variable-duration (≤1 s) vibrotactile stimuli applied at 22 Hz, analogous to a previously published MEG study (Nangini et al., [2006]: Neuroimage 33:252–262), testing whether MEG source waveforms from the previous study can inform the convolution model and improve BOLD signal estimates across all stimulus durations. The typical formulation of the convolution model in which the input is given by the stimulus profile is referred to as Model 1. Model 2 is based on an energy argument relating metabolic demand to the postsynaptic currents largely responsible for the MEG current dipoles, and uses the energy density of the estimated MEG source waveforms as input to the convolution model. It is shown that Model 2 improves the BOLD signal estimates compared to Model 1 under the experimental conditions implemented, suggesting that MEG energy density can be a useful index of hemodynamic activity. PMID:17290370
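
    In the linear convolution framework used above, the predicted BOLD response is the model input (the stimulus profile in Model 1, or the energy density of the MEG source waveform in Model 2) convolved with a hemodynamic response function. A generic sketch using a canonical double-gamma HRF, which is an assumption here rather than something prescribed by the study:

      import numpy as np
      from scipy.stats import gamma

      dt = 0.1                                    # s
      t = np.arange(0, 30, dt)

      # Canonical double-gamma HRF (a common choice; an assumption in this sketch)
      hrf = gamma.pdf(t, 6) - 1.0 / 6.0 * gamma.pdf(t, 16)
      hrf /= hrf.sum()

      # Model 1 input: boxcar stimulus profile (1 s vibrotactile stimulus)
      stim = np.zeros_like(t)
      stim[(t >= 2) & (t < 3)] = 1.0

      # Model 2 would instead use the energy density of the MEG source waveform,
      # e.g. source_waveform**2 resampled to dt (not available here).
      bold_model1 = np.convolve(stim, hrf)[:len(t)]
      print(bold_model1.max())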

  8. Double heterojunction bipolar phototransistor model

    NASA Astrophysics Data System (ADS)

    Horak, Michal

    2003-07-01

    An analytical mathematical model of the double heterojunction NpN bipolar phototransistor with abrupt heterojunctions in a three-terminal configuration is presented. Thermionic-field emission and diffusion of injected carriers are considered, and Ebers-Moll type relations for the collector and emitter currents are obtained. Several steady-state characteristics of the phototransistor structure are calculated (optical gain, quantum efficiency, responsivity).

  9. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
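
    The core idea above can be sketched in a few lines: rather than deconvolving the measurement, convolve the model-calculated profile with the detector response function and reoptimize the penumbra parameter until the convolved calculation matches the chamber-measured profile. A toy 1D version, in which the error-function penumbra model and the uniform 6 mm chamber response are illustrative assumptions, not the TPS beam model of the paper:

      import numpy as np
      from scipy.special import erf
      from scipy.optimize import minimize_scalar

      x = np.arange(-30, 30, 0.1)                           # off-axis position, mm

      def profile(sigma_penumbra, half_width=20.0):
          """Idealized beam profile: flat top with error-function penumbra."""
          return 0.5 * (erf((x + half_width) / sigma_penumbra)
                        - erf((x - half_width) / sigma_penumbra))

      # Detector response: uniform averaging over a CC13-like 6 mm cavity length
      kernel = np.ones(60) / 60.0                           # 6 mm at 0.1 mm spacing

      # "Measured" profile: true penumbra sigma = 3 mm, blurred by the chamber
      measured = np.convolve(profile(3.0), kernel, mode="same")

      # Reoptimize the beam-model penumbra so the *convolved* calculation matches
      def cost(sigma):
          calc = np.convolve(profile(sigma), kernel, mode="same")
          return np.sum((calc - measured) ** 2)

      fit = minimize_scalar(cost, bounds=(1.0, 6.0), method="bounded")
      print("recovered penumbra sigma:", round(fit.x, 2), "mm")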

  10. The all-source Green's function (ASGF) and its applications to storm surge modeling, part I: from the governing equations to the ASGF convolution

    NASA Astrophysics Data System (ADS)

    Xu, Zhigang

    2015-12-01

    In this study, a new method of storm surge modeling is proposed. This method is orders of magnitude faster than the traditional method within the linear dynamics framework. The tremendous enhancement of the computational efficiency results from the use of a pre-calculated all-source Green's function (ASGF), which connects a point of interest (POI) to the rest of the world ocean. Once the ASGF has been pre-calculated, it can be repeatedly used to quickly produce a time series of a storm surge at the POI. Using the ASGF, storm surge modeling can be simplified as its convolution with an atmospheric forcing field. If the ASGF is prepared with the global ocean as the model domain, the output of the convolution is free of the effects of artificial open-water boundary conditions. Being the first part of this study, this paper presents mathematical derivations from the linearized and depth-averaged shallow-water equations to the ASGF convolution, establishes various auxiliary concepts that will be useful throughout the study, and interprets the meaning of the ASGF from different perspectives. This paves the way for the ASGF convolution to be further developed as a data-assimilative regression model in part II. Five Appendixes provide additional details about the algorithm and the MATLAB functions.
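
    Once the ASGF for a point of interest has been pre-calculated, producing a surge time series reduces to one discrete convolution with the atmospheric forcing, which is where the speed-up comes from. A schematic sketch with a synthetic ASGF and forcing series (the real ASGF would come from the global-ocean pre-calculation described in the paper):

      import numpy as np

      dt = 1.0                                   # hours
      t = np.arange(0, 24 * 20, dt)              # 20 days of hourly forcing

      # Synthetic atmospheric forcing projected onto the POI (e.g., a wind-stress index)
      forcing = np.exp(-0.5 * ((t - 240) / 12.0) ** 2)      # a passing storm

      # Synthetic all-source Green's function sampled hourly (decaying, oscillating)
      tau = np.arange(0, 96, dt)
      asgf = np.exp(-tau / 24.0) * np.cos(2 * np.pi * tau / 12.4)

      # Storm surge at the POI = convolution of the ASGF with the forcing
      surge = np.convolve(forcing, asgf)[:len(t)] * dt
      print(float(surge.max()))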

  11. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of their deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes. Thus, to find good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.
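
    For readers unfamiliar with the codes being searched, a rate-1/2 binary convolutional encoder is just a pair of modulo-2 convolutions of the input bit stream with two generator sequences. A small memory-2 example with the common (7, 5) octal generators, not the memory-14 code found in the paper:

      def conv_encode(bits, g1=0b111, g2=0b101, memory=2):
          """Rate-1/2 feedforward convolutional encoder (two mod-2 convolutions)."""
          state = 0
          out = []
          for b in list(bits) + [0] * memory:          # append zeros to terminate
              reg = (b << memory) | state              # current bit + previous bits
              out.append(bin(reg & g1).count("1") % 2) # first parity stream
              out.append(bin(reg & g2).count("1") % 2) # second parity stream
              state = reg >> 1
          return out

      print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]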

  12. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  13. A Convolutional Subunit Model for Neuronal Responses in Macaque V1

    PubMed Central

    Vintch, Brett; Movshon, J. Anthony

    2015-01-01

    The response properties of neurons in the early stages of the visual system can be described using the rectified responses of a set of self-similar, spatially shifted linear filters. In macaque primary visual cortex (V1), simple cell responses can be captured with a single filter, whereas complex cells combine a set of filters, creating position invariance. These filters cannot be estimated using standard methods, such as spike-triggered averaging. Subspace methods like spike-triggered covariance can recover multiple filters but require substantial amounts of data, and recover an orthogonal basis for the subspace in which the filters reside, rather than the filters themselves. Here, we assume a linear-nonlinear-linear-nonlinear (LN-LN) cascade model in which the first LN stage consists of shifted (“convolutional”) copies of a single filter, followed by a common instantaneous nonlinearity. We refer to these initial LN elements as the “subunits” of the receptive field, and we allow two independent sets of subunits, each with its own filter and nonlinearity. The second linear stage computes a weighted sum of the subunit responses and passes the result through a final instantaneous nonlinearity. We develop a procedure to directly fit this model to electrophysiological data. When fit to data from macaque V1, the subunit model significantly outperforms three alternatives in terms of cross-validated accuracy and efficiency, and provides a robust, biologically plausible account of receptive field structure for all cell types encountered in V1. SIGNIFICANCE STATEMENT We present a new subunit model for neurons in primary visual cortex that significantly outperforms three alternative models in terms of cross-validated accuracy and efficiency, and provides a robust and biologically plausible account of the receptive field structure in these neurons across the full spectrum of response properties. PMID:26538653

  14. Hypertrophy in the Distal Convoluted Tubule of an 11β-Hydroxysteroid Dehydrogenase Type 2 Knockout Model.

    PubMed

    Hunter, Robert W; Ivy, Jessica R; Flatman, Peter W; Kenyon, Christopher J; Craigie, Eilidh; Mullins, Linda J; Bailey, Matthew A; Mullins, John J

    2015-07-01

    Na(+) transport in the renal distal convoluted tubule (DCT) by the thiazide-sensitive NaCl cotransporter (NCC) is a major determinant of total body Na(+) and BP. NCC-mediated transport is stimulated by aldosterone, the dominant regulator of chronic Na(+) homeostasis, but the mechanism is controversial. Transport may also be affected by epithelial remodeling, which occurs in the DCT in response to chronic perturbations in electrolyte homeostasis. Hsd11b2(-/-) mice, which lack the enzyme 11β-hydroxysteroid dehydrogenase type 2 (11βHSD2) and thus exhibit the syndrome of apparent mineralocorticoid excess, provided an ideal model in which to investigate the potential for DCT hypertrophy to contribute to Na(+) retention in a hypertensive condition. The DCTs of Hsd11b2(-/-) mice exhibited hypertrophy and hyperplasia and the kidneys expressed higher levels of total and phosphorylated NCC compared with those of wild-type mice. However, the striking structural and molecular phenotypes were not associated with an increase in the natriuretic effect of thiazide. In wild-type mice, Hsd11b2 mRNA was detected in some tubule segments expressing Slc12a3, but 11βHSD2 and NCC did not colocalize at the protein level. Thus, the phosphorylation status of NCC may not necessarily equate to its activity in vivo, and the structural remodeling of the DCT in the knockout mouse may not be a direct consequence of aberrant corticosteroid signaling in DCT cells. These observations suggest that the conventional concept of mineralocorticoid signaling in the DCT should be revised to recognize the complexity of NCC regulation by corticosteroids. PMID:25349206

  15. Asymmetric quantum convolutional codes

    NASA Astrophysics Data System (ADS)

    La Guardia, Giuliano G.

    2016-01-01

    In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.

  16. Artificial convolution neural network techniques and applications for lung nodule detection.

    PubMed

    Lo, S B; Lou, S A; Lin, J S; Freedman, M T; Chien, M V; Mun, S K

    1995-01-01

    We have developed a double-matching method and an artificial visual neural network technique for lung nodule detection. This neural network technique is generally applicable to the recognition of medical image patterns in gray-scale imaging. The structure of the artificial neural net is a simplified network structure of human vision. The fundamental operation of the artificial neural network is local two-dimensional convolution rather than full connection with weighted multiplication. Weighting coefficients of the convolution kernels are formed by the neural network through backpropagated training. In addition, we modeled radiologists' reading procedures in order to instruct the artificial neural network to recognize predefined image patterns and those of interest to experts in radiology. We have tested this method for lung nodule detection. The performance studies have shown the potential use of this technique in a clinical setting. This program first performed an initial nodule search with high sensitivity in detecting round objects using a sphere-template double-matching technique. The artificial convolution neural network acted as a final classifier to determine whether the suspected image block contains a lung nodule. The total processing time for the automatic detection of lung nodules using both prescan and convolution neural network evaluation was about 15 seconds on a DEC Alpha workstation. PMID:18215875

  17. Understanding deep convolutional networks.

    PubMed

    Mallat, Stéphane

    2016-04-13

    Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries and sparse separations. Applications are discussed. PMID:26953183

  18. Do a bit more with convolution.

    PubMed

    Olsthoorn, Theo N

    2008-01-01

    Convolution is a form of superposition that efficiently deals with input varying arbitrarily in time or space. It works whenever superposition is applicable, that is, for linear systems. Even though convolution has been well known since the 19th century, this valuable method is still missing from most textbooks on ground water hydrology, which limits its widespread application in this field. Perhaps most papers are too complex mathematically, as they tend to focus on the derivation of analytical expressions rather than on solving practical problems. However, convolution is straightforward with standard mathematical software or even a spreadsheet, as is demonstrated in the paper. The necessary system responses are not limited to analytic solutions; they may also be obtained by running an already existing ground water model for a single stress period until equilibrium is reached. With these responses, high-resolution time series of head or discharge may then be computed by convolution for arbitrary points and arbitrarily varying input, without further use of the model. There are probably thousands of applications in the field of ground water hydrology that may benefit from convolution. Therefore, its inclusion in ground water textbooks and courses is strongly needed. PMID:18181860
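
    As a concrete illustration of the superposition idea described above, the short sketch below convolves a hypothetical unit response (such as one obtained from a single run of an existing ground water model) with an arbitrary recharge series; all numbers are invented for illustration.

```python
# Minimal sketch: a system's unit response (e.g. head rise per unit recharge)
# is convolved with an arbitrary, time-varying input series. The response
# shape and recharge values are made-up illustrations.
import numpy as np

dt = 1.0                                        # time step (days)
t = np.arange(0, 60, dt)
unit_response = np.exp(-t / 10.0) / 10.0        # hypothetical head response to a unit pulse

recharge = np.zeros_like(t)                     # arbitrary, time-varying input
recharge[5:10] = 2.0
recharge[30:35] = 1.0

# Discrete convolution = superposition of scaled, shifted unit responses
head = np.convolve(recharge, unit_response)[:len(t)] * dt

print("peak head change: %.3f at t = %.1f d" % (head.max(), t[head.argmax()]))
```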

  19. Convolution-deconvolution in DIGES

    SciTech Connect

    Philippacopoulos, A.J.; Simos, N.

    1995-05-01

    Convolution and deconvolution operations are by all means a very important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities.
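
    The deconvolution step can be illustrated generically in the frequency domain. The sketch below is not one of the DIGES closed-form solutions; it is a minimal water-level deconvolution with an invented input pulse and an invented layer response, showing how the transfer-function division is stabilized.

```python
# Generic frequency-domain deconvolution sketch with a simple water-level
# stabilisation. Input pulse and layer response are invented illustrations,
# not DIGES transfer functions.
import numpy as np

n = 1024
dt = 0.01                                       # s
t = np.arange(n) * dt

source = np.exp(-((t - 2.0) ** 2) / 0.01)                # hypothetical input motion
layer = np.exp(-t / 0.5) * np.sin(2 * np.pi * 5.0 * t)   # hypothetical layer response
recorded = np.convolve(source, layer)[:n]                # "propagated" motion

S = np.fft.rfft(recorded)
H = np.fft.rfft(layer)
water = 0.05 * np.abs(H).max()                  # water level: floor on |H|
H_safe = np.where(np.abs(H) < water, water * np.exp(1j * np.angle(H)), H)

recovered = np.fft.irfft(S / H_safe, n)         # deconvolved (recovered) input
print("max recovery error: %.3e" % np.max(np.abs(recovered - source)))
```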

  20. On models of double porosity poroelastic media

    NASA Astrophysics Data System (ADS)

    Boutin, Claude; Royer, Pascale

    2015-12-01

    This paper focuses on the modelling of fluid-filled poroelastic double porosity media under quasi-static and dynamic regimes. The double porosity model is derived from a two-scale homogenization procedure, by considering a medium locally characterized by blocks of poroelastic Biot microporous matrix and a surrounding system of fluid-filled macropores or fractures. The derived double porosity description is a two-pressure field poroelastic model with memory and viscoelastic effects. These effects result from the `time-dependent' interaction between the pressure fields in the two pore networks. It is shown that this homogenized double porosity behaviour arises when the characteristic time of consolidation in the microporous domain is of the same order of magnitude as the macroscopic characteristic time of transient regime. Conversely, single porosity behaviours occur when both timescales are clearly distinct. Moreover, it is established that the phenomenological approaches that postulate the coexistence of two pressure fields in `instantaneous' interaction only describe media with two pore networks separated by an interface flow barrier. Hence, they fail at predicting and reproducing the behaviour of usual double porosity media. Finally, the results are illustrated for the case of stratified media.

  1. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  2. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  3. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  4. Determinate-state convolutional codes

    NASA Technical Reports Server (NTRS)

    Collins, O.; Hizlan, M.

    1991-01-01

    A determinate-state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate-state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity is analyzed along with the free distances of these new codes, and extensive simulation results are provided on their performance at the low signal-to-noise ratios where a real communication system would operate. Concise, practical examples are provided.

  5. A double pendulum model of tennis strokes

    NASA Astrophysics Data System (ADS)

    Cross, Rod

    2011-05-01

    The physics of swinging a tennis racquet is examined by modeling the forearm and the racquet as a double pendulum. We consider differences between a forehand and a serve, and show how they differ from the swing of a bat and a golf club. It is also shown that the swing speed of a racquet, like that of a bat or a club, depends primarily on its moment of inertia rather than on its mass.
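
    The dynamics underlying this kind of analysis can be reproduced with a generic planar point-mass double pendulum; the sketch below integrates the standard equations of motion numerically. The masses, lengths and initial angles are arbitrary illustrations, not the forearm and racquet parameters used in the paper.

```python
# Generic planar double pendulum (two point masses), integrated numerically.
# Parameters are arbitrary illustrations, not the forearm/racquet values
# of the paper.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81
m1, m2 = 1.0, 0.5        # kg
l1, l2 = 0.4, 0.7        # m

def rhs(t, y):
    th1, w1, th2, w2 = y
    d = th1 - th2
    den = 2 * m1 + m2 - m2 * np.cos(2 * d)
    dw1 = (-g * (2 * m1 + m2) * np.sin(th1)
           - m2 * g * np.sin(th1 - 2 * th2)
           - 2 * np.sin(d) * m2 * (w2**2 * l2 + w1**2 * l1 * np.cos(d))) / (l1 * den)
    dw2 = (2 * np.sin(d) * (w1**2 * l1 * (m1 + m2)
           + g * (m1 + m2) * np.cos(th1)
           + w2**2 * l2 * m2 * np.cos(d))) / (l2 * den)
    return [w1, dw1, w2, dw2]

sol = solve_ivp(rhs, (0.0, 1.0), [np.pi / 2, 0.0, np.pi / 2, 0.0],
                max_step=1e-3, dense_output=True)
th1, _, th2, _ = sol.y
print("final angles (rad): %.3f %.3f" % (th1[-1], th2[-1]))
```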

  6. Entanglement-assisted quantum convolutional coding

    SciTech Connect

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  7. Double multiple streamtube model with recent improvements

    SciTech Connect

    Paraschivoiu, I.; Delclaux, F.

    1983-05-01

    The objective of the present paper is to show the new capabilities of the double multiple streamtube (DMS) model for predicting the aerodynamic loads and performance of the Darrieus vertical-axis turbine. The original DMS model has been improved (DMSV model) by considering the variation in the upwind and downwind induced velocities as a function of the azimuthal angle for each streamtube. A comparison is made of the rotor performance for several blade geometries (parabola, catenary, troposkien, and Sandia shape). A new formulation is given for an approximate troposkien shape by considering the effect of the gravitational field. The effects of three NACA symmetrical profiles, 0012, 0015 and 0018, on the aerodynamic performance of the turbine are shown. Finally, a semiempirical dynamic-stall model has been incorporated and a better approximation obtained for modeling the local aerodynamic forces and performance for a Darrieus rotor.

  8. Double multiple streamtube model with recent improvements

    SciTech Connect

    Paraschivoiu, I.; Delclaux, F.

    1983-05-01

    The objective is to show the new capabilities of the double multiple streamtube (DMS) model for predicting the aerodynamic loads and performance of the Darrieus vertical-axis turbine. The original DMS model has been improved (DMSV model) by considering the variation in the upwind and downwind induced velocities as a function of the azimuthal angle for each streamtube. A comparison is made of the rotor performance for several blade geometries (parabola, catenary, troposkien, and Sandia shape). A new formulation is given for an approximate troposkien shape by considering the effect of the gravitational field. The effects of three NACA symmetrical profiles, 0012, 0015 and 0018, on the aerodynamic performance of the turbine are shown. Finally, a semiempirical dynamic-stall model has been incorporated and a better approximation obtained for modeling the local aerodynamic forces and performance for a Darrieus rotor.

  9. Spatio-spectral concentration of convolutions

    NASA Astrophysics Data System (ADS)

    Hanasoge, Shravan M.

    2016-05-01

    Differential equations may possess coefficients that vary on a spectrum of scales. Because coefficients are typically multiplicative in real space, they turn into convolution operators in spectral space, mixing all wavenumbers. However, in many applications, only the largest scales of the solution are of interest and so the question turns to whether it is possible to build effective coarse-scale models of the coefficients in such a manner that the large scales of the solution are left intact. Here we apply the method of numerical homogenisation to deterministic linear equations to generate sub-grid-scale models of coefficients at desired frequency cutoffs. We use the Fourier basis to project, filter and compute correctors for the coefficients. The method is tested in 1D and 2D scenarios and found to reproduce the coarse scales of the solution to varying degrees of accuracy depending on the cutoff. We relate this method to mode-elimination Renormalisation Group (RG) and discuss the connection between accuracy and the cutoff wavenumber. The tradeoff is governed by a form of the uncertainty principle for convolutions, which states that as the convolution operator is squeezed in the spectral domain, it broadens in real space. As a consequence, basis sparsity is a high virtue and the choice of the basis can be critical.

  10. Standard Model as a Double Field Theory.

    PubMed

    Choi, Kang-Sin; Park, Jeong-Hyuck

    2015-10-23

    We show that, without any extra physical degree introduced, the standard model can be readily reformulated as a double field theory. Consequently, the standard model can couple to an arbitrary stringy gravitational background in an O(4,4) T-duality covariant manner and manifest two independent local Lorentz symmetries, Spin(1,3)×Spin(3,1). While the diagonal gauge fixing of the twofold spin groups leads to the conventional formulation on the flat Minkowskian background, the enhanced symmetry makes the standard model more rigid, and also stringy, than it appeared. The CP violating θ term may no longer be allowed by the symmetry, and hence the strong CP problem can be solved. There are now stronger constraints imposed on the possible higher order corrections. We speculate that the quarks and the leptons may belong to the two different spin classes. PMID:26551099

  11. Standard Model as a Double Field Theory

    NASA Astrophysics Data System (ADS)

    Choi, Kang-Sin; Park, Jeong-Hyuck

    2015-10-01

    We show that, without any extra physical degree introduced, the standard model can be readily reformulated as a double field theory. Consequently, the standard model can couple to an arbitrary stringy gravitational background in an O(4,4) T-duality covariant manner and manifest two independent local Lorentz symmetries, Spin(1,3)×Spin(3,1). While the diagonal gauge fixing of the twofold spin groups leads to the conventional formulation on the flat Minkowskian background, the enhanced symmetry makes the standard model more rigid, and also stringy, than it appeared. The CP violating θ term may no longer be allowed by the symmetry, and hence the strong CP problem can be solved. There are now stronger constraints imposed on the possible higher order corrections. We speculate that the quarks and the leptons may belong to the two different spin classes.

  12. Convolutional Neural Network Based dem Super Resolution

    NASA Astrophysics Data System (ADS)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of learning examples. There, a nonlocal algorithm was introduced to solve the problem, and many experiments showed that the strategy is feasible. In that publication, the learning examples were defined as parts of the original DEM and their related high-resolution measurements, because this choice avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this strategy, the learning examples should be diverse and easy to obtain; yet this may cause problems of incompatibility and a lack of robustness. To overcome these, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low-resolution DEM and the output is expected to be its high-resolution counterpart. A three-layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. Given this structure, a set of learning DEMs is used to train the network; specifically, the network is optimized by minimizing the error between its output and the expected high-resolution DEM. In practical applications, a test DEM is input to the convolutional neural network and a super-resolution DEM is obtained. Many experiments show that the CNN-based method obtains better reconstructions than many classic interpolation methods.
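
    A minimal sketch of the three-layer architecture described above (feature detection, feature compression, reconstruction) is given below. The filter sizes, channel counts and bicubic pre-upsampling are assumptions for illustration, and the weights are untrained random placeholders; a working model would be trained on pairs of low- and high-resolution DEMs.

```python
# Minimal sketch of a three-layer convolutional forward pass
# (feature detection -> feature compression -> DEM reconstruction).
# Weights are untrained random placeholders; all sizes are assumptions.
import numpy as np
from scipy.ndimage import zoom, convolve

rng = np.random.default_rng(1)

def conv_layer(x, weights, relu=True):
    """x: (channels, H, W); weights: (out_ch, in_ch, k, k)."""
    out = np.stack([
        sum(convolve(x[c], w[c], mode="nearest") for c in range(x.shape[0]))
        for w in weights
    ])
    return np.maximum(out, 0.0) if relu else out

w1 = rng.standard_normal((8, 1, 9, 9)) * 0.01   # feature detection
w2 = rng.standard_normal((4, 8, 1, 1)) * 0.1    # feature compression
w3 = rng.standard_normal((1, 4, 5, 5)) * 0.1    # reconstruction

low_res_dem = rng.random((30, 30)) * 100.0      # hypothetical DEM (elevations in m)
upsampled = zoom(low_res_dem, 2, order=3)[None] # bicubic upsampling, add channel axis

h = conv_layer(upsampled, w1)
h = conv_layer(h, w2)
super_res_dem = conv_layer(h, w3, relu=False)[0]
print(super_res_dem.shape)
```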

  13. Modeling interconnect corners under double patterning misalignment

    NASA Astrophysics Data System (ADS)

    Hyun, Daijoon; Shin, Youngsoo

    2016-03-01

    Interconnect corners should accurately reflect the effect of misalignment in the LELE double patterning process. Misalignment is usually considered separately from interconnect structure variations; this incurs too much pessimism and fails to reflect a large increase in total capacitance for asymmetric interconnect structures. We model interconnect corners by taking account of misalignment in conjunction with interconnect structure variations; we also characterize the misalignment effect more accurately by handling the metal pitch at both sides of a target metal independently. Identifying metal space at both sides of a target metal.

  14. Some easily analyzable convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R.; Dolinar, S.; Pollara, F.; Vantilborg, H.

    1989-01-01

    Convolutional codes have played and will play a key role in the downlink telemetry systems on many NASA deep-space probes, including Voyager, Magellan, and Galileo. One of the chief difficulties associated with the use of convolutional codes, however, is the notorious difficulty of analyzing them. Given a convolutional code as specified, say, by its generator polynomials, it is no easy matter to say how well that code will perform on a given noisy channel. The usual first step in such an analysis is to compute the code's free distance; this can be done with an algorithm whose complexity is exponential in the code's constraint length. The second step is often to calculate the transfer function in one, two, or three variables, or at least a few terms of its power series expansion. This step is quite hard, and for many codes of relatively short constraint lengths it can be intractable. However, a large class of convolutional codes was discovered for which the free distance can be computed by inspection, and for which there is a closed-form expression for the three-variable transfer function. Although for large constraint lengths these codes have relatively low rates, they are nevertheless interesting and potentially useful. Furthermore, the ideas developed here to analyze these specialized codes may well extend to a much larger class.

  15. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computation load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, U-shape concavity convergence, subject contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression while simultaneously preserving weak edges. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images. PMID:25360586
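
    The FFT-based convolution at the heart of these external-force models can be sketched as follows. The kernel below is a generic VFC-style vector kernel with an inverse-square magnitude, not the modified-distance kernel of CONVEF, and the test image is invented.

```python
# VFC-style external force field computed by FFT-based convolution: an edge
# map is convolved with a vector kernel whose vectors point toward the
# kernel centre with a magnitude decaying with distance. Generic sketch in
# the spirit of VFC/CONVEF, not the paper's exact kernel.
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical binary image with one bright disc as the "object"
n = 128
yy, xx = np.mgrid[0:n, 0:n]
image = ((xx - 64) ** 2 + (yy - 64) ** 2 < 20 ** 2).astype(float)

# Edge map (gradient magnitude)
gy, gx = np.gradient(image)
edge_map = np.hypot(gx, gy)

# Vector kernel: unit vectors pointing to the kernel origin, magnitude 1/r^2
r = 32
ky, kx = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
dist = np.hypot(kx, ky)
dist[r, r] = 1.0                       # avoid division by zero at the origin
mag = 1.0 / dist ** 2.0                # gamma = 2, an arbitrary choice
ux, uy = -kx / dist * mag, -ky / dist * mag

# External force field = edge map convolved with the vector kernel, via FFT
fx = fftconvolve(edge_map, ux, mode="same")
fy = fftconvolve(edge_map, uy, mode="same")
print(fx.shape, fy.shape)
```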

  16. Double Higgs boson production in the models with isotriplets

    SciTech Connect

    Godunov, S. I. Vysotsky, M. I. Zhemchugov, E. V.

    2015-12-15

    The enhancement of double Higgs boson production in extensions of the Standard Model with extra isotriplets is studied. It is found that in the see-saw type II model, decays of the new heavy Higgs can contribute to the double Higgs production cross section as much as the Standard Model channels. In the Georgi–Machacek model the cross section can be much larger, since the custodial symmetry is preserved and the strongest limitation on the triplet parameters is removed.

  17. Approximating large convolutions in digital images.

    PubMed

    Mount, D M; Kanungo, T; Netanyahu, N S; Piatko, C; Silverman, R; Wu, A Y

    2001-01-01

    Computing discrete two-dimensional (2-D) convolutions is an important problem in image processing. In mathematical morphology, an important variant is that of computing binary convolutions, where the kernel of the convolution is a 0-1 valued function. This operation can be quite costly, especially when large kernels are involved. We present an algorithm for computing convolutions of this form, where the kernel of the binary convolution is derived from a convex polygon. Because the kernel is a geometric object, we allow the algorithm some flexibility in how it elects to digitize the convex kernel at each placement, as long as the digitization satisfies certain reasonable requirements. We say that such a convolution is valid. Given this flexibility we show that it is possible to compute binary convolutions more efficiently than would normally be possible for large kernels. Our main result is an algorithm which, given an m x n image and a k-sided convex polygonal kernel K, computes a valid convolution in O(kmn) time. Unlike standard algorithms for computing correlations and convolutions, the running time is independent of the area or perimeter of K, and our techniques do not rely on computing fast Fourier transforms. Our algorithm is based on a novel use of Bresenham's (1965) line-drawing algorithm and prefix-sums to update the convolution incrementally as the kernel is moved from one position to another across the image. PMID:18255522
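
    The prefix-sum idea underlying this algorithm can be illustrated on the special case of a rectangular 0-1 kernel, where an integral image yields the count of ones under every kernel placement in time independent of the kernel area; the paper's contribution is the generalization to arbitrary convex polygonal kernels via Bresenham-style boundary tracing. The image and kernel size below are arbitrary.

```python
# Simplified illustration of the prefix-sum idea: for a *rectangular* 0-1
# kernel, an integral image gives the binary convolution (count of 1s under
# the kernel) at every placement in time independent of the kernel area.
import numpy as np

rng = np.random.default_rng(0)
image = (rng.random((200, 300)) > 0.7).astype(np.int64)   # hypothetical binary image
kh, kw = 31, 17                                           # rectangular kernel size

# Integral image with a zero row/column prepended
S = np.zeros((image.shape[0] + 1, image.shape[1] + 1), dtype=np.int64)
S[1:, 1:] = image.cumsum(0).cumsum(1)

# Count of 1s in every kh x kw window (valid placements only)
counts = S[kh:, kw:] - S[:-kh, kw:] - S[kh:, :-kw] + S[:-kh, :-kw]

# Check one placement against the direct sum
i, j = 50, 80
assert counts[i, j] == image[i:i + kh, j:j + kw].sum()
print(counts.shape, counts.max())
```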

  18. The Convolution Method in Neutrino Physics Searches

    SciTech Connect

    Tsakstara, V.; Kosmas, T. S.; Chasioti, V. C.; Divari, P. C.; Sinatkas, J.

    2007-12-26

    We concentrate on the convolution method used in nuclear and astro-nuclear physics studies and, in particular, in the investigation of the nuclear response of various neutrino detection targets to the energy-spectra of specific neutrino sources. Since the reaction cross sections of the neutrinos with nuclear detectors employed in experiments are extremely small, very fine and fast convolution techniques are required. Furthermore, sophisticated de-convolution methods are also needed whenever a comparison between calculated unfolded cross sections and existing convoluted results is necessary.
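
    The folding step itself is a one-line quadrature once the cross section and spectrum are tabulated on a common energy grid. In the sketch below both functions are invented stand-ins, not calculated cross sections or real source spectra.

```python
# Sketch of the folding step: an energy-dependent cross section is folded
# with a normalised neutrino energy spectrum to give the flux-averaged
# cross section. Both functions are invented illustrations.
import numpy as np

E = np.linspace(0.0, 60.0, 1201)               # neutrino energy grid (MeV)
dE = E[1] - E[0]

# Hypothetical cross section: ~ (E - threshold)^2 above a 5 MeV threshold
sigma = np.where(E > 5.0, 1e-3 * (E - 5.0) ** 2, 0.0)   # in 10^-42 cm^2

# Hypothetical Fermi-Dirac-like source spectrum (T = 8 MeV), normalised to 1
f = E ** 2 / (1.0 + np.exp(E / 8.0))
f /= f.sum() * dE

folded = (sigma * f).sum() * dE                # flux-averaged cross section
print("flux-averaged cross section: %.3f x 10^-42 cm^2" % folded)
```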

  19. Generalized Valon Model for Double Parton Distributions

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof

    2016-03-01

    We show how the double parton distributions may be obtained consistently from the many-body light-cone wave functions. We illustrate the method on the example of the pion with two Fock components. The procedure, by construction, satisfies the Gaunt-Stirling sum rules. The resulting single parton distributions of valence quarks and gluons are consistent with a phenomenological parametrization at a low scale.

  20. Generalized Valon Model for Double Parton Distributions

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof

    2016-06-01

    We show how the double parton distributions may be obtained consistently from the many-body light-cone wave functions. We illustrate the method on the example of the pion with two Fock components. The procedure, by construction, satisfies the Gaunt-Stirling sum rules. The resulting single parton distributions of valence quarks and gluons are consistent with a phenomenological parametrization at a low scale.

  1. 21. INTERIOR, DOUBLE STAIRWAY LEADING TO MODEL HALL, DETAIL OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    21. INTERIOR, DOUBLE STAIRWAY LEADING TO MODEL HALL, DETAIL OF ONE FLIGHT (5 x 7 negative; 8 x 10 print) - Patent Office Building, Bounded by Seventh, Ninth, F & G Streets, Northwest, Washington, District of Columbia, DC

  2. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  3. Runge-Kutta based generalized convolution quadrature

    NASA Astrophysics Data System (ADS)

    Lopez-Fernandez, Maria; Sauter, Stefan

    2016-06-01

    We present the Runge-Kutta generalized convolution quadrature (gCQ) with variable time steps for the numerical solution of convolution equations for time and space-time problems. We present the main properties of the method and a convergence result.

  4. Symbol synchronization in convolutionally coded systems

    NASA Technical Reports Server (NTRS)

    Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.

    1979-01-01

    Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.

  5. Rolling-Convolute Joint For Pressurized Glove

    NASA Technical Reports Server (NTRS)

    Kosmo, Joseph J.; Bassick, John W.

    1994-01-01

    Rolling-convolute metacarpal/finger joint enhances mobility and flexibility of pressurized glove. Intended for use in space suit to increase dexterity and decrease wearer's fatigue. Also useful in diving suits and other pressurized protective garments. Two ring elements plus bladder constitute rolling-convolute joint balancing torques caused by internal pressurization of glove. Provides comfortable grasp of various pieces of equipment.

  6. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  7. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  8. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
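
    For reference, the baseline decoder discussed above can be written very compactly. The sketch below is a hard-decision Viterbi decoder for the standard rate-1/2, constraint-length-3 code with generators (7, 5) in octal; it illustrates the conventional maximum likelihood decoder only, not the syndrome-based scheme proposed in the paper.

```python
# Compact hard-decision Viterbi decoder for the rate-1/2, constraint-length-3
# convolutional code with generators (7, 5) octal. Illustration of the
# conventional decoder, not the paper's syndrome-based decoder.
G = [0b111, 0b101]          # generator polynomials (7, 5) in octal
K = 3                       # constraint length
NSTATES = 1 << (K - 1)

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state                    # newest bit in the MSB
        out += [bin(reg & g).count("1") & 1 for g in G]  # two coded bits per input bit
        state = reg >> 1                                 # shift register update
    return out

def viterbi(received):
    """Assumes the encoder was flushed back to the all-zero state with K-1 tail bits."""
    INF = 10**9
    metrics = [0] + [INF] * (NSTATES - 1)                # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for state in range(NSTATES):
            if metrics[state] >= INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | state
                expect = [bin(reg & g).count("1") & 1 for g in G]
                branch = sum(x != y for x, y in zip(r, expect))   # Hamming metric
                nxt = reg >> 1
                if metrics[state] + branch < new_metrics[nxt]:
                    new_metrics[nxt] = metrics[state] + branch
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[0]                                      # survivor ending in state 0

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
coded = encode(msg + [0, 0])                             # K-1 tail bits terminate the trellis
coded[3] ^= 1
coded[11] ^= 1                                           # two channel bit errors
assert viterbi(coded)[:len(msg)] == msg
print("corrected 2 channel errors")
```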

  9. Deep learning for steganalysis via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis: learning features automatically via deep learning models. We propose a customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms: HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and on the realistic and large ImageNet database.

  10. A simple pharmacokinetics subroutine for modeling double peak phenomenon.

    PubMed

    Mirfazaelian, Ahmad; Mahmoudian, Massoud

    2006-04-01

    Double peak absorption has been described for several orally administered drugs, and numerous reasons have been implicated in causing the double peak. DRUG-KNT, a pharmacokinetic software package developed previously for fitting one- and two-compartment kinetics using the iterative curve-stripping method, was modified and a revised subroutine was incorporated to solve double-peak models. This subroutine considers the double peak as two hypothetical doses administered with a time gap. The fitting capability of the presented model was verified using four sets of data showing double-peak profiles extracted from the literature (piroxicam, ranitidine, phenazopyridine and talinolol). Visual inspection and statistical diagnostics showed that the present algorithm provides an adequate curve fit regardless of the mechanism involved in the emergence of the secondary peaks. Statistical diagnostic parameters (RSS, AIC and R2) generally indicated a good fit of the predicted plasma profiles. It was concluded that the algorithm presented herein provides adequate predicted curves in cases of the double peak phenomenon. PMID:16400712
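
    The "two hypothetical doses with a time gap" idea lends itself to a very small sketch: the plasma profile is the superposition of two lagged one-compartment (Bateman) curves. All parameter values below are arbitrary illustrations, not fitted estimates for the drugs mentioned.

```python
# Plasma profile modelled as the superposition of two one-compartment oral
# absorption (Bateman) curves, the second one lagged. Parameter values are
# arbitrary illustrations.
import numpy as np

def bateman(t, dose, ka, ke, v, tlag=0.0):
    """One-compartment model with first-order absorption and elimination."""
    tt = np.clip(t - tlag, 0.0, None)
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * tt) - np.exp(-ka * tt))

t = np.linspace(0.0, 24.0, 481)                 # hours
ka, ke, v = 1.2, 0.15, 30.0                     # 1/h, 1/h, L

# Split the dose into two hypothetical portions separated by a 4 h gap
c = (bateman(t, dose=120.0, ka=ka, ke=ke, v=v)
     + bateman(t, dose=80.0, ka=ka, ke=ke, v=v, tlag=4.0))

print("Cmax = %.2f mg/L at t = %.2f h" % (c.max(), t[c.argmax()]))
```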

  11. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    SciTech Connect

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.

    2014-12-08

    Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that

  12. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    NASA Astrophysics Data System (ADS)

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.

    2014-12-01

    Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator's vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator's vacuum-insulator stack (at a radius of 1.6 m) by using standard D -dot and B -dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator's magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z . These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed efficient

  13. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    DOE PAGESBeta

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; et al

    2014-12-08

    Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed

  14. Dosimetric comparison of Acuros XB deterministic radiation transport method with Monte Carlo and model-based convolution methods in heterogeneous media

    PubMed Central

    Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas

    2011-01-01

    Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in the bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC were within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in the bone region were up to 1.1, 6.4, and 1.6%; in the lung region they were up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to give better dose predictions than AAA and CCC at tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes of 6 MV and in the lung region for most field sizes of both energies. The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm2 fields (over 26% passed) and in the bone region for 5 × 5 and 10
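
    For readers unfamiliar with the gamma criterion quoted above, the following is a simplified one-dimensional sketch of a 2%/2 mm gamma calculation on invented dose profiles; the study itself performs a full 3D analysis over the Task Group 53 regions.

```python
# Simplified 1-D gamma-index sketch illustrating a 2%/2 mm comparison of two
# dose profiles. The dose curves are made-up examples, not data from the study.
import numpy as np

x = np.arange(0.0, 100.0, 0.5)                       # depth (mm)
reference = 100.0 * np.exp(-x / 80.0)                # hypothetical reference dose (%)
evaluated = 100.0 * np.exp(-(x + 0.8) / 78.0)        # hypothetical test dose (%)

dose_tol, dist_tol = 2.0, 2.0                        # 2% of max dose, 2 mm

def gamma_1d(x_ref, ref, x_eval, ev, dd, dta):
    dd_abs = dd / 100.0 * ref.max()
    g = np.empty_like(ref)
    for i, (xi, ri) in enumerate(zip(x_ref, ref)):
        # minimise the combined distance/dose metric over all evaluated points
        term = ((x_eval - xi) / dta) ** 2 + ((ev - ri) / dd_abs) ** 2
        g[i] = np.sqrt(term.min())
    return g

gamma = gamma_1d(x, reference, x, evaluated, dose_tol, dist_tol)
print("passing rate (gamma <= 1): %.1f%%" % (100.0 * np.mean(gamma <= 1.0)))
```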

  15. A Unimodal Model for Double Observer Distance Sampling Surveys

    PubMed Central

    Becker, Earl F.; Christ, Aaron M.

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, at which the two observers are independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied. PMID:26317984
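
    A two-piece normal detection function of the kind described can be written down directly: one apex, with independent spreads on either side, so that covariates can rescale the widths without moving or duplicating the apex. The parameter values below are arbitrary illustrations, not estimates from the bear survey.

```python
# Sketch of a two-piece normal detection function: a single apex with
# different spreads on either side. Parameter values are arbitrary.
import numpy as np

def two_piece_normal(x, apex, sigma_left, sigma_right):
    """Unimodal detection probability, equal to 1 at the apex."""
    sigma = np.where(x < apex, sigma_left, sigma_right)
    return np.exp(-0.5 * ((x - apex) / sigma) ** 2)

x = np.linspace(0.0, 600.0, 601)                  # perpendicular distance (m)
g = two_piece_normal(x, apex=120.0, sigma_left=60.0, sigma_right=180.0)
print("detection at 0 m: %.2f, at the apex: %.2f, at 500 m: %.2f"
      % (g[0], g[120], g[500]))
```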

  16. Double soft theorems and shift symmetry in nonlinear sigma models

    NASA Astrophysics Data System (ADS)

    Low, Ian

    2016-02-01

    We show that both the leading and subleading double soft theorems of the nonlinear sigma model follow from a shift symmetry enforcing Adler's zero condition in the presence of an unbroken global symmetry. They do not depend on the underlying coset G /H and are universal infrared behaviors of Nambu-Goldstone bosons. Although nonlinear sigma models contain an infinite number of interaction vertices, the double soft limit is determined entirely by a single four-point interaction, together with the existence of Adler's zeros.

  17. Resonances and period doubling in the pulsations of stellar models

    NASA Astrophysics Data System (ADS)

    Moskalik, Pawel; Buchler, J. Robert

    1990-06-01

    The nonlinear pulsational behavior of several sequences of state-of-the-art Cepheid models is computed with a numerical hydrodynamics code. These sequences exhibit period doubling as the control parameter, the effective temperature, is changed. By following the evolution of the Floquet stability coefficients of the periodic pulsations, this period doubling is identified with the destabilization of a vibrational overtone mode through a resonance of the type (2n + 1)ω_0 ≈ 2ω_k (n integer). In the weakly dissipative Population I Cepheids, only a single period doubling and subsequent undoubling is observed, whereas in the case of the strongly dissipative Population II Cepheids, a cascade of period doublings and chaos can occur. The basic properties of the period doubling bifurcation are examined within the amplitude equation formalism, leaving little doubt about the resonance origin of the phenomenon. A simple model system of two coupled nonlinear oscillators which mimics the behavior of the complicated stellar models is also analyzed.

  18. A review of molecular modelling of electric double layer capacitors.

    PubMed

    Burt, Ryan; Birkett, Greg; Zhao, X S

    2014-04-14

    Electric double-layer capacitors are a family of electrochemical energy storage devices that offer a number of advantages, such as high power density and long cyclability. In recent years, research and development of electric double-layer capacitor technology has been growing rapidly, in response to the increasing demand for energy storage devices from emerging industries, such as hybrid and electric vehicles, renewable energy, and smart grid management. The past few years have witnessed a number of significant research breakthroughs in terms of novel electrodes, new electrolytes, and fabrication of devices, thanks to the discovery of innovative materials (e.g. graphene, carbide-derived carbon, and templated carbon) and the availability of advanced experimental and computational tools. However, some experimental observations could not be clearly understood and interpreted due to limitations of traditional theories, some of which were developed more than one hundred years ago. This has led to significant research efforts in computational simulation and modelling, aimed at developing new theories, or improving the existing ones to help interpret experimental results. This review article provides a summary of research progress in molecular modelling of the physical phenomena taking place in electric double-layer capacitors. An introduction to electric double-layer capacitors and their applications, alongside a brief description of electric double layer theories, is presented first. Second, molecular modelling of ion behaviours of various electrolytes interacting with electrodes under different conditions is reviewed. Finally, key conclusions and outlooks are given. Simulations on comparing electric double-layer structure at planar and porous electrode surfaces under equilibrium conditions have revealed significant structural differences between the two electrode types, and porous electrodes have been shown to store charge more efficiently. Accurate electrolyte and

  19. Bernoulli convolutions and 1D dynamics

    NASA Astrophysics Data System (ADS)

    Kempton, Tom; Persson, Tomas

    2015-10-01

    We describe a family {φλ} of dynamical systems on the unit interval which preserve Bernoulli convolutions. We show that if there are parameter ranges for which these systems are piecewise convex, then the corresponding Bernoulli convolution will be absolutely continuous with bounded density. We study the systems {φλ} and give some numerical evidence to suggest values of λ for which {φλ} may be piecewise convex.
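
    A Bernoulli convolution is the distribution of the random sum of ±λ^n with independent fair signs, which is easy to approximate numerically; the sketch below simply samples it and histograms the result (λ = 0.7 is an arbitrary choice for illustration).

```python
# Numerical sketch of a Bernoulli convolution: the distribution of
# sum_n ±λ^n with independent, fair random signs, approximated by
# Monte Carlo sampling and a histogram.
import numpy as np

lam = 0.7                                  # contraction parameter, 1/2 < λ < 1
n_terms, n_samples = 60, 100_000

rng = np.random.default_rng(0)
signs = rng.choice([-1.0, 1.0], size=(n_samples, n_terms))
powers = lam ** np.arange(n_terms)
samples = signs @ powers                   # realisations of sum ±λ^n

hist, edges = np.histogram(samples, bins=200, density=True)
print("support roughly [%.3f, %.3f], max density %.3f"
      % (samples.min(), samples.max(), hist.max()))
```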

  20. 2D quantum double models from a 3D perspective

    NASA Astrophysics Data System (ADS)

    Bernabé Ferreira, Miguel Jorge; Padmanabhan, Pramod; Teotonio-Sobrinho, Paulo

    2014-09-01

    In this paper we look at three dimensional (3D) lattice models that are generalizations of the state sum model used to define the Kuperberg invariant of 3-manifolds. The partition function is a scalar constructed as a tensor network where the building blocks are tensors given by the structure constants of an involutory Hopf algebra A. These models are very general and are hard to solve in their entire parameter space. One can obtain familiar models, such as ordinary gauge theories, by letting A be the group algebra C(G) of a discrete group G and staying in a certain region of the parameter space. We consider the transfer matrix of the model and show that quantum double Hamiltonians are derived from a particular choice of the parameters. Such a construction naturally leads to the star and plaquette operators of the quantum double Hamiltonians, of which the toric code is a special case when A = C(Z_2). This formulation is convenient for studying ground states of these generalized quantum double models, where they can naturally be interpreted as tensor network states. For a surface Σ, the ground state degeneracy is determined by the Kuperberg 3-manifold invariant of Σ × S^1. It is also possible to obtain extra models by simply enlarging the allowed parameter space but keeping the solubility of the model. While some of these extra models have appeared before in the literature, our 3D perspective allows for a uniform description of them.

  1. A Digital Synthesis Model of Double-Reed Wind Instruments

    NASA Astrophysics Data System (ADS)

    Guillemain, Ph.

    2004-12-01

    We present a real-time synthesis model for double-reed wind instruments based on a nonlinear physical model. One specificity of double-reed instruments, namely, the presence of a confined air jet in the embouchure, for which a physical model has been proposed recently, is included in the synthesis model. The synthesis procedure involves the use of the physical variables via a digital scheme giving the impedance relationship between pressure and flow in the time domain. Comparisons are made between the behavior of the model with and without the confined air jet in the case of a simple cylindrical bore and that of a more realistic bore, the geometry of which is an approximation of an oboe bore.

  2. The Double Homunculus model of self-reflective systems.

    PubMed

    Sawa, Koji; Igamberdiev, Abir U

    2016-06-01

    Vladimir Lefebvre introduced the principles of self-reflective systems and proposed a model to describe consciousness based on these principles (Lefebvre V.A., 1992, J. Math. Psychol. 36, 100-128). The main feature of the model is the assumption of "the image of the self in the image of the self", that is, "a Double Homunculus". In this study, we further formalize Lefebvre's formulation by using difference equations for the description of self-reflection. In addition, we also implement a dialogue model between the two homunculus agents. The dialogue models show the necessity of both the exchange of information and the observation of the object. We conclude that the Double Homunculus model represents the most adequate description of conscious systems and has a significant potential for describing interactions of reflective agents in the social environment and their ability to perceive the outside world. PMID:27000722

  3. A Simple Double-Source Model for Interference of Capillaries

    ERIC Educational Resources Information Center

    Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua

    2012-01-01

    A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An…

  4. Modeling of electrochemical double layers in thermodynamic non-equilibrium.

    PubMed

    Dreyer, Wolfgang; Guhlke, Clemens; Müller, Rüdiger

    2015-10-28

    We consider the contact between an electrolyte and a solid electrode. First, we formulate a thermodynamically consistent model that resolves boundary layers at interfaces. The model includes charge transport, diffusion, chemical reactions, viscosity, elasticity and polarization under isothermal conditions. There is a coupling between these phenomena that particularly involves the local pressure in the electrolyte. Therefore the momentum balance is of major importance for the correct description of the boundary layers. The width of the boundary layers is typically very small compared to the macroscopic dimensions of the system. In the second step we thus apply the method of asymptotic analysis to derive a simpler reduced bulk model that already incorporates the electrochemical properties of the double layers into a set of new boundary conditions. With the reduced model, we analyze the double layer capacitance for a metal-electrolyte interface. PMID:26415592

  5. Application of the double absorbing boundary condition in seismic modeling

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Xiang-Yang; Chen, Shuang-Quan

    2015-03-01

    We apply the newly proposed double absorbing boundary condition (DABC) (Hagstrom et al., 2014) to solve the boundary reflection problem in seismic finite-difference (FD) modeling. In the DABC scheme, the local high-order absorbing boundary condition is used on two parallel artificial boundaries, and thus double absorption is achieved. Using the general 2D acoustic wave propagation equations as an example, we use the DABC in seismic FD modeling, and discuss the derivation and implementation steps in detail. Compared with the perfectly matched layer (PML), the complexity decreases, and the stability and flexibility improve. A homogeneous model and the SEG salt model are selected for numerical experiments. The results show that absorption using the DABC is considerably improved relative to the Clayton-Engquist boundary condition and nearly the same as that in the PML.

  6. Multilabel Image Annotation Based on Double-Layer PLSA Model

    PubMed Central

    Zhang, Jing; Li, Da; Hu, Weiwei; Chen, Zhihua; Yuan, Yubo

    2014-01-01

    Due to the semantic gap between visual features and semantic concepts, automatic image annotation has recently become a difficult issue in computer vision. We propose a new image multilabel annotation method based on double-layer probabilistic latent semantic analysis (PLSA) in this paper. The new double-layer PLSA model is constructed to bridge the low-level visual features and high-level semantic concepts of images for effective image understanding. The low-level features of images are represented as visual words by the Bag-of-Words model; latent semantic topics are obtained by the first-layer PLSA from the visual and texture aspects, respectively. Furthermore, we adopt a second-layer PLSA to fuse the visual and texture latent semantic topics into a top-layer latent semantic topic. Through the double-layer PLSA, the relationships between visual features and semantic concepts of images are established, and we can predict the labels of new images from their low-level features. Experimental results demonstrate that our automatic image annotation model based on double-layer PLSA achieves promising labeling performance and outperforms previous methods on the standard Corel dataset. PMID:24999490

  7. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.
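
    For readers unfamiliar with LIC, a minimal steady-flow LIC sketch (the time-accurate value depositing and successive feed-forward steps of UFLIC are not reproduced here; all parameters are illustrative) could look like this:

        import numpy as np

        def lic(vx, vy, noise, length=20, h=0.5):
            """Basic steady-flow LIC: average a noise texture along short streamlines."""
            ny, nx = noise.shape
            out = np.zeros_like(noise)
            for j in range(ny):
                for i in range(nx):
                    acc, cnt = 0.0, 0
                    for sign in (+1.0, -1.0):                 # integrate both directions
                        x, y = float(i), float(j)
                        for _ in range(length):
                            ix, iy = int(round(x)) % nx, int(round(y)) % ny
                            acc += noise[iy, ix]
                            cnt += 1
                            u, v = vx[iy, ix], vy[iy, ix]
                            norm = np.hypot(u, v) or 1.0      # avoid division by zero
                            x += sign * h * u / norm          # Euler step along the streamline
                            y += sign * h * v / norm
                    out[j, i] = acc / cnt
            return out

        # Example: circular flow over a random noise texture
        n = 64
        yy, xx = np.mgrid[0:n, 0:n] - n / 2
        img = lic(-yy.astype(float), xx.astype(float), np.random.rand(n, n))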

  8. A simple double-source model for interference of capillaries

    NASA Astrophysics Data System (ADS)

    Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua

    2012-01-01

    A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An inverse proportionality between the fringe spacing and the capillary radius is derived from the simple double-source model. This provides an efficient and precise method to measure small capillary diameters on the micrometre scale. The model could be useful because it presents a fresh perspective on the diffraction of light by a particular geometry (a transparent cylinder), which is not straightforward for undergraduates. It also offers an alternative interferometer with which to perform a different type of measurement, especially one that exploits virtual sources.
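
    Schematically, the stated inverse proportionality has the familiar two-source form; with symbols introduced here only for illustration (screen distance L, wavelength λ, effective virtual-source separation d assumed to scale with the capillary radius R), one expects

        \[
          \Delta y \;\approx\; \frac{\lambda L}{d}, \qquad d \propto R
          \;\;\Longrightarrow\;\; \Delta y \propto \frac{1}{R}.
        \]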

  9. Shell model predictions for 124Sn double-β decay

    NASA Astrophysics Data System (ADS)

    Horoi, Mihai; Neacsu, Andrei

    2016-02-01

    Neutrinoless double-β (0νββ) decay is a promising beyond-standard-model process. Two-neutrino double-β (2νββ) decay is an associated process that is allowed by the standard model, and it has been observed in about 10 isotopes, including decays to excited states of the daughter. 124Sn was the first isotope whose double-β decay modes were investigated experimentally, and despite a few other recent efforts, no signal has been seen so far. Shell model calculations have been able to make reliable predictions for 2νββ decay half-lives. Here we use shell model calculations to predict the 2νββ decay half-life of 124Sn. Our results are quite different from the existing quasiparticle random-phase approximation results, and we envision that they will be useful for guiding future experiments. We also present shell model nuclear matrix elements for two potentially competing mechanisms of the 0νββ decay of 124Sn.
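
    The shell-model input enters through the nuclear matrix elements in the standard half-life factorizations (textbook relations quoted here for context, with the 0νββ form assuming light-Majorana-neutrino exchange):

        \[
          \bigl[T_{1/2}^{2\nu}\bigr]^{-1} = G^{2\nu}(Q_{\beta\beta},Z)\,\bigl|M^{2\nu}\bigr|^{2},
          \qquad
          \bigl[T_{1/2}^{0\nu}\bigr]^{-1} = G^{0\nu}(Q_{\beta\beta},Z)\,\bigl|M^{0\nu}\bigr|^{2}
          \left(\frac{\langle m_{\beta\beta}\rangle}{m_e}\right)^{2},
        \]

    where the G are phase-space factors and the M are the nuclear matrix elements computed here in the shell model.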

  10. Non-commutativity from the double sigma model

    NASA Astrophysics Data System (ADS)

    Polyakov, Dimitri; Wang, Peng; Wu, Houwen; Yang, Haitang

    2015-03-01

    We show how non-commutativity arises from commutativity in the double sigma model. We demonstrate that this model is intrinsically non-commutative by calculating the propagators. In the simplest phase configuration, there are two dual copies of commutative theories. In general rotated frames, one gets a non-commutative theory and a commutative partner. Thus a non-vanishing B also leads to a commutative theory. Our results imply that O(D, D) symmetry unifies not only the big and small torus physics, but also the commutative and non-commutative theories. The physical interpretations of the metric and other parameters in the double sigma model are completely dictated by the boundary conditions. The open-closed relation is also an O(D, D) rotation and naturally leads to the Seiberg-Witten map. Moreover, after applying a second dual rotation, we identify the description parameter in the Seiberg-Witten map as an O(D, D) group parameter and all theories are non-commutative under this composite rotation. As a bonus, the propagators of general frames in the double sigma model for the open string are also presented.

  11. Double scaling in tensor models with a quartic interaction

    NASA Astrophysics Data System (ADS)

    Dartois, Stéphane; Gurau, Razvan; Rivasseau, Vincent

    2013-09-01

    In this paper we identify and analyze in detail the subleading contributions in the 1/N expansion of random tensors, in the simple case of a quartically interacting model. The leading order for this 1/N expansion is made of graphs, called melons, which are dual to particular triangulations of the D-dimensional sphere, closely related to the "stacked" triangulations. For D < 6 the subleading behavior is governed by a larger family of graphs, hereafter called cherry trees, which are also dual to the D-dimensional sphere. They can be resummed explicitly through a double scaling limit. In sharp contrast with random matrix models, this double scaling limit is stable. Apart from its unexpected upper critical dimension 6, it displays a singularity at fixed distance from the origin and is clearly the first step in a richer set of yet to be discovered multi-scaling limits.

  12. On the growth and form of cortical convolutions

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.

    2016-06-01

    The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. All together, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.

  13. Double porosity modeling in elastic wave propagation for reservoir characterization

    SciTech Connect

    Berryman, J. G., LLNL

    1998-06-01

    Phenomenological equations for the poroelastic behavior of a double porosity medium have been formulated and the coefficients in these linear equations identified. The generalization from a single porosity model increases the number of independent coefficients from three to six for an isotropic applied stress. In a quasistatic analysis, the physical interpretations are based upon considerations of extremes in both spatial and temporal scales. The limit of very short times is the one most relevant for wave propagation, and in this case both matrix porosity and fractures behave in an undrained fashion. For the very long times more relevant for reservoir drawdown, the double porosity medium behaves as an equivalent single porosity medium. At the macroscopic spatial level, the pertinent parameters (such as the total compressibility) may be determined by appropriate field tests. At the mesoscopic scale pertinent parameters of the rock matrix can be determined directly through laboratory measurements on core, and the compressibility can be measured for a single fracture. We show explicitly how to generalize the quasistatic results to incorporate wave propagation effects and how effects that are usually attributed to squirt flow under partially saturated conditions can be explained alternatively in terms of the double-porosity model. The result is therefore a theory that generalizes, but is completely consistent with, Biot's theory of poroelasticity and is valid for analysis of elastic wave data from highly fractured reservoirs.

  14. Two potential quark models for double heavy baryons

    NASA Astrophysics Data System (ADS)

    Puchkov, A. M.; Kozhedub, A. V.

    2016-01-01

    Baryons containing two heavy quarks (QQ'q) are treated in the Born-Oppenheimer approximation. Two non-relativistic potential models are proposed, in which the Schrödinger equation admits a separation of variables in prolate and oblate spheroidal coordinates, respectively. In the first model, the potential is the sum of the Coulomb potentials of the two heavy quarks, separated from each other by a distance R, and a linear confinement potential. In the second model the center-distance parameter R is assumed to be purely imaginary. In this case, the potential is defined by a two-sheeted mapping with singularities concentrated on a circle rather than at separate points. Thus, in the first model the diquark appears as a segment, and in the second as a circle. In this paper we calculate the mass spectrum of double heavy baryons in both models and compare it with previous results.

  15. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to those codes. These concepts are then used to demonstrate, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.
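
    To make the notion of a syndrome concrete for a rate-1/2 convolutional code with generator polynomials g1 and g2: since the code streams satisfy v1*g2 + v2*g1 = 0 (mod 2), the received streams yield a syndrome that depends only on the channel error pattern. A small sketch under these standard definitions (the stack-algorithm search itself is not reproduced, and the generators are illustrative):

        import numpy as np

        def conv_mod2(a, g):
            """Binary polynomial multiplication (convolution mod 2)."""
            return np.convolve(a, g) % 2

        # Rate-1/2 encoder with illustrative generators g1 = 1+D+D^2, g2 = 1+D^2 (octal 7, 5)
        g1 = np.array([1, 1, 1])
        g2 = np.array([1, 0, 1])

        u = np.random.randint(0, 2, 20)        # information bits
        v1, v2 = conv_mod2(u, g1), conv_mod2(u, g2)

        # Syndrome former: s = r1*g2 + r2*g1 (mod 2); all-zero for any error-free codeword
        def syndrome(r1, r2):
            return (conv_mod2(r1, g2) + conv_mod2(r2, g1)) % 2

        assert not syndrome(v1, v2).any()      # clean codeword -> all-zero syndrome

        e1 = np.zeros_like(v1); e1[5] = 1      # single channel error on the first stream
        s = syndrome((v1 + e1) % 2, v2)        # nonzero syndrome exposes the error pattern
        print(s.nonzero()[0])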

  16. Number-Theoretic Functions via Convolution Rings.

    ERIC Educational Resources Information Center

    Berberian, S. K.

    1992-01-01

    Demonstrates the number-theoretic identity that, in the convolution ring, the Dirichlet convolution of the number-of-divisors function with Euler's totient function (the number of positive integers k less than or equal to and relatively prime to n) equals the sum-of-divisors function, using theory developed about multiplicative functions, the units of a convolution ring, and the Möbius function. (MDH)
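
    The identity lives in the convolution ring: under Dirichlet convolution, τ * φ = σ. A quick numerical check, written here purely as an illustration rather than taken from the article:

        from math import gcd

        def divisors(n):
            return [d for d in range(1, n + 1) if n % d == 0]

        def tau(n):                      # number of divisors
            return len(divisors(n))

        def phi(n):                      # Euler's totient
            return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

        def sigma(n):                    # sum of divisors
            return sum(divisors(n))

        def dirichlet(f, g, n):          # (f * g)(n) = sum over d|n of f(d) g(n/d)
            return sum(f(d) * g(n // d) for d in divisors(n))

        # tau * phi = sigma, checked here for every n up to 500
        assert all(dirichlet(tau, phi, n) == sigma(n) for n in range(1, 501))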

  17. Continuous speech recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong

    2015-07-01

    Convolutional Neural Networks (CNNs), which have shown success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have proven successful in many speech recognition tasks, CNNs can reduce the model size significantly while achieving even better recognition accuracy. Experiments on the standard speech corpus TIMIT showed that CNNs outperformed DNNs in terms of accuracy while having an even smaller model size.

  18. About closedness by convolution of the Tsallis maximizers

    NASA Astrophysics Data System (ADS)

    Vignat, C.; Hero, A. O., III; Costa, J. A.

    2004-09-01

    In this paper, we study the stability under convolution of the maximizing distributions of the Tsallis entropy under an energy constraint (called hereafter Tsallis distributions). These distributions are shown to obey three important properties: a stochastic representation property, an orthogonal invariance property and a duality property. As a consequence of these properties, the behavior of Tsallis distributions under convolution is characterized. Finally, a special random convolution, called the Kingman convolution, is shown to ensure the stability of Tsallis distributions.

  19. Experience in calibrating the double-hardening constitutive model Monot

    NASA Astrophysics Data System (ADS)

    Hicks, M. A.

    2003-11-01

    The Monot double-hardening soil model has previously been implemented within a general purpose finite element algorithm, and used in the analysis of numerous practical problems. This paper reviews experience gained in calibrating Monot to laboratory data and demonstrates how the calibration process may be simplified without detriment to the range of behaviours modelled. It describes Monot's principal features, important governing equations and various calibration methods, including strategies for overconsolidated, cemented and cohesive soils. Based on a critical review of over 30 previous Monot calibrations, for sands and other geomaterials, trends in parameter values have been identified, enabling parameters to be categorized according to their relative importance. It is shown that, for most practical purposes, a maximum of only 5 parameters is needed; for the remaining parameters, standard default values are suggested. Hence, the advanced stress-strain modelling offered by Monot is attainable with a similar number of parameters as would be needed for some simpler, less versatile, models.

  20. Investigating GPDs in the framework of the double distribution model

    NASA Astrophysics Data System (ADS)

    Nazari, F.; Mirjalili, A.

    2016-06-01

    In this paper, we construct the generalized parton distribution (GPD) in terms of the kinematical variables x, ξ and t, using the double distribution model. From these functions we can extract quantities that make it possible to gain a three-dimensional insight into the nucleon structure at the parton level. The main purpose of GPDs is to combine and generalize the concepts of ordinary parton distributions and form factors. They also provide an exclusive framework to describe the nucleon in terms of quarks and gluons. Here, we first calculate, in the double distribution model, the GPD based on the usual parton distributions arising from the GRV and CTEQ phenomenological models. Obtaining the quark and gluon angular momenta from the GPD, we are able to calculate scattering observables related to the spin asymmetries of the produced quarkonium, denoted A_N and A_LS. We also calculate the Pauli and Dirac form factors in deeply virtual Compton scattering. Finally, to compare our results with the existing experimental data, we use the difference of polarized cross-sections for an initially longitudinally polarized lepton beam and an unpolarized target (Δσ_LU). In all cases, our results are in good agreement with the available experimental data.
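
    For context, the double distribution construction referred to here is usually written (in the common convention of the general GPD literature, quoted here rather than taken from this paper, and omitting a possible D-term) as

        \[
          H^{q}(x,\xi,t) \;=\; \int_{-1}^{1}\! d\beta
          \int_{-1+|\beta|}^{\,1-|\beta|}\! d\alpha\;
          \delta(x-\beta-\alpha\xi)\, F^{q}(\beta,\alpha,t),
        \]

    where F^q(β, α, t) is the double distribution, typically modelled as a forward parton density (e.g. from the GRV or CTEQ fits) multiplied by a profile function in α and a t-dependent factor.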

  1. Three-Triplet Model with Double SU(3) Symmetry

    DOE R&D Accomplishments Database

    Han, M. Y.; Nambu, Y.

    1965-01-01

    With a view to avoiding some of the kinematical and dynamical difficulties involved in the single triplet quark model, a model for the low lying baryons and mesons based on three triplets with integral charges is proposed, somewhat similar to the two-triplet model introduced earlier by one of us (Y. N.). It is shown that in a U(3) scheme of triplets with integral charges, one is naturally led to three triplets located symmetrically about the origin of the I₃-Y diagram under the constraint that the Nishijima-Gell-Mann relation remains intact. A double SU(3) symmetry scheme is proposed in which the large mass splittings between different representations are ascribed to one of the SU(3), while the other SU(3) is the usual one for the mass splittings within a representation of the first SU(3).

  2. Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi

    2016-07-01

    Existing deep convolutional neural networks (CNNs) have shown great success in image classification. CNNs mainly consist of convolutional and pooling layers, both of which operate on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representations. In contrast, recurrent neural networks (RNNs) are well known for their ability to encode contextual information among sequential data, and they require only a limited number of network parameters. General RNNs can hardly be applied directly to non-sequential data. Thus, we propose hierarchical RNNs (HRNNs), in which each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations, while the cross-scale RNN connections model scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) the hierarchical long short-term memory recurrent network (HLSTM), which performs better than HSRN at the price of higher computational cost. In this manuscript, we integrate CNNs with HRNNs and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, and MIT Indoor, and competitive results on ILSVRC 2012.

  3. Is turbulent mixing a self-convolution process?

    PubMed

    Venaille, Antoine; Sommeria, Joel

    2008-06-13

    Experimental results for the evolution of the probability distribution function (PDF) of a scalar mixed by a turbulent flow in a channel are presented. The sequence of PDFs from an initial skewed distribution to a sharp Gaussian is found to be nonuniversal. The route toward homogenization depends on the ratio between the cross sections of the dye injector and the channel. In connection with this observation, the advantages, shortcomings, and applicability of models for the PDF evolution based on a self-convolution mechanism are discussed. PMID:18643510

  4. Double-multiple streamtube model for Darrieus wind turbines

    NASA Technical Reports Server (NTRS)

    Paraschivoiu, I.

    1981-01-01

    An analytical model is proposed for calculating the rotor performance and aerodynamic blade forces for Darrieus wind turbines with curved blades. The method of analysis uses a multiple-streamtube model divided into two parts: one modeling the upstream half-cycle of the rotor and the other the downstream half-cycle. The upwind and downwind components of the induced velocities at each level of the rotor were obtained using the principle of two actuator disks in tandem. Variation of the induced velocities in the two parts of the rotor produces larger forces in the upstream zone and smaller forces in the downstream zone. Comparisons of the overall rotor performance with previous methods and field test data show the significant improvement obtained with the present model. The calculations were made using the computer code CARDAA developed at IREQ. The double-multiple streamtube model presented has two major advantages: it requires much less computer time than the three-dimensional vortex model and is more accurate than the multiple-streamtube model in predicting the aerodynamic blade loads.
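
    A much-simplified sketch of the two-actuator-disks-in-tandem idea (pure momentum theory; the blade-element coupling, streamtube discretization and airfoil data of the actual CARDAA model are omitted, and the thrust coefficients below are illustrative inputs only):

        import numpy as np

        def induction_from_ct(ct):
            """Axial induction factor a from momentum theory: C_T = 4a(1 - a), root with a <= 0.5."""
            return 0.5 * (1.0 - np.sqrt(max(1.0 - ct, 0.0)))

        v_inf = 10.0               # free-stream wind speed, m/s
        ct_up, ct_down = 0.6, 0.3  # illustrative thrust coefficients for the two half-cycles

        # Upstream disk: induced velocity at the disk and fully developed wake velocity
        a_up = induction_from_ct(ct_up)
        v_disk_up = v_inf * (1.0 - a_up)
        v_wake_up = v_inf * (1.0 - 2.0 * a_up)

        # Downstream disk sees the upstream wake as its own free stream
        a_down = induction_from_ct(ct_down)
        v_disk_down = v_wake_up * (1.0 - a_down)

        print(v_disk_up, v_wake_up, v_disk_down)   # velocities decrease through the two disks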

  5. Convolutional neural network architectures for predicting DNA–protein binding

    PubMed Central

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
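
    To ground what a convolutional kernel means in this sequence setting, a toy sketch in plain NumPy (not the authors' framework or trained models): one-hot-encode a DNA string and scan it with a motif-like 4 x k filter, taking the maximum response as a crude pooled score. The motif and sequence below are arbitrary illustrations.

        import numpy as np

        BASES = "ACGT"

        def one_hot(seq):
            """Encode a DNA string as a 4 x L one-hot matrix."""
            x = np.zeros((4, len(seq)))
            for i, b in enumerate(seq):
                x[BASES.index(b), i] = 1.0
            return x

        def conv_scan(x, kernel):
            """1D 'valid' convolution of a 4 x L one-hot input with a 4 x k filter."""
            k = kernel.shape[1]
            return np.array([np.sum(x[:, i:i + k] * kernel) for i in range(x.shape[1] - k + 1)])

        # Illustrative filter that scores the motif "TGACTCA" position by position
        motif = "TGACTCA"
        kernel = one_hot(motif) - 0.25            # reward the motif base, mildly penalize others

        seq = "ACGTTGACTCAGGTACC"
        scores = conv_scan(one_hot(seq), kernel)
        print(scores.argmax(), scores.max())      # max-pooled response peaks where the motif occurs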

  6. A convolutional neural network neutrino event classifier

    DOE PAGESBeta

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  7. Quantum convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng

    2014-12-01

    In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.

  8. Satellite image classification using convolutional learning

    NASA Astrophysics Data System (ADS)

    Nguyen, Thao; Han, Jiho; Park, Dong-Chul

    2013-10-01

    A satellite image classification method using a Convolutional Neural Network (CNN) architecture is proposed in this paper. As a special case of deep learning, the CNN classifies images without a separate feature extraction step, whereas other existing classification methods rely on rather complex feature extraction processes. Experiments on a set of satellite image data and the preliminary results show that the proposed classification method can be a promising alternative to existing feature extraction-based schemes in terms of both classification accuracy and classification speed.

  9. Blind Identification of Convolutional Encoder Parameters

    PubMed Central

    Su, Shaojing; Zhou, Jing; Huang, Zhiping; Liu, Chunwu; Zhang, Yimeng

    2014-01-01

    This paper gives a solution to the blind parameter identification of a convolutional encoder. The problem can be addressed in the context of noncooperative communications or adaptive coding and modulation (ACM) for cognitive radio networks. We consider an intelligent communication receiver which can blindly recognize the coding parameters of the received data stream. The only knowledge is that the stream is encoded using binary convolutional codes, while the coding parameters are unknown. Previous studies have made significant contributions to the recognition of convolutional encoder parameters in hard-decision situations. However, soft-decision systems are being applied more and more as signal processing techniques improve. In this paper we propose a method that utilizes the soft information to improve recognition performance in soft-decision communication systems. In addition, we propose a new recognition method based on a correlation attack to cope with low signal-to-noise ratio situations. Finally, we give simulation results to show the efficiency of the proposed methods. PMID:24982997

  10. Deep Convolutional Neural Networks for large-scale speech tasks.

    PubMed

    Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana

    2015-04-01

    Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks, focusing on how many convolutional layers are needed, what an appropriate number of hidden units is, and what the best pooling strategy is. Second, we investigate how to incorporate speaker-adapted features, which cannot be modeled directly by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results on these 3 tasks. PMID:25439765

  11. A hybrid double-observer sightability model for aerial surveys

    USGS Publications Warehouse

    Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine

    2013-01-01

    Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
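
    The double-observer component of such hybrid models rests on the standard independence relation (a generic formula, not the specific likelihood of model MH): if the front-seat and back-seat pairs detect a group independently with probabilities p_F and p_B, the probability that at least one pair sees it is

        \[
          p \;=\; 1 - (1 - p_F)(1 - p_B),
        \]

    where independence between the two observer pairs is the key modelling assumption.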

  12. Classification of Histology Sections via Multispectral Convolutional Sparse Coding*

    PubMed Central

    Zhou, Yin; Barner, Kenneth; Spellman, Paul

    2014-01-01

    Image-based classification of histology sections plays an important role in predicting clinical outcomes. However this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts, collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]). PMID:25554749

  13. Multiple deep convolutional neural networks averaging for face alignment

    NASA Astrophysics Data System (ADS)

    Zhang, Shaohua; Yang, Hua; Yin, Zhouping

    2015-05-01

    Face alignment is critical for face recognition, and deep learning-based methods show promise for solving such issues, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shape estimates. However, most existing deep learning-based approaches are complicated and quite time-consuming to train. We propose a compact face alignment method that trains quickly without decreasing accuracy. Rectified linear units are employed, which allow all networks to converge approximately five times faster than with tanh neurons. An eight-learnable-layer deep convolutional neural network (DCNN) based on local response normalization and a padding convolutional layer (PCL) is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built based on the DCNNs and the model combination scheme. Extensive experiments validate the effectiveness of our method and demonstrate accuracy comparable with state-of-the-art methods on the BioID, Labeled Face Parts in the Wild, and Helen datasets.

  14. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm

    PubMed Central

    2012-01-01

    Background: Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of human organs such as the lungs, heart, brain and breast. Any practical EIT reconstruction algorithm should be efficient in terms of both convergence rate and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in the D-bar framework. Methods: In the first step, synthetic and experimental data were used to compute an intermediate object named the scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via the sinc-convolution algorithm to find the square root of the conductivity for each pixel of the image. For the purpose of comparison, multigrid and NOSER algorithms were implemented under a similar setting. The quality of reconstructions of synthetic models was tested against GREIT-approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Results: Evaluation of synthetic reconstructions shows that the quality of sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the lowest relative errors and the best agreement with ground truth were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by the sinc-convolution algorithm. Conclusions: Parametric evaluation demonstrates the efficiency of sinc-convolution in reconstructing accurate conductivity images from experimental data. Excellent results in phantom and clinical reconstructions using sinc-convolution

  15. Convolutional Neural Network Based Fault Detection for Rotating Machinery

    NASA Astrophysics Data System (ADS)

    Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie

    2016-09-01

    Vibration analysis is a well-established technique for condition monitoring of rotating machines, as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually-engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis and crest, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but also healthy bearings and rotor imbalance are included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.

  16. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  17. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summary: Program title: QCDNUM; Version: 17.00; Catalogue identifier: AEHV_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: GNU Public Licence; No. of lines in distributed program, including test data, etc.: 45 736; No. of bytes in distributed program, including test data, etc.: 911 569; Distribution format: tar.gz; Programming language: Fortran-77; Computer: All; Operating system: All; RAM: typically 3 Mbytes; Classification: 11.5. Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
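
    The convolution engine refers to Mellin-type convolutions of the form (C ⊗ f)(x) = ∫_x^1 (dz/z) C(z) f(x/z). A rough numerical illustration of that integral with toy integrand choices (nothing here is taken from QCDNUM's internals, which use spline parametrisations and precomputed weight tables):

        import numpy as np

        def mellin_convolution(C, f, x, n=2000):
            """Evaluate (C (x) f)(x) = int_x^1 dz/z C(z) f(x/z) with the trapezoid rule."""
            z = np.linspace(x, 1.0, n)
            integrand = C(z) * f(x / z) / z
            return np.trapz(integrand, z)

        # Toy coefficient function and toy parton density, chosen only to exercise the integral
        C = lambda z: 1.0 + z                      # placeholder coefficient function
        f = lambda x: x ** -0.5 * (1.0 - x) ** 3   # placeholder parton density shape

        xs = [0.01, 0.1, 0.3]
        print([mellin_convolution(C, f, x) for x in xs])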

  18. Convolution neural networks for ship type recognition

    NASA Astrophysics Data System (ADS)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  19. ``Quasi-complete'' mechanical model for a double torsion pendulum

    NASA Astrophysics Data System (ADS)

    De Marchi, Fabrizio; Pucacco, Giuseppe; Bassan, Massimo; De Rosa, Rosario; Di Fiore, Luciano; Garufi, Fabio; Grado, Aniello; Marconi, Lorenzo; Stanga, Ruggero; Stolzi, Francesco; Visco, Massimo

    2013-06-01

    We present a dynamical model for the double torsion pendulum nicknamed “PETER,” where one torsion pendulum hangs in cascade, but off axis, from the other. The dynamics of interest in these devices lies around the torsional resonance, that is at very low frequencies (mHz). However, we find that, in order to properly describe the forced motion of the pendulums, also other modes must be considered, namely swinging and bouncing oscillations of the two suspended masses, that resonate at higher frequencies (Hz). Although the system has obviously 6+6 degrees of freedom, we find that 8 are sufficient for an accurate description of the observed motion. This model produces reliable estimates of the response to generic external disturbances and actuating forces or torques. In particular, we compute the effect of seismic floor motion (“tilt” noise) on the low frequency part of the signal spectra and show that it properly accounts for most of the measured low frequency noise.

  20. Geometric multi-resolution analysis and data-driven convolutions

    NASA Astrophysics Data System (ADS)

    Strawn, Nate

    2015-09-01

    We introduce a procedure for learning discrete convolutional operators for generic datasets which recovers the standard block convolutional operators when applied to sets of natural images. The key observation is that the standard block convolutional operators on images are intuitive because humans naturally understand the grid structure of the self-evident functions over image spaces (pixels). This procedure first constructs a Geometric Multi-Resolution Analysis (GMRA) on the set of variables giving rise to a dataset, and then leverages the details of this data structure to identify subsets of variables upon which convolutional operators are supported, as well as a space of functions that can be shared coherently amongst these supports.

  1. Shell model nuclear matrix elements for competing mechanisms contributing to double beta decay

    SciTech Connect

    Horoi, Mihai

    2013-12-30

    Recent progress in the shell model approach to the nuclear matrix elements for the double beta decay process is presented. This includes nuclear matrix elements for competing mechanisms of neutrinoless double beta decay, a comparison between the closure and non-closure approximations for ⁴⁸Ca, and an updated shell model analysis of nuclear matrix elements for the double beta decay of ¹³⁶Xe.

  2. Convolutional fountain distribution over fading wireless channels

    NASA Astrophysics Data System (ADS)

    Usman, Mohammed

    2012-08-01

    Mobile broadband has opened the possibility of a rich variety of services to end users. Broadcast/multicast of multimedia data is one such service which can be used to deliver multimedia to multiple users economically. However, the radio channel poses serious challenges due to its time-varying properties, resulting in each user experiencing different channel characteristics, independent of other users. Conventional methods of achieving reliability in communication, such as automatic repeat request and forward error correction do not scale well in a broadcast/multicast scenario over radio channels. Fountain codes, being rateless and information additive, overcome these problems. Although the design of fountain codes makes it possible to generate an infinite sequence of encoded symbols, the erroneous nature of radio channels mandates the need for protecting the fountain-encoded symbols, so that the transmission is feasible. In this article, the performance of fountain codes in combination with convolutional codes, when used over radio channels, is presented. An investigation of various parameters, such as goodput, delay and buffer size requirements, pertaining to the performance of fountain codes in a multimedia broadcast/multicast environment is presented. Finally, a strategy for the use of 'convolutional fountain' over radio channels is also presented.

  3. Convolution Inequalities for the Boltzmann Collision Operator

    NASA Astrophysics Data System (ADS)

    Alonso, Ricardo J.; Carneiro, Emanuel; Gamba, Irene M.

    2010-09-01

    We study integrability properties of a general version of the Boltzmann collision operator for hard and soft potentials in n dimensions. A reformulation of the collisional integrals allows us to write the weak form of the collision operator as a weighted convolution, where the weight is given by an operator invariant under rotations. Using a symmetrization technique in L^p we prove a Young's inequality for hard potentials, which is sharp for Maxwell molecules in the L^2 case. Further, we find a new Hardy-Littlewood-Sobolev type of inequality for Boltzmann collision integrals with soft potentials. The same method extends to radially symmetric, non-increasing potentials that lie in some weak-L^s or L^s. The method we use resembles a Brascamp, Lieb and Luttinger approach for multilinear weighted convolution inequalities and follows a weak formulation setting. Consequently, it is closely connected to the classical analysis of Young and Hardy-Littlewood-Sobolev inequalities. In all cases, the inequality constants are explicitly given by formulas depending on integrability conditions of the angular cross section (in the spirit of Grad cut-off). As an additional application of the technique we also obtain estimates with exponential weights for hard potentials in both conservative and dissipative interactions.
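
    For context, the classical Young convolution inequality that the paper sharpens and extends to collision integrals reads (standard statement, not specific to this work):

        \[
          \|f * g\|_{L^{r}} \;\le\; \|f\|_{L^{p}}\,\|g\|_{L^{q}},
          \qquad \frac{1}{p} + \frac{1}{q} \;=\; 1 + \frac{1}{r},
          \quad 1 \le p,q,r \le \infty .
        \]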

  4. Convolution formulations for non-negative intensity.

    PubMed

    Williams, Earl G

    2013-08-01

    Previously unknown spatial convolution formulas for a variant of the active normal intensity in planar coordinates have been derived that use measured pressure or normal velocity near-field holograms to construct a positive-only (outward) intensity distribution in the plane, quantifying the areas of the vibrating structure that produce radiation to the far-field. This is an extension of the outgoing-only (unipolar) intensity technique recently developed for arbitrary geometries by Steffen Marburg. The method is applied independently to pressure and velocity data measured in a plane close to the surface of a point-driven, unbaffled rectangular plate in the laboratory. It is demonstrated that the sound producing regions of the structure are clearly revealed using the derived formulas and that the spatial resolution is limited to a half-wavelength. A second set of formulas called the hybrid-intensity formulas are also derived which yield a bipolar intensity using a different spatial convolution operator, again using either the measured pressure or velocity. It is demonstrated from the experiment results that the velocity formula yields the classical active intensity and the pressure formula an interesting hybrid intensity that may be useful for source localization. Computations are fast and carried out in real space without Fourier transforms into wavenumber space. PMID:23927105

  5. Applying the Post-Modern Double ABC-X Model to Family Food Insecurity

    ERIC Educational Resources Information Center

    Hutson, Samantha; Anderson, Melinda; Swafford, Melinda

    2015-01-01

    This paper develops the argument that using the Double ABC-X model in family and consumer sciences (FCS) curricula is a way to educate nutrition and dietetics students regarding a family's perceptions of food insecurity. The Double ABC-X model incorporates ecological theory as a basis to explain family stress and the resulting adjustment and…

  6. Model analysis of a double-stage Hall effect thruster with double-peaked magnetic field and intermediate electrode

    SciTech Connect

    Perez-Luna, J.; Hagelaar, G. J. M.; Garrigues, L.; Boeuf, J. P.

    2007-11-15

    A hybrid fluid-particle model has been used to study the properties of a double-stage Hall effect thruster where the channel is divided into two regions of large magnetic field separated by a low-field region containing an intermediate, electron-emitting electrode. These two features are aimed at effectively separating the ionization region from the acceleration region in order to extend the thruster operating range. Simulation results are compared with experimental results obtained elsewhere. The simulations reproduce some of the measurements when the anomalous transport coefficients are adequately chosen. However, they raise the question of a complete separation of the ionization and acceleration regions and the necessity of an electron-emissive intermediate electrode. The calculation method for the electric potential in the hybrid model has been improved with respect to our previous work and is capable of a complete two-dimensional description of the magnetic configurations of double-stage Hall effect thrusters.

  7. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
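
    A hedged sketch of the underlying least-squares idea (not the paper's algorithm, which derives the kernel from an end-to-end system model rather than from example images): given a degraded image and a reference, an MSE-optimal small kernel for that data can be obtained by linear least squares over patches, and then applied by ordinary convolution.

        import numpy as np

        def optimal_small_kernel(degraded, reference, k=3):
            """Least-squares fit of a k x k restoration kernel mapping `degraded` toward `reference`."""
            pad = k // 2
            rows, targets = [], []
            H, W = degraded.shape
            for i in range(pad, H - pad):
                for j in range(pad, W - pad):
                    patch = degraded[i - pad:i + pad + 1, j - pad:j + pad + 1]
                    rows.append(patch.ravel())
                    targets.append(reference[i, j])
            A, b = np.array(rows), np.array(targets)
            coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
            return coeffs.reshape(k, k)

        # Toy example: blur a random "scene", then fit a 3x3 kernel that partially undoes the blur
        rng = np.random.default_rng(0)
        scene = rng.random((64, 64))
        blur = np.ones((3, 3)) / 9.0
        blurred = np.zeros_like(scene)
        for di in range(3):
            for dj in range(3):
                blurred[1:-1, 1:-1] += blur[di, dj] * scene[di:di + 62, dj:dj + 62]

        kernel = optimal_small_kernel(blurred, scene, k=3)
        print(kernel.round(3))   # a sharpening-like kernel, applicable by ordinary convolution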

  8. Image statistics decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Pitt, G. H., III; Swanson, L.; Yuen, J. H.

    1987-01-01

    It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.

  9. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes are compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.

  10. Bacterial colony counting by Convolutional Neural Networks.

    PubMed

    Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto

    2015-08-01

    Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, but nevertheless fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging, due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates, which scored an overall accuracy of 92.8% on a large challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, becoming a promising approach for many related applications. PMID:26738016

  11. Improved double-multiple streamtube model for the Darrieus-type vertical-axis wind turbine

    SciTech Connect

    Berg, D.E.

    1983-01-01

    Double streamtube codes model the curved blade (Darrieus-type) vertical-axis wind turbine (VAWT) as a double actuator-disk arrangement (one disk for the upwind half of the rotor and a second disk for the downwind half) and use conservation of momentum principles to determine the forces acting on the turbine blades and the turbine performance. These models differentiate between the upwind and downwind sections of the rotor and are capable of determining blade loading more accurately than the widely-used single-actuator-disk streamtube models. Additional accuracy may be obtained by representing the turbine as a collection of several streamtubes, each of which is modeled as a double actuator disk. This is referred to as the double-multiple-streamtube model. Sandia National Laboratories has developed a double-multiple streamtube model for the VAWT which incorporates the effects of the incident wind boundary layer, nonuniform velocity between the upwind and downwind sections of the rotor, dynamic stall effects and local blade Reynolds number variations. This paper presents the theory underlying this VAWT model and describes the code capabilities. Code results are compared with experimental data from two VAWT's and with the results from another double-multiple-streamtube and a vortex-filament code. The effects of neglecting dynamic stall and horizontal wind-velocity distribution are also illustrated.

  12. A 3D Model of Double-Helical DNA Showing Variable Chemical Details

    ERIC Educational Resources Information Center

    Cady, Susan G.

    2005-01-01

    Since the first DNA model was created approximately 50 years ago using molecular models, students and teachers have been building simplified DNA models from various practical materials. A 3D double-helical DNA model, made by placing beads on a wire and stringing beads through holes in plastic canvas, is described. Suggestions are given to enhance…

  13. Accelerated unsteady flow line integral convolution.

    PubMed

    Liu, Zhanping; Moorhead, Robert J

    2005-01-01

    Unsteady flow line integral convolution (UFLIC) is a texture synthesis technique for visualizing unsteady flows with high temporal-spatial coherence. Unfortunately, UFLIC requires considerable time to generate each frame due to the huge amount of pathline integration that is computed for particle value scattering. This paper presents Accelerated UFLIC (AUFLIC) for near interactive (1 frame/second) visualization with 160,000 particles per frame. AUFLIC reuses pathlines in the value scattering process to reduce computationally expensive pathline integration. A flow-driven seeding strategy is employed to distribute seeds such that only a few of them need pathline integration while most seeds are placed along the pathlines advected at earlier times by other seeds upstream and, therefore, the known pathlines can be reused for fast value scattering. To maintain a dense scattering coverage to convey high temporal-spatial coherence while keeping the expense of pathline integration low, a dynamic seeding controller is designed to decide whether to advect, copy, or reuse a pathline. At a negligible memory cost, AUFLIC is 9 times faster than UFLIC with comparable image quality. PMID:15747635

  14. Blind source separation of convolutive mixtures

    NASA Astrophysics Data System (ADS)

    Makino, Shoji

    2006-04-01

    This paper introduces the blind source separation (BSS) of convolutive mixtures of acoustic signals, especially speech. A statistical and computational technique, called independent component analysis (ICA), is examined. By achieving nonlinear decorrelation, nonstationary decorrelation, or time-delayed decorrelation, we can find source signals only from observed mixed signals. Particular attention is paid to the physical interpretation of BSS from the acoustical signal processing point of view. Frequency-domain BSS is shown to be equivalent to two sets of frequency domain adaptive microphone arrays, i.e., adaptive beamformers (ABFs). Although BSS can reduce reverberant sounds to some extent in the same way as ABF, it mainly removes the sounds from the jammer direction. This is why BSS has difficulties with long reverberation in the real world. If sources are not "independent," the dependence results in bias noise when obtaining the correct separation filter coefficients. Therefore, the performance of BSS is limited by that of ABF. Although BSS is upper bounded by ABF, BSS has a strong advantage over ABF. BSS can be regarded as an intelligent version of ABF in the sense that it can adapt without any information on the array manifold or the target direction, and sources can be simultaneously active in BSS.

  15. Metaheuristic Algorithms for Convolution Neural Network.

    PubMed

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. These techniques have solved optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolutional neural networks (CNN), a widely used deep learning method, are still rarely investigated. Deep learning is a class of machine learning techniques that aims to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on the MNIST and CIFAR classification datasets was evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods increase the computation time, they also improve accuracy (by up to 7.14 percent). PMID:27375738

  16. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. These techniques have solved optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolutional neural networks (CNN), a widely used deep learning method, are still rarely investigated. Deep learning is a class of machine learning techniques that aims to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on the MNIST and CIFAR classification datasets was evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods increase the computation time, they also improve accuracy (by up to 7.14 percent). PMID:27375738
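
    Of the three metaheuristics, simulated annealing is the simplest to sketch. The snippet below is a generic, hedged illustration of the strategy rather than the paper's exact scheme: in the paper's setting the loss would be the CNN's validation error and the vector a subset of its weights, whereas here a quadratic surrogate stands in so the example stays self-contained.

    ```python
    import numpy as np

    def simulated_annealing(loss, x0, t0=1.0, cooling=0.95, steps=500,
                            step_size=0.1, seed=0):
        """Generic simulated annealing over a real-valued parameter vector."""
        rng = np.random.default_rng(seed)
        x, fx, t = x0.copy(), loss(x0), t0
        best_x, best_f = x.copy(), fx
        for _ in range(steps):
            cand = x + rng.normal(scale=step_size, size=x.shape)  # random perturbation
            fc = loss(cand)
            if fc < fx or rng.random() < np.exp((fx - fc) / t):   # Metropolis acceptance
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = x.copy(), fx
            t *= cooling                                          # geometric cooling
        return best_x, best_f

    surrogate = lambda w: float(np.sum((w - 1.5) ** 2))           # stand-in objective
    w_opt, f_opt = simulated_annealing(surrogate, np.zeros(10))
    print(f"best surrogate loss: {f_opt:.4f}")
    ```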

  17. Effects of Convoluted Divergent Flap Contouring on the Performance of a Fixed-Geometry Nonaxisymmetric Exhaust Nozzle

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.; Hunter, Craig A.

    1999-01-01

    An investigation was conducted in the model preparation area of the Langley 16-Foot Transonic Tunnel to determine the effects of convoluted divergent-flap contouring on the internal performance of a fixed-geometry, nonaxisymmetric, convergent-divergent exhaust nozzle. Testing was conducted at static conditions using a sub-scale nozzle model with one baseline and four convoluted configurations. All tests were conducted with no external flow at nozzle pressure ratios from 1.25 to approximately 9.50. Results indicate that baseline nozzle performance was dominated by unstable, shock-induced boundary-layer separation at overexpanded conditions. Convoluted configurations were found to significantly reduce, and in some cases totally alleviate, separation at overexpanded conditions. This result was attributed to the ability of convoluted contouring to energize and improve the condition of the nozzle boundary layer. Separation alleviation offers potential for installed nozzle aeropropulsive (thrust-minus-drag) performance benefits by reducing drag at forward flight speeds, even though this may reduce nozzle thrust ratio by as much as 6.4% at off-design conditions. At on-design conditions, nozzle thrust ratio for the convoluted configurations ranged from 1% to 2.9% below the baseline configuration; this was a result of increased skin friction and oblique shock losses inside the nozzle.

  18. Computational modeling of electrophotonics nanomaterials: Tunneling in double quantum dots

    SciTech Connect

    Vlahovic, Branislav Filikhin, Igor

    2014-10-06

    Single electron localization and tunneling in double quantum dots (DQD) and rings (DQR), and in particular the localized-delocalized states and their spectral distributions, are considered as functions of the geometry of the DQDs (DQRs). The effect of a violation of the symmetry of the DQD geometry on the tunneling is studied in detail. The cases of regular and chaotic geometries are considered. It is shown that a small violation of symmetry drastically affects the localization of the electron and that anti-crossing of the levels is the mechanism of tunneling between the localized and delocalized states in DQRs.

  19. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. PMID:26700535
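
    A hedged sketch of the mechanics of injecting noise at the output neurons during a gradient step (for a plain softmax layer rather than a full CNN, with uniform noise only; the paper's likelihood-based positivity screening of the noise is not reproduced here):

    ```python
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def noisy_sgd_step(W, X, y_onehot, lr=0.1, noise_scale=0.05, rng=None):
        """One SGD step on a softmax output layer with additive noise injected
        at the output neurons (illustrative only)."""
        rng = rng or np.random.default_rng()
        logits = X @ W
        noise = rng.uniform(-noise_scale, noise_scale, size=logits.shape)
        probs = softmax(logits + noise)               # noise perturbs the output layer
        grad = X.T @ (probs - y_onehot) / len(X)      # cross-entropy gradient
        return W - lr * grad

    rng = np.random.default_rng(1)
    X = rng.normal(size=(64, 20))                     # toy batch of 64 samples
    labels = rng.integers(0, 10, size=64)
    Y = np.eye(10)[labels]
    W = np.zeros((20, 10))
    for _ in range(100):
        W = noisy_sgd_step(W, X, Y, rng=rng)
    print("training cross entropy:",
          float(-(Y * np.log(softmax(X @ W) + 1e-12)).sum(axis=1).mean()))
    ```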

  20. Double-expansion impurity solver for multiorbital models with dynamically screened U and J

    NASA Astrophysics Data System (ADS)

    Steiner, Karim; Nomura, Yusuke; Werner, Philipp

    2015-09-01

    We present a continuous-time Monte Carlo impurity solver for multiorbital impurity models which combines a strong-coupling hybridization expansion and a weak-coupling expansion in the Hund's coupling parameter J. This double-expansion approach allows one to treat the dominant density-density interactions U within the efficient segment representation. We test the approach for a two-orbital model with static interactions, and then explain how the double expansion allows one to simulate models with frequency-dependent U(ω) and J(ω). The method is used to investigate spin-state transitions in a toy model for fullerides, with repulsive bare J but attractive screened J.

  1. Strong coupling theory for electron-mediated interactions in double-exchange models

    NASA Astrophysics Data System (ADS)

    Ishizuka, Hiroaki; Motome, Yukitoshi

    2015-07-01

    We present a theoretical framework for evaluating effective interactions between localized spins mediated by itinerant electrons in double-exchange models. Performing the expansion with respect to the spin-dependent part of the electron hopping terms, we show a systematic way of constructing the effective spin model in the large Hund's coupling limit. As a benchmark, we examine the accuracy of this method by comparing the results with the numerical solutions for the spin-ice type model on a pyrochlore lattice. We also discuss an extension of the method to the double-exchange models with Heisenberg and XY localized spins.

  2. Coupled cluster Green function: Model involving single and double excitations

    NASA Astrophysics Data System (ADS)

    Bhaskaran-Nair, Kiran; Kowalski, Karol; Shelton, William A.

    2016-04-01

    In this paper, we report on the development of a parallel implementation of the coupled-cluster (CC) Green function formulation (GFCC) employing single and double excitations in the cluster operator (GFCCSD). A key aspect of this work is the determination of the frequency-dependent self-energy, Σ(ω). A detailed description of the underlying algorithm is provided, including the approximations used, which preserve the pole structure of the full GFCCSD method and thereby reduce the computational cost while maintaining the accuracy of the methodology. Furthermore, for systems with strong local correlation, our formulation reveals a diagonally dominant block structure in which the block size grows proportionally as the non-local correlation increases. To demonstrate the accuracy of our approach, several examples, including calculations of ionization potentials for benchmark systems, are presented and compared against experiment.

  3. Modeling and simulation of a double auction artificial financial market

    NASA Astrophysics Data System (ADS)

    Raberto, Marco; Cincotti, Silvano

    2005-09-01

    We present a double-auction artificial financial market populated by heterogeneous agents who trade one risky asset in exchange for cash. Agents issue random orders subject to budget constraints. The limit prices of orders may depend on past market volatility. Limit orders are stored in the book whereas market orders give immediate birth to transactions. We show that fat tails and volatility clustering are recovered by means of very simple assumptions. We also investigate two important stylized facts of the limit order book, i.e., the distribution of waiting times between two consecutive transactions and the instantaneous price impact function. We show both theoretically and through simulations that if the order waiting times are exponentially distributed, then the trading waiting times are also exponentially distributed.
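
    A minimal sketch of the double-auction mechanism itself (random limit orders rest in a book, random market orders execute against the best opposite quote); the budget constraints, volatility-dependent limit prices, and calibration of the actual model are omitted, and all numerical parameters are illustrative:

    ```python
    import heapq, random

    def simulate_double_auction(steps=1000, p0=100.0, seed=0):
        """Minimal double-auction book with a random order flow."""
        rng = random.Random(seed)
        bids, asks = [], []              # heaps: bids stored as -price, asks as price
        last = p0
        trades = []
        for t in range(steps):
            side = rng.choice(("buy", "sell"))
            if rng.random() < 0.5:                        # limit order rests in the book
                offset = abs(rng.gauss(0, 0.5))
                if side == "buy":
                    heapq.heappush(bids, -(last - offset))
                else:
                    heapq.heappush(asks, last + offset)
            else:                                         # market order executes immediately
                if side == "buy" and asks:
                    last = heapq.heappop(asks)
                    trades.append((t, last))
                elif side == "sell" and bids:
                    last = -heapq.heappop(bids)
                    trades.append((t, last))
        waits = [t2 - t1 for (t1, _), (t2, _) in zip(trades, trades[1:])]
        return trades, waits

    trades, waits = simulate_double_auction()
    print(len(trades), "trades; mean waiting time between trades:",
          sum(waits) / max(len(waits), 1))
    ```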

  4. Lifetime of double occupancies in the Fermi-Hubbard model

    SciTech Connect

    Sensarma, Rajdeep; Pekker, David; Demler, Eugene; Altman, Ehud; Strohmaier, Niels; Moritz, Henning; Greif, Daniel; Joerdens, Robert; Tarruell, Leticia; Esslinger, Tilman

    2010-12-01

    We investigate the decay of artificially created double occupancies in a repulsive Fermi-Hubbard system in the strongly interacting limit using diagrammatic many-body theory and experiments with ultracold fermions in optical lattices. The lifetime of the doublons is found to scale exponentially with the ratio of the on-site repulsion to the bandwidth. We show that the dominant decay process in the presence of background holes is the excitation of a large number of particle-hole pairs to absorb the energy of the doublon. We also show that the strongly interacting nature of the background state is crucial in obtaining the correct estimate of the doublon lifetime in these systems. The theoretical estimates and the experimental data are in agreement.

  5. Coupled cluster Green function: Model involving single and double excitations.

    PubMed

    Bhaskaran-Nair, Kiran; Kowalski, Karol; Shelton, William A

    2016-04-14

    In this paper, we report on the development of a parallel implementation of the coupled-cluster (CC) Green function formulation (GFCC) employing single and double excitations in the cluster operator (GFCCSD). A key aspect of this work is the determination of the frequency dependent self-energy, Σ(ω). The detailed description of the underlying algorithm is provided, including approximations used that preserve the pole structure of the full GFCCSD method, thereby reducing the computational costs while maintaining an accurate character of methodology. Furthermore, for systems with strong local correlation, our formulation reveals a diagonally dominate block structure where as the non-local correlation increases, the block size increases proportionally. To demonstrate the accuracy of our approach, several examples including calculations of ionization potentials for benchmark systems are presented and compared against experiment. PMID:27083702

  6. Inverse scattering method and soliton double solution family for the general symplectic gravity model

    SciTech Connect

    Gao Yajun

    2008-08-15

    A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward and effective to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.

  7. Double resonance surface enhanced Raman scattering substrates: an intuitive coupled oscillator model.

    PubMed

    Chu, Yizhuo; Wang, Dongxing; Zhu, Wenqi; Crozier, Kenneth B

    2011-08-01

    The strong coupling between localized surface plasmons and surface plasmon polaritons in a double resonance surface enhanced Raman scattering (SERS) substrate is described by a classical coupled oscillator model. The effects of the particle density, the particle size and the SiO2 spacer thickness on the coupling strength are experimentally investigated. We demonstrate that by tuning the geometrical parameters of the double resonance substrate, we can readily control the resonance frequencies and tailor the SERS enhancement spectrum. PMID:21934853

  8. Semileptonic decays of double heavy baryons in a relativistic constituent three-quark model

    SciTech Connect

    Faessler, Amand; Gutsche, Thomas; Lyubovitskij, Valery E.; Ivanov, Mikhail A.; Koerner, Juergen G.

    2009-08-01

    We study the semileptonic decays of double-heavy baryons using a manifestly Lorentz covariant constituent three-quark model. We present complete results on transition form factors between double-heavy baryons for finite values of the heavy quark/baryon masses and in the heavy quark symmetry limit, which is valid at and close to zero recoil. Decay rates are calculated and compared to each other in the full theory, keeping masses finite, and also in the heavy quark limit.

  9. A double species model for study of relaxation of impure Ni 3Al grain boundaries

    NASA Astrophysics Data System (ADS)

    Zheng, Li-Ping; Ma, Yu-Gang; Han, Jia-Guang; Li, D. X.; Zhang, Xiu-Rong

    2004-04-01

    Based on Monte Carlo simulation combined with embedded atom method (EAM) potentials, a double species model is established to study the relaxation of impure Ni3Al grain boundaries. The present double species model treats the impurity atoms as not only a segregating species but also an inducing species. The model further suggests that studying the combination of the positive (impurity atoms induce Ni atoms to substitute into Al sites) and negative (impurity atoms substitute into Ni sites) bulk effects of the impurities is useful for understanding how the cohesion of the impure Ni3Al grain boundary (or the Ni enrichment at the boundary) depends on the bulk impurity concentration. The double species model is illustrated by a comparison between the Ni3AlB and Ni3AlMg systems.

  10. Quantum model for double ionization of atoms in strong laser fields

    NASA Astrophysics Data System (ADS)

    Prauzner-Bechcicki, Jakub S.; Sacha, Krzysztof; Eckhardt, Bruno; Zakrzewski, Jakub

    2008-07-01

    We discuss double ionization of atoms in strong laser pulses using a reduced dimensionality model. Following the insight obtained from an analysis of the classical mechanics of the process, we confine each electron to move along the lines that point towards the two-particle Stark saddle in the presence of a field. The resulting effective two-dimensional model is similar to the aligned electron model, but it enables correlated escape of electrons with equal momenta, as observed experimentally. The time-dependent solution of the Schrödinger equation allows us to discuss in detail the time dynamics of the ionization process, the formation of electronic wave packets, and the development of the momentum distribution of the outgoing electrons. In particular, we are able to identify the rescattering process, simultaneous direct double ionization during the same field cycle, as well as other double ionization processes. We also use the model to study the phase dependence of the ionization process.

  11. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460–7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18–70 years, weight range: 60–180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_Organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_organ,convolution by the organ dose coefficients (h_Organ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using the Monte Carlo program with TCM profiles explicitly modeled.
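
    The convolution step can be sketched numerically. The snippet below is only an illustration of the idea under assumed inputs (a hypothetical longitudinal CTDIvol profile derived from the TCM mA profile, an assumed scatter-spread kernel, a hypothetical organ extent, and an illustrative h_Organ value); it is not the authors' validated implementation.

    ```python
    import numpy as np

    def regional_ctdi(ctdi_z, kernel):
        """Convolve the longitudinal CTDIvol profile (from the TCM mA profile)
        with a dose-spread kernel to obtain the regional dose field along z."""
        return np.convolve(ctdi_z, kernel / kernel.sum(), mode="same")

    def organ_dose(ctdi_z, kernel, organ_mask, h_organ):
        """Dose ~ h_Organ * mean regional CTDIvol over the organ's z-extent;
        h_Organ is the size-dependent, CTDIvol-normalized coefficient."""
        field = regional_ctdi(ctdi_z, kernel)
        return h_organ * field[organ_mask].mean()

    z = np.arange(200)                                   # 200 slices (illustrative)
    ctdi_z = 10 + 4 * np.sin(z / 15.0)                   # hypothetical TCM-driven profile (mGy)
    kernel = np.exp(-np.abs(np.arange(-20, 21)) / 8.0)   # assumed scatter-spread kernel
    liver = (z > 80) & (z < 120)                         # hypothetical organ extent
    print("organ dose estimate (mGy):", organ_dose(ctdi_z, kernel, liver, h_organ=1.2))
    ```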

  12. Boundary conditions and the generalized metric formulation of the double sigma model

    NASA Astrophysics Data System (ADS)

    Ma, Chen-Te

    2015-09-01

    The double sigma model with strong constraints is equivalent to the ordinary sigma model upon imposing a self-duality relation. The gauge symmetries are diffeomorphism and one-form gauge transformations with the strong constraints. We consider boundary conditions in the double sigma model in three ways. The first way is to modify the Dirichlet and Neumann boundary conditions with a fully O(D, D) description from double gauge fields. We compute the one-loop β function for constant background fields to find the low-energy effective theory without using the strong constraints. The low-energy theory can also have O(D, D) invariance, as the double sigma model does. The second way is to construct different boundary conditions from the projectors. The third way is to combine the antisymmetric background field with the field strength to redefine an O(D, D) generalized metric. We use this generalized metric to reconstruct a consistent double sigma model with classical and quantum equivalence.

  13. Simulations of the flow past a cylinder using an unsteady double wake model

    NASA Astrophysics Data System (ADS)

    Ramos-García, N.; Sarlak, H.; Andersen, S. J.; Sørensen, J. N.

    2016-06-01

    In the present work, the in-house UnSteady Double Wake Model (USDWM) is used to simulate flows past a cylinder at subcritical, supercritical, and transcritical Reynolds numbers. The flow model is a two-dimensional panel method which uses the unsteady double wake technique to model flow separation and its dynamics. In the present work the separation location is obtained from experimental data and fixed in time. The highly unsteady flow field behind the cylinder is analyzed in detail, comparing the vortex shedding characteristics under the different flow conditions.

  14. Dynamic modelling of a double-pendulum gantry crane system incorporating payload

    SciTech Connect

    Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.

    2011-06-20

    The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.

  15. Dynamic Modelling of a Double-Pendulum Gantry Crane System Incorporating Payload

    NASA Astrophysics Data System (ADS)

    Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.

    2011-06-01

    The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.
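
    For intuition, the payload dynamics alone can be sketched as a planar double pendulum with the trolley held fixed (the full crane model in the paper includes trolley motion and is derived with the Lagrangian method); the standard point-mass, rigid-rod equations are integrated with a fixed-step RK4 scheme, and the hook/payload masses and cable lengths below are hypothetical.

    ```python
    import numpy as np

    def derivs(state, m1, m2, l1, l2, g=9.81):
        """Standard planar double-pendulum equations (hook mass m1 on cable l1,
        payload mass m2 on rigging l2), trolley held fixed for brevity."""
        th1, w1, th2, w2 = state
        d = th1 - th2
        den = 2 * m1 + m2 - m2 * np.cos(2 * d)
        dw1 = (-g * (2 * m1 + m2) * np.sin(th1)
               - m2 * g * np.sin(th1 - 2 * th2)
               - 2 * np.sin(d) * m2 * (w2**2 * l2 + w1**2 * l1 * np.cos(d))) / (l1 * den)
        dw2 = (2 * np.sin(d) * (w1**2 * l1 * (m1 + m2)
               + g * (m1 + m2) * np.cos(th1)
               + w2**2 * l2 * m2 * np.cos(d))) / (l2 * den)
        return np.array([w1, dw1, w2, dw2])

    def rk4(state, dt, *args):
        k1 = derivs(state, *args)
        k2 = derivs(state + 0.5 * dt * k1, *args)
        k3 = derivs(state + 0.5 * dt * k2, *args)
        k4 = derivs(state + dt * k3, *args)
        return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    params = (5.0, 50.0, 4.0, 2.0)               # hypothetical m1, m2, l1, l2
    state = np.array([0.1, 0.0, 0.0, 0.0])       # small initial hook sway (rad)
    for _ in range(2000):                        # 20 s at dt = 0.01 s
        state = rk4(state, 0.01, *params)
    print("final hook and payload angles (rad):", state[0], state[2])
    ```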

  16. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance

  17. Flexible algorithm for real-time convolution supporting dynamic event-related fMRI

    NASA Astrophysics Data System (ADS)

    Eaton, Brent L.; Frank, Randall J.; Bolinger, Lizann; Grabowski, Thomas J.

    2002-04-01

    An efficient algorithm for generation of the task reference function has been developed that allows real-time statistical analysis of fMRI data, within the framework of the general linear model, for experiments with event-related stimulus designs. By leveraging time-stamped data collection in the Input/Output time-aWare Architecture (I/OWA), we detect the onset time of a stimulus as it is delivered to a subject. A dynamically updated list of detected stimulus event times is maintained in shared memory as a data stream and delivered as input to a real-time convolution algorithm. As each image is acquired from the MR scanner, the time-stamp of its acquisition is delivered via a second dynamically updated stream to the convolution algorithm, where a running convolution of the events with an estimated hemodynamic response function is computed at the image acquisition time and written to a third stream in memory. Output is interpreted as the activation reference function and treated as the covariate of interest in the I/OWA implementation of the general linear model. Statistical parametric maps are computed and displayed to the I/OWA user interface in less than the time between successive image acquisitions.
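
    The running convolution described above is straightforward to sketch: each time-stamped image acquisition evaluates the sum of hemodynamic responses to all stimulus onsets detected so far. The HRF shape, TR, and onset times below are illustrative assumptions, not the I/OWA implementation.

    ```python
    import numpy as np

    PEAK = 5.0 ** 5 * np.exp(-5.0)        # max of t**5 * exp(-t), reached at t = 5 s

    def hrf(lag):
        """Gamma-variate stand-in for the canonical hemodynamic response
        (unit peak near 5 s); any validated HRF shape could be substituted."""
        lag = np.asarray(lag, dtype=float)
        return np.where(lag > 0, lag ** 5 * np.exp(-lag), 0.0) / PEAK

    def reference_value(acq_time, event_times):
        """Running convolution of detected stimulus onsets with the HRF,
        evaluated at the time-stamp of an incoming image acquisition."""
        lags = acq_time - np.asarray(event_times, dtype=float)
        return float(hrf(lags).sum())

    # Event onsets arrive asynchronously (time-stamped by the stimulus stream);
    # images are acquired every TR = 2 s.
    events = [3.1, 17.4, 30.0, 31.9]             # hypothetical onset times (s)
    tr = 2.0
    reference = [reference_value(k * tr, events) for k in range(30)]
    print(np.round(reference, 3))
    ```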

  18. Using hybrid GPU/CPU kernel splitting to accelerate spherical convolutions

    NASA Astrophysics Data System (ADS)

    Sutter, P. M.; Wandelt, B. D.; Elsner, F.

    2015-06-01

    We present a general method for accelerating by more than an order of magnitude the convolution of pixelated functions on the sphere with a radially-symmetric kernel. Our method splits the kernel into a compact real-space component and a compact spherical harmonic space component. These components can then be convolved in parallel using an inexpensive commodity GPU and a CPU. We provide models for the computational cost of both real-space and Fourier space convolutions and an estimate for the approximation error. Using these models we can determine the optimum split that minimizes the wall clock time for the convolution while satisfying the desired error bounds. We apply this technique to the problem of simulating a cosmic microwave background (CMB) anisotropy sky map at the resolution typical of the high resolution maps produced by the Planck mission. For the main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an acceptable fractional rms error of order 10^-5 in the power spectrum of the output map.

  19. Double time lag combustion instability model for bipropellant rocket engines

    NASA Technical Reports Server (NTRS)

    Liu, C. K.

    1973-01-01

    A bipropellant stability model is presented in which feed system inertance and capacitance are treated along with injection pressure drop and distinctly different propellant time lags. The model is essentially an extension of Crocco's and Cheng's monopropellant model to the bipropellant case assuming that the feed system inertance and capacitance along with the resistance are located at the injector. The neutral stability boundaries are computed in terms of these parameters to demonstrate the interaction among them.

  20. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields

    NASA Astrophysics Data System (ADS)

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.

  1. A quantum algorithm for Viterbi decoding of classical convolutional codes

    NASA Astrophysics Data System (ADS)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance codes with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm with a large state space (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the total number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.

  2. Innervation of the renal proximal convoluted tubule of the rat

    SciTech Connect

    Barajas, L.; Powers, K. )

    1989-12-01

    Experimental data suggest the proximal tubule as a major site of neurogenic influence on tubular function. The functional and anatomical axial heterogeneity of the proximal tubule prompted this study of the distribution of innervation sites along the early, mid, and late proximal convoluted tubule (PCT) of the rat. Serial section autoradiograms, with tritiated norepinephrine serving as a marker for monoaminergic nerves, were used in this study. Freehand clay models and graphic reconstructions of proximal tubules permitted a rough estimation of the location of the innervation sites along the PCT. In the subcapsular nephrons, the early PCT (first third) was devoid of innervation sites with most of the innervation occurring in the mid (middle third) and in the late (last third) PCT. Innervation sites were found in the early PCT in nephrons located deeper in the cortex. In juxtamedullary nephrons, innervation sites could be observed on the PCT as it left the glomerulus. This gradient of PCT innervation can be explained by the different tubulovascular relationships of nephrons at different levels of the cortex. The absence of innervation sites in the early PCT of subcapsular nephrons suggests that any influence of the renal nerves on the early PCT might be due to an effect of neurotransmitter released from renal nerves reaching the early PCT via the interstitium and/or capillaries.

  3. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.

    PubMed

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681

  4. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields

    PubMed Central

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681

  5. A test of the double-shearing model of flow for granular materials

    USGS Publications Warehouse

    Savage, J.C.; Lockner, D.A.

    1997-01-01

    The double-shearing model of flow attributes plastic deformation in granular materials to cooperative slip on conjugate Coulomb shears (surfaces upon which the Coulomb yield condition is satisfied). The strict formulation of the double-shearing model then requires that the slip lines in the material coincide with the Coulomb shears. Three different experiments that approximate simple shear deformation in granular media appear to be inconsistent with this strict formulation. For example, the orientation of the principal stress axes in a layer of sand driven in steady, simple shear was measured subject to the assumption that the Coulomb failure criterion was satisfied on some surfaces (orientation unspecified) within the sand layer. The orientation of the inferred principal compressive axis was then compared with the orientations predicted by the double-shearing model. The strict formulation of the model [Spencer, 1982] predicts that the principal stress axes should rotate in a sense opposite to that inferred from the experiments. A less restrictive formulation of the double-shearing model by de Josselin de Jong [1971] does not completely specify the solution but does prescribe limits on the possible orientations of the principal stress axes. The orientations of the principal compression axis inferred from the experiments are probably within those limits. An elastoplastic formulation of the double-shearing model [de Josselin de Jong, 1988] is reasonably consistent with the experiments, although quantitative agreement was not attained. Thus we conclude that the double-shearing model may be a viable law to describe deformation of granular materials, but the macroscopic slip surfaces will not in general coincide with the Coulomb shears.

  6. A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras

    NASA Astrophysics Data System (ADS)

    Angel, Eitan

    2010-09-01

    In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.

  7. Quasi-In vivo Heart Electrocardiogram Measurement of ST Period Using Convolution of Cell Network Extracellular Field Potential Propagation in Lined-Up Cardiomyocyte Cell-Network Circuit

    NASA Astrophysics Data System (ADS)

    Kaneko, Tomoyuki; Nomura, Fumimasa; Yasuda, Kenji

    2011-07-01

    A model for the quasi-in vivo heart electrocardiogram (ECG) measurement of the ST period has been developed. As the part of the ECG data at the ST period is the convolution of the extracellular field potentials (FPs) of cardiomyocytes in a ventricle, we have fabricated a lined-up cardiomyocyte cell-network on a lined-up microelectrode array and a circular microelectrode in an agarose microchamber, and measured the convoluted FPs. When ventricular tachyarrhythmia-like beating occurred in the cardiomyocyte network, the convoluted FP profile showed a similar arrhythmia-ECG-like profile, indicating that the convoluted FPs of the in vitro cell network include both the depolarization data and the propagation manner of beating in the heart.

  8. The long and the short of it: modelling double neutron star and collapsar Galactic dynamics

    NASA Astrophysics Data System (ADS)

    Kiel, Paul D.; Hurley, Jarrod R.; Bailes, Matthew

    2010-07-01

    Understanding the nature of galactic populations of double compact binaries (where both stars are a neutron star or black hole) has been a topic of interest for many years, particularly the coalescence rate of these binaries. The only observed systems thus far are double neutron star systems containing one or more radio pulsars. However, theorists have postulated that short-duration gamma-ray bursts may be evidence of coalescing double neutron star or neutron star-black hole binaries, while long-duration gamma-ray bursts are possibly formed by tidally enhanced rapidly rotating massive stars that collapse to form black holes (collapsars). The work presented here examines populations of double compact binary systems and tidally enhanced collapsars. We make use of BINPOP and BINKIN, two components of a recently developed population synthesis package. Results focus on correlations of both binary and spatial evolutionary population characteristics. Pulsar and long-duration gamma-ray burst observations are used in concert with our models to draw the conclusions that (i) double neutron star binaries can merge rapidly on time-scales of a few million years (much less than that found for the observed double neutron star population), (ii) common-envelope evolution within these models is a very important phase in double neutron star formation and (iii) observations of long gamma-ray burst projected distances are more centrally concentrated than our simulated coalescing double neutron star and collapsar Galactic populations. Better agreement is found with dwarf galaxy models although the outcome is strongly linked to the assumed birth radial distribution. The birth rate of the double neutron star population in our models ranges from 4 to 160 Myr^-1 and the merger rate ranges from 3 to 150 Myr^-1. The upper and lower limits of the rates result from including electron-capture supernova kicks to neutron stars and decreasing the common-envelope efficiency, respectively. Our double

  9. Haag duality for Kitaev’s quantum double model for abelian groups

    NASA Astrophysics Data System (ADS)

    Fiedler, Leander; Naaijkens, Pieter

    2015-11-01

    We prove Haag duality for cone-like regions in the ground state representation corresponding to the translational invariant ground state of Kitaev’s quantum double model for finite abelian groups. This property says that if an observable commutes with all observables localized outside the cone region, it actually is an element of the von Neumann algebra generated by the local observables inside the cone. This strengthens locality, which says that observables localized in disjoint regions commute. As an application, we consider the superselection structure of the quantum double model for abelian groups on an infinite lattice in the spirit of the Doplicher-Haag-Roberts program in algebraic quantum field theory. We find that, as is the case for the toric code model on an infinite lattice, the superselection structure is given by the category of irreducible representations of the quantum double.

  10. Double and single pion photoproduction within a dynamical coupled-channels model

    DOE PAGES Beta

    Hiroyuki Kamano; Julia-Diaz, Bruno; Lee, T. -S. H.; Matsuyama, Akihiko; Sato, Toru

    2009-12-16

    Within a dynamical coupled-channels model which has already been fixed from analyzing the data of the πN → πN and γN → πN reactions, we present the predicted double pion photoproduction cross sections up to the second resonance region, W < 1.7 GeV. The roles played by the different mechanisms within our model in determining both the single and double pion photoproduction reactions are analyzed, focusing on the effects due to the direct γN → ππN mechanism, the interplay between the resonant and non-resonant amplitudes, and the coupled-channels effects. As a result, the model parameters which can be determined most effectively in the combined studies of both the single and double pion photoproduction data are identified for future studies.

  11. Numerical analysis of the double scaling limit in the string type IIB matrix model.

    PubMed

    Horata, S; Egawa, H S

    2001-05-14

    The bosonic IIB matrix model is studied using a numerical method. This model contains the bosonic part of the IIB matrix model conjectured to be a nonperturbative definition of the type IIB superstring theory. The large N scaling behavior of the model is shown performing a Monte Carlo simulation. The expectation value of the Wilson loop operator is measured and the string tension is estimated. The numerical results show the prescription of the double scaling limit. PMID:11384258

  12. Convolutional neural network approach for buried target recognition in FL-LWIR imagery

    NASA Astrophysics Data System (ADS)

    Stone, K.; Keller, J. M.

    2014-05-01

    A convolutional neural network (CNN) approach to recognition of buried explosive hazards in forward-looking long-wave infrared (FL-LWIR) imagery is presented. The convolutional filters in the first layer of the network are learned in the frequency domain, making enforcement of zero-phase and zero-dc response characteristics much easier. The spatial domain representations of the filters are forced to have unit l2 norm, and penalty terms are added to the online gradient descent update to encourage orthonormality among the convolutional filters, as well as smooth first- and second-order derivatives in the spatial domain. The impact of these modifications on the generalization performance of the CNN model is investigated. The CNN approach is compared to a second recognition algorithm utilizing shearlet and log-gabor decomposition of the image coupled with cell-structured feature extraction and support vector machine classification. Results are presented for multiple FL-LWIR data sets recently collected from US Army test sites. These data sets include vehicle position information allowing accurate transformation between image and world coordinates and realistic evaluation of detection and false alarm rates.

  13. Two dimensional convolute integers for machine vision and image recognition

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression-generated, integer-valued, zero-phase-shifting, convoluting, frequency-sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface-fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators exhibit frequency-sensitive, scale-invariant feature-selection properties. Such tasks as boundary/edge enhancement and noise or small-size pixel disturbance removal can readily be accomplished. For feature selection, tight band pass operators are essential. Results from test cases are given.
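
    A minimal sketch of the application step only: a small integer-valued, zero-phase low-pass operator (binomial weights here, whereas the paper's operators are regression-generated) applied as a non-recursive convolution that produces replacement point values.

    ```python
    import numpy as np

    # Simple integer-valued, zero-phase 3x3 low-pass operator (binomial weights).
    LOW_PASS = np.array([[1, 2, 1],
                         [2, 4, 2],
                         [1, 2, 1]])

    def convolve2d_same(img, op):
        """Non-recursive 2D convolution with edge replication; integer weights,
        normalized only at the end."""
        pad = op.shape[0] // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                window = padded[i:i + op.shape[0], j:j + op.shape[1]]
                out[i, j] = np.sum(window * op) / op.sum()
        return out

    rng = np.random.default_rng(2)
    noisy = np.tile([0, 0, 0, 0, 10, 10, 10, 10], (8, 1)) + rng.normal(0, 1, (8, 8))
    print(np.round(convolve2d_same(noisy, LOW_PASS), 1))   # smoothed step-edge image
    ```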

  14. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  15. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  16. A SPICE model of double-sided Si microstrip detectors

    SciTech Connect

    Candelori, A.; Paccagnella, A. |; Bonin, F.

    1996-12-31

    We have developed a SPICE model for the ohmic side of AC-coupled Si microstrip detectors with interstrip isolation via field plates. The interstrip isolation has been measured in various conditions by varying the field plate voltage. Simulations have been compared with experimental data in order to determine the values of the model parameters for different voltages applied to the field plates. The model is able to predict correctly the frequency dependence of the coupling between adjacent strips. Furthermore, we have used such model for the study of the signal propagation along the detector when a current signal is injected in a strip. Only electrical coupling is considered here, without any contribution due to charge sharing derived from carrier diffusion. For this purpose, the AC pads of the strips have been connected to a read-out electronics and the current signal has been injected into a DC pad. Good agreement between measurements and simulations has been reached for the central strip and the first neighbors. Experimental tests and computer simulations have been performed for four different strip and field plate layouts, in order to investigate how the detector geometry affects the parameters of the SPICE model and the signal propagation.

  17. Testing the Double Corner Source Spectral Model for Long- and Short-Period Ground Motion Simulations

    NASA Astrophysics Data System (ADS)

    Miyake, H.; Koketsu, K.

    2010-12-01

    The omega-squared source model with a single corner frequency is widely used in earthquake source analyses and ground motion simulations. Recent studies show that the Brune stress drop of subduction-zone earthquakes is almost half of that for crustal earthquakes for a given magnitude. On the other hand, empirical attenuation relations and spectral analyses of seismic source and ground motions support the fact that subduction-zone earthquakes provide 1-2 times the short-period source spectral level of crustal earthquakes. Linking long- and short-period source characteristics is a crucial issue for performing broadband ground motion simulations. This discrepancy may lead to source modeling with double corner frequencies [e.g., Atkinson, 1993]. We modeled the lower corner frequency, corresponding to the size of the asperities generating long-period (> 2-5 s) ground motions, by the deterministic approach, and the higher corner frequency, corresponding to the size of the strong motion generation area for short-period ground motions, by the semi-empirical approach. We propose that the double corner source spectral model be expressed as a frequency-dependent source model consisting of either the asperities in the long-period range or the strong motion generation area in the short-period range, plus the surrounding background area inside the total rupture area. The characterized source model has the potential to reproduce fairly well the rupture directivity pulses seen in the observed ground motions. We explore the applicability of the double corner source spectral model to broadband ground motion simulations for the 1978 Mw 7.6 Miyagi-oki and 2003 Mw 8.3 Tokachi-oki earthquakes along the Japan Trench. For both cases, the double corner source spectral model, where the size and stress drop for strong motion generation areas are respectively half and double of those for asperities, worked well to reproduce ground motion time histories and seismic intensity distribution.
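
    To make the single- versus double-corner contrast concrete, the sketch below evaluates a generic omega-squared displacement spectrum against one common double-corner parameterization (in the style of Atkinson, 1993); the corner frequencies, weighting, and seismic moment are illustrative, and this is not the characterized source model used in the study above.

    ```python
    import numpy as np

    def single_corner(f, m0, fc):
        """Omega-squared displacement source spectrum with one corner frequency."""
        return m0 / (1.0 + (f / fc) ** 2)

    def double_corner(f, m0, fa, fb, eps):
        """Double-corner form: the low corner fa controls the long-period level,
        the high corner fb the short-period level, eps the weight between them."""
        return m0 * ((1.0 - eps) / (1.0 + (f / fa) ** 2)
                     + eps / (1.0 + (f / fb) ** 2))

    f = np.logspace(-2, 1, 7)                 # 0.01 - 10 Hz
    m0 = 1.0e20                               # hypothetical seismic moment (N m)
    print(np.round(single_corner(f, m0, fc=0.1) / m0, 4))
    print(np.round(double_corner(f, m0, fa=0.05, fb=0.5, eps=0.2) / m0, 4))
    ```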

  18. Spiral to ferromagnetic transition in a Kondo lattice model with a double-well potential

    NASA Astrophysics Data System (ADS)

    Caro, R. C.; Franco, R.; Silva-Valencia, J.

    2016-02-01

    Using the density matrix renormalization group method, we study a system of 171Yb atoms confined in a one-dimensional optical lattice. The atoms in the 1S0 state experience a double-well potential, whereas the atoms in the 3P0 state are localized. This system is modelled by the Kondo lattice model plus a double-well potential for the free carriers. We obtain phase diagrams composed of ferromagnetic and spiral phases, where the critical points always increase with the interwell tunneling parameter. We conclude that this quantum phase transition can be tuned by the double-well potential parameters as well as by the common parameters: local coupling and density.

  19. Robustly optimal rate one-half binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1975-01-01

    Three optimality criteria for convolutional codes are considered in this correspondence: namely, free distance, minimum distance, and distance profile. Here we report the results of computer searches for rate one-half binary convolutional codes that are 'robustly optimal' in the sense of being optimal for one criterion and optimal or near-optimal for the other two criteria. Comparisons with previously known codes are made. The results of a computer simulation are reported to show the importance of the distance profile to computational performance with sequential decoding.
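
    The free distance mentioned above can be computed mechanically for any small feed-forward rate one-half encoder by a shortest-path search over its state graph. The sketch below does this for the classic constraint-length-3 code with octal generators (7, 5), chosen as an assumed example rather than one of the codes tabulated in the correspondence.

    ```python
    import heapq

    def encoder_step(state, bit, taps, k):
        """One step of a rate-1/2 feed-forward encoder. `state` holds the last
        k-1 input bits, `taps` are the two generator polynomials (as integers)."""
        reg = (bit << (k - 1)) | state
        out_weight = sum(bin(reg & g).count("1") & 1 for g in taps)
        return reg >> 1, out_weight           # next state, branch Hamming weight

    def free_distance(taps, k):
        """Free distance: minimum-weight path that diverges from and remerges
        with the all-zero state (Dijkstra over the encoder state graph)."""
        start, w0 = encoder_step(0, 1, taps, k)   # forced divergence with input 1
        dist = {start: w0}
        heap = [(w0, start)]
        while heap:
            d, s = heapq.heappop(heap)
            if s == 0:
                return d                          # remerged with the zero state
            if d > dist.get(s, float("inf")):
                continue
            for bit in (0, 1):
                ns, w = encoder_step(s, bit, taps, k)
                if d + w < dist.get(ns, float("inf")):
                    dist[ns] = d + w
                    heapq.heappush(heap, (d + w, ns))
        return None

    # Constraint-length-3 code with generators (7, 5) in octal; expect dfree = 5.
    print("free distance of (7,5):", free_distance((0o7, 0o5), k=3))
    ```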

  20. Two-dimensional models of threshold voltage and subthreshold current for symmetrical double-material double-gate strained Si MOSFETs

    NASA Astrophysics Data System (ADS)

    Yan-hui, Xin; Sheng, Yuan; Ming-tang, Liu; Hong-xia, Liu; He-cai, Yuan

    2016-03-01

    The two-dimensional models for symmetrical double-material double-gate (DM-DG) strained Si (s-Si) metal-oxide semiconductor field effect transistors (MOSFETs) are presented. The surface potential and the surface electric field expressions have been obtained by solving Poisson’s equation. The models of threshold voltage and subthreshold current are obtained based on the surface potential expression. The surface potential and the surface electric field are compared with those of single-material double-gate (SM-DG) MOSFETs. The effects of different device parameters on the threshold voltage and the subthreshold current are demonstrated. The analytical models give deep insight into device parameter design. The analytical results obtained from the proposed models show good agreement with the simulation results obtained using DESSIS. Project supported by the National Natural Science Foundation of China (Grant Nos. 61376099, 11235008, and 61205003).

  1. Period-doubling bifurcation and high-order resonances in RR Lyrae hydrodynamical models

    NASA Astrophysics Data System (ADS)

    Kolláth, Z.; Molnár, L.; Szabó, R.

    2011-06-01

    We investigated period doubling, a well-known phenomenon in dynamical systems, for the first time in RR Lyrae models. These studies provide theoretical background for the recent discovery of period doubling in some Blazhko RR Lyrae stars with the Kepler space telescope. Since period doubling has been observed only in Blazhko-modulated stars so far, the phenomenon can help in understanding the modulation as well. Utilizing the Florida-Budapest turbulent convective hydrodynamical code, we have identified the phenomenon in both radiative and convective models. A period-doubling cascade was also followed up to an eight-period solution, confirming that destabilization of the limit cycle is indeed the underlying phenomenon. Floquet stability roots were calculated to investigate the possible causes and occurrences of the phenomenon. A two-dimensional diagnostic diagram was constructed to illustrate the various resonances between the fundamental mode and the different overtones. Combining the two tools, we confirmed that the period-doubling instability is caused by a 9:2 resonance between the ninth overtone and the fundamental mode. Destabilization of the limit cycle by a resonance of a high-order mode is possible because the overtone is a strange mode. The resonance is found to be strong enough to shift the period of the overtone by up to 10 per cent. Our investigations suggest that a more complex interplay of radial (and presumably non-radial) modes could happen in RR Lyrae stars that might have connections with the Blazhko effect as well.

  2. Parity retransmission hybrid ARQ using rate 1/2 convolutional codes on a nonstationary channel

    NASA Technical Reports Server (NTRS)

    Lugand, Laurent R.; Costello, Daniel J., Jr.; Deng, Robert H.

    1989-01-01

    A parity retransmission hybrid automatic repeat request (ARQ) scheme is proposed which uses rate 1/2 convolutional codes and Viterbi decoding. A protocol is described which is capable of achieving higher throughputs than previously proposed parity retransmission schemes. The performance analysis is based on a two-state Markov model of a nonstationary channel. This model constitutes a first approximation to a nonstationary channel. The two-state channel model is used to analyze the throughput and undetected error probability of the protocol presented when the receiver has both an infinite and a finite buffer size. It is shown that the throughput improves as the channel becomes more bursty.
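
    A minimal sketch of the kind of two-state Markov (good/bad) burst channel used above as a first approximation to a nonstationary channel is given below. The transition probabilities and in-state crossover rates are illustrative assumptions, and the coding and ARQ layers are omitted.

```python
import numpy as np

def two_state_markov_channel(bits, p_gb, p_bg, e_good, e_bad, rng=None):
    """Flip bits according to a two-state (good/bad) Markov channel.

    p_gb : probability of moving good -> bad per symbol
    p_bg : probability of moving bad  -> good per symbol
    e_good, e_bad : crossover probabilities in the good and bad states
    """
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty_like(bits)
    state_bad = False
    for i, b in enumerate(bits):
        # state transition before transmitting the current symbol
        if state_bad:
            state_bad = rng.random() >= p_bg   # stay bad unless the channel recovers
        else:
            state_bad = rng.random() < p_gb
        e = e_bad if state_bad else e_good
        out[i] = b ^ (rng.random() < e)
    return out

# Illustrative use: long bursts (small p_bg) with a high in-burst error rate
rng = np.random.default_rng(0)
tx = rng.integers(0, 2, size=10_000, dtype=np.int64)
rx = two_state_markov_channel(tx, p_gb=0.01, p_bg=0.1, e_good=1e-3, e_bad=0.2, rng=rng)
print("raw bit error rate:", np.mean(tx != rx))
```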

  3. Family Stress and Adaptation to Crises: A Double ABCX Model of Family Behavior.

    ERIC Educational Resources Information Center

    McCubbin, Hamilton I.; Patterson, Joan M.

    Recent developments in family stress and coping research and a review of data and observations of families in a war-induced crisis situation led to an investigation of the relationship between a stressor and family outcomes. The study, based on the Double ABCX Model in which A (the stressor event) interacts with B (the family's crisis-meeting…

  4. Creating a Double-Spring Model to Teach Chromosome Movement during Mitosis & Meiosis

    ERIC Educational Resources Information Center

    Luo, Peigao

    2012-01-01

    The comprehension of chromosome movement during mitosis and meiosis is essential for understanding genetic transmission, but students often find this process difficult to grasp in a classroom setting. I propose a "double-spring model" that incorporates a physical demonstration and can be used as a teaching tool to help students understand this…

  5. Double Higgs production in the Two Higgs Doublet Model at the linear collider

    SciTech Connect

    Arhrib, Abdesslam; Benbrik, Rachid; Chiang, C.-W.

    2008-04-21

    We study double Higgs-strahlung production at the future Linear Collider in the framework of the Two Higgs Doublet Models through the following channels: e⁺e⁻ → φᵢφⱼZ, with φᵢ = h⁰, H⁰, A⁰. All these processes are sensitive to triple Higgs couplings; hence their observation provides information on the triple Higgs couplings that helps in reconstructing the scalar potential. We also discuss the double Higgs-strahlung e⁺e⁻ → h⁰h⁰Z in the decoupling limit, where h⁰ mimics the SM Higgs boson.

  6. Die and telescoping punch form convolutions in thin diaphragm

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.

  7. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to feed these estimates as preliminary estimates into various other numerical techniques, which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson, who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
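
    The sketch below is not the exact MLE or phase-coded-kernel estimator of the paper, but it illustrates the underlying idea of convolution-based circle-parameter estimation: for an assumed known radius, convolving a binary point image with an annulus kernel yields an accumulator whose peak estimates the circle centre. All sizes and noise levels are illustrative.

```python
import numpy as np

def ring_kernel(radius, width, size):
    """Binary annulus kernel of the given radius (pixels)."""
    c = size // 2
    y, x = np.mgrid[:size, :size]
    r = np.hypot(x - c, y - c)
    return (np.abs(r - radius) <= width).astype(float)

def estimate_center(points_img, radius):
    """Convolve the point image with a ring and take the peak as the center."""
    size = 2 * int(radius) + 5
    k = ring_kernel(radius, 1.0, size)
    shape = (points_img.shape[0] + size, points_img.shape[1] + size)
    # FFT-based linear convolution via zero padding
    acc = np.fft.irfft2(np.fft.rfft2(points_img, s=shape) *
                        np.fft.rfft2(k, s=shape), s=shape)
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    # undo the kernel-center offset to map the peak back to image coordinates
    return peak[0] - size // 2, peak[1] - size // 2

# Synthetic example: noisy points on a circle of radius 20 centered at (64, 80)
rng = np.random.default_rng(1)
img = np.zeros((128, 160))
theta = rng.uniform(0, 2 * np.pi, 200)
rows = np.round(64 + 20 * np.sin(theta) + rng.normal(0, 1, 200)).astype(int)
cols = np.round(80 + 20 * np.cos(theta) + rng.normal(0, 1, 200)).astype(int)
img[rows.clip(0, 127), cols.clip(0, 159)] = 1.0
print(estimate_center(img, radius=20))   # approximately (64, 80)
```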

  8. SCAN-based hybrid and double-hybrid density functionals from models without fitted parameters.

    PubMed

    Hui, Kerwin; Chai, Jeng-Da

    2016-01-28

    By incorporating the nonempirical strongly constrained and appropriately normed (SCAN) semilocal density functional [J. Sun, A. Ruzsinszky, and J. P. Perdew, Phys. Rev. Lett. 115, 036402 (2015)] in the underlying expression of four existing hybrid and double-hybrid models, we propose one hybrid (SCAN0) and three double-hybrid (SCAN0-DH, SCAN-QIDH, and SCAN0-2) density functionals, which are free from any fitted parameters. The SCAN-based double-hybrid functionals consistently outperform their parent SCAN semilocal functional for self-interaction problems and noncovalent interactions. In particular, SCAN0-2, which includes about 79% of Hartree-Fock exchange and 50% of second-order Møller-Plesset correlation, is shown to be reliably accurate for a very diverse range of applications, such as thermochemistry, kinetics, noncovalent interactions, and self-interaction problems. PMID:26827209

  9. Modelling and control of double-cone dielectric elastomer actuator

    NASA Astrophysics Data System (ADS)

    Branz, F.; Francesconi, A.

    2016-09-01

    Among various dielectric elastomer devices, cone actuators are of great interest for their multi-degree-of-freedom design. These devices combine the common advantages of dielectric elastomers (i.e. solid-state actuation, self-sensing capability, high conversion efficiency, light weight and low cost) with the possibility to actuate more than one degree of freedom in a single device. The potential applications of this feature in robotics are huge, making cone actuators very attractive. This work focuses on the rotational degrees of freedom to complement the existing literature and improve the understanding of this aspect. Simple tools are presented for the performance prediction of the device: finite element method simulations and interpolating relations have been used to assess the actuator steady-state behaviour in terms of torque and rotation as a function of geometric parameters. Results are interpolated by fit relations accounting for all the relevant parameters. The obtained data are validated through comparison with experimental results: steady-state torque and rotation are determined at a given high-voltage actuation. In addition, the transient response to a step input has been measured and, as a result, the voltage-to-torque and the voltage-to-rotation transfer functions are obtained. Experimental data are collected and used to validate the prediction capability of the transfer functions in terms of time response to a step input and frequency response. The developed static and dynamic models have been employed to implement a feedback compensator that controls the device motion; the simulated behaviour is compared to experimental data, resulting in a maximum prediction error of 7.5%.

  10. A diabatic state model for double proton transfer in hydrogen bonded complexes

    SciTech Connect

    McKenzie, Ross H.

    2014-09-14

    Four diabatic states are used to construct a simple model for double proton transfer in hydrogen bonded complexes. Key parameters in the model are the proton donor-acceptor separation R and the ratio, D₁/D₂, between the proton affinity of a donor with one and two protons. Depending on the values of these two parameters the model describes four qualitatively different ground state potential energy surfaces, having zero, one, two, or four saddle points. Only for the latter are there four stable tautomers. In the limit D₂ = D₁ the model reduces to two decoupled hydrogen bonds. As R decreases a transition can occur from a synchronous concerted to an asynchronous concerted to a sequential mechanism for double proton transfer.
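
    A minimal sketch of this kind of construction is given below: a 4×4 diabatic Hamiltonian over the states |LL>, |LR>, |RL>, |RR> (labelling which side each proton sits on) is assembled and diagonalized to obtain the lowest adiabatic surface. The quadratic diabatic energies and the single-proton-transfer coupling are hypothetical placeholders, not the parameterization of the paper.

```python
import numpy as np

def ground_surface(q1, q2, D1, D2, k, delta):
    """Lowest adiabatic energy of a 4x4 diabatic model for double proton transfer.

    Diagonal energies are simple quadratic wells in the two proton coordinates
    q1, q2; `delta` couples states that differ by the transfer of a single
    proton. All parameter values here are illustrative.
    """
    E = {
        "LL": 0.5 * k * ((q1 + 1) ** 2 + (q2 + 1) ** 2),
        "LR": 0.5 * k * ((q1 + 1) ** 2 + (q2 - 1) ** 2) + (D1 - D2),
        "RL": 0.5 * k * ((q1 - 1) ** 2 + (q2 + 1) ** 2) + (D1 - D2),
        "RR": 0.5 * k * ((q1 - 1) ** 2 + (q2 - 1) ** 2),
    }
    H = np.diag([E["LL"], E["LR"], E["RL"], E["RR"]]).astype(float)
    # single-proton-transfer couplings: LL<->LR, LL<->RL, LR<->RR, RL<->RR
    for i, j in [(0, 1), (0, 2), (1, 3), (2, 3)]:
        H[i, j] = H[j, i] = -delta
    return np.linalg.eigh(H)[0][0]

# Scan the ground-state surface on a grid of the two proton coordinates
grid = np.linspace(-1.5, 1.5, 61)
surface = np.array([[ground_surface(a, b, D1=1.0, D2=0.6, k=2.0, delta=0.3)
                     for b in grid] for a in grid])
```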

  11. A model of phase transitions in double-well Morse potential: Application to hydrogen bond

    NASA Astrophysics Data System (ADS)

    Goryainov, S. V.

    2012-11-01

    A model of phase transitions in double-well Morse potential is developed. Application of this model to the hydrogen bond is based on ab initio electron density calculations, which proved that the predominant contribution to the hydrogen bond energy originates from the interaction of proton with the electron shells of hydrogen-bonded atoms. This model uses a double-well Morse potential for proton. Analytical expressions for the hydrogen bond energy and the frequency of O-H stretching vibrations were obtained. Experimental data on the dependence of O-H vibration frequency on the bond length were successfully fitted with model-predicted dependences in classical and quantum mechanics approaches. Unlike empirical exponential function often used previously for dependence of O-H vibration frequency on the hydrogen bond length (Libowitzky, Mon. Chem., 1999, vol.130, 1047), the dependence reported here is theoretically substantiated.
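
    One simple way to realize a double-well Morse-type potential numerically is to superpose two opposed Morse branches, as in the hedged sketch below; the depth, range parameter, and well separation are illustrative and are not fitted to the O-H data discussed above.

```python
import numpy as np

def morse(x, D, a, x0):
    """Standard Morse potential with depth D, range parameter a, minimum at x0."""
    return D * (1.0 - np.exp(-a * (x - x0))) ** 2

def double_well_morse(x, D, a, d):
    """Symmetric double well built from two opposed Morse branches with minima
    near +/- d; the central barrier grows as the wells are pulled apart."""
    return morse(x, D, a, -d) + morse(-x, D, a, -d) - D

x = np.linspace(-1.0, 1.0, 401)      # proton coordinate, arbitrary units
V = double_well_morse(x, D=5.0, a=3.0, d=0.35)
barrier = V[np.argmin(np.abs(x))] - V.min()
print(f"barrier height: {barrier:.3f} (same units as D)")
```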

  12. Neutrinoless double beta decay in type I+II seesaw models

    NASA Astrophysics Data System (ADS)

    Borah, Debasish; Dasgupta, Arnab

    2015-11-01

    We study neutrinoless double beta decay in left-right symmetric extension of the standard model with type I and type II seesaw origin of neutrino masses. Due to the enhanced gauge symmetry as well as extended scalar sector, there are several new physics sources of neutrinoless double beta decay in this model. Ignoring the left-right gauge boson mixing and heavy-light neutrino mixing, we first compute the contributions to neutrinoless double beta decay for type I and type II dominant seesaw separately and compare with the standard light neutrino contributions. We then repeat the exercise by considering the presence of both type I and type II seesaw, having non-negligible contributions to light neutrino masses and show the difference in results from individual seesaw cases. Assuming the new gauge bosons and scalars to be around a TeV, we constrain different parameters of the model including both heavy and light neutrino masses from the requirement of keeping the new physics contribution to neutrinoless double beta decay amplitude below the upper limit set by the GERDA experiment and also satisfying bounds from lepton flavor violation, cosmology and colliders.

  13. Double Higgs production at LHC, see-saw type-II and Georgi-Machacek model

    SciTech Connect

    Godunov, S. I. Vysotsky, M. I. Zhemchugov, E. V.

    2015-03-15

    The double Higgs production in models with isospin-triplet scalars is studied. It is shown that in the see-saw type-II model, the mode with an intermediate heavy scalar, pp → H + X → 2h + X, may have a cross section comparable with that in the Standard Model. In the Georgi-Machacek model, this cross section could be much larger than in the Standard Model because the vacuum expectation value of the triplet can be large.

  14. Parallel double-plate capacitive proximity sensor modelling based on effective theory

    SciTech Connect

    Li, Nan Zhu, Haiye; Wang, Wenyu; Gong, Yu

    2014-02-15

    A semi-analytical model for a double-plate capacitive proximity sensor is presented according to the effective theory. Three physical models are established to derive the final equation of the sensor. Measured data are used to determine the coefficients. The final equation is verified by using measured data. The average relative error of the calculated and the measured sensor capacitance is less than 7.5%. The equation can be used to provide guidance to engineering design of the proximity sensors.

  15. Explicit drain current model of junctionless double-gate field-effect transistors

    NASA Astrophysics Data System (ADS)

    Yesayan, Ashkhen; Prégaldiny, Fabien; Sallese, Jean-Michel

    2013-11-01

    This paper presents an explicit drain current model for the junctionless double-gate metal-oxide-semiconductor field-effect transistor. Analytical relationships for the channel charge densities and for the drain current are derived as explicit functions of applied terminal voltages and structural parameters. The model is validated with 2D numerical simulations for a large range of channel thicknesses and is found to be very accurate for doping densities exceeding 10¹⁸ cm⁻³, which are actually used for such devices.

  16. Double-multiple streamtube model for studying vertical-axis wind turbines

    SciTech Connect

    Paraschivoiu, I.

    1988-08-01

    This work describes the present state-of-the-art in double-multiple streamtube method for modeling the Darrieus-type vertical-axis wind turbine (VAWT). Comparisons of the analytical results with the other predictions and available experimental data show a good agreement. This method, which incorporates dynamic-stall and secondary effects, can be used for generating a suitable aerodynamic-load model for structural design analysis of the Darrieus rotor. 32 references.

  17. Double-multiple streamtube model for studying vertical-axis wind turbines

    NASA Astrophysics Data System (ADS)

    Paraschivoiu, Ion

    1988-08-01

    This work describes the present state-of-the-art in double-multiple streamtube method for modeling the Darrieus-type vertical-axis wind turbine (VAWT). Comparisons of the analytical results with the other predictions and available experimental data show a good agreement. This method, which incorporates dynamic-stall and secondary effects, can be used for generating a suitable aerodynamic-load model for structural design analysis of the Darrieus rotor.

  18. Text-Attentional Convolutional Neural Network for Scene Text Detection.

    PubMed

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results. PMID:27093723

  19. Text-Attentional Convolutional Neural Network for Scene Text Detection

    NASA Astrophysics Data System (ADS)

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this work, we present a new system for scene text detection by proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called Contrast-Enhancement Maximally Stable Extremal Regions (CE-MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 dataset, with an F-measure of 0.82, substantially improving the state-of-the-art results.

  20. Evaluation of convolutional neural networks for visual recognition.

    PubMed

    Nebauer, C

    1998-01-01

    Convolutional neural networks provide an efficient method to constrain the complexity of feedforward neural networks by weight sharing and restriction to local connections. This network topology has been applied in particular to image classification when sophisticated preprocessing is to be avoided and raw images are to be classified directly. In this paper two variations of convolutional networks--the neocognitron and a modification of the neocognitron--are compared with classifiers based on fully connected feedforward layers (i.e., multilayer perceptron, nearest neighbor classifier, auto-encoding network) with respect to their visual recognition performance. Besides the original neocognitron, a modification of the neocognitron is proposed which combines neurons from the perceptron with the localized network structure of the neocognitron. Instead of training convolutional networks by time-consuming error backpropagation, in this work a modular procedure is applied whereby layers are trained sequentially from the input to the output layer in order to recognize features of increasing complexity. For a quantitative experimental comparison with standard classifiers two very different recognition tasks have been chosen: handwritten digit recognition and face recognition. In the first example on handwritten digit recognition the generalization of convolutional networks is compared to that of fully connected networks. In several experiments the influence of variations of position, size, and orientation of digits is determined and the relation between training sample size and validation error is observed. In the second example recognition of human faces is investigated under constrained and variable conditions with respect to face orientation and illumination, and the limitations of convolutional networks are discussed. PMID:18252491

  1. Modeling sorption of divalent metal cations on hydrous manganese oxide using the diffuse double layer model

    USGS Publications Warehouse

    Tonkin, J.W.; Balistrieri, L.S.; Murray, J.W.

    2004-01-01

    Manganese oxides are important scavengers of trace metals and other contaminants in the environment. The inclusion of Mn oxides in predictive models, however, has been difficult due to the lack of a comprehensive set of sorption reactions consistent with a given surface complexation model (SCM), and the discrepancies between published sorption data and predictions using the available models. The authors have compiled a set of surface complexation reactions for synthetic hydrous Mn oxide (HMO) using a two surface site model and the diffuse double layer SCM, which complements databases developed for hydrous Fe(III) oxide, goethite and crystalline Al oxide. This compilation encompasses a range of data observed in the literature for the complex HMO surface and provides an error envelope for predictions not well defined by fitting parameters for single or limited data sets. Data describing surface characteristics and cation sorption were compiled from the literature for the synthetic HMO phases birnessite, vernadite and δ-MnO2. A specific surface area of 746 m² g⁻¹ and a surface site density of 2.1 mmol g⁻¹ were determined from crystallographic data and considered fixed parameters in the model. Potentiometric titration data sets were adjusted to a pHIEP value of 2.2. Two site types (≡XOH and ≡YOH) were used. The fraction of total sites attributed to ≡XOH (α) and pKa2 were optimized for each of 7 published potentiometric titration data sets using the computer program FITEQL3.2. pKa2 values of 2.35 ± 0.077 (≡XOH) and 6.06 ± 0.040 (≡YOH) were determined at the 95% confidence level. The calculated average α value was 0.64, with high and low values of 1.0 and 0.24, respectively. The pKa2 and α values and published cation sorption data were used subsequently to determine equilibrium surface complexation constants for Ba2+, Ca2+, Cd2+, Co2+, Cu2+, Mg2+, Mn2+, Ni2+, Pb2+, Sr2+ and Zn2+. In addition, average model parameters were used to predict additional
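
    The full two-site FITEQL optimization described above is beyond a short sketch, but the diffuse-double-layer part of the model reduces to the Gouy-Chapman (Grahame) charge-potential relation, illustrated below for a symmetric 1:1 electrolyte; the surface charge and ionic strength values are illustrative, not fitted values from the compilation.

```python
import numpy as np
from scipy.optimize import brentq

# Physical constants (SI)
F = 96485.33          # Faraday constant, C/mol
R = 8.314             # gas constant, J/(mol K)
EPS0 = 8.854e-12      # vacuum permittivity, F/m
EPS_W = 78.5          # relative permittivity of water at 25 C
T = 298.15            # temperature, K

def diffuse_layer_charge(psi, ionic_strength):
    """Gouy-Chapman (Grahame) relation for a symmetric 1:1 electrolyte:
    surface charge density (C/m^2) balanced by a diffuse layer at potential
    psi (V), with ionic strength in mol/L."""
    c = ionic_strength * 1000.0   # mol/m^3
    return np.sqrt(8 * EPS0 * EPS_W * R * T * c) * np.sinh(F * psi / (2 * R * T))

# For a given surface charge, invert numerically to get the surface potential
sigma = 0.05                      # C/m^2, illustrative
psi = brentq(lambda p: diffuse_layer_charge(p, 0.01) - sigma, -0.5, 0.5)
print(f"diffuse-layer potential: {psi * 1000:.1f} mV")
```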

  2. A simulation study of the performance of the NASA (2,1,6) convolutional code on RFI/burst channels

    NASA Technical Reports Server (NTRS)

    Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In an earlier report, the LINKABIT Corporation studied the performance of the (2,1,6) convolutional code on the radio frequency interference (RFI)/burst channel using analytical methods. Using an R₀ analysis, the report concluded that channel interleaving was essential to achieving reliable performance. In this report, Monte Carlo simulation techniques are used to study the performance of the convolutional code on the RFI/burst channel in more depth. The basic system model under consideration is shown. The convolutional code is the NASA standard code with generators g₁ = 1 + D² + D³ + D⁵ + D⁶ and g₂ = 1 + D + D² + D³ + D⁶ and free distance d_free = 10. The channel interleaver is of the convolutional or periodic type. The binary output of the channel interleaver is transmitted across the channel using binary phase shift keying (BPSK) modulation. The transmitted symbols are corrupted by an RFI/burst channel consisting of a combination of additive white Gaussian noise (AWGN) and RFI pulses. At the receiver, a soft-decision Viterbi decoder with no quantization and variable truncation length is used to decode the deinterleaved sequence.
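
    A minimal sketch of the rate-1/2 encoder with the generators quoted above is given below, together with the BPSK mapping; the channel interleaver, the RFI/burst channel, and the Viterbi decoder are omitted, and the tail-bit termination is an assumption for illustration.

```python
import numpy as np

# Generator taps for the (2,1,6) code quoted above, as powers of D
G1 = [0, 2, 3, 5, 6]   # g1(D) = 1 + D^2 + D^3 + D^5 + D^6
G2 = [0, 1, 2, 3, 6]   # g2(D) = 1 + D + D^2 + D^3 + D^6

def conv_encode(bits, taps1=G1, taps2=G2, memory=6):
    """Rate-1/2 feedforward convolutional encoder (zero-terminated)."""
    state = [0] * memory                 # shift register, most recent bit first
    out = []
    for b in list(bits) + [0] * memory:  # append tail bits to flush the register
        reg = [b] + state                # reg[k] is the coefficient of D^k
        out.append(sum(reg[t] for t in taps1) % 2)
        out.append(sum(reg[t] for t in taps2) % 2)
        state = reg[:memory]
    return np.array(out, dtype=int)

def bpsk(code_bits):
    """Map {0,1} -> {+1,-1} antipodal symbols."""
    return 1 - 2 * np.asarray(code_bits)

msg = np.random.default_rng(0).integers(0, 2, size=20)
coded = conv_encode(msg)
symbols = bpsk(coded)
print(len(msg), "info bits ->", len(coded), "code bits")
```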

  3. The Dynamics of a Double-Layer Along an Auroral Field Line: An Improved Model

    NASA Astrophysics Data System (ADS)

    Barakat, A. R.

    2004-12-01

    The auroral field lines represent an important channel through which the ionosphere and the magnetosphere exchange mass, momentum, and energy. When the cold, dense ionospheric plasma interacts with sufficiently warm magnetospheric plasma along the field lines (with upward currents), double layers form with large parallel potential drops. The potential drops accelerate ionospheric ions, which in turn cause ion-beam-driven instabilities. The resulting wave-particle interactions (WPI) further heat the plasma and, hence, influence the behavior of the double layer. Understanding the coupling between these microscale and macroscale processes is crucial in quantifying the ionosphere-magnetosphere (I-M) coupling. Previous theoretical studies addressed the different facets of the problem separately. We developed a particle-in-cell (PIC) model that simulates the behavior of the double layer along auroral field lines, with special emphasis on the effect of the current along the field lines. Moreover, our model includes the effects of ionospheric collision processes, gravity, the magnetic mirror force, electrostatic fields, as well as wave instabilities, propagation, and wave-particle interactions. The resulting self-consistent electrodynamics of the plasma in an auroral flux tube with an upward current is presented with emphasis on the formation and evolution of the double layer. In particular, we address questions such as: (1) what is the I-V relationship along the auroral field line, and (2) how is the potential drop distributed along the field lines. These and other results are presented.

  4. The Dynamics of a Double-Layer Along an Auroral Field Line: A Unified Model

    NASA Astrophysics Data System (ADS)

    Barakat, A.; Singh, N.

    The auroral field lines represent an important channel through which the ionosphere and the magnetosphere exchange mass, momentum, and energy. When the cold, dense ionospheric plasma interacts with sufficiently warm magnetospheric plasma along the field lines (with upward currents), double layers form with large parallel potential drops. The potential drops accelerate ionospheric ions, which in turn cause ion-beam-driven instabilities. The resulting wave-particle interactions (WPI) further heat the plasma, and hence, influence the behavior of the double layer. Understanding the coupling between these microscale and macroscale processes is crucial in quantifying the ionosphere-magnetosphere (I-M) coupling. Previous theoretical studies addressed the different facets of the problem separately. They predicted, in agreement with observations, the formation of the double layer, ion beams, and ion heating due to WPI. We developed a comprehensive model for this problem that is based on a macroscopic PIC approach. Our model properly accounts for the transport phenomena, as well as the small-scale waves. For example, it includes the effects of ionospheric collision processes, gravity, magnetic mirror force, electrostatic fields, as well as wave instabilities, propagation, and wave-particle interactions. The resulting self-consistent electrodynamics of the plasma in an auroral flux tube with an upward current is presented with emphasis on the formation and evolution of the double layer.

  5. Theoretical modeling of the dynamics of a semiconductor laser subject to double-reflector optical feedback

    NASA Astrophysics Data System (ADS)

    Bakry, A.; Abdulrhmann, S.; Ahmed, M.

    2016-06-01

    We theoretically model the dynamics of semiconductor lasers subject to double-reflector feedback. The proposed model is a new modification of the time-delay rate equations of semiconductor lasers under optical feedback that accounts for this type of double-reflector feedback. We examine the influence of adding the second reflector on the dynamical states induced by the single-reflector feedback: periodic oscillations, period doubling, and chaos. Regimes of both short and long external cavities are considered. The present analyses are done using the bifurcation diagram, temporal trajectory, phase portrait, and fast Fourier transform of the laser intensity. We show that adding the second reflector pulls the periodic and period-doubling oscillations and the chaos induced by the first reflector toward a route to continuous-wave operation. During this operation, the periodic-oscillation frequency increases with strengthening of the optical feedback. We show that the chaos induced by the double-reflector feedback is more irregular than that induced by the single-reflector feedback. The power spectrum of this chaotic state does not reflect information on the geometry of the optical system, which then has potential for use in chaotic (secure) optical data encryption.

  6. Toward a nonlinearity model for a heterodyne interferometer: not based on double-frequency mixing.

    PubMed

    Hu, Pengcheng; Bai, Yang; Zhao, Jinlong; Wu, Guolong; Tan, Jiubin

    2015-10-01

    Residual periodic errors detected in picometer-level heterodyne interferometers cannot be explained by the model based on double-frequency mixing. A new model is established and proposed in this paper for the analysis of these errors. Multi-order Doppler-frequency-shifted ghost beams from the measurement beam itself are involved in the final interference, leading to multi-order periodic errors, whether or not frequency mixing originating from the two incident beams occurs. For model validation, a novel setup free from double-frequency mixing is constructed. The analyzed measurement signal shows that phase mixing of the measurement beam itself can lead to multi-order periodic errors ranging from tens of picometers to one nanometer. PMID:26480108

  7. Predictive double-layer modeling of metal sorption in mine-drainage systems

    SciTech Connect

    Smith, K.S.; Plumlee, G.S.; Ranville, J.F.; Macalady, D.L.

    1996-10-01

    Previous comparison of predictive double-layer modeling and empirically derived metal-partitioning data has validated the use of the double-layer model to predict metal sorption reactions in iron-rich mine-drainage systems. The double-layer model subsequently has been used to model data collected from several mine-drainage sites in Colorado with diverse geochemistry and geology. This work demonstrates that metal partitioning between dissolved and sediment phases can be predictively modeled simply by knowing the water chemistry and the amount of suspended iron-rich particulates present in the system. Sorption on such iron-rich suspended sediments appears to control metal and arsenic partitioning between dissolved and sediment phases, with sorption on bed sediment playing a limited role. At pH > 5, Pb and As are largely sorbed by iron-rich suspended sediments and Cu is partially sorbed; Zn, Cd, and Ni usually remain dissolved throughout the pH range of 3 to 8.

  8. Application of the Convolution Formalism to the Ocean Tide Potential: Results from the Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Desai, S. D.; Yuan, D. -N.

    2006-01-01

    A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
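
    The sketch below illustrates the convolution idea in its simplest discrete form: the response is modeled as a short set of lag weights applied to past, present, and future samples of the tide-generating potential, so a smooth frequency-dependent admittance is implied without listing every tidal constituent explicitly. Both the potential series and the weights are synthetic placeholders, not the GRACE-derived quantities discussed above.

```python
import numpy as np

# Synthetic stand-in for a sampled tide-generating potential (hourly, 30 days)
t = np.arange(0, 30 * 24, 1.0)
potential = (np.cos(2 * np.pi * t / 12.42)            # M2-like term
             + 0.46 * np.cos(2 * np.pi * t / 12.00))  # S2-like term

# Admittance weights for lags of -2 .. +2 samples (future .. past); values
# here are purely illustrative.
weights = np.array([0.05, 0.20, 0.55, 0.20, 0.05])
lags = np.arange(len(weights)) - len(weights) // 2

# Weighted sum of shifted copies of the potential (wrap-around at the series
# ends is ignored in this sketch).
response = sum(w * np.roll(potential, -lag) for w, lag in zip(weights, lags))
```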

  9. A comparative study of the hypoplasticity and the fabric-dependent dilatant double shearing models for granular materials

    SciTech Connect

    Zhu, H.; Mehrabadi, M.; Massoudi, M.

    2007-04-25

    In this paper, we consider the mechanical response of granular materials and compare the predictions of a hypoplastic model with those of a recently developed dilatant double shearing model which includes the effects of fabric. We implement the constitutive relations of the dilatant double shearing model and the hypoplastic model in the finite element program ABAQUS/Explicit and compare their predictions in triaxial compression and cyclic shear loading tests. Although the origins and the constitutive relations of the double shearing model and the hypoplastic model are quite different, we find that both models are capable of capturing typical behaviours of granular materials. This is significant because, while hypoplasticity is phenomenological in nature, the double shearing model is based on a kinematic hypothesis and microstructural considerations, and can easily be calibrated through standard tests.

  10. Modeling the double-trough structure observed in broad absorption line QSOs using radiative acceleration

    NASA Technical Reports Server (NTRS)

    Arav, Nahum; Begelman, Mitchell C.

    1994-01-01

    We present a model explaining the double trough, separated by Δv ≈ 5900 km/s, observed in the C IV λ1549 broad absorption line (BAL) in a number of BALQSOs. The model is based on radiative acceleration of the BAL outflow, and the troughs result from modulations in the radiative force. Specifically, where the strong flux from the Lyman-alpha λ1215 broad emission line is redshifted to the frequency of the N V λ1240 resonance line, in the rest frame of the accelerating N V ions, the acceleration increases and the absorption is reduced. At higher velocities the Lyman-alpha emission is redshifted out of the resonance and the N V ions experience a declining flux which causes the second absorption trough. A strongly nonlinear relationship between changes in the flux and the optical depth in the lines is shown to amplify the expected effect. This model produces double troughs for which the shallowest absorption between the two troughs occurs at v ≈ 5900 km/s. Indeed, we find that a substantial number of the observed objects show this feature. A prediction of the model is that all BALQSOs that show a double-trough signature will be found to have an intrinsic sharp drop in their spectra shortward of approximately 1200 Å.

  11. Introducing electron capture into the unitary-convolution-approximation energy-loss theory at low velocities

    SciTech Connect

    Schiwietz, G.; Grande, P. L.

    2011-11-15

    Recent developments in the theoretical treatment of electronic energy losses of bare and screened ions in gases are presented. Specifically, the unitary-convolution-approximation (UCA) stopping-power model has proven its strengths for the determination of nonequilibrium effects for light as well as heavy projectiles at intermediate to high projectile velocities. The focus of this contribution will be on the UCA and its extension to specific projectile energies far below 100 keV/u, by considering electron-capture contributions at charge-equilibrium conditions.

  12. Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding

    PubMed Central

    Johnson, Rie; Zhang, Tong

    2016-01-01

    This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks. PMID:27087766

  13. Performance of DPSK with convolutional encoding on time-varying fading channels

    NASA Technical Reports Server (NTRS)

    Mui, S. Y.; Modestino, J. W.

    1977-01-01

    The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.

  14. Relation of the double-ITCZ bias to the atmospheric energy budget in climate models

    NASA Astrophysics Data System (ADS)

    Adam, Ori; Schneider, Tapio; Brient, Florent; Bischoff, Tobias

    2016-07-01

    We examine how tropical zonal mean precipitation biases in current climate models relate to the atmospheric energy budget. Both hemispherically symmetric and antisymmetric tropical precipitation biases contribute to the well-known double-Intertropical Convergence Zone (ITCZ) bias; however, they have distinct signatures in the energy budget. Hemispherically symmetric biases in tropical precipitation are proportional to biases in the equatorial net energy input; hemispherically antisymmetric biases are proportional to the atmospheric energy transport across the equator. Both relations can be understood within the framework of recently developed theories. Atmospheric net energy input biases in the deep tropics shape both the symmetric and antisymmetric components of the double-ITCZ bias. Potential causes of these energetic biases and their variation across climate models are discussed.
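
    The decomposition underlying these two relations is straightforward to compute; the sketch below splits a synthetic zonal-mean tropical precipitation bias into its hemispherically symmetric and antisymmetric parts about the equator. The bias profile is an illustrative placeholder, not model output.

```python
import numpy as np

lat = np.linspace(-30, 30, 121)     # tropical latitudes, degrees
# Synthetic zonal-mean precipitation bias (mm/day): a spurious southern ITCZ
bias = (1.5 * np.exp(-((lat + 7.0) / 5.0) ** 2)
        - 0.5 * np.exp(-((lat - 5.0) / 6.0) ** 2))

# Mirror about the equator (valid because the latitude grid is symmetric)
mirrored = bias[::-1]

sym = 0.5 * (bias + mirrored)    # part the abstract relates to equatorial net energy input
asym = 0.5 * (bias - mirrored)   # part related to cross-equatorial atmospheric energy transport

# A simple scalar asymmetry index: mean antisymmetric bias over the northern tropics
print("asymmetry index (mm/day):", asym[lat > 0].mean())
```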

  15. Geodesic acoustic mode in anisotropic plasmas using double adiabatic model and gyro-kinetic equation

    SciTech Connect

    Ren, Haijun; Cao, Jintao

    2014-12-15

    Geodesic acoustic mode in anisotropic tokamak plasmas is theoretically analyzed by using double adiabatic model and gyro-kinetic equation. The bi-Maxwellian distribution function for guiding-center ions is assumed to obtain a self-consistent form, yielding pressures satisfying the magnetohydrodynamic (MHD) anisotropic equilibrium condition. The double adiabatic model gives the dispersion relation of geodesic acoustic mode (GAM), which agrees well with the one derived from gyro-kinetic equation. The GAM frequency increases with the ratio of pressures, p⊥/p∥, and the Landau damping rate is dramatically decreased by p⊥/p∥. MHD result shows a low-frequency zonal flow existing for all p⊥/p∥, while according to the kinetic dispersion relation, no low-frequency branch exists for p⊥/p∥ ≳ 2.

  16. Role of Double-Porosity Dual-Permeability Models for Multi-Resonance Geomechanical Systems

    SciTech Connect

    Berryman, J G

    2005-05-18

    It is known that Biot's equations of poroelasticity (Biot 1956; 1962) follow from a scale-up of the microscale equations of elasticity coupled to the Navier-Stokes equations for fluid flow (Burridge and Keller, 1981). Laboratory measurements by Plona (1980) have shown that Biot's equations indeed hold for simple systems (Berryman, 1980), but heterogeneous systems can have quite different behavior (Berryman, 1988). So the question arises whether there is one level--or perhaps many levels--of scale-up needed to arrive at equations valid for the reservoir scale? And if so, do these equations take the form of Biot's equations or some other form? We will discuss these issues and show that the double-porosity dual-permeability equations (Berryman and Wang, 1995; Berryman and Pride, 2002; Pride and Berryman, 2003a,b; Pride et al., 2004) play a special role in the scale-up to equations describing multi-resonance reservoir behavior, for fluid pumping and geomechanics, as well as seismic wave propagation. The reason for the special significance of double-porosity models is that a multi-resonance system can never be adequately modeled using a single resonance model, but can often be modeled with reasonable accuracy using a two-resonance model. Although ideally one would prefer to model multi-resonance systems using the correct numbers, locations, widths, and amplitudes of the resonances, data are often inadequate to resolve all these pertinent model parameters in this complex inversion task. When this is so, the double-porosity model is most useful as it permits us to capture the highest and lowest detectable resonances of the system and then to interpolate through the middle range of frequencies.

  17. Experiments and Modeling of Boric Acid Permeation through Double-Skinned Forward Osmosis Membranes.

    PubMed

    Luo, Lin; Zhou, Zhengzhong; Chung, Tai-Shung; Weber, Martin; Staudt, Claudia; Maletzko, Christian

    2016-07-19

    Boron removal is one of the great challenges in modern wastewater treatment, owing to the unique small size and fast diffusion rate of neutral boric acid molecules. As forward osmosis (FO) membranes with a single selective layer are insufficient to reject boron, double-skinned FO membranes with boron rejection up to 83.9% were specially designed for boron permeation studies. The superior boron rejection properties of double-skinned FO membranes were demonstrated by theoretical calculations, and verified by experiments. The double-skinned FO membrane was fabricated using a sulfonated polyphenylenesulfone (sPPSU) polymer as the hydrophilic substrate and polyamide as the selective layer material via interfacial polymerization on top and bottom surfaces. A strong agreement between experimental data and modeling results validates the membrane design and confirms the success of model prediction. The effects of key parameters on boron rejection, such as boron permeability of both selective layers and structure parameter, were also investigated in-depth with the mathematical modeling. This study may provide insights not only for boron removal from wastewater, but also open up the design of next generation FO membranes to eliminate low-rejection molecules in wider applications. PMID:27280490

  18. [Verification of the double dissociation model of shyness using the implicit association test].

    PubMed

    Fujii, Tsutomu; Aikawa, Atsushi

    2013-12-01

    The "double dissociation model" of shyness proposed by Asendorpf, Banse, and Mtücke (2002) was demonstrated in Japan by Aikawa and Fujii (2011). However, the generalizability of the double dissociation model of shyness was uncertain. The present study examined whether the results reported in Aikawa and Fujii (2011) would be replicated. In Study 1, college students (n = 91) completed explicit self-ratings of shyness and other personality scales. In Study 2, forty-eight participants completed IAT (Implicit Association Test) for shyness, and their friends (n = 141) rated those participants on various personality scales. The results revealed that only the explicit self-concept ratings predicted other-rated low praise-seeking behavior, sociable behavior and high rejection-avoidance behavior (controlled shy behavior). Only the implicit self-concept measured by the shyness IAT predicted other-rated high interpersonal tension (spontaneous shy behavior). The results of this study are similar to the findings of the previous research, which supports generalizability of the double dissociation model of shyness. PMID:24505980

  19. Anomalous transport in discrete arcs and simulation of double layers in a model auroral circuit

    NASA Technical Reports Server (NTRS)

    Smith, Robert A.

    1987-01-01

    The evolution and long-time stability of a double layer in a discrete auroral arc requires that the parallel current in the arc, which may be considered uniform at the source, be diverted within the arc to charge the flanks of the U-shaped double-layer potential structure. A simple model is presented in which this current redistribution is effected by anomalous transport based on electrostatic lower hybrid waves driven by the flank structure itself. This process provides the limiting constraint on the double-layer potential. The flank charging may be represented as that of a nonlinear transmission line. A simplified model circuit, in which the transmission line is represented by a nonlinear impedance in parallel with a variable resistor, is incorporated in a 1-d simulation model to give the current density at the DL boundaries. Results are presented for the scaling of the DL potential as a function of the width of the arc and the saturation efficiency of the lower hybrid instability mechanism.

  20. Anomalous transport in discrete arcs and simulation of double layers in a model auroral circuit

    NASA Technical Reports Server (NTRS)

    Smith, Robert A.

    1987-01-01

    The evolution and long-time stability of a double layer (DL) in a discrete auroral arc requires that the parallel current in the arc, which may be considered uniform at the source, be diverted within the arc to charge the flanks of the U-shaped double layer potential structure. A simple model is presented in which this current redistribution is effected by anomalous transport based on electrostatic lower hybrid waves driven by the flank structure itself. This process provides the limiting constraint on the double layer potential. The flank charging may be represented as that of a nonlinear transmission line. A simplified model circuit, in which the transmission line is represented by a nonlinear impedance in parallel with a variable resistor, is incorporated in a one-dimensional simulation model to give the current density at the DL boundaries. Results are presented for the scaling of the DL potential as a function of the width of the arc and the saturation efficiency of the lower hybrid instability mechanism.

  1. Dystrophin and Dysferlin Double Mutant Mice: A Novel Model For Rhabdomyosarcoma

    PubMed Central

    Hosur, Vishnu; Kavirayani, Anoop; Riefler, Jennifer; Carney, Lisa M.B.; Lyons, Bonnie; Gott, Bruce; Cox, Gregory A.; Shultz, Leonard D.

    2012-01-01

    While researchers have yet to establish a link between muscular dystrophy (MD) and sarcomas in human patients, the literature suggests that the MD genes dystrophin and dysferlin act as tumor suppressor genes in mouse models of MD. For instance, dystrophin-deficient mdx and dysferlin-deficient A/J mice, models of human Duchenne Muscular Dystrophy and Limb Girdle Muscular Dystrophy type 2B, respectively, develop mixed sarcomas with variable penetrance and latency. To further establish the correlation between MD and sarcoma development, and to test whether a combined deletion of dystrophin and dysferlin exacerbates MD and augments the incidence of sarcomas, we generated dystrophin and dysferlin double mutant mice (STOCK-Dysfprmd Dmdmdx-5Cv). Not surprisingly, the double mutant mice develop severe MD symptoms; moreover, they develop rhabdomyosarcoma at an average age of 12 months, with an incidence of > 90%. Histological and immunohistochemical analyses using a panel of antibodies against skeletal muscle cell proteins, electron microscopy, cytogenetics, and molecular analysis reveal that the double mutant mice develop rhabdomyosarcoma. The present finding bolsters the correlation between MD and sarcomas, and provides a model not only to examine the cellular origins but also to identify the mechanisms and signal transduction pathways triggering the development of RMS. PMID:22682622

  2. The role of convective model choice in calculating the climate impact of doubling CO2

    NASA Technical Reports Server (NTRS)

    Lindzen, R. S.; Hou, A. Y.; Farrell, B. F.

    1982-01-01

    The role of the parameterization of vertical convection in calculating the climate impact of doubling CO2 is assessed using both one-dimensional radiative-convective vertical models and the latitude-dependent Hadley-baroclinic model of Lindzen and Farrell (1980). Both the conventional 6.5 K/km and the moist-adiabat adjustments are compared with a physically based, cumulus-type parameterization. The model with parameterized cumulus convection has much less sensitivity than the 6.5 K/km adjustment model at low latitudes, a result that can to some extent be imitated by the moist-adiabat adjustment model. However, when averaged over the globe, the use of the cumulus-type parameterization in a climate model reduces sensitivity by only approximately 34% relative to models using 6.5 K/km convective adjustment. Interestingly, the use of the cumulus-type parameterization appears to eliminate the possibility of a runaway greenhouse.

  3. Double-blind comparison of survival analysis models using a bespoke web system.

    PubMed

    Taktak, A F G; Setzkorn, C; Damato, B E

    2006-01-01

    The aim of this study was to carry out a comparison of different linear and non-linear models from different centres on a common dataset in a double-blind manner to eliminate bias. The dataset was shared over the Internet using a secure bespoke environment called geoconda. Models evaluated included: (1) the Cox model, (2) the Log Normal model, (3) the Partial Logistic Spline, (4) the Partial Logistic Artificial Neural Network and (5) Radial Basis Function Networks. Graphical analysis of the various models against the Kaplan-Meier values was carried out in three survival groups in the test set, classified according to the TNM staging system. The discrimination value for each model was determined using the area under the ROC curve. Results showed that the Cox model tended towards optimism whereas the partial logistic neural networks showed slight pessimism. PMID:17945716

  4. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterpart, that is, a weighted nearest-neighbor, moving average with zero phase shifting, convoluted integer (universal number) weighting coefficients.
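
    A minimal sketch of how such two-dimensional convolute integers can be derived is given below: a bivariate polynomial is least-squares fitted over a square window, and the weights that reproduce the fitted centre value form a zero-phase smoothing kernel. The window size and polynomial order are illustrative choices, not values from the paper.

```python
import numpy as np

def sg2d_kernel(window=5, order=2):
    """2-D Savitzky-Golay smoothing kernel: least-squares fit of a bivariate
    polynomial of total degree <= order over a window x window neighbourhood,
    evaluated at the centre point."""
    half = window // 2
    offsets = [(i, j) for i in range(-half, half + 1)
                      for j in range(-half, half + 1)]
    terms = [(p, q) for p in range(order + 1)
                    for q in range(order + 1 - p)]      # x^p * y^q
    A = np.array([[x ** p * y ** q for (p, q) in terms] for (x, y) in offsets],
                 dtype=float)
    # The pseudo-inverse row for the constant term gives the smoothing weights
    C = np.linalg.pinv(A)
    return C[terms.index((0, 0))].reshape(window, window)

def smooth2d(img, kernel):
    """Apply the kernel as a zero-phase moving-window weighted average."""
    half = kernel.shape[0] // 2
    padded = np.pad(img, half, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(kernel * padded[i:i + kernel.shape[0],
                                               j:j + kernel.shape[1]])
    return out

k = sg2d_kernel(window=5, order=2)
noisy = np.random.default_rng(0).normal(0, 1, (64, 64)) + 5.0
print("kernel weights sum to", k.sum())    # ~1 for a smoothing kernel
smoothed = smooth2d(noisy, k)
```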

  5. a Convolutional Network for Semantic Facade Segmentation and Interpretation

    NASA Astrophysics Data System (ADS)

    Schmitz, Matthias; Mayer, Helmut

    2016-06-01

    In this paper we present an approach for semantic interpretation of facade images based on a Convolutional Network. Our network processes the input images in a fully convolutional way and generates pixel-wise predictions. We show that there is no need for large datasets to train the network when transfer learning is employed, i. e., a part of an already existing network is used and fine-tuned, and when the available data is augmented by using deformed patches of the images for training. The network is trained end-to-end with patches of the images and each patch is augmented independently. To undo the downsampling for the classification, we add deconvolutional layers to the network. Outputs of different layers of the network are combined to achieve more precise pixel-wise predictions. We demonstrate the potential of our network based on results for the eTRIMS (Korč and Förstner, 2009) dataset reduced to facades.

  6. Study on Expansion of Convolutional Compactors over Galois Field

    NASA Astrophysics Data System (ADS)

    Arai, Masayuki; Fukumoto, Satoshi; Iwasaki, Kazuhiko

    Convolutional compactors offer a promising technique of compacting test responses. In this study we expand the architecture of convolutional compactor onto a Galois field in order to improve compaction ratio as well as reduce X-masking probability, namely, the probability that an error is masked by unknown values. While each scan chain is independently connected by EOR gates in the conventional arrangement, the proposed scheme treats q signals as an element over GF(2q), and the connections are configured on the same field. We show the arrangement of the proposed compactors and the equivalent expression over GF(2). We then evaluate the effectiveness of the proposed expansion in terms of X-masking probability by simulations with uniform distribution of X-values, as well as reduction of hardware overheads. Furthermore, we evaluate a multi-weight arrangement of the proposed compactors for non-uniform X distributions.

  7. Image Super-Resolution Using Deep Convolutional Networks.

    PubMed

    Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou

    2016-02-01

    We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. PMID:26761735
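
    A minimal PyTorch sketch of the three-layer structure described above (patch extraction, non-linear mapping, reconstruction) is shown below; the 9-1-5 kernel sizes and 64/32 filter counts follow the commonly cited configuration, while training, data handling, and the bicubic pre-upscaling step are omitted.

```python
import torch
import torch.nn as nn

class SRCNNSketch(nn.Module):
    """Three-layer super-resolution CNN operating on a bicubic-upscaled input:
    patch extraction -> non-linear mapping -> reconstruction."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.net(x)

# The low-resolution image is assumed to be bicubic-upscaled to the target size
lr_upscaled = torch.rand(1, 1, 33, 33)
sr = SRCNNSketch()(lr_upscaled)
print(sr.shape)   # torch.Size([1, 1, 33, 33])
```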

  8. Two-parameter double-oscillator model of Mathews-Lakshmanan type: Series solutions and supersymmetric partners

    SciTech Connect

    Schulze-Halberg, Axel E-mail: xbataxel@gmail.com; Wang, Jie

    2015-07-15

    We obtain series solutions, the discrete spectrum, and supersymmetric partners for a quantum double-oscillator system. Its potential features a superposition of the one-parameter Mathews-Lakshmanan interaction and a one-parameter harmonic or inverse harmonic oscillator contribution. Furthermore, our results are transferred to a generalized Pöschl-Teller model that is isospectral to the double-oscillator system.

  9. Face Detection Using GPU-Based Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Nasse, Fabian; Thurau, Christian; Fink, Gernot A.

    In this paper, we consider the problem of face detection under pose variations. Unlike other contributions, this work focuses on an efficient implementation that utilizes the computational power of modern graphics cards. The proposed system consists of a parallelized implementation of convolutional neural networks (CNNs), with a special emphasis on also parallelizing the detection process. Experimental validation in a smart conference room with four active ceiling-mounted cameras shows a dramatic speed gain under real-life conditions.

  10. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that differs from, and is simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum-weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
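
    For illustration, the sketch below computes the syndrome of a rate-1/2 convolutional code via polynomial arithmetic over GF(2), which is the quantity such a decoder operates on. The generator and message polynomials are arbitrary examples, and the Viterbi-like search for the minimum-weight error vector E(D) is omitted.

      # Minimal sketch: syndrome of a binary (2,1) convolutional code. Polynomials
      # are Python ints whose bits are coefficients (bit i = coefficient of D^i).
      # S(D) = r1(D)*g2(D) + r2(D)*g1(D) (mod 2) is zero exactly for valid codewords.
      def gf2_mul(a, b):
          res = 0
          while b:
              if b & 1:
                  res ^= a
              a <<= 1
              b >>= 1
          return res

      def syndrome(r1, r2, g1, g2):
          return gf2_mul(r1, g2) ^ gf2_mul(r2, g1)

      g1, g2 = 0b111, 0b101                     # example generators: 1+D+D^2, 1+D^2
      m = 0b1011                                # example message polynomial
      c1, c2 = gf2_mul(m, g1), gf2_mul(m, g2)   # encoder outputs
      print(syndrome(c1, c2, g1, g2))           # 0: error-free
      print(syndrome(c1 ^ 0b100, c2, g1, g2))   # nonzero: a channel error occurred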

  11. Fine-grained representation learning in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Luo, Chang; Wang, Jie

    2016-03-01

    Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in the preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we identify the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law can guide CAEs to extract better fine-grained features and perform better in multiclass classification tasks. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representations in other convolutional neural networks.
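
    The NumPy sketch below illustrates the granularity trade-off discussed above by max-pooling the same feature map with a small and a larger pooling region and comparing the sensitivity of each pooled map to a one-pixel shift. The map and region sizes are arbitrary choices for illustration.

      # Minimal sketch: fine vs. coarse max-pooling and sensitivity to a shift.
      import numpy as np

      def max_pool(fmap, size):
          h, w = fmap.shape
          h2, w2 = h // size, w // size
          return fmap[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

      fmap = np.random.rand(8, 8)
      shifted = np.roll(fmap, 1, axis=1)                    # one-pixel shift

      fine   = max_pool(fmap, 2)                            # fine-grained pooling regions
      coarse = max_pool(fmap, 4)                            # coarser pooling regions
      print(np.abs(fine   - max_pool(shifted, 2)).mean())
      print(np.abs(coarse - max_pool(shifted, 4)).mean())   # typically smaller: more invariant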

  12. Automatic localization of vertebrae based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie

    2015-03-01

    Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNN). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. The output feature vector from the max-pooling layer is then fed into a multilayer perceptron (MLP) classifier which has one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and the ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.
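
    A minimal PyTorch sketch of the kind of architecture described, two convolution/max-pooling stages feeding an MLP with one hidden layer, is shown below. Patch size, channel counts, hidden width and the number of classes are illustrative assumptions rather than the authors' settings.

      # Minimal sketch: two conv + max-pool stages followed by a one-hidden-layer MLP.
      import torch
      import torch.nn as nn

      class PatchClassifier(nn.Module):
          def __init__(self, num_classes=2):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.mlp = nn.Sequential(                 # one hidden layer, then output
                  nn.Flatten(),
                  nn.Linear(32 * 5 * 5, 128), nn.ReLU(),
                  nn.Linear(128, num_classes),
              )

          def forward(self, x):
              return self.mlp(self.features(x))

      scores = PatchClassifier()(torch.randn(4, 1, 32, 32))   # four 32x32 CT patches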

  13. A Discriminative Representation of Convolutional Features for Indoor Scene Recognition

    NASA Astrophysics Data System (ADS)

    Khan, Salman H.; Hayat, Munawar; Bennamoun, Mohammed; Togneri, Roberto; Sohel, Ferdous A.

    2016-07-01

    Indoor scene recognition is a multi-faceted and challenging problem due to the diverse intra-class variations and the confusing inter-class similarities. This paper presents a novel approach which exploits rich mid-level convolutional features to categorize indoor scenes. Traditionally used convolutional features preserve the global spatial structure, which is a desirable property for general object recognition. However, we argue that this structuredness is not very helpful when there are large variations in scene layouts, e.g., in indoor scenes. We propose to transform the structured convolutional activations to another highly discriminative feature space. The representation in the transformed space not only incorporates the discriminative aspects of the target dataset, but it also encodes the features in terms of the general object categories that are present in indoor scenes. To this end, we introduce a new large-scale dataset of 1300 object categories which are commonly present in indoor scenes. Our proposed approach achieves a significant performance boost over previous state-of-the-art approaches on five major scene classification datasets.

  14. Vibro-acoustic modelling of aircraft double-walls with structural links using Statistical Energy Analysis

    NASA Astrophysics Data System (ADS)

    Campolina, Bruno L.

    The prediction of aircraft interior noise involves the vibroacoustic modelling of the fuselage with noise control treatments. This structure is composed of a stiffened metallic or composite panel, lined with a thermal and acoustic insulation layer (glass wool), and structurally connected via vibration isolators to a commercial lining panel (trim). This work aims at tailoring the noise control treatments, taking design constraints such as weight and space into account. For this purpose, a representative aircraft double-wall is modelled using the Statistical Energy Analysis (SEA) method. Laboratory excitations such as diffuse acoustic field and point force are addressed and trends are derived for applications under in-flight conditions, considering turbulent boundary layer excitation. The effect of porous layer compression is addressed first. In aeronautical applications, compression can result from the installation of equipment and cables. It is studied analytically and experimentally, using a single panel and a fibrous layer uniformly compressed over 100% of its surface. When compression increases, a degradation of the transmission loss of up to 5 dB for a 50% compression of the porous thickness is observed, mainly in the mid-frequency range (around 800 Hz). However, for realistic cases, the effect should be reduced since the compression rate is lower and compression occurs locally. Then the transmission through structural connections between panels is addressed using a four-pole approach that links the force-velocity pair at each side of the connection. The modelling integrates the experimental dynamic stiffness of the isolators, derived using an adapted test rig. The structural transmission is then experimentally validated and included in the double-wall SEA model as an equivalent coupling loss factor (CLF) between panels. Since the tested structures are flat, only axial transmission is addressed. Finally, the dominant sound transmission paths are
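
    As a minimal illustration of the four-pole idea mentioned above, the Python sketch below propagates a force-velocity pair across a massless, spring-like isolator with complex dynamic stiffness. The stiffness, damping and frequency values are placeholders; in the work described, a measured dynamic stiffness would replace the ideal spring assumed here.

      # Minimal sketch: four-pole (transfer-matrix) link across a massless spring
      # isolator with complex dynamic stiffness k:
      #   [F1, v1]^T = [[1, 0], [j*w/k, 1]] [F2, v2]^T
      # i.e. the force is transmitted unchanged and the velocities differ by j*w*F/k.
      import numpy as np

      def four_pole_spring(omega, k_complex):
          return np.array([[1.0, 0.0],
                           [1j * omega / k_complex, 1.0]])

      omega = 2 * np.pi * 800.0            # rad/s, illustrative frequency
      k = 1.0e6 * (1 + 0.05j)              # N/m, with structural damping (placeholder)
      F2, v2 = 1.0, 0.002                  # state on the trim-panel side (placeholder)
      F1, v1 = four_pole_spring(omega, k) @ np.array([F2, v2])
      print(F1, v1)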

  15. Double-stranded DNA organization in bacteriophage heads: An alternative toroid-based model

    SciTech Connect

    Hud, N.V.

    1995-10-01

    Studies of the organization of double-stranded DNA within bacteriophage heads during the past four decades have produced a wealth of data. However, despite the presentation of numerous models, the true organization of DNA within phage heads remains unresolved. The observations of toroidal DNA structures in electron micrographs of phage lysates have long been cited as support for the organization of DNA in a spool-like fashion. This particular model, like all other models, has not been found to be consistent with all available data. Recently, the authors proposed that DNA within toroidal condensates produced in vitro is organized in a manner significantly different from that suggested by the spool model. This new toroid model has allowed the development of an alternative model for DNA organization within bacteriophage heads that is consistent with a wide range of biophysical data. Here the authors propose that bacteriophage DNA is packaged in a toroid that is folded into a highly compact structure.

  16. The tropospheric moisture and double-ITCZ biases in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Tian, B.

    2014-12-01

    Based on Atmospheric Infrared Sounder (AIRS) Obs4MIPs data, Tian et al. (2013) evaluated the climatological mean tropospheric air temperature and specific humidity simulations in Phase 5 of the Coupled Model Intercomparison Project (CMIP5) models. They found that most CMIP5 models have a cold bias in the extratropical upper troposphere and a double-Intertropical Convergence Zone (ITCZ) bias in the whole troposphere over the tropical Pacific. They also pointed out the cloud-related sampling biases in the AIRS Obs4MIPs air temperature and specific humidity climatologies that were later quantified by Hearty et al. (2014). In this study, we will continue comparing the tropospheric specific humidity climatologies between the CMIP5 model simulations and the AIRS Obs4MIPs data after correcting the AIRS data sampling biases, to quantify the overall tropospheric moist or dry bias of CMIP5 models. In particular, we will quantify the strength of the double-ITCZ bias in each individual CMIP5 model and discuss its possible implications for climate sensitivity and climate prediction.

  17. Simulation of double layers in a model auroral circuit with nonlinear impedance

    NASA Technical Reports Server (NTRS)

    Smith, R. A.

    1986-01-01

    A reduced circuit description of the U-shaped potential structure of a discrete auroral arc, consisting of the flank transmission line plus parallel-electric-field region, is used to provide the boundary condition for one-dimensional simulations of the double-layer evolution. The model yields asymptotic scalings of the double-layer potential, as a function of an anomalous transport coefficient alpha and of the perpendicular length scale l(a) of the arc. The arc potential phi(DL) scales approximately linearly with alpha and, for fixed alpha, approximately as l(a) raised to a power z. Using parameters appropriate to the auroral zone acceleration region, potentials of phi(DL) of about 10 kV scale to projected ionospheric dimensions of about 1 km, with power flows of the order of magnitude of substorm dissipation rates.

  18. Ozone response to a CO2 doubling - Results from a stratospheric circulation model with heterogeneous chemistry

    NASA Technical Reports Server (NTRS)

    Pitari, G.; Palermi, S.; Visconti, G.; Prinn, R. G.

    1992-01-01

    A spectral 3D model of the stratosphere has been used to study the sensitivity of polar ozone with respect to a carbon dioxide increase. The lower stratospheric cooling associated with an imposed CO2 doubling may increase the probability of polar stratospheric cloud (PSC) formation and thereby affect ozone. The ozone perturbation obtained with the inclusion of a simple parameterization for heterogeneous chemistry on PSCs is compared to that obtained with purely homogeneous chemistry. In both cases the temperature perturbation is determined by a CO2 doubling, while the total chlorine content is kept at the present level. It is shown that the lower temperature may increase the depth and extent of the ozone hole by extending the area amenable to PSC formation. It may be argued that this effect, coupled with an increasing amount of chlorine, may produce a positive feedback on ozone destruction.

  19. Communication: Double-hybrid functionals from adiabatic-connection: The QIDH model

    NASA Astrophysics Data System (ADS)

    Brémond, Éric; Sancho-García, Juan Carlos; Pérez-Jiménez, Ángel José; Adamo, Carlo

    2014-07-01

    A new approach stemming from the adiabatic-connection (AC) formalism is proposed to derive parameter-free double-hybrid (DH) exchange-correlation functionals. It is based on a quadratic form that models the coupling-parameter integrand, whose components are chosen to satisfy several well-known limiting conditions. Its integration leads to DHs containing a single parameter controlling the amount of exact exchange, which is determined by requiring it to depend on the weight of the MP2 correlation contribution. Two new parameter-free DH functionals are derived in this way, by incorporating the non-empirical PBE and TPSS functionals in the underlying expression. Their extensive testing using the GMTKN30 benchmark indicates that they are competitive with state-of-the-art DHs, while yielding much smaller self-interaction errors and opening a new avenue towards the design of accurate double-hybrid exchange-correlation functionals departing from the AC integrand.

  20. Kinetic model for an auroral double layer that spans many gravitational scale heights

    SciTech Connect

    Robertson, Scott

    2014-12-15

    The electrostatic potential profile and the particle densities of a simplified auroral double layer are found using a relaxation method to solve Poisson's equation in one dimension. The electron and ion distribution functions for the ionosphere and magnetosphere are specified at the boundaries, and the particle densities are found from a collisionless kinetic model. The ion distribution function includes the gravitational potential energy; hence, the unperturbed ionospheric plasma has a density gradient. The plasma potential at the upper boundary is given a large negative value to accelerate electrons downward. The solutions for a wide range of dimensionless parameters show that the double layer forms just above a critical altitude that occurs approximately where the ionospheric density has fallen to the magnetospheric density. Below this altitude, the ionospheric ions are gravitationally confined and have the expected scale height for quasineutral plasma in gravity.
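
    The sketch below shows, in normalized units, the kind of relaxation iteration described: a Jacobi update of the one-dimensional Poisson equation with fixed boundary potentials. The charge-density function is a toy placeholder; in the kinetic model it would be evaluated from the boundary distribution functions at the local potential and gravitational energy.

      # Minimal sketch: Jacobi relaxation of d^2(phi)/dx^2 = -rho(phi, x)
      # in normalized units (eps0 = 1), with fixed boundary potentials.
      import numpy as np

      def relax_poisson(rho_of_phi, x, phi_left, phi_right, n_iter=20000):
          dx = x[1] - x[0]
          phi = np.linspace(phi_left, phi_right, x.size)      # initial guess
          for _ in range(n_iter):
              rho = rho_of_phi(phi, x)
              phi[1:-1] = 0.5 * (phi[2:] + phi[:-2] + dx**2 * rho[1:-1])
          return phi

      x = np.linspace(0.0, 1.0, 201)
      toy_rho = lambda phi, x: np.exp(phi) - np.exp(-x)       # placeholder charge density
      phi = relax_poisson(toy_rho, x, 0.0, -5.0)              # e.g. negative upper boundary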

  1. A double layer model for solar X-ray and microwave pulsations

    NASA Technical Reports Server (NTRS)

    Tapping, K. F.

    1986-01-01

    The wide range of wavelengths over which quasi-periodic pulsations have been observed suggests that the mechanism causing them acts upon the supply of high-energy electrons driving the emission processes. A model is described which is based upon the radial shrinkage of a magnetic flux tube. The concentration of the current, along with the reduction in the number of available charge carriers, can lead to a condition where the current demand exceeds the capacity of the thermal electrons. Driven by the large inductance of the external current circuit, an instability takes place in the tube throat, resulting in the formation of a potential double layer, which then accelerates electrons and ions to MeV energies. The double layer can be unstable, collapsing and reforming repeatedly. The resulting pulsed particle beams give rise to pulsating emission, which is observed at radio and X-ray wavelengths.

  2. New non-equilibrium matrix imbibition equation for double porosity model

    NASA Astrophysics Data System (ADS)

    Konyukhov, Andrey; Pankratov, Leonid

    2016-07-01

    The paper deals with the global Kondaurov double porosity model describing a non-equilibrium two-phase immiscible flow in fractured-porous reservoirs in which non-equilibrium phenomena occur only in the matrix blocks. In a mathematically rigorous way, we show that the homogenized model can be represented by the usual equations of two-phase incompressible immiscible flow, except for the addition of two source terms calculated from the solution of a local problem, which is a boundary value problem for a non-equilibrium imbibition equation given in terms of the real saturation and a non-equilibrium parameter.

  3. Primary reasoning behind the double ITCZ phenomenon in a coupled ocean-atmosphere general circulation model

    NASA Astrophysics Data System (ADS)

    Li, Jianglong; Zhang, Xuehong; Yu, Yongqiang; Dai, Fushan

    2004-12-01

    This paper investigates the processes behind the double ITCZ phenomenon, a common problem in Coupled ocean-atmosphere General Circulation Models (CGCMs), using the CGCM FGCM-0 (Flexible General Circulation Model, version 0). The double ITCZ mode develops rapidly during the first two years of the integration and becomes a perennial phenomenon afterwards in the model. By way of Singular Value Decomposition (SVD) for SST, sea surface pressure, and sea surface wind, some air-sea interactions are analyzed. These interactions prompt the anomalous signals that appear at the beginning of the coupling to develop rapidly. Sensitivity experiments point to two possible reasons: (1) the overestimated east-west gradient of SST in the equatorial Pacific in the ocean spin-up process, and (2) the underestimated amount of low-level stratus over the Peruvian coast in CCM3 (the Community Climate Model, Version Three). The overestimated east-west gradient of SST brings an anomalous equatorial easterly. The anomalous easterly, affected by the Coriolis force in the Southern Hemisphere, turns into an anomalous westerly in a broad area south of the equator and is enhanced by an anomalous atmospheric circulation due to the underestimated amount of low-level stratus over the Peruvian coast simulated by CCM3. The anomalous westerly leads to anomalous warm advection that makes the SST warm in the southeast Pacific. The double ITCZ phenomenon in the CGCM is a result of a series of nonlocal and nonlinear adjustment processes in the coupled system, which can be traced back to the uncoupled oceanic and atmospheric component models. The zonal gradient of the equatorial SST is too large in the ocean component and the amount of low-level stratus over the Peruvian coast is too low in the atmosphere component.

  4. Compact model for short-channel symmetric double-gate junctionless transistors

    NASA Astrophysics Data System (ADS)

    Ávila-Herrera, F.; Cerdeira, A.; Paz, B. C.; Estrada, M.; Íñiguez, B.; Pavanello, M. A.

    2015-09-01

    In this work a compact analytical model for short-channel double-gate junctionless transistors is presented, considering variable mobility and the main short-channel effects, such as threshold voltage roll-off, series resistance, drain saturation voltage, channel shortening and saturation velocity. The threshold voltage shift and subthreshold slope variation are determined through the minimum value of the potential in the channel. Only eight model parameters are used. The model is physically based and considers the total charge in the Si layer and the operating conditions in both depletion and accumulation. The model is validated by 2D simulations in ATLAS for channel lengths from 25 nm to 500 nm, for doping concentrations of 5 × 10^18 and 1 × 10^19 cm^-3, and for Si layer thicknesses of 10 and 15 nm, in order to guarantee normally-off operation of the transistors. The model provides an accurate continuous description of the transistor behavior in all operating regions.

  5. Fabrication of double-walled section models of the ITER vacuum vessel

    SciTech Connect

    Koizumi, K.; Kanamori, N.; Nakahira, M.; Itoh, Y.; Horie, M.; Tada, E.; Shimamoto, S.

    1995-12-31

    Trial fabrication of double-walled section models has been performed at the Japan Atomic Energy Research Institute (JAERI) for the construction of the ITER vacuum vessel. By employing TIG (Tungsten-arc Inert Gas) welding and EB (Electron Beam) welding, two full-scale section models of a 7.5° toroidal sector in the curved section at the bottom of the vacuum vessel have been successfully fabricated, each with a final dimensional error within ±5 mm of the nominal values. A sufficient technical database on the candidate fabrication procedures, welding distortion and dimensional stability of the full-scale models has been obtained through these fabrications. This paper describes the design and fabrication procedures of both full-scale section models and the major results obtained through the fabrication.

  6. Accuracy assessment of single and double difference models for the single epoch GPS compass

    NASA Astrophysics Data System (ADS)

    Chen, Wantong; Qin, Honglei; Zhang, Yanzhong; Jin, Tian

    2012-02-01

    The single epoch GPS compass is an important field of study, since it is a valuable technique for the orientation estimation of vehicles and it can guarantee a total independence from carrier phase slips in practical applications. To achieve highly accurate angular estimates, the unknown integer ambiguities of the carrier phase observables need to be resolved. Past research has focused on ambiguity resolution for a single epoch; however, accuracy is another significant problem for many challenging applications. In this contribution, the accuracy is evaluated for both the non-common clock scheme and the common clock scheme of the receivers. We focus on three scenarios for either scheme: single difference model vs. double difference model, single frequency model vs. multiple frequency model, and optimal linear combinations vs. traditional triple-frequency least squares. We deduce the short baseline precision for a number of different available models and analyze the difference in accuracy for those models. Compared with the single or double difference model of the non-common clock scheme, the single difference model of the common clock scheme can greatly reduce the vertical component error of the baseline vector, which results in higher elevation accuracy. The least squares estimator can also reduce the error of the fixed baseline vector with the aid of multi-frequency observations, thereby improving the attitude accuracy. In essence, the "accuracy improvement" is attributed to the difference in accuracy for different models, not a real improvement for any specific model. If all noise levels of the GPS triple-frequency carrier phase are assumed the same in units of cycles, it can be proved that the optimal linear combination approach is equivalent to the traditional triple-frequency least squares, no matter which scheme is utilized. Both simulations and actual experiments have been performed to verify the correctness of the theoretical analysis.
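
    For reference, the standard short-baseline carrier-phase observables underlying this comparison are shown below (textbook form with noise terms omitted, not the paper's common-clock-specific derivation): the single difference between receivers A and B to satellite p retains the receiver clock difference, while the double difference across satellites p and q eliminates it at the cost of one additional differencing step.

      \lambda\,\phi_{AB}^{p} = \rho_{AB}^{p} + c\,\delta t_{AB} + \lambda\,N_{AB}^{p},
      \qquad
      \lambda\,\phi_{AB}^{pq} = \rho_{AB}^{pq} + \lambda\,N_{AB}^{pq}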

  7. The convoluted evolution of snail chirality

    NASA Astrophysics Data System (ADS)

    Schilthuizen, M.; Davison, A.

    2005-11-01

    The direction that a snail (Mollusca: Gastropoda) coils, whether dextral (right-handed) or sinistral (left-handed), originates in early development but is most easily observed in the shell form of the adult. Here, we review recent progress in understanding snail chirality from genetic, developmental and ecological perspectives. In the few species that have been characterized, chirality is determined by a single genetic locus with delayed inheritance, which means that the genotype is expressed in the mother's offspring. Although research lags behind the studies of asymmetry in the mouse and nematode, attempts to isolate the loci involved in snail chirality have begun, with the final aim of understanding how the axis of left-right asymmetry is established. In nature, most snail taxa (>90%) are dextral, but sinistrality is known from mutant individuals, populations within dextral species, entirely sinistral species, genera and even families. Ordinarily, it is expected that strong frequency-dependent selection should act against the establishment of new chiral types because the chiral minority have difficulty finding a suitable mating partner (their genitalia are on the ‘wrong’ side). Mixed populations should therefore not persist. Intriguingly, however, a very few land snail species, notably the subgenus Amphidromus sensu stricto, not only appear to mate randomly between different chiral types, but also have a stable, within-population chiral dimorphism, which suggests the involvement of a balancing factor. At the other end of the spectrum, in many species, different chiral types are unable to mate and so could be reproductively isolated from one another. However, while empirical data, models and simulations have indicated that chiral reversal must sometimes occur, it is rarely likely to lead to so-called ‘single-gene’ speciation. Nevertheless, chiral reversal could still be a contributing factor to speciation (or to divergence after speciation) when

  8. Coordinated regulation of TRPV5-mediated Ca²⁺ transport in primary distal convolution cultures.

    PubMed

    van der Hagen, Eline A E; Lavrijsen, Marla; van Zeeland, Femke; Praetorius, Jeppe; Bonny, Olivier; Bindels, René J M; Hoenderop, Joost G J

    2014-11-01

    Fine-tuning of renal calcium ion (Ca(2+)) reabsorption takes place in the distal convoluted and connecting tubules (distal convolution) of the kidney via transcellular Ca(2+) transport, a process controlled by the epithelial Ca(2+) channel Transient Receptor Potential Vanilloid 5 (TRPV5). Studies to delineate the molecular mechanism of transcellular Ca(2+) transport are seriously hampered by the lack of a suitable cell model. The present study describes the establishment and validation of a primary murine cell model of the distal convolution. Viable kidney tubules were isolated from mice expressing enhanced Green Fluorescent Protein (eGFP) under the control of a TRPV5 promoter (pTRPV5-eGFP), using Complex Object Parametric Analyser and Sorting (COPAS) technology. Tubules were grown into tight monolayers on semi-permeable supports. Radioactive (45)Ca(2+) assays showed apical-to-basolateral transport rates of 13.5 ± 1.2 nmol/h/cm(2), which were enhanced by the calciotropic hormones parathyroid hormone and 1,25-dihydroxy vitamin D3. Cell cultures lacking TRPV5, generated by crossbreeding pTRPV5-eGFP with TRPV5 knockout mice (TRPV5(-/-)), showed significantly reduced transepithelial Ca(2+) transport (26 % of control), for the first time directly confirming the key role of TRPV5. Most importantly, using this cell model, a novel molecular player in transepithelial Ca(2+) transport was identified: mRNA analysis revealed that ATP-dependent Ca(2+)-ATPase 4 (PMCA4) instead of PMCA1 was enriched in isolated tubules and downregulated in TRPV5(-/-) material. Immunohistochemical stainings confirmed co-localization of PMCA4 with TRPV5 in the distal convolution. In conclusion, a novel primary cell model with TRPV5-dependent Ca(2+) transport characteristics was successfully established, enabling comprehensive studies of transcellular Ca(2+) transport. PMID:24557712

  9. Combining double-difference relocation with regional depth-phase modelling to improve hypocentre accuracy

    NASA Astrophysics Data System (ADS)

    Ma, Shutian; Eaton, David W.

    2011-05-01

    Precise and accurate earthquake hypocentres are critical for various fields, such as the study of tectonic processes and seismic-hazard assessment. Double-difference relocation methods are widely used and can dramatically improve the precision of relative event locations. In areas of sparse seismic network coverage, however, a significant trade-off exists between focal depth, epicentral location and the origin time. Regional depth-phase modelling (RDPM) is suitable for sparse networks and can provide focal-depth information that is relatively insensitive to uncertainties in epicentral location and independent of errors in the origin time. Here, we propose a hybrid method in which focal depth is determined using RDPM and then treated as a fixed parameter in subsequent double-difference calculations, thus reducing the size of the system of equations and increasing the precision of the hypocentral solutions. Based on examples using small earthquakes from eastern Canada and the southwestern USA, we show that the application of this technique yields solutions that appear to be more robust and accurate than those obtained by the standard double-difference relocation method alone.
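
    For context, the residual minimized in standard double-difference relocation for an event pair (i, j) observed at station k is shown below; in the hybrid scheme described above, the focal depths entering the calculated travel times are held fixed at the RDPM values, so only epicentral and origin-time adjustments remain in the linearized system.

      dr_k^{ij} = \left(t_k^{i} - t_k^{j}\right)^{\mathrm{obs}} - \left(t_k^{i} - t_k^{j}\right)^{\mathrm{cal}}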

  10. Numerical Well Testing Interpretation Model and Applications in Crossflow Double-Layer Reservoirs by Polymer Flooding

    PubMed Central

    Guo, Hui; He, Youwei; Li, Lei; Du, Song; Cheng, Shiqing

    2014-01-01

    This work presents a numerical well testing interpretation model and analysis techniques to evaluate formations by using pressure transient data acquired with logging tools in crossflow double-layer reservoirs under polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effect, and skin factors. Type curves were then developed based on this model, and parameter sensitivity was analyzed. Our research shows that the type curves have five segments with different flow regimes: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from low permeability layer to high permeability layer), and (V) systematic radial flow section. The polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs under polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparing the interpreted permeability with the initial layered permeability before polymer flooding. Comparison of the flow-mechanism-based numerical solution with observed polymer flooding field test data highlights the potential for the application of this interpretation method in formation evaluation and enhanced oil recovery (EOR). PMID:25302335