A digital model for streamflow routing by convolution methods
Doyle, W.H., Jr.; Shearman, H.O.; Stiltner, G.J.; Krug, W.O.
1984-01-01
U.S. Geological Survey computer model, CONROUT, for routing streamflow by unit-response convolution flow-routing techniques from an upstream channel location to a downstream channel location has been developed and documented. Calibration and verification of the flow-routing model and subsequent use of the model for simulation is also documented. Three hypothetical examples and two field applications are presented to illustrate basic flow-routing concepts. Most of the discussion is limited to daily flow routing since, to date, all completed and current studies of this nature involve daily flow routing. However, the model is programmed to accept hourly flow-routing data. (USGS)
A convolution model of rock bed thermal storage units
NASA Astrophysics Data System (ADS)
Sowell, E. F.; Curry, R. L.
1980-01-01
A method is presented whereby a packed-bed thermal storage unit is dynamically modeled for bi-directional flow and arbitrary input flow stream temperature variations. The method is based on the principle of calculating the output temperature as the sum of earlier input temperatures, each multiplied by a predetermined 'response factor', i.e., discrete convolution. A computer implementation of the scheme, in the form of a subroutine for a widely used solar simulation program (TRNSYS) is described and numerical results compared with other models. Also, a method for efficient computation of the required response factors is described; this solution is for a triangular input pulse, previously unreported, although the solution method is also applicable for other input functions. This solution requires a single integration of a known function which is easily carried out numerically to the required precision.
NASA Astrophysics Data System (ADS)
Martin, Roland; Komatitsch, Dimitri; Bruthiaux, Emilien; Gedney, Stephen D.
2010-05-01
We present and discuss here two different unsplit formulations of the frequency shift PML based on convolution or non convolution integrations of auxiliary memory variables. Indeed, the Perfectly Matched Layer absorbing boundary condition has proven to be very efficient from a numerical point of view for the elastic wave equation to absorb both body waves with non-grazing incidence and surface waves. However, at grazing incidence the classical discrete Perfectly Matched Layer method suffers from large spurious reflections that make it less efficient for instance in the case of very thin mesh slices, in the case of sources located very close to the edge of the mesh, and/or in the case of receivers located at very large offset. In [1] we improve the Perfectly Matched Layer at grazing incidence for the seismic wave equation based on an unsplit convolution technique. This improved PML has a cost that is similar in terms of memory storage to that of the classical PML. We illustrate the efficiency of this improved Convolutional Perfectly Matched Layer based on numerical benchmarks using a staggered finite-difference method on a very thin mesh slice for an isotropic material and show that results are significantly improved compared with the classical Perfectly Matched Layer technique. We also show that, as the classical model, the technique is intrinsically unstable in the case of some anisotropic materials. In this case, retaining an idea of [2], this has been stabilized by adding correction terms adequately along any coordinate axis [3]. More specifically this has been applied to the spectral-element method based on a hybrid first/second order time integration scheme in which the Newmark time marching scheme allows us to match perfectly at the base of the absorbing layer a velocity-stress formulation in the PML and a second order displacement formulation in the inner computational domain.Our CPML unsplit formulation has the advantage to reduce the memory storage of CPML
Forecasting natural aquifer discharge using a numerical model and convolution.
Boggs, Kevin G; Johnson, Gary S; Van Kirk, Rob; Fairley, Jerry P
2014-01-01
If the nature of groundwater sources and sinks can be determined or predicted, the data can be used to forecast natural aquifer discharge. We present a procedure to forecast the relative contribution of individual aquifer sources and sinks to natural aquifer discharge. Using these individual aquifer recharge components, along with observed aquifer heads for each January, we generate a 1-year, monthly spring discharge forecast for the upcoming year with an existing numerical model and convolution. The results indicate that a forecast of natural aquifer discharge can be developed using only the dominant aquifer recharge sources combined with the effects of aquifer heads (initial conditions) at the time the forecast is generated. We also estimate how our forecast will perform in the future using a jackknife procedure, which indicates that the future performance of the forecast is good (Nash-Sutcliffe efficiency of 0.81). We develop a forecast and demonstrate important features of the procedure by presenting an application to the Eastern Snake Plain Aquifer in southern Idaho. PMID:23914881
A staggered-grid convolutional differentiator for elastic wave modelling
NASA Astrophysics Data System (ADS)
Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun
2015-11-01
The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions will influence the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative and comparing the windows with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th order staggered-grid CD operator can achieve the same accuracy of a 16th order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagations.
A convolution model for computing the far-field directivity of a parametric loudspeaker array.
Shi, Chuang; Kajikawa, Yoshinobu
2015-02-01
This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby the steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity and the Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. Computed directivity of a PLA using the proposed convolution model achieves significant improvement in agreement to measured directivity at a negligible computational cost. PMID:25698012
Gamma convolution models for self-diffusion coefficient distributions in PGSE NMR
NASA Astrophysics Data System (ADS)
Röding, Magnus; Williamson, Nathan H.; Nydén, Magnus
2015-12-01
We introduce a closed-form signal attenuation model for pulsed-field gradient spin echo (PGSE) NMR based on self-diffusion coefficient distributions that are convolutions of n gamma distributions, n ⩾ 1 . Gamma convolutions provide a general class of uni-modal distributions that includes the gamma distribution as a special case for n = 1 and the lognormal distribution among others as limit cases when n approaches infinity. We demonstrate the usefulness of the gamma convolution model by simulations and experimental data from samples of poly(vinyl alcohol) and polystyrene, showing that this model provides goodness of fit superior to both the gamma and lognormal distributions and comparable to the common inverse Laplace transform.
Digital Tomosynthesis System Geometry Analysis Using Convolution-Based Blur-and-Add (BAA) Model.
Wu, Meng; Yoon, Sungwon; Solomon, Edward G; Star-Lack, Josh; Pelc, Norbert; Fahrig, Rebecca
2016-01-01
Digital tomosynthesis is a three-dimensional imaging technique with a lower radiation dose than computed tomography (CT). Due to the missing data in tomosynthesis systems, out-of-plane structures in the depth direction cannot be completely removed by the reconstruction algorithms. In this work, we analyzed the impulse responses of common tomosynthesis systems on a plane-to-plane basis and proposed a fast and accurate convolution-based blur-and-add (BAA) model to simulate the backprojected images. In addition, the analysis formalism describing the impulse response of out-of-plane structures can be generalized to both rotating and parallel gantries. We implemented a ray tracing forward projection and backprojection (ray-based model) algorithm and the convolution-based BAA model to simulate the shift-and-add (backproject) tomosynthesis reconstructions. The convolution-based BAA model with proper geometry distortion correction provides reasonably accurate estimates of the tomosynthesis reconstruction. A numerical comparison indicates that the simulated images using the two models differ by less than 6% in terms of the root-mean-squared error. This convolution-based BAA model can be used in efficient system geometry analysis, reconstruction algorithm design, out-of-plane artifacts suppression, and CT-tomosynthesis registration. PMID:26208308
Convolution modeling of two-domain, nonlinear water-level responses in karst aquifers (Invited)
NASA Astrophysics Data System (ADS)
Long, A. J.
2009-12-01
Convolution modeling is a useful method for simulating the hydraulic response of water levels to sinking streamflow or precipitation infiltration at the macro scale. This approach is particularly useful in karst aquifers, where the complex geometry of the conduit and pore network is not well characterized but can be represented approximately by a parametric impulse-response function (IRF) with very few parameters. For many applications, one-dimensional convolution models can be equally effective as complex two- or three-dimensional models for analyzing water-level responses to recharge. Moreover, convolution models are well suited for identifying and characterizing the distinct domains of quick flow and slow flow (e.g., conduit flow and diffuse flow). Two superposed lognormal functions were used in the IRF to approximate the impulses of the two flow domains. Nonlinear response characteristics of the flow domains were assessed by observing temporal changes in the IRFs. Precipitation infiltration was simulated by filtering the daily rainfall record with a backward-in-time exponential function that weights each day’s rainfall with the rainfall of previous days and thus accounts for the effects of soil moisture on aquifer infiltration. The model was applied to the Edwards aquifer in Texas and the Madison aquifer in South Dakota. Simulations of both aquifers showed similar characteristics, including a separation on the order of years between the quick-flow and slow-flow IRF peaks and temporal changes in the IRF shapes when water levels increased and empty pore spaces became saturated.
Fully 3D Particle-in-Cell Simulation of Double Post-Hole Convolute on PTS Facility
NASA Astrophysics Data System (ADS)
Zhao, Hailong; Dong, Ye; Zhou, Haijing; Zou, Wenkang; Institute of Fluid Physics Collaboration; Institute of Applied Physics; Computational Mathematics Collaboration
2015-11-01
In order to get better understand of energy transforming and converging process during High Energy Density Physics (HEDP) experiments, fully 3D particle-in-cell (PIC) simulation code NEPTUNE3D was used to provide numerical approach towards parameters which could hardly be acquired through diagnostics. Cubic region (34cm × 34cm × 18cm) including the double post-hole convolute (DPHC) on the primary test stand (PTS) facility was chosen to perform a series of fully 3D PIC simulations, calculating ability of codes were tested and preliminary simulation results about DPHC on PTS facility were discussed. Taking advantages of 3D simulation codes and large-scale parallel computation, massive data (~ 250GB) could be acquired in less than 5 hours and clear process of current transforming and electron emission in DPHC were demonstrated with the help of visualization tools. Cold-chamber tests were performed during which only cathode electron emission was considered without temperature rise or ion emission, current loss efficiency was estimated to be 0.46% ~ 0.48% by comparisons between output magnetic field profiles with or without electron emission. Project supported by the National Natural Science Foundation of China (Grant No. 11205145, 11305015, 11475155).
The Luminous Convolution Model for Galaxy Rotation Curves
NASA Astrophysics Data System (ADS)
Rubin, Shanon; Mucci, Maria; Sophia Cisneros Collaboration; Kennard Chng Collaboration; Meagan Crowley Collaboration
2016-03-01
The LCM takes as input only the observed luminous matter profile from galaxies, and allows us to confirm these observed data by considering frame-dependent effects from the luminous mass profile of the Milky Way. The LCM is useful when looking at galaxies that have similar total enclosed mass, but varying distributions. For example, variations in luminous matter profiles from a diffuse galaxy correlate to the LCM's five different Milky Way models equally well, but LCM fits for a centrally condensed galaxy distinguish between Milky Way models. In this presentation, we show how the rotation curve data of such galaxies can be used to constrain the Milky Way luminous mass modeling, by the physical characteristics of each galaxy used to interpret the fitting. Current Investigations will be presented showing how the convolved parameters of Keplerian predictions with rotation curve observations can be extracted with respect to the crossing location of the relative curvature versus the assumption of the luminous mass profiles from photometry. Since there currently exists no direct constraint to photometric estimates of the luminous mass in these systems, the LCM gives the first constraint based on the orthogonal measurement of Doppler shifted spectra from characteristic emitters.
Vehicle detection based on visual saliency and deep sparse convolution hierarchical model
NASA Astrophysics Data System (ADS)
Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long
2016-06-01
Traditional vehicle detection algorithms use traverse search based vehicle candidate generation and hand crafted based classifier training for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a visual saliency and deep sparse convolution hierarchical model based vehicle detection algorithm is proposed. A visual saliency calculation is firstly used to generate a small vehicle candidate area. The vehicle candidate sub images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method is with 94.81% correct rate and 0.78% false detection rate on the existing datasets and the real road pictures captured by our group, which outperforms the existing state-of-the-art algorithms. More importantly, high discriminative multi-scale features are generated by deep sparse convolution network which has broad application prospects in target recognition in the field of intelligent vehicle.
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2016-06-01
In recent years, with the development of the high resolution data acquisition technologies, many different approaches and algorithms have been presented to extract the accurate and timely updated 3D models of buildings as a key element of city structures for numerous applications in urban mapping. In this paper, a novel and model-based approach is proposed for automatic recognition of buildings' roof models such as flat, gable, hip, and pyramid hip roof models based on deep structures for hierarchical learning of features that are extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework to have an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features for convolutional neural network to localize the boundary of each individual roofs. CNN is a kind of feed-forward neural network with the multilayer perceptron concept which consists of a number of convolutional and subsampling layers in an adaptable structure and it is widely used in pattern recognition and object detection application. Since the training dataset is a small library of labeled models for different shapes of roofs, the computation time of learning can be decreased significantly using the pre-trained models. The experimental results highlight the effectiveness of the deep learning approach to detect and extract the pattern of buildings' roofs automatically considering the complementary nature of height and RGB information.
Bammer, Roland; Stollberger, Rudolf
2012-01-01
Counterexamples are used to motivate the revision of the established theory of tracer transport. Then dynamic contrast enhanced magnetic resonance imaging in particular is conceptualized in terms of a fully distributed convection–diffusion model from which a widely used convolution model is derived using, alternatively, compartmental discretizations or semigroup theory. On this basis, applications and limitations of the convolution model are identified. For instance, it is proved that perfusion and tissue exchange states cannot be identified on the basis of a single convolution equation alone. Yet under certain assumptions, particularly that flux is purely convective at the boundary of a tissue region, physiological parameters such as mean transit time, effective volume fraction, and volumetric flow rate per unit tissue volume can be deduced from the kernel. PMID:17429633
Real-time dose computation: GPU-accelerated source modeling and superposition/convolution
Jacques, Robert; Wong, John; Taylor, Russell; McNutt, Todd
2011-01-15
Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400{sup 2}) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by {approx}24% to 0.058 and 0.94 s for 64{sup 3} and 128{sup 3} water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle{sup 3} (Philips, Madison, WI) implementation. Pinnacle{sup 3
SU-E-T-08: A Convolution Model for Head Scatter Fluence in the Intensity Modulated Field
Chen, M; Mo, X; Chen, Y; Parnell, D; Key, S; Olivera, G; Galmarini, W; Lu, W
2014-06-01
Purpose: To efficiently calculate the head scatter fluence for an arbitrary intensity-modulated field with any source distribution using the source occlusion model. Method: The source occlusion model with focal and extra focal radiation (Jaffray et al, 1993) can be used to account for LINAC head scatter. In the model, the fluence map of any field shape at any point can be calculated via integration of the source distribution within the visible range, as confined by each segment, using the detector eye's view. A 2D integration would be required for each segment and each fluence plane point, which is time-consuming, as an intensity-modulated field contains typically tens to hundreds of segments. In this work, we prove that the superposition of the segmental integrations is equivalent to a simple convolution regardless of what the source distribution is. In fact, for each point, the detector eye's view of the field shape can be represented as a function with the origin defined at the point's pinhole reflection through the center of the collimator plane. We were thus able to reduce hundreds of source plane integration to one convolution. We calculated the fluence map for various 3D and IMRT beams and various extra-focal source distributions using both the segmental integration approach and the convolution approach and compared the computation time and fluence map results of both approaches. Results: The fluence maps calculated using the convolution approach were the same as those calculated using the segmental approach, except for rounding errors (<0.1%). While it took considerably longer time to calculate all segmental integrations, the fluence map calculation using the convolution approach took only ∼1/3 of the time for typical IMRT fields with ∼100 segments. Conclusions: The convolution approach for head scatter fluence calculation is fast and accurate and can be used to enhance the online process.
Dose convolution filter: Incorporating spatial dose information into tissue response modeling
Huang Yimei; Joiner, Michael; Zhao Bo; Liao Yixiang; Burmeister, Jay
2010-03-15
Purpose: A model is introduced to integrate biological factors such as cell migration and bystander effects into physical dose distributions, and to incorporate spatial dose information in plan analysis and optimization. Methods: The model consists of a dose convolution filter (DCF) with single parameter {sigma}. Tissue response is calculated by an existing NTCP model with DCF-applied dose distribution as input. The authors determined {sigma} of rat spinal cord from published data. The authors also simulated the GRID technique, in which an open field is collimated into many pencil beams. Results: After applying the DCF, the NTCP model successfully fits the rat spinal cord data with a predicted value of {sigma}=2.6{+-}0.5 mm, consistent with 2 mm migration distances of remyelinating cells. Moreover, it enables the appropriate prediction of a high relative seriality for spinal cord. The model also predicts the sparing of normal tissues by the GRID technique when the size of each pencil beam becomes comparable to {sigma}. Conclusions: The DCF model incorporates spatial dose information and offers an improved way to estimate tissue response from complex radiotherapy dose distributions. It does not alter the prediction of tissue response in large homogenous fields, but successfully predicts increased tissue tolerance in small or highly nonuniform fields.
The Luminous Convolution Model-The light side of dark matter
NASA Astrophysics Data System (ADS)
Cisneros, Sophia; Oblath, Noah; Formaggio, Joe; Goedecke, George; Chester, David; Ott, Richard; Ashley, Aaron; Rodriguez, Adrianna
2014-03-01
We present a heuristic model for predicting the rotation curves of spiral galaxies. The Luminous Convolution Model (LCM) utilizes Lorentz-type transformations of very small changes in the photon's frequencies from curved space-times to construct a dynamic mass model of galaxies. These frequency changes are derived using the exact solution to the exterior Kerr wave equation, as opposed to a linearized treatment. The LCM Lorentz-type transformations map between the emitter and the receiver rotating galactic frames, and then to the associated flat frames in each galaxy where the photons are emitted and received. This treatment necessarily rests upon estimates of the luminous matter in both the emitter and the receiver galaxies. The LCM is tested on a sample of 22 randomly chosen galaxies, represented in 33 different data sets. LCM fits are compared to the Navarro, Frenk & White (NFW) Dark Matter Model and to the Modified Newtonian Dynamics (MOND) model when possible. The high degree of sensitivity of the LCM to the initial assumption of a luminous mass to light ratios (M/L), of the given galaxy, is demonstrated. We demonstrate that the LCM is successful across a wide range of spiral galaxies for predicting the observed rotation curves. Through the generous support of the MIT Dr. Martin Luther King Jr. Fellowship program.
Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng
2016-01-01
This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and convolutional perfectly matched layer (CPML). In the process to strengthen CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; the conclusion was that this dispersion was positively related to the real stretch and was little affected by grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively amplified the dispersion laws through wave field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for CPML from the established criteria. Based on its physical nature, the CPML method of simply warping space-time was predicted to be a promising approach to achieve ideal absorption, although it was still difficult to entirely remove the dispersion. PMID:27585538
Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng
2016-01-01
This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and convolutional perfectly matched layer (CPML). In the process to strengthen CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; the conclusion was that this dispersion was positively related to the real stretch and was little affected by grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively amplified the dispersion laws through wave field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for CPML from the established criteria. Based on its physical nature, the CPML method of simply warping space-time was predicted to be a promising approach to achieve ideal absorption, although it was still difficult to entirely remove the dispersion. PMID:27585538
NASA Astrophysics Data System (ADS)
Starn, J. J.
2013-12-01
Particle tracking often is used to generate particle-age distributions that are used as impulse-response functions in convolution. A typical application is to produce groundwater solute breakthrough curves (BTC) at endpoint receptors such as pumping wells or streams. The commonly used semi-analytical particle-tracking algorithm based on the assumption of linear velocity gradients between opposing cell faces is computationally very fast when used in combination with finite-difference models. However, large gradients near pumping wells in regional-scale groundwater-flow models often are not well represented because of cell-size limitations. This leads to inaccurate velocity fields, especially at weak sinks. Accurate analytical solutions for velocity near a pumping well are available, and various boundary conditions can be imposed using image-well theory. Python can be used to embed these solutions into existing semi-analytical particle-tracking codes, thereby maintaining the integrity and quality-assurance of the existing code. Python (and associated scientific computational packages NumPy, SciPy, and Matplotlib) is an effective tool because of its wide ranging capability. Python text processing allows complex and database-like manipulation of model input and output files, including binary and HDF5 files. High-level functions in the language include ODE solvers to solve first-order particle-location ODEs, Gaussian kernel density estimation to compute smooth particle-age distributions, and convolution. The highly vectorized nature of NumPy arrays and functions minimizes the need for computationally expensive loops. A modular Python code base has been developed to compute BTCs using embedded analytical solutions at pumping wells based on an existing well-documented finite-difference groundwater-flow simulation code (MODFLOW) and a semi-analytical particle-tracking code (MODPATH). The Python code base is tested by comparing BTCs with highly discretized synthetic steady
Convolution-Based Forced Detection Monte Carlo Simulation Incorporating Septal Penetration Modeling
Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.
2010-01-01
In SPECT imaging, photon transport effects such as scatter, attenuation and septal penetration can negatively affect the quality of the reconstructed image and the accuracy of quantitation estimation. As such, it is useful to model these effects as carefully as possible during the image reconstruction process. Many of these effects can be included in Monte Carlo (MC) based image reconstruction using convolution-based forced detection (CFD). With CFD Monte Carlo (CFD-MC), often only the geometric response of the collimator is modeled, thereby making the assumption that the collimator materials are thick enough to completely absorb photons. However, in order to retain high collimator sensitivity and high spatial resolution, it is required that the septa be as thin as possible, thus resulting in a significant amount of septal penetration for high energy radionuclides. A method for modeling the effects of both collimator septal penetration and geometric response using ray tracing (RT) techniques has been performed and included into a CFD-MC program. Two look-up tables are pre-calculated based on the specific collimator parameters and radionuclides, and subsequently incorporated into the SIMIND MC program. One table consists of the cumulative septal thickness between any point on the collimator and the center location of the collimator. The other table presents the resultant collimator response for a point source at different distances from the collimator and for various energies. A series of RT simulations have been compared to experimental data for different radionuclides and collimators. Results of the RT technique matches experimental data of collimator response very well, producing correlation coefficients higher than 0.995. Reasonable values of the parameters in the lookup table and computation speed are discussed in order to achieve high accuracy while using minimal storage space for the look-up tables. In order to achieve noise-free projection images from MC, it
NASA Astrophysics Data System (ADS)
Long, Andrew J.; Putnam, Larry D.
2009-10-01
SummaryConvolution modeling is useful for investigating the temporal distribution of groundwater age based on environmental tracers. The framework of a quasi-transient convolution model that is applicable to two-domain flow in karst aquifers is presented. The model was designed to provide an acceptable level of statistical confidence in parameter estimates when only chlorofluorocarbon (CFC) and tritium ( 3H) data are available. We show how inverse modeling and uncertainty assessment can be used to constrain model parameterization to a level warranted by available data while allowing major aspects of the flow system to be examined. As an example, the model was applied to water from a pumped well open to the Madison aquifer in central USA with input functions of CFC-11, CFC-12, CFC-113, and 3H, and was calibrated to several samples collected during a 16-year period. A bimodal age distribution was modeled to represent quick and slow flow less than 50 years old. The effects of pumping and hydraulic head on the relative volumetric fractions of these domains were found to be influential factors for transient flow. Quick flow and slow flow were estimated to be distributed mainly within the age ranges of 0-2 and 26-41 years, respectively. The fraction of long-term flow (>50 years) was estimated but was not dateable. The different tracers had different degrees of influence on parameter estimation and uncertainty assessments, where 3H was the most critical, and CFC-113 was least influential.
Long, A.J.; Putnam, L.D.
2009-01-01
Convolution modeling is useful for investigating the temporal distribution of groundwater age based on environmental tracers. The framework of a quasi-transient convolution model that is applicable to two-domain flow in karst aquifers is presented. The model was designed to provide an acceptable level of statistical confidence in parameter estimates when only chlorofluorocarbon (CFC) and tritium (3H) data are available. We show how inverse modeling and uncertainty assessment can be used to constrain model parameterization to a level warranted by available data while allowing major aspects of the flow system to be examined. As an example, the model was applied to water from a pumped well open to the Madison aquifer in central USA with input functions of CFC-11, CFC-12, CFC-113, and 3H, and was calibrated to several samples collected during a 16-year period. A bimodal age distribution was modeled to represent quick and slow flow less than 50 years old. The effects of pumping and hydraulic head on the relative volumetric fractions of these domains were found to be influential factors for transient flow. Quick flow and slow flow were estimated to be distributed mainly within the age ranges of 0-2 and 26-41 years, respectively. The fraction of long-term flow (>50 years) was estimated but was not dateable. The different tracers had different degrees of influence on parameter estimation and uncertainty assessments, where 3H was the most critical, and CFC-113 was least influential.
Gyöngy, Miklós; Makra, Ákos
2015-06-01
The shift-invariant convolution model of ultrasound is widely used in the literature, for instance to generate fast simulations of ultrasound images. However, comparison of the resulting simulations with experiments is either qualitative or based on aggregate descriptors such as envelope statistics or spectral components. In the current work, a planar arrangement of 49-μm polystyrene microspheres was imaged using macrophotography and a 4.7-MHz ultrasound linear array. The macrophotograph allowed estimation of the scattering function (SF) necessary for simulations. Using the coefficient of determination R(2) between real and simulated ultrasound images, different estimates of the SF and point spread function (PSF) were tested. All estimates of the SF performed similarly, whereas the best estimate of the PSF was obtained by Hanningwindowing the deconvolution of the real ultrasound image with the SF: this yielded R(2) = 0.43 for the raw simulated image and R(2) = 0.65 for the envelope-detected ultrasound image. R(2) was highly dependent on microsphere concentration, with values of up to 0.99 for regions with scatterers. The results validate the use of the shift-invariant convolution model for the realistic simulation of ultrasound images. However, care needs to be taken in experiments to reduce the relative effects of other sources of scattering such as from multiple reflections, either by increasing the concentration of imaged scatterers or by more careful experimental design. PMID:26067054
Ellison, David H.
2014-01-01
The distal convoluted tubule is the nephron segment that lies immediately downstream of the macula densa. Although short in length, the distal convoluted tubule plays a critical role in sodium, potassium, and divalent cation homeostasis. Recent genetic and physiologic studies have greatly expanded our understanding of how the distal convoluted tubule regulates these processes at the molecular level. This article provides an update on the distal convoluted tubule, highlighting concepts and pathophysiology relevant to clinical practice. PMID:24855283
Nangini, Cathy; Tam, Fred; Graham, Simon J.
2016-01-01
Characterizing the neurovascular coupling between hemodynamic signals and their neural origins is crucial to functional neuroimaging research, even more so as new methods become available for integrating results from different functional neuroimaging modalities. We present a novel method to relate magnetoencephalography (MEG) and BOLD fMRI data from primary somatosensory cortex within the context of the linear convolution model. This model, which relates neural activity to BOLD signal change, has been widely used to predict BOLD signals but typically lacks experimentally derived measurements of neural activity. In this study, an fMRI experiment is performed using variable-duration (≤1 s) vibrotactile stimuli applied at 22 Hz, analogous to a previously published MEG study (Nangini et al., [2006]: Neuroimage 33:252–262), testing whether MEG source waveforms from the previous study can inform the convolution model and improve BOLD signal estimates across all stimulus durations. The typical formulation of the convolution model in which the input is given by the stimulus profile is referred to as Model 1. Model 2 is based on an energy argument relating metabolic demand to the postsynaptic currents largely responsible for the MEG current dipoles, and uses the energy density of the estimated MEG source waveforms as input to the convolution model. It is shown that Model 2 improves the BOLD signal estimates compared to Model 1 under the experimental conditions implemented, suggesting that MEG energy density can be a useful index of hemodynamic activity. PMID:17290370
Double heterojunction bipolar phototransistor model
NASA Astrophysics Data System (ADS)
Horak, Michal
2003-07-01
An analytical mathematical model of the double heterojunction NpN bipolar phototransistor with abrupt heterojunctions in three terminal configuration is presented. The thermionic-filed emission and diffusion of injected carriers is considered and the Ebers-Moll type relations for the collector and emitter current are obtained. Several steady state characteristics of the phototransistor structure are calculated (optical gain, quantum efficiency, responsivity).
NASA Astrophysics Data System (ADS)
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
NASA Astrophysics Data System (ADS)
Xu, Zhigang
2015-12-01
In this study, a new method of storm surge modeling is proposed. This method is orders of magnitude faster than the traditional method within the linear dynamics framework. The tremendous enhancement of the computational efficiency results from the use of a pre-calculated all-source Green's function (ASGF), which connects a point of interest (POI) to the rest of the world ocean. Once the ASGF has been pre-calculated, it can be repeatedly used to quickly produce a time series of a storm surge at the POI. Using the ASGF, storm surge modeling can be simplified as its convolution with an atmospheric forcing field. If the ASGF is prepared with the global ocean as the model domain, the output of the convolution is free of the effects of artificial open-water boundary conditions. Being the first part of this study, this paper presents mathematical derivations from the linearized and depth-averaged shallow-water equations to the ASGF convolution, establishes various auxiliary concepts that will be useful throughout the study, and interprets the meaning of the ASGF from different perspectives. This paves the way for the ASGF convolution to be further developed as a data-assimilative regression model in part II. Five Appendixes provide additional details about the algorithm and the MATLAB functions.
Search for optimal distance spectrum convolutional codes
NASA Technical Reports Server (NTRS)
Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.
1993-01-01
In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of their deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes. Thus, to find good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
A Convolutional Subunit Model for Neuronal Responses in Macaque V1
Vintch, Brett; Movshon, J. Anthony
2015-01-01
The response properties of neurons in the early stages of the visual system can be described using the rectified responses of a set of self-similar, spatially shifted linear filters. In macaque primary visual cortex (V1), simple cell responses can be captured with a single filter, whereas complex cells combine a set of filters, creating position invariance. These filters cannot be estimated using standard methods, such as spike-triggered averaging. Subspace methods like spike-triggered covariance can recover multiple filters but require substantial amounts of data, and recover an orthogonal basis for the subspace in which the filters reside, rather than the filters themselves. Here, we assume a linear-nonlinear-linear-nonlinear (LN-LN) cascade model in which the first LN stage consists of shifted (“convolutional”) copies of a single filter, followed by a common instantaneous nonlinearity. We refer to these initial LN elements as the “subunits” of the receptive field, and we allow two independent sets of subunits, each with its own filter and nonlinearity. The second linear stage computes a weighted sum of the subunit responses and passes the result through a final instantaneous nonlinearity. We develop a procedure to directly fit this model to electrophysiological data. When fit to data from macaque V1, the subunit model significantly outperforms three alternatives in terms of cross-validated accuracy and efficiency, and provides a robust, biologically plausible account of receptive field structure for all cell types encountered in V1. SIGNIFICANCE STATEMENT We present a new subunit model for neurons in primary visual cortex that significantly outperforms three alternative models in terms of cross-validated accuracy and efficiency, and provides a robust and biologically plausible account of the receptive field structure in these neurons across the full spectrum of response properties. PMID:26538653
Hunter, Robert W; Ivy, Jessica R; Flatman, Peter W; Kenyon, Christopher J; Craigie, Eilidh; Mullins, Linda J; Bailey, Matthew A; Mullins, John J
2015-07-01
Na(+) transport in the renal distal convoluted tubule (DCT) by the thiazide-sensitive NaCl cotransporter (NCC) is a major determinant of total body Na(+) and BP. NCC-mediated transport is stimulated by aldosterone, the dominant regulator of chronic Na(+) homeostasis, but the mechanism is controversial. Transport may also be affected by epithelial remodeling, which occurs in the DCT in response to chronic perturbations in electrolyte homeostasis. Hsd11b2(-/-) mice, which lack the enzyme 11β-hydroxysteroid dehydrogenase type 2 (11βHSD2) and thus exhibit the syndrome of apparent mineralocorticoid excess, provided an ideal model in which to investigate the potential for DCT hypertrophy to contribute to Na(+) retention in a hypertensive condition. The DCTs of Hsd11b2(-/-) mice exhibited hypertrophy and hyperplasia and the kidneys expressed higher levels of total and phosphorylated NCC compared with those of wild-type mice. However, the striking structural and molecular phenotypes were not associated with an increase in the natriuretic effect of thiazide. In wild-type mice, Hsd11b2 mRNA was detected in some tubule segments expressing Slc12a3, but 11βHSD2 and NCC did not colocalize at the protein level. Thus, the phosphorylation status of NCC may not necessarily equate to its activity in vivo, and the structural remodeling of the DCT in the knockout mouse may not be a direct consequence of aberrant corticosteroid signaling in DCT cells. These observations suggest that the conventional concept of mineralocorticoid signaling in the DCT should be revised to recognize the complexity of NCC regulation by corticosteroids. PMID:25349206
Asymmetric quantum convolutional codes
NASA Astrophysics Data System (ADS)
La Guardia, Giuliano G.
2016-01-01
In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.
Artificial convolution neural network techniques and applications for lung nodule detection.
Lo, S B; Lou, S A; Lin, J S; Freedman, M T; Chien, M V; Mun, S K
1995-01-01
We have developed a double-matching method and an artificial visual neural network technique for lung nodule detection. This neural network technique is generally applicable to the recognition of medical image pattern in gray scale imaging. The structure of the artificial neural net is a simplified network structure of human vision. The fundamental operation of the artificial neural network is local two-dimensional convolution rather than full connection with weighted multiplication. Weighting coefficients of the convolution kernels are formed by the neural network through backpropagated training. In addition, we modeled radiologists' reading procedures in order to instruct the artificial neural network to recognize the image patterns predefined and those of interest to experts in radiology. We have tested this method for lung nodule detection. The performance studies have shown the potential use of this technique in a clinical setting. This program first performed an initial nodule search with high sensitivity in detecting round objects using a sphere template double-matching technique. The artificial convolution neural network acted as a final classifier to determine whether the suspected image block contains a lung nodule. The total processing time for the automatic detection of lung nodules using both prescan and convolution neural network evaluation was about 15 seconds in a DEC Alpha workstation. PMID:18215875
Understanding deep convolutional networks.
Mallat, Stéphane
2016-04-13
Deep convolutional networks provide state-of-the-art classifications and regressions results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries and sparse separations. Applications are discussed. PMID:26953183
Do a bit more with convolution.
Olsthoorn, Theo N
2008-01-01
Convolution is a form of superposition that efficiently deals with input varying arbitrarily in time or space. It works whenever superposition is applicable, that is, for linear systems. Even though convolution is well-known since the 19th century, this valuable method is still missing in most textbooks on ground water hydrology. This limits widespread application in this field. Perhaps most papers are too complex mathematically as they tend to focus on the derivation of analytical expressions rather than solving practical problems. However, convolution is straightforward with standard mathematical software or even a spreadsheet, as is demonstrated in the paper. The necessary system responses are not limited to analytic solutions; they may also be obtained by running an already existing ground water model for a single stress period until equilibrium is reached. With these responses, high-resolution time series of head or discharge may then be computed by convolution for arbitrary points and arbitrarily varying input, without further use of the model. There are probably thousands of applications in the field of ground water hydrology that may benefit from convolution. Therefore, its inclusion in ground water textbooks and courses is strongly needed. PMID:18181860
Convolution-deconvolution in DIGES
Philippacopoulos, A.J.; Simos, N.
1995-05-01
Convolution and deconvolution operations is by all means a very important aspect of SSI analysis since it influences the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented into the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one as well as two dimensional deconvolution. The type of waves considered include P, SV and SH waves. The non-vertical incidence is given special attention since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, corresponding transfer functions are presented in closed-form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free field motion both in terms of deterministic as well as probabilistic representations. These include (a) acceleration time histories, (b) response spectra (c) Fourier spectra and (d) cross-spectral densities.
On models of double porosity poroelastic media
NASA Astrophysics Data System (ADS)
Boutin, Claude; Royer, Pascale
2015-12-01
This paper focuses on the modelling of fluid-filled poroelastic double porosity media under quasi-static and dynamic regimes. The double porosity model is derived from a two-scale homogenization procedure, by considering a medium locally characterized by blocks of poroelastic Biot microporous matrix and a surrounding system of fluid-filled macropores or fractures. The derived double porosity description is a two-pressure field poroelastic model with memory and viscoelastic effects. These effects result from the `time-dependent' interaction between the pressure fields in the two pore networks. It is shown that this homogenized double porosity behaviour arises when the characteristic time of consolidation in the microporous domain is of the same order of magnitude as the macroscopic characteristic time of transient regime. Conversely, single porosity behaviours occur when both timescales are clearly distinct. Moreover, it is established that the phenomenological approaches that postulate the coexistence of two pressure fields in `instantaneous' interaction only describe media with two pore networks separated by an interface flow barrier. Hence, they fail at predicting and reproducing the behaviour of usual double porosity media. Finally, the results are illustrated for the case of stratified media.
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
Convolutional coding combined with continuous phase modulation
NASA Technical Reports Server (NTRS)
Pizzi, S. V.; Wilson, S. G.
1985-01-01
Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.
Deep Learning with Hierarchical Convolutional Factor Analysis
Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence
2013-01-01
Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342
Determinate-state convolutional codes
NASA Technical Reports Server (NTRS)
Collins, O.; Hizlan, M.
1991-01-01
A determinate state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity is analyzed along with free distances of these new codes and extensive simulation results is provided of their performance at the low signal to noise ratios where a real communication system would operate. Concise, practical examples are provided.
A double pendulum model of tennis strokes
NASA Astrophysics Data System (ADS)
Cross, Rod
2011-05-01
The physics of swinging a tennis racquet is examined by modeling the forearm and the racquet as a double pendulum. We consider differences between a forehand and a serve, and show how they differ from the swing of a bat and a golf club. It is also shown that the swing speed of a racquet, like that of a bat or a club, depends primarily on its moment of inertia rather than on its mass.
Entanglement-assisted quantum convolutional coding
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
Double multiple streamtube model with recent improvements
Paraschivoiu, I.; Delclaux, F.
1983-05-01
The objective of the present paper is to show the new capabilities of the double multiple streamtube (DMS) model for predicting the aerodynamic loads and performance of the Darrieus vertical-axis turbine. The original DMS model has been improved (DMSV model) by considering the variation in the upwind and downwind induced velocities as a function of the azimuthal angle for each streamtube. A comparison is made of the rotor performance for several blade geometries (parabola, catenary, troposkien, and Sandia shape). A new formulation is given for an approximate troposkien shape by considering the effect of the gravitational field. The effects of three NACA symmetrical profiles, 0012, 0015 and 0018, on the aerodynamic performance of the turbine are shown. Finally, a semiempirical dynamic-stall model has been incorporated and a better approximation obtained for modeling the local aerodynamic forces and performance for a Darrieus rotor.
Double multiple streamtube model with recent improvements
Paraschivoiu, I.; Delclaux, F.
1983-05-01
The objective is to show the new capabilities of the double multiple streamtube (DMS) model for predicting the aerodynamic loads and performance of the Darrieus vertical-axis turbine. The original DMS model has been improved (DMSV model) by considering the variation in the upwind and downwind induced velocities as a function of the azimuthal angle for each streamtube. A comparison is made of the rotor performance for several blade geometries (parabola, catenary, troposkien, and Sandia shape). A new formulation is given for an approximate troposkien shape by considering the effect of the gravitational field. The effects of three NACA symmetrical profiles, 0012, 0015 and 0018, on the aerodynamic performance of the turbine are shown. Finally, a semiempirical dynamic-stall model has been incorporated and a better approximation obtained for modeling the local aerodynamic forces and performance for a Darrieus rotor.
Spatio-spectral concentration of convolutions
NASA Astrophysics Data System (ADS)
Hanasoge, Shravan M.
2016-05-01
Differential equations may possess coefficients that vary on a spectrum of scales. Because coefficients are typically multiplicative in real space, they turn into convolution operators in spectral space, mixing all wavenumbers. However, in many applications, only the largest scales of the solution are of interest and so the question turns to whether it is possible to build effective coarse-scale models of the coefficients in such a manner that the large scales of the solution are left intact. Here we apply the method of numerical homogenisation to deterministic linear equations to generate sub-grid-scale models of coefficients at desired frequency cutoffs. We use the Fourier basis to project, filter and compute correctors for the coefficients. The method is tested in 1D and 2D scenarios and found to reproduce the coarse scales of the solution to varying degrees of accuracy depending on the cutoff. We relate this method to mode-elimination Renormalisation Group (RG) and discuss the connection between accuracy and the cutoff wavenumber. The tradeoff is governed by a form of the uncertainty principle for convolutions, which states that as the convolution operator is squeezed in the spectral domain, it broadens in real space. As a consequence, basis sparsity is a high virtue and the choice of the basis can be critical.
Standard Model as a Double Field Theory.
Choi, Kang-Sin; Park, Jeong-Hyuck
2015-10-23
We show that, without any extra physical degree introduced, the standard model can be readily reformulated as a double field theory. Consequently, the standard model can couple to an arbitrary stringy gravitational background in an O(4,4) T-duality covariant manner and manifest two independent local Lorentz symmetries, Spin(1,3)×Spin(3,1). While the diagonal gauge fixing of the twofold spin groups leads to the conventional formulation on the flat Minkowskian background, the enhanced symmetry makes the standard model more rigid, and also stringy, than it appeared. The CP violating θ term may no longer be allowed by the symmetry, and hence the strong CP problem can be solved. There are now stronger constraints imposed on the possible higher order corrections. We speculate that the quarks and the leptons may belong to the two different spin classes. PMID:26551099
Standard Model as a Double Field Theory
NASA Astrophysics Data System (ADS)
Choi, Kang-Sin; Park, Jeong-Hyuck
2015-10-01
We show that, without any extra physical degree introduced, the standard model can be readily reformulated as a double field theory. Consequently, the standard model can couple to an arbitrary stringy gravitational background in an O (4 ,4 ) T -duality covariant manner and manifest two independent local Lorentz symmetries, Spin(1 ,3 )×Spin(3 ,1 ) . While the diagonal gauge fixing of the twofold spin groups leads to the conventional formulation on the flat Minkowskian background, the enhanced symmetry makes the standard model more rigid, and also stringy, than it appeared. The C P violating θ term may no longer be allowed by the symmetry, and hence the strong C P problem can be solved. There are now stronger constraints imposed on the possible higher order corrections. We speculate that the quarks and the leptons may belong to the two different spin classes.
Convolutional Neural Network Based dem Super Resolution
NASA Astrophysics Data System (ADS)
Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang
2016-06-01
DEM super resolution is proposed in our previous publication to improve the resolution for a DEM on basis of some learning examples. Meanwhile, the nonlocal algorithm is introduced to deal with it and lots of experiments show that the strategy is feasible. In our publication, the learning examples are defined as the partial original DEM and their related high measurements due to this way can avoid the incompatibility between the data to be processed and the learning examples. To further extent the applications of this new strategy, the learning examples should be diverse and easy to obtain. Yet, it may cause the problem of incompatibility and unrobustness. To overcome it, we intend to investigate a convolutional neural network based method. The input of the convolutional neural network is a low resolution DEM and the output is expected to be its high resolution one. A three layers model will be adopted. The first layer is used to detect some features from the input, the second integrates the detected features to some compressed ones and the final step transforms the compressed features as a new DEM. According to this designed structure, some learning DEMs will be taken to train it. Specifically, the designed network will be optimized by minimizing the error of the output and its expected high resolution DEM. In practical applications, a testing DEM will be input to the convolutional neural network and a super resolution will be obtained. Many experiments show that the CNN based method can obtain better reconstructions than many classic interpolation methods.
Modeling interconnect corners under double patterning misalignment
NASA Astrophysics Data System (ADS)
Hyun, Daijoon; Shin, Youngsoo
2016-03-01
Publisher's Note: This paper, originally published on March 16th, was replaced with a corrected/revised version on March 28th. If you downloaded the original PDF but are unable to access the revision, please contact SPIE Digital Library Customer Service for assistance. Interconnect corners should accurately reflect the effect of misalingment in LELE double patterning process. Misalignment is usually considered separately from interconnect structure variations; this incurs too much pessimism and fails to reflect a large increase in total capacitance for asymmetric interconnect structure. We model interconnect corners by taking account of misalignment in conjunction with interconnect structure variations; we also characterize misalignment effect more accurately by handling metal pitch at both sides of a target metal independently. Identifying metal space at both sides of a target metal.
Some easily analyzable convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R.; Dolinar, S.; Pollara, F.; Vantilborg, H.
1989-01-01
Convolutional codes have played and will play a key role in the downlink telemetry systems on many NASA deep-space probes, including Voyager, Magellan, and Galileo. One of the chief difficulties associated with the use of convolutional codes, however, is the notorious difficulty of analyzing them. Given a convolutional code as specified, say, by its generator polynomials, it is no easy matter to say how well that code will perform on a given noisy channel. The usual first step in such an analysis is to computer the code's free distance; this can be done with an algorithm whose complexity is exponential in the code's constraint length. The second step is often to calculate the transfer function in one, two, or three variables, or at least a few terms in its power series expansion. This step is quite hard, and for many codes of relatively short constraint lengths, it can be intractable. However, a large class of convolutional codes were discovered for which the free distance can be computed by inspection, and for which there is a closed-form expression for the three-variable transfer function. Although for large constraint lengths, these codes have relatively low rates, they are nevertheless interesting and potentially useful. Furthermore, the ideas developed here to analyze these specialized codes may well extend to a much larger class.
Convolutional virtual electric field for image segmentation using active contours.
Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden
2014-01-01
Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from heavy computation load. The virtual electric field (VEF) model, which can be implemented in real time using fast Fourier transform (FFT), has been proposed later as a remedy for the GVF model. In this work, we present an extension of the VEF model, which is referred to as CONvolutional Virtual Electric Field, CONVEF for short. This proposed CONVEF model takes the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also some other interesting properties such as G-shape concavity convergence, neighboring objects separation, and noise suppression and simultaneously weak edge preserving. Meanwhile, the CONVEF model can also be implemented in real-time by using FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images. PMID:25360586
Double Higgs boson production in the models with isotriplets
Godunov, S. I. Vysotsky, M. I. Zhemchugov, E. V.
2015-12-15
The enhancement of double Higgs boson production in the extensions of the Standard Model with extra isotriplets is studied. It is found that in see-saw type II model decays of new heavy Higgs can contribute to the double Higgs production cross section as much as Standard Model channels. In Georgi–Machacek model the cross section can be much larger since the custodial symmetry is preserved and the strongest limitation on triplet parameters is removed.
Approximating large convolutions in digital images.
Mount, D M; Kanungo, T; Netanyahu, N S; Piatko, C; Silverman, R; Wu, A Y
2001-01-01
Computing discrete two-dimensional (2-D) convolutions is an important problem in image processing. In mathematical morphology, an important variant is that of computing binary convolutions, where the kernel of the convolution is a 0-1 valued function. This operation can be quite costly, especially when large kernels are involved. We present an algorithm for computing convolutions of this form, where the kernel of the binary convolution is derived from a convex polygon. Because the kernel is a geometric object, we allow the algorithm some flexibility in how it elects to digitize the convex kernel at each placement, as long as the digitization satisfies certain reasonable requirements. We say that such a convolution is valid. Given this flexibility we show that it is possible to compute binary convolutions more efficiently than would normally be possible for large kernels. Our main result is an algorithm which, given an m x n image and a k-sided convex polygonal kernel K, computes a valid convolution in O(kmn) time. Unlike standard algorithms for computing correlations and convolutions, the running time is independent of the area or perimeter of K, and our techniques do not rely on computing fast Fourier transforms. Our algorithm is based on a novel use of Bresenham's (1965) line-drawing algorithm and prefix-sums to update the convolution incrementally as the kernel is moved from one position to another across the image. PMID:18255522
The Convolution Method in Neutrino Physics Searches
Tsakstara, V.; Kosmas, T. S.; Chasioti, V. C.; Divari, P. C.; Sinatkas, J.
2007-12-26
We concentrate on the convolution method used in nuclear and astro-nuclear physics studies and, in particular, in the investigation of the nuclear response of various neutrino detection targets to the energy-spectra of specific neutrino sources. Since the reaction cross sections of the neutrinos with nuclear detectors employed in experiments are extremely small, very fine and fast convolution techniques are required. Furthermore, sophisticated de-convolution methods are also needed whenever a comparison between calculated unfolded cross sections and existing convoluted results is necessary.
Generalized Valon Model for Double Parton Distributions
NASA Astrophysics Data System (ADS)
Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof
2016-06-01
We show how the double parton distributions may be obtained consistently from the many-body light-cone wave functions. We illustrate the method on the example of the pion with two Fock components. The procedure, by construction, satisfies the Gaunt-Stirling sum rules. The resulting single parton distributions of valence quarks and gluons are consistent with a phenomenological parametrization at a low scale.
Generalized Valon Model for Double Parton Distributions
NASA Astrophysics Data System (ADS)
Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof
2016-03-01
We show how the double parton distributions may be obtained consistently from the many-body light-cone wave functions. We illustrate the method on the example of the pion with two Fock components. The procedure, by construction, satisfies the Gaunt-Stirling sum rules. The resulting single parton distributions of valence quarks and gluons are consistent with a phenomenological parametrization at a low scale.
21. INTERIOR, DOUBLE STAIRWAY LEADING TO MODEL HALL, DETAIL OF ...
21. INTERIOR, DOUBLE STAIRWAY LEADING TO MODEL HALL, DETAIL OF ONE FLIGHT (5 x 7 negative; 8 x 10 print) - Patent Office Building, Bounded by Seventh, Ninth, F & G Streets, Northwest, Washington, District of Columbia, DC
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
Runge-Kutta based generalized convolution quadrature
NASA Astrophysics Data System (ADS)
Lopez-Fernandez, Maria; Sauter, Stefan
2016-06-01
We present the Runge-Kutta generalized convolution quadrature (gCQ) with variable time steps for the numerical solution of convolution equations for time and space-time problems. We present the main properties of the method and a convergence result.
Symbol synchronization in convolutionally coded systems
NASA Technical Reports Server (NTRS)
Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.
1979-01-01
Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.
Rolling-Convolute Joint For Pressurized Glove
NASA Technical Reports Server (NTRS)
Kosmo, Joseph J.; Bassick, John W.
1994-01-01
Rolling-convolute metacarpal/finger joint enhances mobility and flexibility of pressurized glove. Intended for use in space suit to increase dexterity and decrease wearer's fatigue. Also useful in diving suits and other pressurized protective garments. Two ring elements plus bladder constitute rolling-convolute joint balancing torques caused by internal pressurization of glove. Provides comfortable grasp of various pieces of equipment.
The general theory of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Stanley, R. P.
1993-01-01
This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
Adaptive decoding of convolutional codes
NASA Astrophysics Data System (ADS)
Hueske, K.; Geldmacher, J.; Götze, J.
2007-06-01
Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
Deep learning for steganalysis via convolutional neural networks
NASA Astrophysics Data System (ADS)
Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu
2015-03-01
Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis to learn features automatically via deep learning models. We novelly propose a customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-theart spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.
A simple pharmacokinetics subroutine for modeling double peak phenomenon.
Mirfazaelian, Ahmad; Mahmoudian, Massoud
2006-04-01
Double peak absorption has been described with several orally administered drugs. Numerous reasons have been implicated in causing the double peak. DRUG-KNT--a pharmacokinetic software developed previously for fitting one and two compartment kinetics using the iterative curve stripping method--was modified and a revised subroutine was incorporated to solve double-peak models. This subroutine considers the double peak as two hypothetical doses administered with a time gap. The fitting capability of the presented model was verified using four sets of data showing double peak profiles extracted from the literature (piroxicam, ranitidine, phenazopyridine and talinolol). Visual inspection and statistical diagnostics showed that the present algorithm provided adequate curve fit disregarding the mechanism involved in the emergence of the secondary peaks. Statistical diagnostic parameters (RSS, AIC and R2) generally showed good fitness in the plasma profile prediction by this model. It was concluded that the algorithm presented herein provides adequate predicted curves in cases of the double peak phenomenon. PMID:16400712
Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator
Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.
2014-12-08
Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that
Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator
NASA Astrophysics Data System (ADS)
Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.
2014-12-01
Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator's vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator's vacuum-insulator stack (at a radius of 1.6 m) by using standard D -dot and B -dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator's magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z . These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed efficient
Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator
Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; et al
2014-12-08
Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs)more » and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed
Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas
2011-01-01
Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC was within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in bone region were up to 1.1, 6.4, and 1.6%; in lung region were up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to have better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%∕2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes of 6 MV and in the lung region for most of field sizes of both energies. The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm2 fields (over 26% passed) and in the bone region for 5 × 5 and 10
A Unimodal Model for Double Observer Distance Sampling Surveys
Becker, Earl F.; Christ, Aaron M.
2015-01-01
Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrate how this method can be applied. PMID:26317984
Double soft theorems and shift symmetry in nonlinear sigma models
NASA Astrophysics Data System (ADS)
Low, Ian
2016-02-01
We show that both the leading and subleading double soft theorems of the nonlinear sigma model follow from a shift symmetry enforcing Adler's zero condition in the presence of an unbroken global symmetry. They do not depend on the underlying coset G /H and are universal infrared behaviors of Nambu-Goldstone bosons. Although nonlinear sigma models contain an infinite number of interaction vertices, the double soft limit is determined entirely by a single four-point interaction, together with the existence of Adler's zeros.
Resonances and period doubling in the pulsations of stellar models
NASA Astrophysics Data System (ADS)
Moskalik, Pawel; Buchler, J. Robert
1990-06-01
The nonlinear pulsational behavior of several sequences of state-of-the-art Cepheid models is computed with a numerical hydrodynamics code. These sequences exhibit period doubling as the control parameter, the effective temperature, is changed. By following the evolution of the Floquet stability coefficients of the periodic pulsations, this period doubling is identified with the destabilization of a vibrational overtone mode through a resonance of the type (2n + 1) omega (0) equal to about 2 omega (k) (n integer). In the weakly dissipative Population I Cepheids, only a single period doubling and subsequent undoubling is observed, whereas in the case of the strongly dissipative Population II Cepheids, a cascade of period doublings and chaos can occur. The basic properties of the period doubling bifurcation are examined within the amplitude equation formalism, leaving little doubt about the resonance origin of the phenomenon. A simple model system to two coupled nonlinear oscillators which mimics the behavior of the complicated stellar models is also analyzed.
Bernoulli convolutions and 1D dynamics
NASA Astrophysics Data System (ADS)
Kempton, Tom; Persson, Tomas
2015-10-01
We describe a family {φλ} of dynamical systems on the unit interval which preserve Bernoulli convolutions. We show that if there are parameter ranges for which these systems are piecewise convex, then the corresponding Bernoulli convolution will be absolutely continuous with bounded density. We study the systems {φλ} and give some numerical evidence to suggest values of λ for which {φλ} may be piecewise convex.
A review of molecular modelling of electric double layer capacitors.
Burt, Ryan; Birkett, Greg; Zhao, X S
2014-04-14
Electric double-layer capacitors are a family of electrochemical energy storage devices that offer a number of advantages, such as high power density and long cyclability. In recent years, research and development of electric double-layer capacitor technology has been growing rapidly, in response to the increasing demand for energy storage devices from emerging industries, such as hybrid and electric vehicles, renewable energy, and smart grid management. The past few years have witnessed a number of significant research breakthroughs in terms of novel electrodes, new electrolytes, and fabrication of devices, thanks to the discovery of innovative materials (e.g. graphene, carbide-derived carbon, and templated carbon) and the availability of advanced experimental and computational tools. However, some experimental observations could not be clearly understood and interpreted due to limitations of traditional theories, some of which were developed more than one hundred years ago. This has led to significant research efforts in computational simulation and modelling, aimed at developing new theories, or improving the existing ones to help interpret experimental results. This review article provides a summary of research progress in molecular modelling of the physical phenomena taking place in electric double-layer capacitors. An introduction to electric double-layer capacitors and their applications, alongside a brief description of electric double layer theories, is presented first. Second, molecular modelling of ion behaviours of various electrolytes interacting with electrodes under different conditions is reviewed. Finally, key conclusions and outlooks are given. Simulations on comparing electric double-layer structure at planar and porous electrode surfaces under equilibrium conditions have revealed significant structural differences between the two electrode types, and porous electrodes have been shown to store charge more efficiently. Accurate electrolyte and
2D quantum double models from a 3D perspective
NASA Astrophysics Data System (ADS)
Bernabé Ferreira, Miguel Jorge; Padmanabhan, Pramod; Teotonio-Sobrinho, Paulo
2014-09-01
In this paper we look at three dimensional (3D) lattice models that are generalizations of the state sum model used to define the Kuperberg invariant of 3-manifolds. The partition function is a scalar constructed as a tensor network where the building blocks are tensors given by the structure constants of an involutory Hopf algebra A. These models are very general and are hard to solve in its entire parameter space. One can obtain familiar models, such as ordinary gauge theories, by letting A be the group algebra {C}(G) of a discrete group G and staying on a certain region of the parameter space. We consider the transfer matrix of the model and show that quantum double Hamiltonians are derived from a particular choice of the parameters. Such a construction naturally leads to the star and plaquette operators of the quantum double Hamiltonians, of which the toric code is a special case when A={C}({{{Z}}_{2}}). This formulation is convenient to study ground states of these generalized quantum double models where they can naturally be interpreted as tensor network states. For a surface Σ, the ground state degeneracy is determined by the Kuperberg 3-manifold invariant of \\Sigma \\times {{S}^{1}}. It is also possible to obtain extra models by simply enlarging the allowed parameter space but keeping the solubility of the model. While some of these extra models have appeared before in the literature, our 3D perspective allows for an uniform description of them.
A Digital Synthesis Model of Double-Reed Wind Instruments
NASA Astrophysics Data System (ADS)
Guillemain, Ph.
2004-12-01
We present a real-time synthesis model for double-reed wind instruments based on a nonlinear physical model. One specificity of double-reed instruments, namely, the presence of a confined air jet in the embouchure, for which a physical model has been proposed recently, is included in the synthesis model. The synthesis procedure involves the use of the physical variables via a digital scheme giving the impedance relationship between pressure and flow in the time domain. Comparisons are made between the behavior of the model with and without the confined air jet in the case of a simple cylindrical bore and that of a more realistic bore, the geometry of which is an approximation of an oboe bore.
The Double Homunculus model of self-reflective systems.
Sawa, Koji; Igamberdiev, Abir U
2016-06-01
Vladimir Lefebvre introduced the principles of self-reflective systems and proposed the model to describe consciousness based on these principles (Lefebvre V.A., 1992, J. Math. Psychol. 36, 100-128). The main feature of the model is an assumption of "the image of the self in the image of the self", that is, "a Double Homunculus". In this study, we further formalize the Lefebvre's formulation by using difference equations for the description of self-reflection. In addition, we also implement a dialogue model between the two homunculus agents. The dialogue models show the necessity of both exchange of information and observation of object. We conclude that the Double Homunculus model represents the most adequate description of conscious systems and has a significant potential for describing interactions of reflective agents in the social environment and their ability to perceive the outside world. PMID:27000722
A Simple Double-Source Model for Interference of Capillaries
ERIC Educational Resources Information Center
Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua
2012-01-01
A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An…
Application of the double absorbing boundary condition in seismic modeling
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Xiang-Yang; Chen, Shuang-Quan
2015-03-01
We apply the newly proposed double absorbing boundary condition (DABC) (Hagstrom et al., 2014) to solve the boundary reflection problem in seismic finite-difference (FD) modeling. In the DABC scheme, the local high-order absorbing boundary condition is used on two parallel artificial boundaries, and thus double absorption is achieved. Using the general 2D acoustic wave propagation equations as an example, we use the DABC in seismic FD modeling, and discuss the derivation and implementation steps in detail. Compared with the perfectly matched layer (PML), the complexity decreases, and the stability and flexibility improve. A homogeneous model and the SEG salt model are selected for numerical experiments. The results show that absorption using the DABC is considerably improved relative to the Clayton-Engquist boundary condition and nearly the same as that in the PML.
Modeling of electrochemical double layers in thermodynamic non-equilibrium.
Dreyer, Wolfgang; Guhlke, Clemens; Müller, Rüdiger
2015-10-28
We consider the contact between an electrolyte and a solid electrode. At first we formulate a thermodynamic consistent model that resolves boundary layers at interfaces. The model includes charge transport, diffusion, chemical reactions, viscosity, elasticity and polarization under isothermal conditions. There is a coupling between these phenomena that particularly involves the local pressure in the electrolyte. Therefore the momentum balance is of major importance for the correct description of the boundary layers. The width of the boundary layers is typically very small compared to the macroscopic dimensions of the system. In the second step we thus apply the method of asymptotic analysis to derive a simpler reduced bulk model that already incorporates the electrochemical properties of the double layers into a set of new boundary conditions. With the reduced model, we analyze the double layer capacitance for a metal-electrolyte interface. PMID:26415592
Multilabel Image Annotation Based on Double-Layer PLSA Model
Zhang, Jing; Li, Da; Hu, Weiwei; Chen, Zhihua; Yuan, Yubo
2014-01-01
Due to the semantic gap between visual features and semantic concepts, automatic image annotation has become a difficult issue in computer vision recently. We propose a new image multilabel annotation method based on double-layer probabilistic latent semantic analysis (PLSA) in this paper. The new double-layer PLSA model is constructed to bridge the low-level visual features and high-level semantic concepts of images for effective image understanding. The low-level features of images are represented as visual words by Bag-of-Words model; latent semantic topics are obtained by the first layer PLSA from two aspects of visual and texture, respectively. Furthermore, we adopt the second layer PLSA to fuse the visual and texture latent semantic topics and achieve a top-layer latent semantic topic. By the double-layer PLSA, the relationships between visual features and semantic concepts of images are established, and we can predict the labels of new images by their low-level features. Experimental results demonstrate that our automatic image annotation model based on double-layer PLSA can achieve promising performance for labeling and outperform previous methods on standard Corel dataset. PMID:24999490
UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.
A simple double-source model for interference of capillaries
NASA Astrophysics Data System (ADS)
Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua
2012-01-01
A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An inverse proportionality between the fringes spacing and the capillary radius is derived based on the simple double-source model. This can provide an efficient and precise method to measure a small capillary diameter of micrometre scale. This model could be useful because it presents a fresh perspective on the diffraction of light from a particular geometry (transparent cylinder), which is not straightforward for undergraduates. It also offers an alternative interferometer to perform a different type of measurement, especially for using virtual sources.
Shell model predictions for 124Sn double-β decay
NASA Astrophysics Data System (ADS)
Horoi, Mihai; Neacsu, Andrei
2016-02-01
Neutrinoless double-β (0 ν β β ) decay is a promising beyond standard model process. Two-neutrino double-β (2 ν β β ) decay is an associated process that is allowed by the standard model, and it was observed in about 10 isotopes, including decays to the excited states of the daughter. 124Sn was the first isotope whose double-β decay modes were investigated experimentally, and despite few other recent efforts, no signal has been seen so far. Shell model calculations were able to make reliable predictions for 2 ν β β decay half-lives. Here we use shell model calculations to predict the 2 ν β β decay half-life of 124Sn. Our results are quite different from the existing quasiparticle random-phase approximation results, and we envision that they will be useful for guiding future experiments. We also present shell model nuclear matrix elements for two potentially competing mechanisms to the 0 ν β β decay of 124Sn.
Non-commutativity from the double sigma model
NASA Astrophysics Data System (ADS)
Polyakov, Dimitri; Wang, Peng; Wu, Houwen; Yang, Haitang
2015-03-01
We show how non-commutativity arises from commutativity in the double sigma model. We demonstrate that this model is intrinsically non-commutative by calculating the propagators. In the simplest phase configuration, there are two dual copies of commutative theories. In general rotated frames, one gets a non-commutative theory and a commutative partner. Thus a non-vanishing B also leads to a commutative theory. Our results imply that O( D, D) symmetry unifies not only the big and small torus physics, but also the commutative and non-commutative theories. The physical interpretations of the metric and other parameters in the double sigma model are completely dictated by the boundary conditions. The open-closed relation is also an O( D, D) rotation and naturally leads to the Seiberg-Witten map. Moreover, after applying a second dual rotation, we identify the description parameter in the Seiberg-Witten map as an O( D, D) group parameter and all theories are non-commutative under this composite rotation. As a bonus, the propagators of general frames in double sigma model for open string are also presented.
Double scaling in tensor models with a quartic interaction
NASA Astrophysics Data System (ADS)
Dartois, Stéphane; Gurau, Razvan; Rivasseau, Vincent
2013-09-01
In this paper we identify and analyze in detail the subleading contributions in the 1 /N expansion of random tensors, in the simple case of a quartically interacting model. The leading order for this 1 /N expansion is made of graphs, called melons, which are dual to particular triangulations of the D-dimensional sphere, closely related to the "stacked" triangulations. For D < 6 the subleading behavior is governed by a larger family of graphs, hereafter called cherry trees, which are also dual to the D-dimensional sphere. They can be resummed explicitly through a double scaling limit. In sharp contrast with random matrix models, this double scaling limit is stable. Apart from its unexpected upper critical dimension 6, it displays a singularity at fixed distance from the origin and is clearly the first step in a richer set of yet to be discovered multi-scaling limits.
On the growth and form of cortical convolutions
NASA Astrophysics Data System (ADS)
Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.
2016-06-01
The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. All together, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.
Double porosity modeling in elastic wave propagation for reservoir characterization
Berryman, J. G., LLNL
1998-06-01
Phenomenological equations for the poroelastic behavior of a double porosity medium have been formulated and the coefficients in these linear equations identified. The generalization from a single porosity model increases the number of independent coefficients from three to six for an isotropic applied stress. In a quasistatic analysis, the physical interpretations are based upon considerations of extremes in both spatial and temporal scales. The limit of very short times is the one most relevant for wave propagation, and in this case both matrix porosity and fractures behave in an undrained fashion. For the very long times more relevant for reservoir drawdown,the double porosity medium behaves as an equivalent single porosity medium At the macroscopic spatial level, the pertinent parameters (such as the total compressibility) may be determined by appropriate field tests. At the mesoscopic scale pertinent parameters of the rock matrix can be determined directly through laboratory measurements on core, and the compressibility can be measured for a single fracture. We show explicitly how to generalize the quasistatic results to incorporate wave propagation effects and how effects that are usually attributed to squirt flow under partially saturated conditions can be explained alternatively in terms of the double-porosity model. The result is therefore a theory that generalizes, but is completely consistent with, Biot`s theory of poroelasticity and is valid for analysis of elastic wave data from highly fractured reservoirs.
Two potential quark models for double heavy baryons
NASA Astrophysics Data System (ADS)
Puchkov, A. M.; Kozhedub, A. V.
2016-01-01
Baryons containing two heavy quarks (QQ' q) are treated in the Born-Oppenheimer approximation. Two non-relativistic potential models are proposed, in which the Schrödinger equation admits a separation of variables in prolate and oblate spheroidal coordinates, respectively. In the first model, the potential is equal to the sum of Coulomb potentials of the two heavy quarks, separated from each other by a distance - R and linear potential of confinement. In the second model the center distance parameter R is assumed to be purely imaginary. In this case, the potential is defined by the two-sheeted mapping with singularities being concentrated on a circle rather than at separate points. Thus, in the first model diquark appears as a segment, and in the second - as a circle. In this paper we calculate the mass spectrum of double heavy baryons in both models, and compare it with previous results.
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes are reviewed and sequential syndrome decoding is applied to those codes. These concepts are then used to realize by example actual sequential decoding, using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum weight error sequence.
Number-Theoretic Functions via Convolution Rings.
ERIC Educational Resources Information Center
Berberian, S. K.
1992-01-01
Demonstrates the number theory property that the number of divisors of an integer n times the number of positive integers k, less than or equal to and relatively prime to n, equals the sum of the divisors of n using theory developed about multiplicative functions, the units of a convolution ring, and the Mobius Function. (MDH)
Continuous speech recognition based on convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong
2015-07-01
Convolutional Neural Networks (CNNs), which showed success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognitions in the paper. Compared to Deep Neural Networks (DNNs), which have been proven to be successful in many speech recognition tasks nowadays, CNNs can reduce the NN model sizes significantly, and at the same time achieve even better recognition accuracies. Experiments on standard speech corpus TIMIT showed that CNNs outperformed DNNs in the term of the accuracy when CNNs had even smaller model size.
About closedness by convolution of the Tsallis maximizers
NASA Astrophysics Data System (ADS)
Vignat, C.; Hero, A. O., III; Costa, J. A.
2004-09-01
In this paper, we study the stability under convolution of the maximizing distributions of the Tsallis entropy under energy constraint (called hereafter Tsallis distributions). These distributions are shown to obey three important properties: a stochastic representation property, an orthogonal invariance property and a duality property. As a consequence of these properties, the behavior of Tsallis distributions under convolution is characterized. At last, a special random convolution, called Kingman convolution, is shown to ensure the stability of Tsallis distributions.
Experience in calibrating the double-hardening constitutive model Monot
NASA Astrophysics Data System (ADS)
Hicks, M. A.
2003-11-01
The Monot double-hardening soil model has previously been implemented within a general purpose finite element algorithm, and used in the analysis of numerous practical problems. This paper reviews experience gained in calibrating Monot to laboratory data and demonstrates how the calibration process may be simplified without detriment to the range of behaviours modelled. It describes Monot's principal features, important governing equations and various calibration methods, including strategies for overconsolidated, cemented and cohesive soils. Based on a critical review of over 30 previous Monot calibrations, for sands and other geomaterials, trends in parameter values have been identified, enabling parameters to be categorized according to their relative importance. It is shown that, for most practical purposes, a maximum of only 5 parameters is needed; for the remaining parameters, standard default values are suggested. Hence, the advanced stress-strain modelling offered by Monot is attainable with a similar number of parameters as would be needed for some simpler, less versatile, models. Copyright
Investigating GPDs in the framework of the double distribution model
NASA Astrophysics Data System (ADS)
Nazari, F.; Mirjalili, A.
2016-06-01
In this paper, we construct the generalized parton distribution (GPD) in terms of the kinematical variables x, ξ, t, using the double distribution model. By employing these functions, we could extract some quantities which makes it possible to gain a three-dimensional insight into the nucleon structure function at the parton level. The main objective of GPDs is to combine and generalize the concepts of ordinary parton distributions and form factors. They also provide an exclusive framework to describe the nucleons in terms of quarks and gluons. Here, we first calculate, in the Double Distribution model, the GPD based on the usual parton distributions arising from the GRV and CTEQ phenomenological models. Obtaining quarks and gluons angular momenta from the GPD, we would be able to calculate the scattering observables which are related to spin asymmetries of the produced quarkonium. These quantities are represented by AN and ALS. We also calculate the Pauli and Dirac form factors in deeply virtual Compton scattering. Finally, in order to compare our results with the existing experimental data, we use the difference of the polarized cross-section for an initial longitudinal leptonic beam and unpolarized target particles (ΔσLU). In all cases, our obtained results are in good agreement with the available experimental data.
Three-Triplet Model with Double SU(3) Symmetry
DOE R&D Accomplishments Database
Han, M. Y.; Nambu, Y.
1965-01-01
With a view to avoiding some of the kinematical and dynamical difficulties involved in the single triplet quark model, a model for the low lying baryons and mesons based on three triplets with integral charges is proposed, somewhat similar to the two-triplet model introduced earlier by one of us (Y. N.). It is shown that in a U(3) scheme of triplets with integral charges, one is naturally led to three triplets located symmetrically about the origin of I{sub 3} - Y diagram under the constraint that Nishijima-Gell-Mann relation remains intact. A double SU(3) symmetry scheme is proposed in which the large mass splittings between different representations are ascribed to one of the SU(3), while the other SU(3) is the usual one for the mass splittings within a representation of the first SU(3).
Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi
2016-07-01
Existing deep convolutional neural networks (CNNs) have shown their great success on image classification. CNNs mainly consist of convolutional and pooling layers, both of which are performed on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representation. In contrast, recurrent neural networks (RNNs) are well known for their ability of encoding contextual information among sequential data, and they only require a limited number of network parameters. General RNNs can hardly be directly applied on non-sequential data. Thus, we proposed the hierarchical RNNs (HRNNs). In HRNNs, each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations. While the cross RNN scale connections target on modeling scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) hierarchical long-short term memory recurrent network (HLSTM), which performs better than HSRN with the price of more computational cost. In this manuscript, we integrate CNNs with HRNNs, and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encodes spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, MIT indoor, and competitive results on ILSVRC 2012.
Is turbulent mixing a self-convolution process?
Venaille, Antoine; Sommeria, Joel
2008-06-13
Experimental results for the evolution of the probability distribution function (PDF) of a scalar mixed by a turbulent flow in a channel are presented. The sequence of PDF from an initial skewed distribution to a sharp Gaussian is found to be nonuniversal. The route toward homogeneization depends on the ratio between the cross sections of the dye injector and the channel. In connection with this observation, advantages, shortcomings, and applicability of models for the PDF evolution based on a self-convolution mechanism are discussed. PMID:18643510
Double-multiple streamtube model for Darrieus in turbines
NASA Technical Reports Server (NTRS)
Paraschivoiu, I.
1981-01-01
An analytical model is proposed for calculating the rotor performance and aerodynamic blade forces for Darrieus wind turbines with curved blades. The method of analysis uses a multiple-streamtube model, divided into two parts: one modeling the upstream half-cycle of the rotor and the other, the downstream half-cycle. The upwind and downwind components of the induced velocities at each level of the rotor were obtained using the principle of two actuator disks in tandem. Variation of the induced velocities in the two parts of the rotor produces larger forces in the upstream zone and smaller forces in the downstream zone. Comparisons of the overall rotor performance with previous methods and field test data show the important improvement obtained with the present model. The calculations were made using the computer code CARDAA developed at IREQ. The double-multiple streamtube model presented has two major advantages: it requires a much shorter computer time than the three-dimensional vortex model and is more accurate than multiple-streamtube model in predicting the aerodynamic blade loads.
Convolutional neural network architectures for predicting DNA–protein binding
Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.
2016-01-01
Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
A convolutional neural network neutrino event classifier
Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.
2016-09-01
Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology withoutmore » the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.« less
Quantum convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng
2014-12-01
In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.
Satellite image classification using convolutional learning
NASA Astrophysics Data System (ADS)
Nguyen, Thao; Han, Jiho; Park, Dong-Chul
2013-10-01
A satellite image classification method using Convolutional Neural Network (CNN) architecture is proposed in this paper. As a special case of deep learning, CNN classifies classes of images without any feature extraction step while other existing classification methods utilize rather complex feature extraction processes. Experiments on a set of satellite image data and the preliminary results show that the proposed classification method can be a promising alternative over existing feature extraction-based schemes in terms of classification accuracy and classification speed.
Blind Identification of Convolutional Encoder Parameters
Su, Shaojing; Zhou, Jing; Huang, Zhiping; Liu, Chunwu; Zhang, Yimeng
2014-01-01
This paper gives a solution to the blind parameter identification of a convolutional encoder. The problem can be addressed in the context of the noncooperative communications or adaptive coding and modulations (ACM) for cognitive radio networks. We consider an intelligent communication receiver which can blindly recognize the coding parameters of the received data stream. The only knowledge is that the stream is encoded using binary convolutional codes, while the coding parameters are unknown. Some previous literatures have significant contributions for the recognition of convolutional encoder parameters in hard-decision situations. However, soft-decision systems are applied more and more as the improvement of signal processing techniques. In this paper we propose a method to utilize the soft information to improve the recognition performances in soft-decision communication systems. Besides, we propose a new recognition method based on correlation attack to meet low signal-to-noise ratio situations. Finally we give the simulation results to show the efficiency of the proposed methods. PMID:24982997
Deep Convolutional Neural Networks for large-scale speech tasks.
Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana
2015-04-01
Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks. Specifically, we focus on how many convolutional layers are needed, what is an appropriate number of hidden units, what is the best pooling strategy. Second, investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allow for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the art results in these 3 tasks. PMID:25439765
A hybrid double-observer sightability model for aerial surveys
Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine
2013-01-01
Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
Multiple deep convolutional neural networks averaging for face alignment
NASA Astrophysics Data System (ADS)
Zhang, Shaohua; Yang, Hua; Yin, Zhouping
2015-05-01
Face alignment is critical for face recognition, and the deep learning-based method shows promise for solving such issues, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shape. However, most existing deep learning-based approaches are complicated and quite time-consuming during training. We propose a compact face alignment method for fast training without decreasing its accuracy. Rectified linear unit is employed, which allows all networks approximately five times faster convergence than a tanh neuron. An eight learnable layer deep convolutional neural network (DCNN) based on local response normalization and a padding convolutional layer (PCL) is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while showing that only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built based on the DCNNs and model combination mode. Extensive experiments validate the effectiveness of our method and demonstrate comparable accuracy with state-of-the-art methods on BioID, labeled face parts in the wild, and Helen datasets.
Classification of Histology Sections via Multispectral Convolutional Sparse Coding*
Zhou, Yin; Barner, Kenneth; Spellman, Paul
2014-01-01
Image-based classification of histology sections plays an important role in predicting clinical outcomes. However this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts, collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]). PMID:25554749
2012-01-01
Background Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of the human organs such as lungs, heart, brain and breast. Each practical EIT reconstruction algorithm should be efficient enough in terms of convergence rate, and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in D-bar framework. Methods At the first step, synthetic and experimental data were used to compute an intermediate object named scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via sinc-convolution algorithm to find the square root of the conductivity for each pixel of image. For the purpose of comparison, multigrid and NOSER algorithms were implemented under a similar setting. Quality of reconstructions of synthetic models was tested against GREIT approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Results Evaluation of synthetic reconstructions shows that the quality of sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape-deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the least degree of relative errors and the most degree of truth were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by sinc-convolution algorithm. Conclusions Parametric evaluation demonstrates the efficiency of sinc-convolution to reconstruct accurate conductivity images from experimental data. Excellent results in phantom and clinical reconstructions using sinc-convolution
Convolutional Neural Network Based Fault Detection for Rotating Machinery
NASA Astrophysics Data System (ADS)
Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie
2016-09-01
Vibration analysis is a well-established technique for condition monitoring of rotating machines as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually-engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis an crest, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but also healthy bearings and rotor imbalance are included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
QCDNUM: Fast QCD evolution and convolution
NASA Astrophysics Data System (ADS)
Botje, M.
2011-02-01
The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summaryProgram title: QCDNUM version: 17.00 Catalogue identifier: AEHV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public Licence No. of lines in distributed program, including test data, etc.: 45 736 No. of bytes in distributed program, including test data, etc.: 911 569 Distribution format: tar.gz Programming language: Fortran-77 Computer: All Operating system: All RAM: Typically 3 Mbytes Classification: 11.5 Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
Convolution neural networks for ship type recognition
NASA Astrophysics Data System (ADS)
Rainey, Katie; Reeder, John D.; Corelli, Alexander G.
2016-05-01
Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to under- stand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.
``Quasi-complete'' mechanical model for a double torsion pendulum
NASA Astrophysics Data System (ADS)
De Marchi, Fabrizio; Pucacco, Giuseppe; Bassan, Massimo; De Rosa, Rosario; Di Fiore, Luciano; Garufi, Fabio; Grado, Aniello; Marconi, Lorenzo; Stanga, Ruggero; Stolzi, Francesco; Visco, Massimo
2013-06-01
We present a dynamical model for the double torsion pendulum nicknamed “PETER,” where one torsion pendulum hangs in cascade, but off axis, from the other. The dynamics of interest in these devices lies around the torsional resonance, that is at very low frequencies (mHz). However, we find that, in order to properly describe the forced motion of the pendulums, also other modes must be considered, namely swinging and bouncing oscillations of the two suspended masses, that resonate at higher frequencies (Hz). Although the system has obviously 6+6 degrees of freedom, we find that 8 are sufficient for an accurate description of the observed motion. This model produces reliable estimates of the response to generic external disturbances and actuating forces or torques. In particular, we compute the effect of seismic floor motion (“tilt” noise) on the low frequency part of the signal spectra and show that it properly accounts for most of the measured low frequency noise.
Geometric multi-resolution analysis and data-driven convolutions
NASA Astrophysics Data System (ADS)
Strawn, Nate
2015-09-01
We introduce a procedure for learning discrete convolutional operators for generic datasets which recovers the standard block convolutional operators when applied to sets of natural images. They key observation is that the standard block convolutional operators on images are intuitive because humans naturally understand the grid structure of the self-evident functions over images spaces (pixels). This procedure first constructs a Geometric Multi-Resolution Analysis (GMRA) on the set of variables giving rise to a dataset, and then leverages the details of this data structure to identify subsets of variables upon which convolutional operators are supported, as well as a space of functions that can be shared coherently amongst these supports.
Shell model nuclear matrix elements for competing mechanisms contributing to double beta decay
Horoi, Mihai
2013-12-30
Recent progress in the shell model approach to the nuclear matrix elements for the double beta decay process are presented. This includes nuclear matrix elements for competing mechanisms to neutrionless double beta decay, a comparison between closure and non-closure approximation for {sup 48}Ca, and an updated shell model analysis of nuclear matrix elements for the double beta decay of {sup 136}Xe.
Convolution Inequalities for the Boltzmann Collision Operator
NASA Astrophysics Data System (ADS)
Alonso, Ricardo J.; Carneiro, Emanuel; Gamba, Irene M.
2010-09-01
We study integrability properties of a general version of the Boltzmann collision operator for hard and soft potentials in n-dimensions. A reformulation of the collisional integrals allows us to write the weak form of the collision operator as a weighted convolution, where the weight is given by an operator invariant under rotations. Using a symmetrization technique in L p we prove a Young’s inequality for hard potentials, which is sharp for Maxwell molecules in the L 2 case. Further, we find a new Hardy-Littlewood-Sobolev type of inequality for Boltzmann collision integrals with soft potentials. The same method extends to radially symmetric, non-increasing potentials that lie in some {Ls_{weak}} or L s . The method we use resembles a Brascamp, Lieb and Luttinger approach for multilinear weighted convolution inequalities and follows a weak formulation setting. Consequently, it is closely connected to the classical analysis of Young and Hardy-Littlewood-Sobolev inequalities. In all cases, the inequality constants are explicitly given by formulas depending on integrability conditions of the angular cross section (in the spirit of Grad cut-off). As an additional application of the technique we also obtain estimates with exponential weights for hard potentials in both conservative and dissipative interactions.
Convolutional fountain distribution over fading wireless channels
NASA Astrophysics Data System (ADS)
Usman, Mohammed
2012-08-01
Mobile broadband has opened the possibility of a rich variety of services to end users. Broadcast/multicast of multimedia data is one such service which can be used to deliver multimedia to multiple users economically. However, the radio channel poses serious challenges due to its time-varying properties, resulting in each user experiencing different channel characteristics, independent of other users. Conventional methods of achieving reliability in communication, such as automatic repeat request and forward error correction do not scale well in a broadcast/multicast scenario over radio channels. Fountain codes, being rateless and information additive, overcome these problems. Although the design of fountain codes makes it possible to generate an infinite sequence of encoded symbols, the erroneous nature of radio channels mandates the need for protecting the fountain-encoded symbols, so that the transmission is feasible. In this article, the performance of fountain codes in combination with convolutional codes, when used over radio channels, is presented. An investigation of various parameters, such as goodput, delay and buffer size requirements, pertaining to the performance of fountain codes in a multimedia broadcast/multicast environment is presented. Finally, a strategy for the use of 'convolutional fountain' over radio channels is also presented.
Convolution formulations for non-negative intensity.
Williams, Earl G
2013-08-01
Previously unknown spatial convolution formulas for a variant of the active normal intensity in planar coordinates have been derived that use measured pressure or normal velocity near-field holograms to construct a positive-only (outward) intensity distribution in the plane, quantifying the areas of the vibrating structure that produce radiation to the far-field. This is an extension of the outgoing-only (unipolar) intensity technique recently developed for arbitrary geometries by Steffen Marburg. The method is applied independently to pressure and velocity data measured in a plane close to the surface of a point-driven, unbaffled rectangular plate in the laboratory. It is demonstrated that the sound producing regions of the structure are clearly revealed using the derived formulas and that the spatial resolution is limited to a half-wavelength. A second set of formulas called the hybrid-intensity formulas are also derived which yield a bipolar intensity using a different spatial convolution operator, again using either the measured pressure or velocity. It is demonstrated from the experiment results that the velocity formula yields the classical active intensity and the pressure formula an interesting hybrid intensity that may be useful for source localization. Computations are fast and carried out in real space without Fourier transforms into wavenumber space. PMID:23927105
Applying the Post-Modern Double ABC-X Model to Family Food Insecurity
ERIC Educational Resources Information Center
Hutson, Samantha; Anderson, Melinda; Swafford, Melinda
2015-01-01
This paper develops the argument that using the Double ABC-X model in family and consumer sciences (FCS) curricula is a way to educate nutrition and dietetics students regarding a family's perceptions of food insecurity. The Double ABC-X model incorporates ecological theory as a basis to explain family stress and the resulting adjustment and…
Perez-Luna, J.; Hagelaar, G. J. M.; Garrigues, L.; Boeuf, J. P.
2007-11-15
A hybrid fluid-particle model has been used to study the properties of a double-stage Hall effect thruster where the channel is divided into two regions of large magnetic field separated by a low-field region containing an intermediate, electron-emitting electrode. These two features are aimed at effectively separating the ionization region from the acceleration region in order to extend the thruster operating range. Simulation results are compared with experimental results obtained elsewhere. The simulations reproduce some of the measurements when the anomalous transport coefficients are adequately chosen. However, they raise the question of a complete separation of the ionization and acceleration regions and the necessity of an electron-emissive intermediate electrode. The calculation method for the electric potential in the hybrid model has been improved with respect to our previous work and is capable of a complete two-dimensional description of the magnetic configurations of double-stage Hall effect thrusters.
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
Some partial-unit-memory convolutional codes
NASA Technical Reports Server (NTRS)
Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.
1991-01-01
The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes are compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.
Image statistics decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Pitt, G. H., III; Swanson, L.; Yuen, J. H.
1987-01-01
It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.
Bacterial colony counting by Convolutional Neural Networks.
Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto
2015-08-01
Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, nevertheless fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging, due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates, that scored an overall accuracy of the 92.8% on a large challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, becoming a promising approach to many related applications. PMID:26738016
Improved double-multiple streamtube model for the Darrieus-type vertical-axis wind turbine
Berg, D.E.
1983-01-01
Double streamtube codes model the curved blade (Darrieus-type) vertical-axis wind turbine (VAWT) as a double actuator-disk arrangement (one disk for the upwind half of the rotor and a second disk for the downwind half) and use conservation of momentum principles to determine the forces acting on the turbine blades and the turbine performance. These models differentiate between the upwind and downwind sections of the rotor and are capable of determining blade loading more accurately than the widely-used single-actuator-disk streamtube models. Additional accuracy may be obtained by representing the turbine as a collection of several streamtubes, each of which is modeled as a double actuator disk. This is referred to as the double-multiple-streamtube model. Sandia National Laboratories has developed a double-multiple streamtube model for the VAWT which incorporates the effects of the incident wind boundary layer, nonuniform velocity between the upwind and downwind sections of the rotor, dynamic stall effects and local blade Reynolds number variations. This paper presents the theory underlying this VAWT model and describes the code capabilities. Code results are compared with experimental data from two VAWT's and with the results from another double-multiple-streamtube and a vortex-filament code. The effects of neglecting dynamic stall and horizontal wind-velocity distribution are also illustrated.
Blind source separation of convolutive mixtures
NASA Astrophysics Data System (ADS)
Makino, Shoji
2006-04-01
This paper introduces the blind source separation (BSS) of convolutive mixtures of acoustic signals, especially speech. A statistical and computational technique, called independent component analysis (ICA), is examined. By achieving nonlinear decorrelation, nonstationary decorrelation, or time-delayed decorrelation, we can find source signals only from observed mixed signals. Particular attention is paid to the physical interpretation of BSS from the acoustical signal processing point of view. Frequency-domain BSS is shown to be equivalent to two sets of frequency domain adaptive microphone arrays, i.e., adaptive beamformers (ABFs). Although BSS can reduce reverberant sounds to some extent in the same way as ABF, it mainly removes the sounds from the jammer direction. This is why BSS has difficulties with long reverberation in the real world. If sources are not "independent," the dependence results in bias noise when obtaining the correct separation filter coefficients. Therefore, the performance of BSS is limited by that of ABF. Although BSS is upper bounded by ABF, BSS has a strong advantage over ABF. BSS can be regarded as an intelligent version of ABF in the sense that it can adapt without any information on the array manifold or the target direction, and sources can be simultaneously active in BSS.
Accelerated unsteady flow line integral convolution.
Liu, Zhanping; Moorhead, Robert J
2005-01-01
Unsteady flow line integral convolution (UFLIC) is a texture synthesis technique for visualizing unsteady flows with high temporal-spatial coherence. Unfortunately, UFLIC requires considerable time to generate each frame due to the huge amount of pathline integration that is computed for particle value scattering. This paper presents Accelerated UFLIC (AUFLIC) for near interactive (1 frame/second) visualization with 160,000 particles per frame. AUFLIC reuses pathlines in the value scattering process to reduce computationally expensive pathline integration. A flow-driven seeding strategy is employed to distribute seeds such that only a few of them need pathline integration while most seeds are placed along the pathlines advected at earlier times by other seeds upstream and, therefore, the known pathlines can be reused for fast value scattering. To maintain a dense scattering coverage to convey high temporal-spatial coherence while keeping the expense of pathline integration low, a dynamic seeding controller is designed to decide whether to advect, copy, or reuse a pathline. At a negligible memory cost, AUFLIC is 9 times faster than UFLIC with comparable image quality. PMID:15747635
Metaheuristic Algorithms for Convolution Neural Network.
Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni
2016-01-01
A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738
Metaheuristic Algorithms for Convolution Neural Network
Fanany, Mohamad Ivan; Arymurthy, Aniati Murni
2016-01-01
A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738
A 3D Model of Double-Helical DNA Showing Variable Chemical Details
ERIC Educational Resources Information Center
Cady, Susan G.
2005-01-01
Since the first DNA model was created approximately 50 years ago using molecular models, students and teachers have been building simplified DNA models from various practical materials. A 3D double-helical DNA model, made by placing beads on a wire and stringing beads through holes in plastic canvas, is described. Suggestions are given to enhance…
NASA Technical Reports Server (NTRS)
Asbury, Scott C.; Hunter, Craig A.
1999-01-01
An investigation was conducted in the model preparation area of the Langley 16-Foot Transonic Tunnel to determine the effects of convoluted divergent-flap contouring on the internal performance of a fixed-geometry, nonaxisymmetric, convergent-divergent exhaust nozzle. Testing was conducted at static conditions using a sub-scale nozzle model with one baseline and four convoluted configurations. All tests were conducted with no external flow at nozzle pressure ratios from 1.25 to approximately 9.50. Results indicate that baseline nozzle performance was dominated by unstable, shock-induced, boundary-layer separation at overexpanded conditions. Convoluted configurations were found to significantly reduce, and in some cases totally alleviate separation at overexpanded conditions. This result was attributed to the ability of convoluted contouring to energize and improve the condition of the nozzle boundary layer. Separation alleviation offers potential for installed nozzle aeropropulsive (thrust-minus-drag) performance benefits by reducing drag at forward flight speeds, even though this may reduce nozzle thrust ratio as much as 6.4% at off-design conditions. At on-design conditions, nozzle thrust ratio for the convoluted configurations ranged from 1% to 2.9% below the baseline configuration; this was a result of increased skin friction and oblique shock losses inside the nozzle.
Computational modeling of electrophotonics nanomaterials: Tunneling in double quantum dots
Vlahovic, Branislav Filikhin, Igor
2014-10-06
Single electron localization and tunneling in double quantum dots (DQD) and rings (DQR) and in particular the localized-delocalized states and their spectral distributions are considered in dependence on the geometry of the DQDs (DQRs). The effect of violation of symmetry of DQDs geometry on the tunneling is studied in details. The cases of regular and chaotic geometries are considered. It will be shown that a small violation of symmetry drastically affects localization of electron and that anti-crossing of the levels is the mechanism of tunneling between the localized and delocalized states in DQRs.
Noise-enhanced convolutional neural networks.
Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart
2016-06-01
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. PMID:26700535
Double-expansion impurity solver for multiorbital models with dynamically screened U and J
NASA Astrophysics Data System (ADS)
Steiner, Karim; Nomura, Yusuke; Werner, Philipp
2015-09-01
We present a continuous-time Monte Carlo impurity solver for multiorbital impurity models which combines a strong-coupling hybridization expansion and a weak-coupling expansion in the Hund's coupling parameter J . This double-expansion approach allows to treat the dominant density-density interactions U within the efficient segment representation. We test the approach for a two-orbital model with static interactions, and then explain how the double expansion allows to simulate models with frequency dependent U (ω ) and J (ω ) . The method is used to investigate spin-state transitions in a toy model for fullerides, with repulsive bare J but attractive screened J .
Strong coupling theory for electron-mediated interactions in double-exchange models
NASA Astrophysics Data System (ADS)
Ishizuka, Hiroaki; Motome, Yukitoshi
2015-07-01
We present a theoretical framework for evaluating effective interactions between localized spins mediated by itinerant electrons in double-exchange models. Performing the expansion with respect to the spin-dependent part of the electron hopping terms, we show a systematic way of constructing the effective spin model in the large Hund's coupling limit. As a benchmark, we examine the accuracy of this method by comparing the results with the numerical solutions for the spin-ice type model on a pyrochlore lattice. We also discuss an extension of the method to the double-exchange models with Heisenberg and X Y localized spins.
Coupled cluster Green function: Model involving single and double excitations
NASA Astrophysics Data System (ADS)
Bhaskaran-Nair, Kiran; Kowalski, Karol; Shelton, William A.
2016-04-01
In this paper, we report on the development of a parallel implementation of the coupled-cluster (CC) Green function formulation (GFCC) employing single and double excitations in the cluster operator (GFCCSD). A key aspect of this work is the determination of the frequency dependent self-energy, Σ(ω). The detailed description of the underlying algorithm is provided, including approximations used that preserve the pole structure of the full GFCCSD method, thereby reducing the computational costs while maintaining an accurate character of methodology. Furthermore, for systems with strong local correlation, our formulation reveals a diagonally dominate block structure where as the non-local correlation increases, the block size increases proportionally. To demonstrate the accuracy of our approach, several examples including calculations of ionization potentials for benchmark systems are presented and compared against experiment.
Modeling and simulation of a double auction artificial financial market
NASA Astrophysics Data System (ADS)
Raberto, Marco; Cincotti, Silvano
2005-09-01
We present a double-auction artificial financial market populated by heterogeneous agents who trade one risky asset in exchange for cash. Agents issue random orders subject to budget constraints. The limit prices of orders may depend on past market volatility. Limit orders are stored in the book whereas market orders give immediate birth to transactions. We show that fat tails and volatility clustering are recovered by means of very simple assumptions. We also investigate two important stylized facts of the limit order book, i.e., the distribution of waiting times between two consecutive transactions and the instantaneous price impact function. We show both theoretically and through simulations that if the order waiting times are exponentially distributed, even trading waiting times are also exponentially distributed.
Coupled cluster Green function: Model involving single and double excitations.
Bhaskaran-Nair, Kiran; Kowalski, Karol; Shelton, William A
2016-04-14
In this paper, we report on the development of a parallel implementation of the coupled-cluster (CC) Green function formulation (GFCC) employing single and double excitations in the cluster operator (GFCCSD). A key aspect of this work is the determination of the frequency dependent self-energy, Σ(ω). The detailed description of the underlying algorithm is provided, including approximations used that preserve the pole structure of the full GFCCSD method, thereby reducing the computational costs while maintaining an accurate character of methodology. Furthermore, for systems with strong local correlation, our formulation reveals a diagonally dominate block structure where as the non-local correlation increases, the block size increases proportionally. To demonstrate the accuracy of our approach, several examples including calculations of ionization potentials for benchmark systems are presented and compared against experiment. PMID:27083702
Lifetime of double occupancies in the Fermi-Hubbard model
Sensarma, Rajdeep; Pekker, David; Demler, Eugene; Altman, Ehud; Strohmaier, Niels; Moritz, Henning; Greif, Daniel; Joerdens, Robert; Tarruell, Leticia; Esslinger, Tilman
2010-12-01
We investigate the decay of artificially created double occupancies in a repulsive Fermi-Hubbard system in the strongly interacting limit using diagrammatic many-body theory and experiments with ultracold fermions in optical lattices. The lifetime of the doublons is found to scale exponentially with the ratio of the on-site repulsion to the bandwidth. We show that the dominant decay process in presence of background holes is the excitation of a large number of particle-hole pairs to absorb the energy of the doublon. We also show that the strongly interacting nature of the background state is crucial in obtaining the correct estimate of the doublon lifetime in these systems. The theoretical estimates and the experimental data are in agreement.
Chu, Yizhuo; Wang, Dongxing; Zhu, Wenqi; Crozier, Kenneth B
2011-08-01
The strong coupling between localized surface plasmons and surface plasmon polaritons in a double resonance surface enhanced Raman scattering (SERS) substrate is described by a classical coupled oscillator model. The effects of the particle density, the particle size and the SiO2 spacer thickness on the coupling strength are experimentally investigated. We demonstrate that by tuning the geometrical parameters of the double resonance substrate, we can readily control the resonance frequencies and tailor the SERS enhancement spectrum. PMID:21934853
Semileptonic decays of double heavy baryons in a relativistic constituent three-quark model
Faessler, Amand; Gutsche, Thomas; Lyubovitskij, Valery E.; Ivanov, Mikhail A.; Koerner, Juergen G.
2009-08-01
We study the semileptonic decays of double-heavy baryons using a manifestly Lorentz covariant constituent three-quark model. We present complete results on transition form factors between double-heavy baryons for finite values of the heavy quark/baryon masses and in the heavy quark symmetry limit, which is valid at and close to zero recoil. Decay rates are calculated and compared to each other in the full theory, keeping masses finite, and also in the heavy quark limit.
Gao Yajun
2008-08-15
A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the inverse scattering method applied fine and effective. As an application, a concrete family of soliton double solutions for the considered theory is obtained.
Convolution-based estimation of organ dose in tube current modulated CT
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan
2016-05-01
Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460–7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18–70 years, weight range: 60–180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients ({{h}\\text{Organ}} ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate {{≤ft(\\text{CTD}{{\\text{I}}\\text{vol}}\\right)}\\text{organ, \\text{convolution}}} values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying {{≤ft(\\text{CTD}{{\\text{I}}\\text{vol}}\\right)}\\text{organ, \\text{convolution}}} with the organ dose coefficients ({{h}\\text{Organ}} ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled
A double species model for study of relaxation of impure Ni 3Al grain boundaries
NASA Astrophysics Data System (ADS)
Zheng, Li-Ping; Ma, Yu-Gang; Han, Jia-Guang; Li, D. X.; Zhang, Xiu-Rong
2004-04-01
Based on the Monte Carlo simulation conjoined with the embedded atom method (EAM) potentials, the double species model is established to study relaxation of impure Ni 3Al grain boundaries. The present double species model suggests that the impure atoms are not only segregating species but also inducing species. The present model also suggests that study of combination of the positive (the impure atoms induce Ni atoms to substitute into Al sites) and the negative (the impure atoms substitute into Ni sites) bulk effects of impure atoms is available, in order to understand dependence of the cohesion of the impure Ni 3Al grain boundary (or the Ni enrichment at the impure Ni 3Al grain boundary) on the bulk concentration of impure atoms. The double species model is clarified in comparison between the Ni 3AlB and the Ni 3AlMg systems.
Quantum model for double ionization of atoms in strong laser fields
NASA Astrophysics Data System (ADS)
Prauzner-Bechcicki, Jakub S.; Sacha, Krzysztof; Eckhardt, Bruno; Zakrzewski, Jakub
2008-07-01
We discuss double ionization of atoms in strong laser pulses using a reduced dimensionality model. Following the insight obtained from an analysis of the classical mechanics of the process, we confine each electron to move along the lines that point towards the two-particle Stark saddle in the presence of a field. The resulting effective two-dimensional model is similar to the aligned electron model, but it enables correlated escape of electrons with equal momenta, as observed experimentally. The time-dependent solution of the Schrödinger equation allows us to discuss in detail the time dynamics of the ionization process, the formation of electronic wave packets, and the development of the momentum distribution of the outgoing electrons. In particular, we are able to identify the rescattering process, simultaneous direct double ionization during the same field cycle, as well as other double ionization processes. We also use the model to study the phase dependence of the ionization process.
Boundary conditions and the generalized metric formulation of the double sigma model
NASA Astrophysics Data System (ADS)
Ma, Chen-Te
2015-09-01
Double sigma model with strong constraints is equivalent to the ordinary sigma model by imposing a self-duality relation. The gauge symmetries are the diffeomorphism and one-form gauge transformation with the strong constraints. We consider boundary conditions in the double sigma model from three ways. The first way is to modify the Dirichlet and Neumann boundary conditions with a fully O (D, D) description from double gauge fields. We perform the one-loop β function for the constant background fields to find low-energy effective theory without using the strong constraints. The low-energy theory can also have O (D, D) invariance as the double sigma model. The second way is to construct different boundary conditions from the projectors. The third way is to combine the antisymmetric background field with field strength to redefine an O (D, D) generalized metric. We use this generalized metric to reconstruct a consistent double sigma model with the classical and quantum equivalence.
Simulations of the flow past a cylinder using an unsteady double wake model
NASA Astrophysics Data System (ADS)
Ramos-García, N.; Sarlak, H.; Andersen, S. J.; Sørensen, J. N.
2016-06-01
In the present work, the in-house UnSteady Double Wake Model (USDWM) is used to simulate flows past a cylinder at subcritical, supercritical, and transcritical Reynolds numbers. The flow model is a two-dimensional panel method which uses the unsteady double wake technique to model flow separation and its dynamics. In the present work the separation location is obtained from experimental data and fixed in time. The highly unsteady flow field behind the cylinder is analyzed in detail, comparing the vortex shedding charactericts under the different flow conditions.
Dynamic modelling of a double-pendulum gantry crane system incorporating payload
Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.
2011-06-20
The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.
Dynamic Modelling of a Double-Pendulum Gantry Crane System Incorporating Payload
NASA Astrophysics Data System (ADS)
Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.
2011-06-01
The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.
Output-sensitive 3D line integral convolution.
Falk, Martin; Weiskopf, Daniel
2008-01-01
We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance
Flexible algorithm for real-time convolution supporting dynamic event-related fMRI
NASA Astrophysics Data System (ADS)
Eaton, Brent L.; Frank, Randall J.; Bolinger, Lizann; Grabowski, Thomas J.
2002-04-01
An efficient algorithm for generation of the task reference function has been developed that allows real-time statistical analysis of fMRI data, within the framework of the general linear model, for experiments with event-related stimulus designs. By leveraging time-stamped data collection in the Input/Output time-aWare Architecture (I/OWA), we detect the onset time of a stimulus as it is delivered to a subject. A dynamically updated list of detected stimulus event times is maintained in shared memory as a data stream and delivered as input to a real-time convolution algorithm. As each image is acquired from the MR scanner, the time-stamp of its acquisition is delivered via a second dynamically updated stream to the convolution algorithm, where a running convolution of the events with an estimated hemodynamic response function is computed at the image acquisition time and written to a third stream in memory. Output is interpreted as the activation reference function and treated as the covariate of interest in the I/OWA implementation of the general linear model. Statistical parametric maps are computed and displayed to the I/OWA user interface in less than the time between successive image acquisitions.
Using hybrid GPU/CPU kernel splitting to accelerate spherical convolutions
NASA Astrophysics Data System (ADS)
Sutter, P. M.; Wandelt, B. D.; Elsner, F.
2015-06-01
We present a general method for accelerating by more than an order of magnitude the convolution of pixelated functions on the sphere with a radially-symmetric kernel. Our method splits the kernel into a compact real-space component and a compact spherical harmonic space component. These components can then be convolved in parallel using an inexpensive commodity GPU and a CPU. We provide models for the computational cost of both real-space and Fourier space convolutions and an estimate for the approximation error. Using these models we can determine the optimum split that minimizes the wall clock time for the convolution while satisfying the desired error bounds. We apply this technique to the problem of simulating a cosmic microwave background (CMB) anisotropy sky map at the resolution typical of the high resolution maps produced by the Planck mission. For the main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an acceptable fractional rms error of order 10-5 in the power spectrum of the output map.
Double time lag combustion instability model for bipropellant rocket engines
NASA Technical Reports Server (NTRS)
Liu, C. K.
1973-01-01
A bipropellant stability model is presented in which feed system inertance and capacitance are treated along with injection pressure drop and distinctly different propellant time lags. The model is essentially an extension of Crocco's and Cheng's monopropellant model to the bipropellant case assuming that the feed system inertance and capacitance along with the resistance are located at the injector. The neutral stability boundaries are computed in terms of these parameters to demonstrate the interaction among them.
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-01
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-01
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681
Innervation of the renal proximal convoluted tubule of the rat
Barajas, L.; Powers, K. )
1989-12-01
Experimental data suggest the proximal tubule as a major site of neurogenic influence on tubular function. The functional and anatomical axial heterogeneity of the proximal tubule prompted this study of the distribution of innervation sites along the early, mid, and late proximal convoluted tubule (PCT) of the rat. Serial section autoradiograms, with tritiated norepinephrine serving as a marker for monoaminergic nerves, were used in this study. Freehand clay models and graphic reconstructions of proximal tubules permitted a rough estimation of the location of the innervation sites along the PCT. In the subcapsular nephrons, the early PCT (first third) was devoid of innervation sites with most of the innervation occurring in the mid (middle third) and in the late (last third) PCT. Innervation sites were found in the early PCT in nephrons located deeper in the cortex. In juxtamedullary nephrons, innervation sites could be observed on the PCT as it left the glomerulus. This gradient of PCT innervation can be explained by the different tubulovascular relationships of nephrons at different levels of the cortex. The absence of innervation sites in the early PCT of subcapsular nephrons suggests that any influence of the renal nerves on the early PCT might be due to an effect of neurotransmitter released from renal nerves reaching the early PCT via the interstitium and/or capillaries.
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields
NASA Astrophysics Data System (ADS)
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-01
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.
A quantum algorithm for Viterbi decoding of classical convolutional codes
NASA Astrophysics Data System (ADS)
Grice, Jon R.; Meyer, David A.
2015-07-01
We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, large constraint length and short decode frames . Other applications of the classical Viterbi algorithm where is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model) which is in general much less than . The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras
NASA Astrophysics Data System (ADS)
Angel, Eitan
2010-09-01
In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.
A test of the double-shearing model of flow for granular materials
Savage, J.C.; Lockner, D.A.
1997-01-01
The double-shearing model of flow attributes plastic deformation in granular materials to cooperative slip on conjugate Coulomb shears (surfaces upon which the Coulomb yield condition is satisfied). The strict formulation of the double-shearing model then requires that the slip lines in the material coincide with the Coulomb shears. Three different experiments that approximate simple shear deformation in granular media appear to be inconsistent with this strict formulation. For example, the orientation of the principal stress axes in a layer of sand driven in steady, simple shear was measured subject to the assumption that the Coulomb failure criterion was satisfied on some surfaces (orientation unspecified) within the sand layer. The orientation of the inferred principal compressive axis was then compared with the orientations predicted by the double-shearing model. The strict formulation of the model [Spencer, 1982] predicts that the principal stress axes should rotate in a sense opposite to that inferred from the experiments. A less restrictive formulation of the double-shearing model by de Josselin de Jong [1971] does not completely specify the solution but does prescribe limits on the possible orientations of the principal stress axes. The orientations of the principal compression axis inferred from the experiments are probably within those limits. An elastoplastic formulation of the double-shearing model [de Josselin de Jong, 1988] is reasonably consistent with the experiments, although quantitative agreement was not attained. Thus we conclude that the double-shearing model may be a viable law to describe deformation of granular materials, but the macroscopic slip surfaces will not in general coincide with the Coulomb shears.
NASA Astrophysics Data System (ADS)
Kaneko, Tomoyuki; Nomura, Fumimasa; Yasuda, Kenji
2011-07-01
A model for the quasi-in vivo heart electrocardiogram (ECG) measurement of the ST period has been developed. As the part of ECG data at the ST period is the convolution of the extracellular field potentials (FPs) of cardiomyocytes in a ventricle, we have fabricated a lined-up cardiomyocyte cell-network on a lined-up microelectrode array and a circular microelectrode in an agarose microchamber, and measured the convoluted FPs. When the ventricular tachyarrhythmias of beating occurred in the cardiomyocyte network, the convoluted FP profile showed similar arrhythmia ECG-like profiles, indicating the convoluted FPs of the in vitro cell network include both the depolarization data and the propagation manner of beating in the heart.
The long and the short of it: modelling double neutron star and collapsar Galactic dynamics
NASA Astrophysics Data System (ADS)
Kiel, Paul D.; Hurley, Jarrod R.; Bailes, Matthew
2010-07-01
Understanding the nature of galactic populations of double compact binaries (where both stars are a neutron star or black hole) has been a topic of interest for many years, particularly the coalescence rate of these binaries. The only observed systems thus far are double neutron star systems containing one or more radio pulsars. However, theorists have postulated that short-duration gamma-ray bursts may be evidence of coalescing double neutron star or neutron star-black hole binaries, while long-duration gamma-ray bursts are possibly formed by tidally enhanced rapidly rotating massive stars that collapse to form black holes (collapsars). The work presented here examines populations of double compact binary systems and tidally enhanced collapsars. We make use of BINPOP and BINKIN, two components of a recently developed population synthesis package. Results focus on correlations of both binary and spatial evolutionary population characteristics. Pulsar and long-duration gamma-ray burst observations are used in concert with our models to draw the conclusions that (i) double neutron star binaries can merge rapidly on time-scales of a few million years (much less than that found for the observed double neutron star population), (ii) common-envelope evolution within these models is a very important phase in double neutron star formation and (iii) observations of long gamma-ray burst projected distances are more centrally concentrated than our simulated coalescing double neutron star and collapsar Galactic populations. Better agreement is found with dwarf galaxy models although the outcome is strongly linked to the assumed birth radial distribution. The birth rate of the double neutron star population in our models ranges from 4 to 160 Myr-1 and the merger rate ranges from 3 to 150 Myr-1. The upper and lower limits of the rates result from including electron-capture supernova kicks to neutron stars and decreasing the common-envelope efficiency, respectively. Our double
Double and single pion photoproduction within a dynamical coupled-channels model
Hiroyuki Kamano; Julia-Diaz, Bruno; Lee, T. -S. H.; Matsuyama, Akihiko; Sato, Toru
2009-12-16
Within a dynamical coupled-channels model which has already been fixed from analyzing the data of the πN → πN and γN → πN reactions, we present the predicted double pion photoproduction cross sections up to the second resonance region, W < 1.7 GeV. The roles played by the different mechanisms within our model in determining both the single and double pion photoproduction reactions are analyzed, focusing on the effects due to the direct γN → ππN mechanism, the interplay between the resonant and non-resonant amplitudes, and the coupled-channels effects. As a result, the model parameters which can be determined mostmore » effectively in the combined studies of both the single and double pion photoproduction data are identified for future studies.« less
Haag duality for Kitaev’s quantum double model for abelian groups
NASA Astrophysics Data System (ADS)
Fiedler, Leander; Naaijkens, Pieter
2015-11-01
We prove Haag duality for cone-like regions in the ground state representation corresponding to the translational invariant ground state of Kitaev’s quantum double model for finite abelian groups. This property says that if an observable commutes with all observables localized outside the cone region, it actually is an element of the von Neumann algebra generated by the local observables inside the cone. This strengthens locality, which says that observables localized in disjoint regions commute. As an application, we consider the superselection structure of the quantum double model for abelian groups on an infinite lattice in the spirit of the Doplicher-Haag-Roberts program in algebraic quantum field theory. We find that, as is the case for the toric code model on an infinite lattice, the superselection structure is given by the category of irreducible representations of the quantum double.
Numerical analysis of the double scaling limit in the string type IIB matrix model.
Horata, S; Egawa, H S
2001-05-14
The bosonic IIB matrix model is studied using a numerical method. This model contains the bosonic part of the IIB matrix model conjectured to be a nonperturbative definition of the type IIB superstring theory. The large N scaling behavior of the model is shown performing a Monte Carlo simulation. The expectation value of the Wilson loop operator is measured and the string tension is estimated. The numerical results show the prescription of the double scaling limit. PMID:11384258
Convolutional neural network approach for buried target recognition in FL-LWIR imagery
NASA Astrophysics Data System (ADS)
Stone, K.; Keller, J. M.
2014-05-01
A convolutional neural network (CNN) approach to recognition of buried explosive hazards in forward-looking long-wave infrared (FL-LWIR) imagery is presented. The convolutional filters in the first layer of the network are learned in the frequency domain, making enforcement of zero-phase and zero-dc response characteristics much easier. The spatial domain representations of the filters are forced to have unit l2 norm, and penalty terms are added to the online gradient descent update to encourage orthonormality among the convolutional filters, as well smooth first and second order derivatives in the spatial domain. The impact of these modifications on the generalization performance of the CNN model is investigated. The CNN approach is compared to a second recognition algorithm utilizing shearlet and log-gabor decomposition of the image coupled with cell-structured feature extraction and support vector machine classification. Results are presented for multiple FL-LWIR data sets recently collected from US Army test sites. These data sets include vehicle position information allowing accurate transformation between image and world coordinates and realistic evaluation of detection and false alarm rates.
Two dimensional convolute integers for machine vision and image recognition
NASA Technical Reports Server (NTRS)
Edwards, Thomas R.
1988-01-01
Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression generated, integer valued, zero phase shifting, convoluting, frequency sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators show frequency sensitive feature selection scale invariant properties. Such tasks as boundary/edge enhancement and noise or small size pixel disturbance removal can readily be accomplished. For feature selection tight band pass operators are essential. Results from test cases are given.
Error-trellis syndrome decoding techniques for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1985-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decordig is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
A SPICE model of double-sided Si microstrip detectors
Candelori, A.; Paccagnella, A. |; Bonin, F.
1996-12-31
We have developed a SPICE model for the ohmic side of AC-coupled Si microstrip detectors with interstrip isolation via field plates. The interstrip isolation has been measured in various conditions by varying the field plate voltage. Simulations have been compared with experimental data in order to determine the values of the model parameters for different voltages applied to the field plates. The model is able to predict correctly the frequency dependence of the coupling between adjacent strips. Furthermore, we have used such model for the study of the signal propagation along the detector when a current signal is injected in a strip. Only electrical coupling is considered here, without any contribution due to charge sharing derived from carrier diffusion. For this purpose, the AC pads of the strips have been connected to a read-out electronics and the current signal has been injected into a DC pad. Good agreement between measurements and simulations has been reached for the central strip and the first neighbors. Experimental tests and computer simulations have been performed for four different strip and field plate layouts, in order to investigate how the detector geometry affects the parameters of the SPICE model and the signal propagation.
Testing the Double Corner Source Spectral Model for Long- and Short-Period Ground Motion Simulations
NASA Astrophysics Data System (ADS)
Miyake, H.; Koketsu, K.
2010-12-01
The omega-squared source model with a single corner frequency is widely used in the earthquake source analyses and ground motion simulations. Recent studies show that the Brune stress drop of subduction-zone earthquakes is almost half of that for crustal earthquakes for a given magnitude. On the other hand, the empirical attenuation relations and spectral analyses of seismic source and ground motions support the fact that subduction-zone earthquakes provide 1-2 times of the short-period source spectral level for crustal earthquakes. To link long- and short-period source characteristics is a crucial issue to perform broadband ground motion simulations. This discrepancy may lead the source modeling with double corner frequencies [e.g., Atkinson, 1993]. We modeled the lower corner frequency corresponding to the size of asperities generating for long-period (> 2-5 s) ground motions by the deterministic approach and the higher corner frequency corresponding to the size of strong motion generation area for short-period ground motions by the semi-empirical approach. We propose that the double corner source spectral model is expressed as a frequency-dependent source model consists of either the asperities in a long-period range or the strong motion generation area in a short-period range and the surrounding background area inside the total rupture area. The characterized source model has been the potential to reproduce fairly well the rupture directivity pulses seen in the observed ground motions. We explore the applicability of the double corner source spectral model to broadband ground motion simulations for the 1978 Mw 7.6 Miyagi-oki and 2003 Mw 8.3 Tokachi-oki earthquakes along the Japan Trench. For both cases, the double corner source spectral model, where the size and stress drop for strong motion generation areas are respectively half and double of those for asperities, worked well to reproduce ground motion time histories and seismic intensity distribution.
Robustly optimal rate one-half binary convolutional codes
NASA Technical Reports Server (NTRS)
Johannesson, R.
1975-01-01
Three optimality criteria for convolutional codes are considered in this correspondence: namely, free distance, minimum distance, and distance profile. Here we report the results of computer searches for rate one-half binary convolutional codes that are 'robustly optimal' in the sense of being optimal for one criterion and optimal or near-optimal for the other two criteria. Comparisons with previously known codes are made. The results of a computer simulation are reported to show the importance of the distance profile to computational performance with sequential decoding.
Spiral to ferromagnetic transition in a Kondo lattice model with a double-well potential
NASA Astrophysics Data System (ADS)
Caro, R. C.; Franco, R.; Silva-Valencia, J.
2016-02-01
Using the density matrix renormalization group method, we study a system of 171Yb atoms confined in a one-dimensional optical lattice. The atoms in the 1So state undergo a double-well potential, whereas the atoms in the 3P0 state are localized. This system is modelled by the Kondo lattice model plus a double-well potential for the free carries. We obtain phase diagrams composed of ferromagnetic and spiral phases, where the critical points always increase with the interwell tunneling parameter. We conclude that this quantum phase transition can be tuned by the double-well potential parameters as well as by the common parameters: local coupling and density.
NASA Astrophysics Data System (ADS)
Yan-hui, Xin; Sheng, Yuan; Ming-tang, Liu; Hong-xia, Liu; He-cai, Yuan
2016-03-01
The two-dimensional models for symmetrical double-material double-gate (DM-DG) strained Si (s-Si) metal-oxide semiconductor field effect transistors (MOSFETs) are presented. The surface potential and the surface electric field expressions have been obtained by solving Poisson’s equation. The models of threshold voltage and subthreshold current are obtained based on the surface potential expression. The surface potential and the surface electric field are compared with those of single-material double-gate (SM-DG) MOSFETs. The effects of different device parameters on the threshold voltage and the subthreshold current are demonstrated. The analytical models give deep insight into the device parameters design. The analytical results obtained from the proposed models show good matching with the simulation results using DESSIS. Project supported by the National Natural Science Foundation of China (Grant Nos. 61376099, 11235008, and 61205003).
Parity retransmission hybrid ARQ using rate 1/2 convolutional codes on a nonstationary channel
NASA Technical Reports Server (NTRS)
Lugand, Laurent R.; Costello, Daniel J., Jr.; Deng, Robert H.
1989-01-01
A parity retransmission hybrid automatic repeat request (ARQ) scheme is proposed which uses rate 1/2 convolutional codes and Viterbi decoding. A protocol is described which is capable of achieving higher throughputs than previously proposed parity retransmission schemes. The performance analysis is based on a two-state Markov model of a nonstationary channel. This model constitutes a first approximation to a nonstationary channel. The two-state channel model is used to analyze the throughput and undetected error probability of the protocol presented when the receiver has both an infinite and a finite buffer size. It is shown that the throughput improves as the channel becomes more bursty.
Period-doubling bifurcation and high-order resonances in RR Lyrae hydrodynamical models
NASA Astrophysics Data System (ADS)
Kolláth, Z.; Molnár, L.; Szabó, R.
2011-06-01
We investigated period doubling, a well-known phenomenon in dynamical systems, for the first time in RR Lyrae models. These studies provide theoretical background for the recent discovery of period doubling in some Blazhko RR Lyrae stars with the Kepler space telescope. Since period doubling has been observed only in Blazhko-modulated stars so far, the phenomenon can help in understanding the modulation as well. Utilizing the Florida-Budapest turbulent convective hydrodynamical code, we have identified the phenomenon in both radiative and convective models. A period-doubling cascade was also followed up to an eight-period solution, confirming that destabilization of the limit cycle is indeed the underlying phenomenon. Floquet stability roots were calculated to investigate the possible causes and occurrences of the phenomenon. A two-dimensional diagnostic diagram was constructed to illustrate the various resonances between the fundamental mode and the different overtones. Combining the two tools, we confirmed that the period-doubling instability is caused by a 9:2 resonance between the ninth overtone and the fundamental mode. Destabilization of the limit cycle by a resonance of a high-order mode is possible because the overtone is a strange mode. The resonance is found to be strong enough to shift the period of overtone by up to 10 per cent. Our investigations suggest that a more complex interplay of radial (and presumably non-radial) modes could happen in RR Lyrae stars that might have connections with the Blazhko effect as well.
Family Stress and Adaptation to Crises: A Double ABCX Model of Family Behavior.
ERIC Educational Resources Information Center
McCubbin, Hamilton I.; Patterson, Joan M.
Recent developments in family stress and coping research and a review of data and observations of families in a war-induced crisis situation led to an investigation of the relationship between a stressor and family outcomes. The study, based on the Double ABCX Model in which A (the stressor event) interacts with B (the family's crisis-meeting…
Creating a Double-Spring Model to Teach Chromosome Movement during Mitosis & Meiosis
ERIC Educational Resources Information Center
Luo, Peigao
2012-01-01
The comprehension of chromosome movement during mitosis and meiosis is essential for understanding genetic transmission, but students often find this process difficult to grasp in a classroom setting. I propose a "double-spring model" that incorporates a physical demonstration and can be used as a teaching tool to help students understand this…
Double Higgs production in the Two Higgs Doublet Model at the linear collider
Arhrib, Abdesslam; Benbrik, Rachid; Chiang, C.-W.
2008-04-21
We study double Higgs-strahlung production at the future Linear Collider in the framework of the Two Higgs Doublet Models through the following channels: e{sup +}e{sup -}{yields}{phi}{sub i}{phi}{sub j}Z, {phi}{sub i} = h deg., H deg., A deg. All these processes are sensitive to triple Higgs couplings. Hence observations of them provide information on the triple Higgs couplings that help reconstructing the scalar potential. We discuss also the double Higgs-strahlung e{sup +}e{sup -}{yields}h deg. h deg. Z in the decoupling limit where h deg. mimics the SM Higgs boson.
Die and telescoping punch form convolutions in thin diaphragm
NASA Technical Reports Server (NTRS)
1965-01-01
Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
SCAN-based hybrid and double-hybrid density functionals from models without fitted parameters.
Hui, Kerwin; Chai, Jeng-Da
2016-01-28
By incorporating the nonempirical strongly constrained and appropriately normed (SCAN) semilocal density functional [J. Sun, A. Ruzsinszky, and J. P. Perdew, Phys. Rev. Lett. 115, 036402 (2015)] in the underlying expression of four existing hybrid and double-hybrid models, we propose one hybrid (SCAN0) and three double-hybrid (SCAN0-DH, SCAN-QIDH, and SCAN0-2) density functionals, which are free from any fitted parameters. The SCAN-based double-hybrid functionals consistently outperform their parent SCAN semilocal functional for self-interaction problems and noncovalent interactions. In particular, SCAN0-2, which includes about 79% of Hartree-Fock exchange and 50% of second-order Møller-Plesset correlation, is shown to be reliably accurate for a very diverse range of applications, such as thermochemistry, kinetics, noncovalent interactions, and self-interaction problems. PMID:26827209
Modelling and control of double-cone dielectric elastomer actuator
NASA Astrophysics Data System (ADS)
Branz, F.; Francesconi, A.
2016-09-01
Among various dielectric elastomer devices, cone actuators are of large interest for their multi-degree-of-freedom design. These objects combine the common advantages of dielectric elastomers (i.e. solid-state actuation, self-sensing capability, high conversion efficiency, light weight and low cost) with the possibility to actuate more than one degree of freedom in a single device. The potential applications of this feature in robotics are huge, making cone actuators very attractive. This work focuses on rotational degrees of freedom to complete existing literature and improve the understanding of such aspect. Simple tools are presented for the performance prediction of the device: finite element method simulations and interpolating relations have been used to assess the actuator steady-state behaviour in terms of torque and rotation as a function of geometric parameters. Results are interpolated by fit relations accounting for all the relevant parameters. The obtained data are validated through comparison with experimental results: steady-state torque and rotation are determined at a given high voltage actuation. In addition, the transient response to step input has been measured and, as a result, the voltage-to-torque and the voltage-to-rotation transfer functions are obtained. Experimental data are collected and used to validate the prediction capability of the transfer function in terms of time response to step input and frequency response. The developed static and dynamic models have been employed to implement a feedback compensator that controls the device motion; the simulated behaviour is compared to experimental data, resulting in a maximum prediction error of 7.5%.
A diabatic state model for double proton transfer in hydrogen bonded complexes
McKenzie, Ross H.
2014-09-14
Four diabatic states are used to construct a simple model for double proton transfer in hydrogen bonded complexes. Key parameters in the model are the proton donor-acceptor separation R and the ratio, D{sub 1}/D{sub 2}, between the proton affinity of a donor with one and two protons. Depending on the values of these two parameters the model describes four qualitatively different ground state potential energy surfaces, having zero, one, two, or four saddle points. Only for the latter are there four stable tautomers. In the limit D{sub 2} = D{sub 1} the model reduces to two decoupled hydrogen bonds. As R decreases a transition can occur from a synchronous concerted to an asynchronous concerted to a sequential mechanism for double proton transfer.
A model of phase transitions in double-well Morse potential: Application to hydrogen bond
NASA Astrophysics Data System (ADS)
Goryainov, S. V.
2012-11-01
A model of phase transitions in double-well Morse potential is developed. Application of this model to the hydrogen bond is based on ab initio electron density calculations, which proved that the predominant contribution to the hydrogen bond energy originates from the interaction of proton with the electron shells of hydrogen-bonded atoms. This model uses a double-well Morse potential for proton. Analytical expressions for the hydrogen bond energy and the frequency of O-H stretching vibrations were obtained. Experimental data on the dependence of O-H vibration frequency on the bond length were successfully fitted with model-predicted dependences in classical and quantum mechanics approaches. Unlike empirical exponential function often used previously for dependence of O-H vibration frequency on the hydrogen bond length (Libowitzky, Mon. Chem., 1999, vol.130, 1047), the dependence reported here is theoretically substantiated.
Neutrinoless double beta decay in type I+II seesaw models
NASA Astrophysics Data System (ADS)
Borah, Debasish; Dasgupta, Arnab
2015-11-01
We study neutrinoless double beta decay in left-right symmetric extension of the standard model with type I and type II seesaw origin of neutrino masses. Due to the enhanced gauge symmetry as well as extended scalar sector, there are several new physics sources of neutrinoless double beta decay in this model. Ignoring the left-right gauge boson mixing and heavy-light neutrino mixing, we first compute the contributions to neutrinoless double beta decay for type I and type II dominant seesaw separately and compare with the standard light neutrino contributions. We then repeat the exercise by considering the presence of both type I and type II seesaw, having non-negligible contributions to light neutrino masses and show the difference in results from individual seesaw cases. Assuming the new gauge bosons and scalars to be around a TeV, we constrain different parameters of the model including both heavy and light neutrino masses from the requirement of keeping the new physics contribution to neutrinoless double beta decay amplitude below the upper limit set by the GERDA experiment and also satisfying bounds from lepton flavor violation, cosmology and colliders.
Double Higgs production at LHC, see-saw type-II and Georgi-Machacek model
Godunov, S. I. Vysotsky, M. I. Zhemchugov, E. V.
2015-03-15
The double Higgs production in the models with isospin-triplet scalars is studied. It is shown that in the see-saw type-II model, the mode with an intermediate heavy scalar, pp → H + X → 2h + X, may have the cross section that is comparable with that in the Standard Model. In the Georgi-Machacek model, this cross section could be much larger than in the Standard Model because the vacuum expectation value of the triplet can be large.
Text-Attentional Convolutional Neural Network for Scene Text Detection
NASA Astrophysics Data System (ADS)
He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian
2016-06-01
Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this work, we present a new system for scene text detection by proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/nontext information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates main task of text/non-text classification. In addition, a powerful low-level detector called Contrast- Enhancement Maximally Stable Extremal Regions (CE-MSERs) is developed, which extends the widely-used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 dataset, with a F-measure of 0.82, improving the state-of-the-art results substantially.
Text-Attentional Convolutional Neural Network for Scene Text Detection.
He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian
2016-06-01
Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results. PMID:27093723
Double-multiple streamtube model for studying vertical-axis wind turbines
Paraschivoiu, I.
1988-08-01
This work describes the present state-of-the-art in double-multiple streamtube method for modeling the Darrieus-type vertical-axis wind turbine (VAWT). Comparisons of the analytical results with the other predictions and available experimental data show a good agreement. This method, which incorporates dynamic-stall and secondary effects, can be used for generating a suitable aerodynamic-load model for structural design analysis of the Darrieus rotor. 32 references.
Double-multiple streamtube model for studying vertical-axis wind turbines
NASA Astrophysics Data System (ADS)
Paraschivoiu, Ion
1988-08-01
This work describes the present state-of-the-art in double-multiple streamtube method for modeling the Darrieus-type vertical-axis wind turbine (VAWT). Comparisons of the analytical results with the other predictions and available experimental data show a good agreement. This method, which incorporates dynamic-stall and secondary effects, can be used for generating a suitable aerodynamic-load model for structural design analysis of the Darrieus rotor.
Parallel double-plate capacitive proximity sensor modelling based on effective theory
Li, Nan Zhu, Haiye; Wang, Wenyu; Gong, Yu
2014-02-15
A semi-analytical model for a double-plate capacitive proximity sensor is presented according to the effective theory. Three physical models are established to derive the final equation of the sensor. Measured data are used to determine the coefficients. The final equation is verified by using measured data. The average relative error of the calculated and the measured sensor capacitance is less than 7.5%. The equation can be used to provide guidance to engineering design of the proximity sensors.
Explicit drain current model of junctionless double-gate field-effect transistors
NASA Astrophysics Data System (ADS)
Yesayan, Ashkhen; Prégaldiny, Fabien; Sallese, Jean-Michel
2013-11-01
This paper presents an explicit drain current model for the junctionless double-gate metal-oxide-semiconductor field-effect transistor. Analytical relationships for the channel charge densities and for the drain current are derived as explicit functions of applied terminal voltages and structural parameters. The model is validated with 2D numerical simulations for a large range of channel thicknesses and is found to be very accurate for doping densities exceeding 1018 cm-3, which are actually used for such devices.
Evaluation of convolutional neural networks for visual recognition.
Nebauer, C
1998-01-01
Convolutional neural networks provide an efficient method to constrain the complexity of feedforward neural networks by weight sharing and restriction to local connections. This network topology has been applied in particular to image classification when sophisticated preprocessing is to be avoided and raw images are to be classified directly. In this paper two variations of convolutional networks--neocognitron and a modification of neocognitron--are compared with classifiers based on fully connected feedforward layers (i.e., multilayer perceptron, nearest neighbor classifier, auto-encoding network) with respect to their visual recognition performance. Beside the original neocognitron a modification of the neocognitron is proposed which combines neurons from perceptron with the localized network structure of neocognitron. Instead of training convolutional networks by time-consuming error backpropagation, in this work a modular procedure is applied whereby layers are trained sequentially from the input to the output layer in order to recognize features of increasing complexity. For a quantitative experimental comparison with standard classifiers two very different recognition tasks have been chosen: handwritten digit recognition and face recognition. In the first example on handwritten digit recognition the generalization of convolutional networks is compared to fully connected networks. In several experiments the influence of variations of position, size, and orientation of digits is determined and the relation between training sample size and validation error is observed. In the second example recognition of human faces is investigated under constrained and variable conditions with respect to face orientation and illumination and the limitations of convolutional networks are discussed. PMID:18252491
A simulation study of the performance of the NASA (2,1,6) convolutional code on RFI/burst channels
NASA Technical Reports Server (NTRS)
Perez, Lance C.; Costello, Daniel J., Jr.
1993-01-01
In an earlier report, the LINKABIT Corporation studied the performance of the (2,1,6) convolutional code on the radio frequency interference (RFI)/burst channel using analytical methods. Using an R(sub 0) analysis, the report concluded that channel interleaving was essential to achieving reliable performance. In this report, Monte Carlo simulation techniques are used to study the performance of the convolutional code on the RFI/burst channel in more depth. The basic system model under consideration is shown. The convolutional code is the NASA standard code with generators g(exp 1) = 1+D(exp 2)+D(exp 3)+D(exp 5)+D(exp 6) and g(exp 2) = 1+D+D(exp 2)+D(exp 3)+D(exp 6) and d(sub free) = 10. The channel interleaver is of the convolutional or periodic type. The binary output of the channel interleaver is transmitted across the channel using binary phase shift keying (BPSK) modulation. The transmitted symbols are corrupted by an RFI/burst channel consisting of a combination of additive white Gaussian noise (AWGN) and RFI pulses. At the receiver, a soft-decision Viterbi decoder with no quantization and variable truncation length is used to decode the deinterleaved sequence.
Tonkin, J.W.; Balistrieri, L.S.; Murray, J.W.
2004-01-01
Manganese oxides are important scavengers of trace metals and other contaminants in the environment. The inclusion of Mn oxides in predictive models, however, has been difficult due to the lack of a comprehensive set of sorption reactions consistent with a given surface complexation model (SCM), and the discrepancies between published sorption data and predictions using the available models. The authors have compiled a set of surface complexation reactions for synthetic hydrous Mn oxide (HMO) using a two surface site model and the diffuse double layer SCM which complements databases developed for hydrous Fe (III) oxide, goethite and crystalline Al oxide. This compilation encompasses a range of data observed in the literature for the complex HMO surface and provides an error envelope for predictions not well defined by fitting parameters for single or limited data sets. Data describing surface characteristics and cation sorption were compiled from the literature for the synthetic HMO phases birnessite, vernadite and ??-MnO2. A specific surface area of 746 m2g-1 and a surface site density of 2.1 mmol g-1 were determined from crystallographic data and considered fixed parameters in the model. Potentiometric titration data sets were adjusted to a pH1EP value of 2.2. Two site types (???XOH and ???YOH) were used. The fraction of total sites attributed to ???XOH (??) and pKa2 were optimized for each of 7 published potentiometric titration data sets using the computer program FITEQL3.2. pKa2 values of 2.35??0.077 (???XOH) and 6.06??0.040 (???YOH) were determined at the 95% confidence level. The calculated average ?? value was 0.64, with high and low values ranging from 1.0 to 0.24, respectively. pKa2 and ?? values and published cation sorption data were used subsequently to determine equilibrium surface complexation constants for Ba2+, Ca2+, Cd 2+, Co2+, Cu2+, Mg2+, Mn 2+, Ni2+, Pb2+, Sr2+ and Zn 2+. In addition, average model parameters were used to predict additional
The Dynamics of a Double-Layer Along an Auroral Field Line: An Improved Model
NASA Astrophysics Data System (ADS)
Barakat, A. R.
2004-12-01
The auroral field lines represent an important channel through which the ionosphere and the magnetosphere exchange mass, momentum, and energy. When the cold, dense ionospheric plasma interacts with sufficiently warm magnetospheric plasma along the field lines (with upward currents), double layers form with large parallel potential drops. The potential drops accelerate ionospheric ions, which in turn cause ion-beam-driven instabilities. The resulting wave-particle interactions (WPI) further heat the plasma, and hence, influence the behavior of the double layer. Understanding the coupling between these microscale and macroscale processes is crucial in quantifying the ionosphere-magnetosphere (I-M) coupling. Previous theoretical studies addressed the different facets of the problem separately. We developed a particle-in-cell (PIC) model that simulate the behavior of the double layer along auroral field lines, with special emphasis on the effect of the current along filed lines. Moreover, our model includes the effects of ionospheric collision processes, gravity, magnetic mirror force, electrostatic fields, as well as wave instabilities, propagation, and wave-particle interactions. The resulting self-consistent electrodynamics of the plasma in an auroral flux tube with an upward current is presented with emphasis on the formation and evolution of the double layer. In particular, we address questions such as: (1) what is the I-V relationship along the auroral field line, and (2) how the potential drop is distributed along the filed lines. These, and other results, are presented.
The Dynamics of a Double-Layer Along an Auroral Field Line: A Unified Model
NASA Astrophysics Data System (ADS)
Barakat, A.; Singh, N.
The auroral field lines represent an important channel through which the ionosphere and the magnetosphere exchange mass, momentum, and energy. When the cold, dense ionospheric plasma interacts with sufficiently warm magnetospheric plasma along the field lines (with upward currents), double layers form with large parallel potential drops. The potential drops accelerate ionospheric ions, which in turn cause ion-beam-driven instabilities. The resulting wave-particle interactions (WPI) further heat the plasma, and hence, influence the behavior of the double layer. Understanding the coupling between these microscale and macroscale processes is crucial in quantifying the ionosphere-magnetosphere (I-M) coupling. Previous theoretical studies addressed the different facets of the problem separately. They predicted, in agreement with observations, the formation of the double layer, ion beams, and ion heating due to WPI. We developed a comprehensive model for this problem that is based on a macroscopic PIC approach. Our model properly accounts for the transport phenomena, as well as the small-scale waves. For example, it includes the effects of ionospheric collision processes, gravity, magnetic mirror force, electrostatic fields, as well as wave instabilities, propagation, and wave-particle interactions. The resulting self-consistent electrodynamics of the plasma in an auroral flux tube with an upward current is presented with emphasis on the formation and evolution of the double layer.
NASA Astrophysics Data System (ADS)
Bakry, A.; Abdulrhmann, S.; Ahmed, M.
2016-06-01
We theoretically model the dynamics of semiconductor lasers subject to the double-reflector feedback. The proposed model is a new modification of the time-delay rate equations of semiconductor lasers under the optical feedback to account for this type of the double-reflector feedback. We examine the influence of adding the second reflector to dynamical states induced by the single-reflector feedback: periodic oscillations, period doubling, and chaos. Regimes of both short and long external cavities are considered. The present analyses are done using the bifurcation diagram, temporal trajectory, phase portrait, and fast Fourier transform of the laser intensity. We show that adding the second reflector attracts the periodic and perioddoubling oscillations, and chaos induced by the first reflector to a route-to-continuous-wave operation. During this operation, the periodic-oscillation frequency increases with strengthening the optical feedback. We show that the chaos induced by the double-reflector feedback is more irregular than that induced by the single-reflector feedback. The power spectrum of this chaos state does not reflect information on the geometry of the optical system, which then has potential for use in chaotic (secure) optical data encryption.
Toward a nonlinearity model for a heterodyne interferometer: not based on double-frequency mixing.
Hu, Pengcheng; Bai, Yang; Zhao, Jinlong; Wu, Guolong; Tan, Jiubin
2015-10-01
Residual periodic errors detected in picometer-level heterodyne interferometers cannot be explained by the model based on double-frequency mixing. A new model is established and proposed in this paper for analysis of these errors. The multi-order Doppler frequency shift ghost beams from measurement beam itself are involved in final interference leading to multi-order periodic errors, whether or not frequency-mixing originating from the two incident beams occurs. For model validation, a novel setup free from double-frequency mixing is constructed. The analyzed measurement signal shows that phase mixing of measurement beam itself can lead to multi-order periodic errors ranging from tens of picometers to one nanometer. PMID:26480108
Predictive double-layer modeling of metal sorption in mine-drainage systems
Smith, K.S.; Plumlee, G.S.; Ranville, J.F.; Macalady, D.L.
1996-10-01
Previous comparison of predictive double-layer modeling and empirically derived metal-partitioning data has validated the use of the double-layer model to predict metal sorption reactions in iron-rich mine-drainage systems. The double-layer model subsequently has been used to model data collected from several mine-drainage sites in Colorado with diverse geochemistry and geology. This work demonstrates that metal partitioning between dissolved and sediment phases can be predictively modeled simply by knowing the water chemistry and the amount of suspended iron-rich particulates present in the system. Sorption on such iron-rich suspended sediments appears to control metal and arsenic partitioning between dissolved and sediment phases, with sorption on bed sediment playing a limited role. At pH > 5, Pb and As are largely sorbed by iron-rich suspended sediments and Cu is partially sorbed; Zn, Cd, and Ni usually remain dissolved throughout the pH range of 3 to 8.
NASA Technical Reports Server (NTRS)
Desai, S. D.; Yuan, D. -N.
2006-01-01
A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
Zhu, H.; Mehrabadi, M.; Massoudi, M.
2007-04-25
In this paper, we consider the mechanical response of granular materials and compare the predictions of a hypoplastic model with that of a recently developed dilatant double shearing model which includes the effects of fabric. We implement the constitutive relations of the dilatant double shearing model and the hypoplastic model in the finite element program ABACUS/Explicit and compare their predictions in the triaxial compression and cyclic shear loading tests. Although the origins and the constitutive relations of the double shearing model and the hypoplastic model are quite different, we find that both models are capable of capturing typical behaviours of granular materials. This is significant because while hypoplasticity is phenomenological in nature, the double shearing model is based on a kinematic hypothesis and microstructural considerations, and can easily be calibrated through standard tests.
NASA Technical Reports Server (NTRS)
Arav, Nahum; Begelman, Mitchell C.
1994-01-01
We present a model explaining the double trough, separated by delta v approximately = 5900 km/s, observed in the C IV lambda-1549 broad absorption line (BAL) in a number of BALQSOs. The model is based on radiative acceleration of the BAL outflow, and the troughs result from modulations in the radiative force. Specifically, where the strong flux from the Lyman-alpha lambda-1215 broad emission line is redshifted to the frequency of the N V lambda-1240 resonance line, in the rest frame of the accelerating N V ions, the acceleration increases and the absorption is reduced. At higher velocities the Lyman-alpha emission is redshifted out of the resonance and the N V ions experience a declining flux which causes the second absorption trough. A strongly nonlinear relationship between changes in the flux and the optical depth in the lines is shown to amplify the expected effect. This model produces double troughs for which the shallowest absorption between the two troughs occurs at v approximately = 5900 km/s. Indeed, we find that a substantial number of the observed objects show this feature. A prediction of the model is that all BALQSOs that show a double-trough signature will be found to have an intrinsic sharp drop in their spectra shortward of approximately 1200 A.
Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding
Johnson, Rie; Zhang, Tong
2016-01-01
This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks. PMID:27087766
Schiwietz, G.; Grande, P. L.
2011-11-15
Recent developments in the theoretical treatment of electronic energy losses of bare and screened ions in gases are presented. Specifically, the unitary-convolution-approximation (UCA) stopping-power model has proven its strengths for the determination of nonequilibrium effects for light as well as heavy projectiles at intermediate to high projectile velocities. The focus of this contribution will be on the UCA and its extension to specific projectile energies far below 100 keV/u, by considering electron-capture contributions at charge-equilibrium conditions.
Performance of DPSK with convolutional encoding on time-varying fading channels
NASA Technical Reports Server (NTRS)
Mui, S. Y.; Modestino, J. W.
1977-01-01
The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.
Geodesic acoustic mode in anisotropic plasmas using double adiabatic model and gyro-kinetic equation
Ren, Haijun; Cao, Jintao
2014-12-15
Geodesic acoustic mode in anisotropic tokamak plasmas is theoretically analyzed by using double adiabatic model and gyro-kinetic equation. The bi-Maxwellian distribution function for guiding-center ions is assumed to obtain a self-consistent form, yielding pressures satisfying the magnetohydrodynamic (MHD) anisotropic equilibrium condition. The double adiabatic model gives the dispersion relation of geodesic acoustic mode (GAM), which agrees well with the one derived from gyro-kinetic equation. The GAM frequency increases with the ratio of pressures, p{sub ⊥}/p{sub ∥}, and the Landau damping rate is dramatically decreased by p{sub ⊥}/p{sub ∥}. MHD result shows a low-frequency zonal flow existing for all p{sub ⊥}/p{sub ∥}, while according to the kinetic dispersion relation, no low-frequency branch exists for p{sub ⊥}/p{sub ∥}≳ 2.
Relation of the double-ITCZ bias to the atmospheric energy budget in climate models
NASA Astrophysics Data System (ADS)
Adam, Ori; Schneider, Tapio; Brient, Florent; Bischoff, Tobias
2016-07-01
We examine how tropical zonal mean precipitation biases in current climate models relate to the atmospheric energy budget. Both hemispherically symmetric and antisymmetric tropical precipitation biases contribute to the well-known double-Intertropical Convergence Zone (ITCZ) bias; however, they have distinct signatures in the energy budget. Hemispherically symmetric biases in tropical precipitation are proportional to biases in the equatorial net energy input; hemispherically antisymmetric biases are proportional to the atmospheric energy transport across the equator. Both relations can be understood within the framework of recently developed theories. Atmospheric net energy input biases in the deep tropics shape both the symmetric and antisymmetric components of the double-ITCZ bias. Potential causes of these energetic biases and their variation across climate models are discussed.
Role of Double-Porosity Dual-Permeability Models for Multi-Resonance Geomechanical Systems
Berryman, J G
2005-05-18
It is known that Biot's equations of poroelasticity (Biot 1956; 1962) follow from a scale-up of the microscale equations of elasticity coupled to the Navier-Stokes equations for fluid flow (Burridge and Keller, 1981). Laboratory measurements by Plona (1980) have shown that Biot's equations indeed hold for simple systems (Berryman, 1980), but heterogeneous systems can have quite different behavior (Berryman, 1988). So the question arises whether there is one level--or perhaps many levels--of scale-up needed to arrive at equations valid for the reservoir scale? And if so, do these equations take the form of Biot's equations or some other form? We will discuss these issues and show that the double-porosity dual-permeability equations (Berryman and Wang, 1995; Berryman and Pride, 2002; Pride and Berryman, 2003a,b; Pride et al., 2004) play a special role in the scale-up to equations describing multi-resonance reservoir behavior, for fluid pumping and geomechanics, as well as seismic wave propagation. The reason for the special significance of double-porosity models is that a multi-resonance system can never be adequately modeled using a single resonance model, but can often be modeled with reasonable accuracy using a two-resonance model. Although ideally one would prefer to model multi-resonance systems using the correct numbers, locations, widths, and amplitudes of the resonances, data are often inadequate to resolve all these pertinent model parameters in this complex inversion task. When this is so, the double-porosity model is most useful as it permits us to capture the highest and lowest detectable resonances of the system and then to interpolate through the middle range of frequencies.
a Convolutional Network for Semantic Facade Segmentation and Interpretation
NASA Astrophysics Data System (ADS)
Schmitz, Matthias; Mayer, Helmut
2016-06-01
In this paper we present an approach for semantic interpretation of facade images based on a Convolutional Network. Our network processes the input images in a fully convolutional way and generates pixel-wise predictions. We show that there is no need for large datasets to train the network when transfer learning is employed, i. e., a part of an already existing network is used and fine-tuned, and when the available data is augmented by using deformed patches of the images for training. The network is trained end-to-end with patches of the images and each patch is augmented independently. To undo the downsampling for the classification, we add deconvolutional layers to the network. Outputs of different layers of the network are combined to achieve more precise pixel-wise predictions. We demonstrate the potential of our network based on results for the eTRIMS (Korč and Förstner, 2009) dataset reduced to facades.
Study on Expansion of Convolutional Compactors over Galois Field
NASA Astrophysics Data System (ADS)
Arai, Masayuki; Fukumoto, Satoshi; Iwasaki, Kazuhiko
Convolutional compactors offer a promising technique of compacting test responses. In this study we expand the architecture of convolutional compactor onto a Galois field in order to improve compaction ratio as well as reduce X-masking probability, namely, the probability that an error is masked by unknown values. While each scan chain is independently connected by EOR gates in the conventional arrangement, the proposed scheme treats q signals as an element over GF(2q), and the connections are configured on the same field. We show the arrangement of the proposed compactors and the equivalent expression over GF(2). We then evaluate the effectiveness of the proposed expansion in terms of X-masking probability by simulations with uniform distribution of X-values, as well as reduction of hardware overheads. Furthermore, we evaluate a multi-weight arrangement of the proposed compactors for non-uniform X distributions.
Two-dimensional convolute integers for analytical instrumentation
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1982-01-01
As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterpart, that is, a weighted nearest-neighbor, moving average with zero phase shifting, convoluted integer (universal number) weighting coefficients.
Image Super-Resolution Using Deep Convolutional Networks.
Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou
2016-02-01
We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. PMID:26761735
Anomalous transport in discrete arcs and simulation of double layers in a model auroral circuit
NASA Technical Reports Server (NTRS)
Smith, Robert A.
1987-01-01
The evolution and long-time stability of a double layer in a discrete auroral arc requires that the parallel current in the arc, which may be considered uniform at the source, be diverted within the arc to charge the flanks of the U-shaped double-layer potential structure. A simple model is presented in which this current re-distribution is effected by anomalous transport based on electrostatic lower hybrid waves driven by the flank structure itself. This process provides the limiting constraint on the double-layer potential. The flank charging may be represented as that of a nonlinear transmission. A simplified model circuit, in which the transmission line is represented by a nonlinear impedance in parallel with a variable resistor, is incorporated in a 1-d simulation model to give the current density at the DL boundaries. Results are presented for the scaling of the DL potential as a function of the width of the arc and the saturation efficiency of the lower hybrid instability mechanism.
Anomalous transport in discrete arcs and simulation of double layers in a model auroral circuit
NASA Technical Reports Server (NTRS)
Smith, Robert A.
1987-01-01
The evolution and long-time stability of a double layer (DL) in a discrete auroral arc requires that the parallel current in the arc, which may be considered uniform at the source, be diverted within the arc to charge the flanks of the U-shaped double layer potential structure. A simple model is presented in which this current redistribution is effected by anomalous transport based on electrostatic lower hybrid waves driven by the flank structure itself. This process provides the limiting constraint on the double layer potential. The flank charging may be represented as that of a nonlinear transmission line. A simplified model circuit, in which the transmission line is represented by a nonlinear impedance in parallel with a variable resistor, is incorporated in a one-dimensional simulation model to give the current density at the DL boundaries. Results are presented for the scaling of the DL potential as a function of the width of the arc and the saturation efficiency of the lower hybrid instability mechanism.
Dystrophin and Dysferlin Double Mutant Mice: A Novel Model For Rhabdomyosarcoma
Hosur, Vishnu; Kavirayani, Anoop; Riefler, Jennifer; Carney, Lisa M.B.; Lyons, Bonnie; Gott, Bruce; Cox, Gregory A.; Shultz, Leonard D.
2012-01-01
While researchers are yet to establish a link a between muscular dystrophy (MD) and sarcomas in human patients, literature suggests that MD genes dystrophin and dysferlin act as tumor suppressor genes in mouse models of MD. For instance, dystrophin deficient mdx and dysferlin deficient A/J mice, models of human Duchenne Muscular Dystrophy and Limb Girdle Muscular Dystrophy type 2B, respectively, develop mixed sarcomas with variable penetrance and latency. To further establish the correlation between MD and sarcoma development, and to test whether a combined deletion of dystrophin and dysferlin exacerbates MD and augments the incidence of sarcomas, we generated dystrophin and dysferlin double mutant mice (STOCK-Dysfprmd Dmdmdx-5Cv). Not surprisingly, the double mutant mice develop severe MD symptoms and moreover develop rhabdomyosarcoma at an average age of 12 months, with an incidence of > 90%. Histological and immunohistochemical analyses, using a panel of antibodies against skeletal muscle cell proteins, electron microscopy, cytogenetics, and molecular analysis reveal that the double mutant mice develop rhabdomyosarcoma. The present finding bolsters the correlation between MD and sarcomas, and provides a model not only to examine the cellular origins but also to identify mechanisms and signal transduction pathways triggering development of RMS. PMID:22682622
Experiments and Modeling of Boric Acid Permeation through Double-Skinned Forward Osmosis Membranes.
Luo, Lin; Zhou, Zhengzhong; Chung, Tai-Shung; Weber, Martin; Staudt, Claudia; Maletzko, Christian
2016-07-19
Boron removal is one of the great challenges in modern wastewater treatment, owing to the unique small size and fast diffusion rate of neutral boric acid molecules. As forward osmosis (FO) membranes with a single selective layer are insufficient to reject boron, double-skinned FO membranes with boron rejection up to 83.9% were specially designed for boron permeation studies. The superior boron rejection properties of double-skinned FO membranes were demonstrated by theoretical calculations, and verified by experiments. The double-skinned FO membrane was fabricated using a sulfonated polyphenylenesulfone (sPPSU) polymer as the hydrophilic substrate and polyamide as the selective layer material via interfacial polymerization on top and bottom surfaces. A strong agreement between experimental data and modeling results validates the membrane design and confirms the success of model prediction. The effects of key parameters on boron rejection, such as boron permeability of both selective layers and structure parameter, were also investigated in-depth with the mathematical modeling. This study may provide insights not only for boron removal from wastewater, but also open up the design of next generation FO membranes to eliminate low-rejection molecules in wider applications. PMID:27280490
[Verification of the double dissociation model of shyness using the implicit association test].
Fujii, Tsutomu; Aikawa, Atsushi
2013-12-01
The "double dissociation model" of shyness proposed by Asendorpf, Banse, and Mtücke (2002) was demonstrated in Japan by Aikawa and Fujii (2011). However, the generalizability of the double dissociation model of shyness was uncertain. The present study examined whether the results reported in Aikawa and Fujii (2011) would be replicated. In Study 1, college students (n = 91) completed explicit self-ratings of shyness and other personality scales. In Study 2, forty-eight participants completed IAT (Implicit Association Test) for shyness, and their friends (n = 141) rated those participants on various personality scales. The results revealed that only the explicit self-concept ratings predicted other-rated low praise-seeking behavior, sociable behavior and high rejection-avoidance behavior (controlled shy behavior). Only the implicit self-concept measured by the shyness IAT predicted other-rated high interpersonal tension (spontaneous shy behavior). The results of this study are similar to the findings of the previous research, which supports generalizability of the double dissociation model of shyness. PMID:24505980
The role of convective model choice in calculating the climate impact of doubling CO2
NASA Technical Reports Server (NTRS)
Lindzen, R. S.; Hou, A. Y.; Farrell, B. F.
1982-01-01
The role of the parameterization of vertical convection in calculating the climate impact of doubling CO2 is assessed using both one-dimensional radiative-convective vertical models and in the latitude-dependent Hadley-baroclinic model of Lindzen and Farrell (1980). Both the conventional 6.5 K/km and the moist-adiabat adjustments are compared with a physically-based, cumulus-type parameterization. The model with parameterized cumulus convection has much less sensitivity than the 6.5 K/km adjustment model at low latitudes, a result that can be to some extent imitiated by the moist-adiabat adjustment model. However, when averaged over the globe, the use of the cumulus-type parameterization in a climate model reduces sensitivity only approximately 34% relative to models using 6.5 K/km convective adjustment. Interestingly, the use of the cumulus-type parameterization appears to eliminate the possibility of a runaway greenhouse.
Double-blind comparison of survival analysis models using a bespoke web system.
Taktak, A F G; Setzkorn, C; Damato, B E
2006-01-01
The aim of this study was to carry out a comparison of different linear and non-linear models from different centres on a common dataset in a double-blind manner to eliminate bias. The dataset was shared over the Internet using a secure bespoke environment called geoconda. Models evaluated included: (1) Cox model, (2) Log Normal model, (3) Partial Logistic Spline, (4) Partial Logistic Artificial Neural Network and (5) Radial Basis Function Networks. Graphical analysis of the various models with the Kaplan-Meier values were carried out in 3 survival groups in the test set classified according to the TNM staging system. The discrimination value for each model was determined using the area under the ROC curve. Results showed that the Cox model tended towards optimism whereas the partial logistic Neural Networks showed slight pessimism. PMID:17945716
Face Detection Using GPU-Based Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Nasse, Fabian; Thurau, Christian; Fink, Gernot A.
In this paper, we consider the problem of face detection under pose variations. Unlike other contributions, a focus of this work resides within efficient implementation utilizing the computational powers of modern graphics cards. The proposed system consists of a parallelized implementation of convolutional neural networks (CNNs) with a special emphasize on also parallelizing the detection process. Experimental validation in a smart conference room with 4 active ceiling-mounted cameras shows a dramatic speed-gain under real-life conditions.
New syndrome decoder for (n, 1) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like, algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
Schulze-Halberg, Axel E-mail: xbataxel@gmail.com; Wang, Jie
2015-07-15
We obtain series solutions, the discrete spectrum, and supersymmetric partners for a quantum double-oscillator system. Its potential features a superposition of the one-parameter Mathews-Lakshmanan interaction and a one-parameter harmonic or inverse harmonic oscillator contribution. Furthermore, our results are transferred to a generalized Pöschl-Teller model that is isospectral to the double-oscillator system.
Automatic localization of vertebrae based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie
2015-03-01
Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as the landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNN). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. Then the output feature vector from the maxpooling layer is fed into a multilayer perceptron (MLP) classifier which has one hidden layer. Experiments were performed on ten chest CT images. We used leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predict centers and ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.
Fine-grained representation learning in convolutional autoencoders
NASA Astrophysics Data System (ADS)
Luo, Chang; Wang, Jie
2016-03-01
Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law could guide CAEs to extract better fine-grained features and performs better in multiclass classification task. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representation in other convolutional neural networks.
A Discriminative Representation of Convolutional Features for Indoor Scene Recognition
NASA Astrophysics Data System (ADS)
Khan, Salman H.; Hayat, Munawar; Bennamoun, Mohammed; Togneri, Roberto; Sohel, Ferdous A.
2016-07-01
Indoor scene recognition is a multi-faceted and challenging problem due to the diverse intra-class variations and the confusing inter-class similarities. This paper presents a novel approach which exploits rich mid-level convolutional features to categorize indoor scenes. Traditionally used convolutional features preserve the global spatial structure, which is a desirable property for general object recognition. However, we argue that this structuredness is not much helpful when we have large variations in scene layouts, e.g., in indoor scenes. We propose to transform the structured convolutional activations to another highly discriminative feature space. The representation in the transformed space not only incorporates the discriminative aspects of the target dataset, but it also encodes the features in terms of the general object categories that are present in indoor scenes. To this end, we introduce a new large-scale dataset of 1300 object categories which are commonly present in indoor scenes. Our proposed approach achieves a significant performance boost over previous state of the art approaches on five major scene classification datasets.
NASA Astrophysics Data System (ADS)
Campolina, Bruno L.
The prediction of aircraft interior noise involves the vibroacoustic modelling of the fuselage with noise control treatments. This structure is composed of a stiffened metallic or composite panel, lined with a thermal and acoustic insulation layer (glass wool), and structurally connected via vibration isolators to a commercial lining panel (trim). The goal of this work aims at tailoring the noise control treatments taking design constraints such as weight and space optimization into account. For this purpose, a representative aircraft double-wall is modelled using the Statistical Energy Analysis (SEA) method. Laboratory excitations such as diffuse acoustic field and point force are addressed and trends are derived for applications under in-flight conditions, considering turbulent boundary layer excitation. The effect of the porous layer compression is firstly addressed. In aeronautical applications, compression can result from the installation of equipment and cables. It is studied analytically and experimentally, using a single panel and a fibrous uniformly compressed over 100% of its surface. When compression increases, a degradation of the transmission loss up to 5 dB for a 50% compression of the porous thickness is observed mainly in the mid-frequency range (around 800 Hz). However, for realistic cases, the effect should be reduced since the compression rate is lower and compression occurs locally. Then the transmission through structural connections between panels is addressed using a four-pole approach that links the force-velocity pair at each side of the connection. The modelling integrates experimental dynamic stiffness of isolators, derived using an adapted test rig. The structural transmission is then experimentally validated and included in the double-wall SEA model as an equivalent coupling loss factor (CLF) between panels. The tested structures being flat, only axial transmission is addressed. Finally, the dominant sound transmission paths are
Double-stranded DNA organization in bacteriophage heads: An alternative toroid-based model
Hud, N.V.
1995-10-01
Studies of the organization of double-stranded DNA within bacteriophage heads during the past four decades have produced a wealth of data. However, despite the presentation of numerous models, the true organization of DNA within phage heads remains unresolved. The observations of toroidal DNA structures in electron micrographs of phage lysates have long been cited as support for the organization of DNA in a spool-like fashion. This particular model, like all other models, has not been found to be consistent with all available data. Recently, the authors proposed that DNA within toroidal condensates produced in vitro is organized in a manner significantly different from that suggested by the spool model. This new toroid model has allowed the development of an alternative model for DNA organization within bacteriophage heads that is consistent with a wide range of biophysical data. Here the authors propose that bacteriophage DNA is packaged in a toroid that is folded into a highly compact structure.
The tropospheric moisture and double-ITCZ biases in CMIP5 models
NASA Astrophysics Data System (ADS)
Tian, B.
2014-12-01
Based on Atmospheric Infrared Sunder (AIRS) Obs4MIPs data, Tian et al. (2013) evaluated the climatological mean tropospheric air temperature and specific humidity simulations in Phase 5 of the Coupled Model Intercomparison Project (CMIP5) models. They found that most CMIP5 models have a cold bias in the extratropical upper troposphere and a double-Intertropical Convergence Zone (ITCZ) bias in the whole troposphere over the tropical Pacific. They also pointed out the cloud-related sampling biases in the AIRS Obs4MIPs air temperature and specific humidity climatologies that were later quantified by Hearty et al. (2014). In this study, we will continue comparing the tropospheric specific humidity climatologies between the CMIP5 model simulations and the AIRS Obs4MIPs data after correcting the AIRS data sampling biases to quantify the overall tropospheric moist or dry bias of CMIP5 models. In particular, we will quantify the strength of the double-ITCZ bias in each individual CMIP5 model and discuss its possible implication for climate sensitivity and climate prediction.
Simulation of double layers in a model auroral circuit with nonlinear impedance
NASA Technical Reports Server (NTRS)
Smith, R. A.
1986-01-01
A reduced circuit description of the U-shaped potential structure of a discrete auroral arc, consisting of the flank transmission line plus parallel-electric-field region, is used to provide the boundary condition for one-dimensional simulations of the double-layer evolution. The model yields asymptotic scalings of the double-layer potential, as a function of an anomalous transport coefficient alpha and of the perpendicular length scale l(a) of the arc. The arc potential phi(DL) scales approximately linearly with alpha, and for alpha fixed phi (DL) about l(a) to the z power. Using parameters appropriate to the auroral zone acceleration region, potentials of phi (DPL) 10 kV scale to projected ionospheric dimensions of about 1 km, with power flows of the order of magnitude of substorm dissipation rates.
NASA Technical Reports Server (NTRS)
Pitari, G.; Palermi, S.; Visconti, G.; Prinn, R. G.
1992-01-01
A spectral 3D model of the stratosphere has been used to study the sensitivity of polar ozone with respect to a carbon dioxide increase. The lower stratospheric cooling associated with an imposed CO2 doubling may increase the probability of polar stratospheric cloud (PSC) formation and this affect ozone. The ozone perturbation obtained with the inclusion of a simple parameterization for heterogeneous chemistry on PSCs is compared to that relative to a pure homogeneous chemistry. In both cases the temperature perturbation is determined by a CO2 doubling, while the total chlorine content is kept at the present level. It is shown that the lower temperature may increase the depth and the extension of the ozone hole by extending the area amenable to PSC formation. It may be argued that this effect, coupled with an increasing amount of chlorine, may produce a positive feedback on the ozone destruction.
A double layer model for solar X-ray and microwave pulsations
NASA Technical Reports Server (NTRS)
Tapping, K. F.
1986-01-01
The wide range of wavelengths over which quasi-periodic pulsations have been observed suggests that the mechanism causing them acts upon the supply of high energy electrons driving the emission processes. A model is described which is based upon the radial shrinkage of a magnetic flux tube. The concentration of the current, along with the reduction in the number of available charge carriers, can rise to a condition where the current demand exceeds the capacity of the thermal electrons. Driven by the large inductance of the external current circuit, an instability takes place in the tube throat, resulting in the formation of a potential double layer, which then accelerates electrons and ions to MeV energies. The double layer can be unstable, collapsing and reforming repeatedly. The resulting pulsed particle beams give rise to pulsating emission which are observed at radio and X-ray wavelengths.
Kinetic model for an auroral double layer that spans many gravitational scale heights
Robertson, Scott
2014-12-15
The electrostatic potential profile and the particle densities of a simplified auroral double layer are found using a relaxation method to solve Poisson's equation in one dimension. The electron and ion distribution functions for the ionosphere and magnetosphere are specified at the boundaries, and the particle densities are found from a collisionless kinetic model. The ion distribution function includes the gravitational potential energy; hence, the unperturbed ionospheric plasma has a density gradient. The plasma potential at the upper boundary is given a large negative value to accelerate electrons downward. The solutions for a wide range of dimensionless parameters show that the double layer forms just above a critical altitude that occurs approximately where the ionospheric density has fallen to the magnetospheric density. Below this altitude, the ionospheric ions are gravitationally confined and have the expected scale height for quasineutral plasma in gravity.
Communication: Double-hybrid functionals from adiabatic-connection: The QIDH model
NASA Astrophysics Data System (ADS)
Brémond, Éric; Sancho-García, Juan Carlos; Pérez-Jiménez, Ángel José; Adamo, Carlo
2014-07-01
A new approach stemming from the adiabatic-connection (AC) formalism is proposed to derive parameter-free double-hybrid (DH) exchange-correlation functionals. It is based on a quadratic form that models the integrand of the coupling parameter, whose components are chosen to satisfy several well-known limiting conditions. Its integration leads to DHs containing a single parameter controlling the amount of exact exchange, which is determined by requiring it to depend on the weight of the MP2 correlation contribution. Two new parameter-free DHs functionals are derived in this way, by incorporating the non-empirical PBE and TPSS functionals in the underlying expression. Their extensive testing using the GMTKN30 benchmark indicates that they are in competition with state-of-the-art DHs, yet providing much better self-interaction errors and opening a new avenue towards the design of accurate double-hybrid exchange-correlation functionals departing from the AC integrand.
New non-equilibrium matrix imbibition equation for double porosity model
NASA Astrophysics Data System (ADS)
Konyukhov, Andrey; Pankratov, Leonid
2016-07-01
The paper deals with the global Kondaurov double porosity model describing a non-equilibrium two-phase immiscible flow in fractured-porous reservoirs when non-equilibrium phenomena occur in the matrix blocks, only. In a mathematically rigorous way, we show that the homogenized model can be represented by usual equations of two-phase incompressible immiscible flow, except for the addition of two source terms calculated by a solution to a local problem being a boundary value problem for a non-equilibrium imbibition equation given in terms of the real saturation and a non-equilibrium parameter.
NASA Astrophysics Data System (ADS)
Li, Jianglong; Zhang, Xuehong; Yu, Yongqiang; Dai, Fushan
2004-12-01
This paper investigates the processes behind the double ITCZ phenomenon, a common problem in Coupled ocean-atmosphere General Circulation Models (CGCMs), using a CGCM—FGCM-0 (Flexible General Circulation Model, version 0). The double ITCZ mode develops rapidly during the first two years of the integration and becomes a perennial phenomenon afterwards in the model. By way of Singular Value Decomposition (SVD) for SST, sea surface pressure, and sea surface wind, some air-sea interactions are analyzed. These interactions prompt the anomalous signals that appear at the beginning of the coupling to develop rapidly. There are two possible reasons, proved by sensitivity experiments: (1) the overestimated east-west gradient of SST in the equatorial Pacific in the ocean spin-up process, and (2) the underestimated amount of low-level stratus over the Peruvian coast in CCM3 (the Community Climate Model, Version Three). The overestimated east-west gradient of SST brings the anomalous equatorial easterly. The anomalous easterly, affected by the Coriolis force in the Southern Hemisphere, turns into an anomalous westerly in a broad area south of the equator and is enhanced by atmospheric anomalous circulation due to the underestimated amount of low-level stratus over the Peruvian coast simulated by CCM3. The anomalous westerly leads to anomalous warm advection that makes the SST warm in the southeast Pacific. The double ITCZ phenomenon in the CGCM is a result of a series of nonlocal and nonlinear adjustment processes in the coupled system, which can be traced to the uncoupled models, oceanic component, and atmospheric component. The zonal gradient of the equatorial SST is too large in the ocean component and the amount of low-level stratus over the Peruvian coast is too low in the atmosphere component.
Fabrication of double-walled section models of the ITER vacuum vessel
Koizumi, K.; Kanamori, N.; Nakahira, M.; Itoh, Y.; Horie, M.; Tada, E.; Shimamoto, S.
1995-12-31
Trial fabrication of double-walled section models has been performed at Japan Atomic Energy Research Institute (JAERI) for the construction of ITER vacuum vessel. By employing TIG (Tungsten-arc Inert Gas) welding and EB (Electron Beam) welding, for each model, two full-scaled section models of 7.5 {degree} toroidal sector in the curved section at the bottom of vacuum vessel have been successfully fabricated with the final dimensional error of within {+-}5 mm to the nominal values. The sufficient technical database on the candidate fabrication procedures, welding distortion and dimensional stability of full-scaled models have been obtained through the fabrications. This paper describes the design and fabrication procedures of both full-scaled section models and the major results obtained through the fabrication.
Compact model for short-channel symmetric double-gate junctionless transistors
NASA Astrophysics Data System (ADS)
Ávila-Herrera, F.; Cerdeira, A.; Paz, B. C.; Estrada, M.; Íñiguez, B.; Pavanello, M. A.
2015-09-01
In this work a compact analytical model for short-channel double-gate junctionless transistor is presented, considering variable mobility and the main short-channel effects as threshold voltage roll-off, series resistance, drain saturation voltage, channel shortening and saturation velocity. The threshold voltage shift and subthreshold slope variation is determined through the minimum value of the potential in the channel. Only eight model parameters are used. The model is physically-based, considers the total charge in the Si layer and the operating conditions in both depletion and accumulation. Model is validated by 2D simulations in ATLAS for channel lengths from 25 nm to 500 nm and for doping concentrations of 5 × 1018 and 1 × 1019 cm-3, as well as for Si layer thickness of 10 and 15 nm, in order to guarantee normally-off operation of the transistors. The model provides an accurate continuous description of the transistor behavior in all operating regions.
The convoluted evolution of snail chirality
NASA Astrophysics Data System (ADS)
Schilthuizen, M.; Davison, A.
2005-11-01
The direction that a snail (Mollusca: Gastropoda) coils, whether dextral (right-handed) or sinistral (left-handed), originates in early development but is most easily observed in the shell form of the adult. Here, we review recent progress in understanding snail chirality from genetic, developmental and ecological perspectives. In the few species that have been characterized, chirality is determined by a single genetic locus with delayed inheritance, which means that the genotype is expressed in the mother's offspring. Although research lags behind the studies of asymmetry in the mouse and nematode, attempts to isolate the loci involved in snail chirality have begun, with the final aim of understanding how the axis of left-right asymmetry is established. In nature, most snail taxa (>90%) are dextral, but sinistrality is known from mutant individuals, populations within dextral species, entirely sinistral species, genera and even families. Ordinarily, it is expected that strong frequency-dependent selection should act against the establishment of new chiral types because the chiral minority have difficulty finding a suitable mating partner (their genitalia are on the ‘wrong’ side). Mixed populations should therefore not persist. Intriguingly, however, a very few land snail species, notably the subgenus Amphidromus sensu stricto, not only appear to mate randomly between different chiral types, but also have a stable, within-population chiral dimorphism, which suggests the involvement of a balancing factor. At the other end of the spectrum, in many species, different chiral types are unable to mate and so could be reproductively isolated from one another. However, while empirical data, models and simulations have indicated that chiral reversal must sometimes occur, it is rarely likely to lead to so-called ‘single-gene’ speciation. Nevertheless, chiral reversal could still be a contributing factor to speciation (or to divergence after speciation) when
Accuracy assessment of single and double difference models for the single epoch GPS compass
NASA Astrophysics Data System (ADS)
Chen, Wantong; Qin, Honglei; Zhang, Yanzhong; Jin, Tian
2012-02-01
The single epoch GPS compass is an important field of study, since it is a valuable technique for the orientation estimation of vehicles and it can guarantee a total independence from carrier phase slips in practical applications. To achieve highly accurate angular estimates, the unknown integer ambiguities of the carrier phase observables need to be resolved. Past researches focus on the ambiguity resolution for single epoch; however, accuracy is another significant problem for many challenging applications. In this contribution, the accuracy is evaluated for the non-common clock scheme of the receivers and the common clock scheme of the receivers, respectively. We focus on three scenarios for either scheme: single difference model vs. double difference model, single frequency model vs. multiple frequency model and optimal linear combinations vs. traditional triple-frequency least squares. We deduce the short baseline precision for a number of different available models and analyze the difference in accuracy for those models. Compared with the single or double difference model of the non-common clock scheme, the single difference model of the common clock scheme can greatly reduce the vertical component error of baseline vector, which results in higher elevation accuracy. The least squares estimator can also reduce the error of fixed baseline vector with the aid of the multi-frequency observation, thereby improving the attitude accuracy. In essence, the "accuracy improvement" is attributed to the difference in accuracy for different models, not a real improvement for any specific model. If all noise levels of GPS triple frequency carrier phase are assumed the same in unit of cycles, it can be proved that the optimal linear combination approach is equivalent to the traditional triple-frequency least squares, no matter which scheme is utilized. Both simulations and actual experiments have been performed to verify the correctness of theoretical analysis.
Coordinated regulation of TRPV5-mediated Ca²⁺ transport in primary distal convolution cultures.
van der Hagen, Eline A E; Lavrijsen, Marla; van Zeeland, Femke; Praetorius, Jeppe; Bonny, Olivier; Bindels, René J M; Hoenderop, Joost G J
2014-11-01
Fine-tuning of renal calcium ion (Ca(2+)) reabsorption takes place in the distal convoluted and connecting tubules (distal convolution) of the kidney via transcellular Ca(2+) transport, a process controlled by the epithelial Ca(2+) channel Transient Receptor Potential Vanilloid 5 (TRPV5). Studies to delineate the molecular mechanism of transcellular Ca(2+) transport are seriously hampered by the lack of a suitable cell model. The present study describes the establishment and validation of a primary murine cell model of the distal convolution. Viable kidney tubules were isolated from mice expressing enhanced Green Fluorescent Protein (eGFP) under the control of a TRPV5 promoter (pTRPV5-eGFP), using Complex Object Parametric Analyser and Sorting (COPAS) technology. Tubules were grown into tight monolayers on semi-permeable supports. Radioactive (45)Ca(2+) assays showed apical-to-basolateral transport rates of 13.5 ± 1.2 nmol/h/cm(2), which were enhanced by the calciotropic hormones parathyroid hormone and 1,25-dihydroxy vitamin D3. Cell cultures lacking TRPV5, generated by crossbreeding pTRPV5-eGFP with TRPV5 knockout mice (TRPV5(-/-)), showed significantly reduced transepithelial Ca(2+) transport (26 % of control), for the first time directly confirming the key role of TRPV5. Most importantly, using this cell model, a novel molecular player in transepithelial Ca(2+) transport was identified: mRNA analysis revealed that ATP-dependent Ca(2+)-ATPase 4 (PMCA4) instead of PMCA1 was enriched in isolated tubules and downregulated in TRPV5(-/-) material. Immunohistochemical stainings confirmed co-localization of PMCA4 with TRPV5 in the distal convolution. In conclusion, a novel primary cell model with TRPV5-dependent Ca(2+) transport characteristics was successfully established, enabling comprehensive studies of transcellular Ca(2+) transport. PMID:24557712
NASA Astrophysics Data System (ADS)
Ma, Shutian; Eaton, David W.
2011-05-01
Precise and accurate earthquake hypocentres are critical for various fields, such as the study of tectonic process and seismic-hazard assessment. Double-difference relocation methods are widely used and can dramatically improve the precision of event relative locations. In areas of sparse seismic network coverage, however, a significant trade-off exists between focal depth, epicentral location and the origin time. Regional depth-phase modelling (RDPM) is suitable for sparse networks and can provide focal-depth information that is relatively insensitive to uncertainties in epicentral location and independent of errors in the origin time. Here, we propose a hybrid method in which focal depth is determined using RDPM and then treated as a fixed parameter in subsequent double-difference calculations, thus reducing the size of the system of equations and increasing the precision of the hypocentral solutions. Based on examples using small earthquakes from eastern Canada and southwestern USA, we show that the application of this technique yields solutions that appear to be more robust and accurate than those obtained by standard double-difference relocation method alone.
Analytical model of LDMOS with a double step buried oxide layer
NASA Astrophysics Data System (ADS)
Yuan, Song; Duan, Baoxing; Cao, Zhen; Guo, Haijun; Yang, Yintang
2016-09-01
In this paper, a two-dimensional analytical model is established for the Buried Oxide Double Step Silicon On Insulator structure proposed by the authors. Based on the two-dimensional Poisson equation, the analytic expressions of the surface electric field and potential distributions for the device are achieved. In the BODS (Buried Oxide Double Step Silicon On Insulator) structure, the buried oxide layer thickness changes stepwise along the drift region, and the positive charge in the drift region can be accumulated at the corner of the step. These accumulated charge function as the space charge in the depleted drift region. At the same time, the electric field in the oxide layer also varies with the different drift region thickness. These variations especially the accumulated charge will modulate the surface electric field distribution through the electric field modulation effects, which makes the surface electric field distribution more uniform. As a result, the breakdown voltage of the device is improved by 30% compared with the conventional SOI structure. To verify the accuracy of the analytical model, the device simulation software ISE TCAD is utilized, the analytical values are in good agreement with the simulation results by the simulation software. That means the established two-dimensional analytical model for BODS structure is valid, and it also illustrates the breakdown voltage enhancement by the electric field modulation effect sufficiently. The established analytical models will provide the physical and mathematical basis for further analysis of the new power devices with the patterned buried oxide layer.
NASA Astrophysics Data System (ADS)
Verma, Jay Hind Kumar; Haldar, Subhasis; Gupta, R. S.; Gupta, Mridula
2015-12-01
In this paper, a physics based model has been presented for the Cylindrical Surrounding Double Gate (CSDG) Nano-wire MOSFET. The analytical model is based on the solution of 2-D Poisson's equation in a cylindrical coordinate system using super-position technique. CSDG MOSFET is a cylindrical version of double gate MOSFET which offers maximum gate controllability over the channel. It consists of an inner gate and an outer gate. These gates render effective charge control inside the channel and also provide excellent immunity to short channel effects. Surface potential and electric field for inner and outer gate are derived. The impact of channel length on electrical characteristics of CSDG MOSFET is analysed and verified using ATLAS device simulator. The model is also extended for threshold voltage modelling using extrapolation method in strong inversion region. Drain current and transconductance are compared with conventional Cylindrical Surrounding Gate (CSG) MOSFET. The excellent electrical performance makes CSDG MOSFET promising candidates to extend CMOS scaling roadmap beyond CSG MOSFET.
Double-gate junctionless transistor model including short-channel effects
NASA Astrophysics Data System (ADS)
Paz, B. C.; Ávila-Herrera, F.; Cerdeira, A.; Pavanello, M. A.
2015-05-01
This work presents a physically based model for double-gate junctionless transistors (JLTs), continuous in all operation regimes. To describe short-channel transistors, short-channel effects (SCEs), such as increase of the channel potential due to drain bias, carrier velocity saturation and mobility degradation due to vertical and longitudinal electric fields, are included in a previous model developed for long-channel double-gate JLTs. To validate the model, an analysis is made by using three-dimensional numerical simulations performed in a Sentaurus Device Simulator from Synopsys. Different doping concentrations, channel widths and channel lengths are considered in this work. Besides that, the series resistance influence is numerically included and validated for a wide range of source and drain extensions. In order to check if the SCEs are appropriately described, besides drain current, transconductance and output conductance characteristics, the following parameters are analyzed to demonstrate the good agreement between model and simulation and the SCEs occurrence in this technology: threshold voltage (VTH), subthreshold slope (S) and drain induced barrier lowering.
Guo, Hui; He, Youwei; Li, Lei; Du, Song; Cheng, Shiqing
2014-01-01
This work presents numerical well testing interpretation model and analysis techniques to evaluate formation by using pressure transient data acquired with logging tools in crossflow double-layer reservoirs by polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effect, and skin factors. The type curves were then developed based on this model, and parameter sensitivity is analyzed. Our research shows that the type curves have five segments with different flow status: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from low permeability layer to high permeability layer), and (V) systematic radial flow section. The polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs by polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparison of the interpreted permeability with initial layered permeability before polymer flooding. Comparison of the analysis of numerical solution based on flow mechanism with observed polymer flooding field test data highlights the potential for the application of this interpretation method in formation evaluation and enhanced oil recovery (EOR). PMID:25302335
Yu, Haiyang; Guo, Hui; He, Youwei; Xu, Hainan; Li, Lei; Zhang, Tiantian; Xian, Bo; Du, Song; Cheng, Shiqing
2014-01-01
This work presents numerical well testing interpretation model and analysis techniques to evaluate formation by using pressure transient data acquired with logging tools in crossflow double-layer reservoirs by polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effect, and skin factors. The type curves were then developed based on this model, and parameter sensitivity is analyzed. Our research shows that the type curves have five segments with different flow status: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from low permeability layer to high permeability layer), and (V) systematic radial flow section. The polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs by polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparison of the interpreted permeability with initial layered permeability before polymer flooding. Comparison of the analysis of numerical solution based on flow mechanism with observed polymer flooding field test data highlights the potential for the application of this interpretation method in formation evaluation and enhanced oil recovery (EOR). PMID:25302335
Development of a model for flaming combustion of double-wall corrugated cardboard
NASA Astrophysics Data System (ADS)
McKinnon, Mark B.
Corrugated cardboard is used extensively in a storage capacity in warehouses and frequently acts as the primary fuel for accidental fires that begin in storage facilities. A one-dimensional numerical pyrolysis model for double-wall corrugated cardboard was developed using the Thermakin modeling environment to describe the burning rate of corrugated cardboard. The model parameters corresponding to the thermal properties of the corrugated cardboard layers were determined through analysis of data collected in cone calorimeter tests conducted with incident heat fluxes in the range 20--80 kW/m 2. An apparent pyrolysis reaction mechanism and thermodynamic properties for the material were obtained using thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The fully-parameterized bench-scale model predicted burning rate profiles that were in agreement with the experimental data for the entire range of incident heat fluxes, with more consistent predictions at higher heat fluxes.
Convolution-based estimation of organ dose in tube current modulated CT.
Tian, Xiaoyu; Segars, W Paul; Dixon, Robert L; Samei, Ehsan
2016-05-21
Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients ([Formula: see text]) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate [Formula: see text] values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying [Formula: see text] with the organ dose coefficients ([Formula: see text]). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and dose simulated using TCM Monte Carlo program was quantified. We further compared the
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-01
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation. PMID:26797612
ARKCoS: artifact-suppressed accelerated radial kernel convolution on the sphere
NASA Astrophysics Data System (ADS)
Elsner, F.; Wandelt, B. D.
2011-08-01
We describe a hybrid Fourier/direct space convolution algorithm for compact radial (azimuthally symmetric) kernels on the sphere. For high resolution maps covering a large fraction of the sky, our implementation takes advantage of the inexpensive massive parallelism afforded by consumer graphics processing units (GPUs). Its applications include modeling of instrumental beam shapes in terms of compact kernels, computation of fine-scale wavelet transformations, and optimal filtering for the detection of point sources. Our algorithm works for any pixelization where pixels are grouped into isolatitude rings. Even for kernels that are not bandwidth-limited, ringing features are completely absent on an ECP grid. We demonstrate that they can be highly suppressed on the popular HEALPix pixelization, for which we develop a freely available implementation of the algorithm. As an example application, we show that running on a high-end consumer graphics card our method speeds up beam convolution for simulations of a characteristic Planck high frequency instrument channel by two orders of magnitude compared to the commonly used HEALPix implementation on one CPU core, while typically maintaining a fractional RMS accuracy of about 1 part in 105.
Performance of convolutional codes on fading channels typical of planetary entry missions
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.; Reale, T. J.
1974-01-01
The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding, and for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms are considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes the bit error probability performance was investigated as a function of E sub b/N sub o parameterized by the fading channel parameters. For longer constraint length codes the effect was examined of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms. The effects of simple block interleaving in combatting the memory of the channel is explored, using the analytic approach or digital computer simulation.
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-01
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612
Region-Based Convolutional Networks for Accurate Object Detection and Segmentation.
Girshick, Ross; Donahue, Jeff; Darrell, Trevor; Malik, Jitendra
2016-01-01
Object detection performance, as measured on the canonical PASCAL VOC Challenge datasets, plateaued in the final years of the competition. The best-performing methods were complex ensemble systems that typically combined multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 50 percent relative to the previous best result on VOC 2012-achieving a mAP of 62.4 percent. Our approach combines two ideas: (1) one can apply high-capacity convolutional networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data are scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, boosts performance significantly. Since we combine region proposals with CNNs, we call the resulting model an R-CNN or Region-based Convolutional Network. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn. PMID:26656583
Experimental investigation of shock wave diffraction over a single- or double-sphere model
NASA Astrophysics Data System (ADS)
Zhang, L. T.; Wang, T. H.; Hao, L. N.; Huang, B. Q.; Chen, W. J.; Shi, H. H.
2016-03-01
In this study, the unsteady drag produced by the interaction of a shock wave with a single- and a double-sphere model is measured using imbedded accelerometers. The shock wave is generated in a horizontal circular shock tube with an inner diameter of 200 mm. The effect of the shock Mach number and the dimensionless distance between spheres is investigated. The time-history of the drag coefficient is obtained based on Fast Fourier Transformation (FFT) band-block filtering and polynomial fitting of the measured acceleration. The measured peak values of the drag coefficient, with the associated uncertainty, are reported.
Numerical modeling of Subthreshold region of junctionless double surrounding gate MOSFET (JLDSG)
NASA Astrophysics Data System (ADS)
Rewari, Sonam; Haldar, Subhasis; Nath, Vandana; Deswal, S. S.; Gupta, R. S.
2016-02-01
In this paper, Numerical Model for Electric Potential, Subthreshold Current and Subthreshold Swing for Junctionless Double Surrounding Gate(JLDSG) MOSFEThas been developed using superposition method. The results have also been evaluated for different silicon film thickness, oxide film thickness and channel length. The numerical results so obtained are in good agreement with the simulated data. Also, the results of JLDSG MOSFET have been compared with the conventional Junctionless Surrounding Gate (JLSG) MOSFET and it is observed that JLDSG MOSFET has improved drain currents, transconductance, outputconductance, Transconductance Generation Factor (TGF) and Subthreshold Slope.
Double pendulum model for a tennis stroke including a collision process
NASA Astrophysics Data System (ADS)
Youn, Sun-Hyun
2015-10-01
By means of adding a collision process between the ball and racket in the double pendulum model, we analyzed the tennis stroke. The ball and the racket system may be accelerated during the collision time; thus, the speed of the rebound ball does not simply depend on the angular velocity of the racket. A higher angular velocity sometimes gives a lower rebound ball speed. We numerically showed that the proper time-lagged racket rotation increased the speed of the rebound ball by 20%. We also showed that the elbow should move in the proper direction in order to add the angular velocity of the racket.
Mistry, Piyush R; Pradhan, Vikas H; Desai, Khyati R
2013-01-01
The present paper analytically discusses the phenomenon of fingering in double phase flow through homogenous porous media by using variational iteration method. Fingering phenomenon is a physical phenomenon which occurs when a fluid contained in a porous medium is displaced by another of lesser viscosity which frequently occurred in problems of petroleum technology. In the current investigation a mathematical model is presented for the fingering phenomenon under certain simplified assumptions. An approximate analytical solution of the governing nonlinear partial differential equation is obtained using variational iteration method with the use of Mathematica software. PMID:24348161
Doubled CO2 Effects on NO(y) in a Coupled 2D Model
NASA Technical Reports Server (NTRS)
Rosenfield, J. E.; Douglass, A. R.
1998-01-01
Changes in temperature and ozone have been the main focus of studies of the stratospheric impact of doubled CO2. Increased CO2 is expected to cool the stratosphere, which will result in increases in stratospheric ozone through temperature dependent loss rates. Less attention has been paid to changes in minor constituents which affect the O3 balance and which may provide additional feedbacks. Stratospheric NO(y) fields calculated using the GSFC 2D interactive chemistry-radiation-dynamics model show significant sensitivity to the model CO2. Modeled upper stratospheric NO(y) decreases by about 15% in response to CO2 doubling, mainly due to the temperature decrease calculated to result from increased cooling. The abundance of atomic nitrogen, N, increases because the rate of the strongly temperature dependent reaction N + O2 yields NO + O decreases at lower temperatures. Increased N leads to an increase in the loss of NO(y) which is controlled by the reaction N + NO yields N2 + O. The NO(y) reduction is shown to be sensitive to the NO photolysis rate. The decrease in the O3 loss rate due to the NO(y) changes is significant when compared to the decrease in the O3 loss rate due to the temperature changes.
Simulation of the conformation and dynamics of a double-helical model for DNA.
Huertas, M L; Navarro, S; Lopez Martinez, M C; García de la Torre, J
1997-01-01
We propose a partially flexible, double-helical model for describing the conformational and dynamic properties of DNA. In this model, each nucleotide is represented by one element (bead), and the known geometrical features of the double helix are incorporated in the equilibrium conformation. Each bead is connected to a few neighbor beads in both strands by means of stiff springs that maintain the connectivity but still allow for some extent of flexibility and internal motion. We have used Brownian dynamics simulation to sample the conformational space and monitor the overall and internal dynamics of short DNA pieces, with up to 20 basepairs. From Brownian trajectories, we calculate the dimensions of the helix and estimate its persistence length. We obtain translational diffusion coefficient and various rotational relaxation times, including both overall rotation and internal motion. Although we have not carried out a detailed parameterization of the model, the calculated properties agree rather well with experimental data available for those oligomers. Images FIGURE 3 PMID:9414226
Modeling and simulation study of novel Double Gate Ferroelectric Junctionless (DGFJL) transistor
NASA Astrophysics Data System (ADS)
Mehta, Hema; Kaur, Harsupreet
2016-09-01
In this work we have proposed an analytical model for Double Gate Ferroelectric Junctionless Transistor (DGFJL), a novel device, which incorporates the advantages of both Junctionless (JL) transistor and Negative Capacitance phenomenon. A complete drain current model has been developed by using Landau-Khalatnikov equation and parabolic potential approximation to analyze device behavior in different operating regions. It has been demonstrated that DGFJL transistor acts as a step-up voltage transformer and exhibits subthreshold slope values less than 60 mV/dec. In order to assess the advantages offered by the proposed device, extensive comparative study has been done with equivalent Double Gate Junctionless Transistor (DGJL) transistor with gate insulator thickness same as ferroelectric gate stack thickness of DGFJL transistor. It is shown that incorporation of ferroelectric layer can overcome the variability issues observed in JL transistors. The device has been studied over a wide range of parameters and bias conditions to comprehensively investigate the device design guidelines to obtain a better insight into the application of DGFJL as a potential candidate for future technology nodes. The analytical results so derived from the model have been verified with simulated results obtained using ATLAS TCAD simulator and a good agreement has been found.
Convolutional neural networks for mammography mass lesion classification.
Arevalo, John; Gonzalez, Fabio A; Ramos-Pollan, Raul; Oliveira, Jose L; Guevara Lopez, Miguel Angel
2015-08-01
Feature extraction is a fundamental step when mammography image analysis is addressed using learning based approaches. Traditionally, problem dependent handcrafted features are used to represent the content of images. An alternative approach successfully applied in other domains is the use of neural networks to automatically discover good features. This work presents an evaluation of convolutional neural networks to learn features for mammography mass lesions before feeding them to a classification stage. Experimental results showed that this approach is a suitable strategy outperforming the state-of-the-art representation from 79.9% to 86% in terms of area under the ROC curve. PMID:26736382
Convolution seal for transition duct in turbine system
Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray
2015-03-10
A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.
Convolution seal for transition duct in turbine system
Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray
2015-05-26
A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.
A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution
Walker, D.W.
1992-03-01
This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.
Visualizing Vector Fields Using Line Integral Convolution and Dye Advection
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu
1996-01-01
We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
New Syndrome Decoding Techniques for the (n, K) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector (circumflex)E(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystemmatic (3,1)CC.
New syndrome decoding techniques for the (n, k) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector (circumflex)E(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystemmatic (3, 1)CC. Previously announced in STAR as N83-34964
Simplified Syndrome Decoding of (n, 1) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector cirumflex E(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsymmetric (2,1)CC.
Faster GPU-based convolutional gridding via thread coarsening
NASA Astrophysics Data System (ADS)
Merry, B.
2016-07-01
Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.
Double-layer parallelization for hydrological model calibration on HPC systems
NASA Astrophysics Data System (ADS)
Zhang, Ang; Li, Tiejian; Si, Yuan; Liu, Ronghua; Shi, Haiyun; Li, Xiang; Li, Jiaye; Wu, Xia
2016-04-01
Large-scale problems that demand high precision have remarkably increased the computational time of numerical simulation models. Therefore, the parallelization of models has been widely implemented in recent years. However, computing time remains a major challenge when a large model is calibrated using optimization techniques. To overcome this difficulty, we proposed a double-layer parallel system for hydrological model calibration using high-performance computing (HPC) systems. The lower-layer parallelism is achieved using a hydrological model, the Digital Yellow River Integrated Model, which was parallelized by decomposing river basins. The upper-layer parallelism is achieved by simultaneous hydrological simulations with different parameter combinations in the same generation of the genetic algorithm and is implemented using the job scheduling functions of an HPC system. The proposed system was applied to the upstream of the Qingjian River basin, a sub-basin of the middle Yellow River, to calibrate the model effectively by making full use of the computing resources in the HPC system and to investigate the model's behavior under various parameter combinations. This approach is applicable to most of the existing hydrology models for many applications.
Preliminary results from a four-working space, double-acting piston, Stirling engine controls model
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1980-01-01
A four working space, double acting piston, Stirling engine simulation is being developed for controls studies. The development method is to construct two simulations, one for detailed fluid behavior, and a second model with simple fluid behaviour but containing the four working space aspects and engine inertias, validate these models separately, then upgrade the four working space model by incorporating the detailed fluid behaviour model for all four working spaces. The single working space (SWS) model contains the detailed fluid dynamics. It has seven control volumes in which continuity, energy, and pressure loss effects are simulated. Comparison of the SWS model with experimental data shows reasonable agreement in net power versus speed characteristics for various mean pressure levels in the working space. The four working space (FWS) model was built to observe the behaviour of the whole engine. The drive dynamics and vehicle inertia effects are simulated. To reduce calculation time, only three volumes are used in each working space and the gas temperature are fixed (no energy equation). Comparison of the FWS model predicted power with experimental data shows reasonable agreement. Since all four working spaces are simulated, the unique capabilities of the model are exercised to look at working fluid supply transients, short circuit transients, and piston ring leakage effects.
Low-order mathematical modelling of electric double layer supercapacitors using spectral methods
NASA Astrophysics Data System (ADS)
Drummond, Ross; Howey, David A.; Duncan, Stephen R.
2015-03-01
This work investigates two physics-based models that simulate the non-linear partial differential algebraic equations describing an electric double layer supercapacitor. In one model the linear dependence between electrolyte concentration and conductivity is accounted for, while in the other model it is not. A spectral element method is used to discretise the model equations and it is found that the error convergence rate with respect to the number of elements is faster compared to a finite difference method. The increased accuracy of the spectral element approach means that, for a similar level of solution accuracy, the model simulation computing time is approximately 50% of that of the finite difference method. This suggests that the spectral element model could be used for control and state estimation purposes. For a typical supercapacitor charging profile, the numerical solutions from both models closely match experimental voltage and current data. However, when the electrolyte is dilute or where there is a long charging time, a noticeable difference between the numerical solutions of the two models is observed. Electrical impedance spectroscopy simulations show that the capacitance of the two models rapidly decreases when the frequency of the perturbation current exceeds an upper threshold.
A double hit model for the distribution of time to AIDS onset
NASA Astrophysics Data System (ADS)
Chillale, Nagaraja Rao
2013-09-01
Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection this is a random variable and is probably the longest one. The probability distribution of incubation time is the major determinant of the relation between the incidences of HIV infection and its manifestation to Aids. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in estimation of AIDS onset time and stresses the need for its precise estimation, ii) highlights some of the modelling features of onset distribution including immune failure mechanism, and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers and examined the applicability of a few standard probability models.
Minimal model for double diffusion and its application to Kivu, Nyos, and Powell Lake
NASA Astrophysics Data System (ADS)
Toffolon, Marco; Wüest, Alfred; Sommer, Tobias
2015-09-01
Double diffusion originates from the markedly different molecular diffusion rates of heat and salt in water, producing staircase structures under favorable conditions. The phenomenon essentially consists of two processes: molecular diffusion across sharp interfaces and convective transport in the gravitationally unstable layers. In this paper, we propose a model that is based on the one-dimensional description of these two processes only, and—by self-organization—is able to reproduce both the large-scale dynamics and the structure of individual layers, while accounting for different boundary conditions. Two parameters characterize the model, describing the time scale for the formation of unstable water parcels and the optimal spatial resolution. Theoretical relationships allow for the identification of the influence of these parameters on the layer structure and on the mass and heat fluxes. The performances of the model are tested for three different lakes (Powell, Kivu, and Nyos), showing a remarkable agreement with actual microstructure measurements.
Impact of stray charge on interconnect wire via probability model of double-dot system
NASA Astrophysics Data System (ADS)
Xiangye, Chen; Li, Cai; Qiang, Zeng; Xinqiao, Wang
2016-02-01
The behavior of quantum cellular automata (QCA) under the influence of a stray charge is quantified. A new time-independent switching paradigm, a probability model of the double-dot system, is developed. Superiority in releasing the calculation operation is presented by the probability model compared to previous stray charge analysis utilizing ICHA or full-basis calculation. Simulation results illustrate that there is a 186-nm-wide region surrounding a QCA wire where a stray charge will cause the target cell to switch unsuccessfully. The failure is exhibited by two new states' dominating the target cell. Therefore, a bistable saturation model is no longer applicable for stray charge analysis. Project supported by the National Natural Science Foundation of China (No. 61172043) and the Key Program of Shaanxi Provincial Natural Science for Basic Research (No. 2011JZ015).
Bilepton contributions to the neutrinoless double beta decay in the economical 3-3-1 model
Soa, D. V. Dong, P. V. Huong, T. T.; Long, H. N.
2009-05-15
A new bound of the mixing angle between charged gauge bosons (the standard-model W and the bilepton Y) in the economical 3-3-1 model is given. Possible contributions of the charged bileptons to the neutrinoless double beta (({beta}{beta}){sub 0{nu}}) decay are discussed. We show that the ({beta}{beta}){sub 0{nu}} decay in this model is due to both the Majorana
Shell-Model Calculations of Two-Nucleon Tansfer Related to Double Beta Decay
NASA Astrophysics Data System (ADS)
Brown, Alex
2013-10-01
I will discuss theoretical results for two-nucleon transfer cross sections for nuclei in the regions of 48Ca, 76Ge and 136Xe of interest for testing the wavefuntions used for the nuclear matrix elements in double-beta decay. Various reaction models are used. A simple cluster transfer model gives relative cross sections. Thompson's code Fresco with direct and sequential transfer is used for absolute cross sections. Wavefunctions are obtained in large-basis proton-neutron coupled model spaces with the code NuShellX with realistic effecive Hamiltonians such as those used for the recent results for 136Xe [M. Horoi and B. A. Brown, Phys. Rev. Lett. 110, 222502 (2013)]. I acknowledge support from NSF grant PHY-1068217.
Double ITCZ in Coupled Ocean-Atmosphere Models: From CMIP3 to CMIP5
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxiao; Liu, Hailong; Zhang, Minghua
2015-10-01
Recent progress in reducing the double Intertropical Convergence Zone bias in coupled climate models is examined based on multimodel ensembles of historical climate simulations from Phase 3 and Phase 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5). Biases common to CMIP3 and CMIP5 models include spurious precipitation maximum in the southeastern Pacific, warmer sea surface temperature (SST), weaker easterly, and stronger meridional wind divergences away from the equator relative to observations. It is found that there is virtually no improvement in all these measures from the CMIP3 ensemble to the CMIP5 ensemble models. The five best models in the two ensembles as measured by the spatial correlations are also assessed. No progress can be identified in the subensembles of the five best models from CMIP3 to CMIP5 even though more models participated in CMIP5; the biases of excessive precipitation and overestimated SST in southeastern Pacific are even worse in the CMIP5 models.
Heglund, P.J.; Nichols, J.D.; Hines, J.E.; Sauer, J.; Fallon, J.; Fallon, F.
2001-01-01
Point counts are a controversial sampling method for bird populations because the counts are not censuses, and the proportion of birds missed during counting generally is not estimated. We applied a double-observer approach to estimate detection rates of birds from point counts in Maryland, USA, and test whether detection rates differed between point counts conducted in field habitats as opposed to wooded habitats. We conducted 2 analyses. The first analysis was based on 4 clusters of counts (routes) surveyed by a single pair of observers. A series of models was developed with differing assumptions about sources of variation in detection probabilities and fit using program SURVIV. The most appropriate model was selected using Akaike's Information Criterion. The second analysis was based on 13 routes (7 woods and 6 field routes) surveyed by various observers in which average detection rates were estimated by route and compared using a t-test. In both analyses, little evidence existed for variation in detection probabilities in relation to habitat. Double-observer methods provide a reasonable means of estimating detection probabilities and testing critical assumptions needed for analysis of point counts.
Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks.
Dosovitskiy, Alexey; Fischer, Philipp; Springenberg, Jost Tobias; Riedmiller, Martin; Brox, Thomas
2016-09-01
Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor. PMID:26540673
Enhancing Neutron Beam Production with a Convoluted Moderator
Iverson, Erik B; Baxter, David V; Muhrer, Guenter; Ansell, Stuart; Gallmeier, Franz X; Dalgliesh, Robert; Lu, Wei; Kaiser, Helmut
2014-10-01
We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.
Fluence-convolution broad-beam (FCBB) dose calculation.
Lu, Weiguo; Chen, Mingli
2010-12-01
IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and lateral spread function (LSF) and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with LSF followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in complexity of O(N(3)) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suitable for calculating the iteration dose during IMRT optimization. PMID:21081826
A Mathematical Motivation for Complex-Valued Convolutional Networks.
Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur
2016-05-01
A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets. PMID:26890348
A three-dimensional statistical mechanical model of folding double-stranded chain molecules
NASA Astrophysics Data System (ADS)
Zhang, Wenbing; Chen, Shi-Jie
2001-05-01
Based on a graphical representation of intrachain contacts, we have developed a new three-dimensional model for the statistical mechanics of double-stranded chain molecules. The theory has been tested and validated for the cubic lattice chain conformations. The statistical mechanical model can be applied to the equilibrium folding thermodynamics of a large class of chain molecules, including protein β-hairpin conformations and RNA secondary structures. The application of a previously developed two-dimensional model to RNA secondary structure folding thermodynamics generally overestimates the breadth of the melting curves [S-J. Chen and K. A. Dill, Proc. Natl. Acad. Sci. U.S.A. 97, 646 (2000)], suggesting an underestimation for the sharpness of the conformational transitions. In this work, we show that the new three-dimensional model gives much sharper melting curves than the two-dimensional model. We believe that the new three-dimensional model may give much improved predictions for the thermodynamic properties of RNA conformational changes than the previous two-dimensional model.
Verilog-A implementation of a double-gate junctionless compact model for DC circuit simulations
NASA Astrophysics Data System (ADS)
Alvarado, J.; Flores, P.; Romero, S.; Ávila-Herrera, F.; González, V.; Soto-Cruz, B. S.; Cerdeira, A.
2016-07-01
A physically based model of the double-gate juntionless transistor which is capable of describing accumulation and depletion regions is implemented in Verilog-A in order to perform DC circuit simulations. Analytical description of the difference of potentials between the center and the surface of the silicon layer allows the determination of the mobile charges. Furthermore, mobility degradation, series resistance, as well as threshold voltage roll-off, drain saturation voltage, channel shortening and velocity saturation are also considered. In order to provide this model to all of the community, the implementation of this model is performed in Ngspice, which is a free circuit simulation with an ADMS interface to integrate Verilog-A models. Validation of the model implementation is done through 2D numerical simulations of transistors with 1 μ {{m}} and 40 {{nm}} silicon channel length and 1 × 1019 or 5× {10}18 {{{cm}}}-3 doping concentration of the silicon layer with 10 and 15 {{nm}} silicon thickness. Good agreement between the numerical simulated behavior and model implementation is obtained, where only eight model parameters are used.
Mitra, S.; Rocha, G.; Gorski, K. M.; Lawrence, C. R.; Huffenberger, K. M.; Eriksen, H. K.; Ashdown, M. A. J. E-mail: graca@caltech.edu E-mail: Charles.R.Lawrence@jpl.nasa.gov E-mail: h.k.k.eriksen@astro.uio.no
2011-03-15
Precise measurement of the angular power spectrum of the cosmic microwave background (CMB) temperature and polarization anisotropy can tightly constrain many cosmological models and parameters. However, accurate measurements can only be realized in practice provided all major systematic effects have been taken into account. Beam asymmetry, coupled with the scan strategy, is a major source of systematic error in scanning CMB experiments such as Planck, the focus of our current interest. We envision Monte Carlo methods to rigorously study and account for the systematic effect of beams in CMB analysis. Toward that goal, we have developed a fast pixel space convolution method that can simulate sky maps observed by a scanning instrument, taking into account real beam shapes and scan strategy. The essence is to pre-compute the 'effective beams' using a computer code, 'Fast Effective Beam Convolution in Pixel space' (FEBeCoP), that we have developed for the Planck mission. The code computes effective beams given the focal plane beam characteristics of the Planck instrument and the full history of actual satellite pointing, and performs very fast convolution of sky signals using the effective beams. In this paper, we describe the algorithm and the computational scheme that has been implemented. We also outline a few applications of the effective beams in the precision analysis of Planck data, for characterizing the CMB anisotropy and for detecting and measuring properties of point sources.
Development of kineto-dynamic quarter-car model for synthesis of a double wishbone suspension
NASA Astrophysics Data System (ADS)
Balike, K. P.; Rakheja, S.; Stiharu, I.
2011-02-01
Linear or nonlinear 2-degrees of freedom (DOF) quarter-car models have been widely used to study the conflicting dynamic performances of a vehicle suspension such as ride quality, road holding and rattle space requirements. Such models, however, cannot account for contributions due to suspension kinematics. Considering the proven simplicity and effectiveness of a quarter-car model for such analyses, this article presents the formulation of a comprehensive kineto-dynamic quarter-car model to study the kinematic and dynamic properties of a linkage suspension, and influences of linkage geometry on selected performance measures. An in-plane 2-DOF model was formulated incorporating the kinematics of a double wishbone suspension comprising an upper control arm, a lower control arm and a strut mounted on the lower control arm. The equivalent suspension and damping rates of the suspension model are analytically derived that could be employed in a conventional quarter-car model. The dynamic responses of the proposed model were evaluated under harmonic and bump/pothole excitations, idealised by positive/negative rounded pulse displacement and compared with those of the linear quarter-car model to illustrate the contributions due to suspension kinematics. The kineto-dynamic model revealed considerable variations in the wheel and damping rates, camber and wheel-track. Owing to the asymmetric kinematic behaviour of the suspension system, the dynamic responses of the kineto-dynamic model were observed to be considerably asymmetric about the equilibrium. The proposed kineto-dynamic model was subsequently applied to study the influences of links geometry in an attempt to seek reduced suspension lateral packaging space without compromising the kinematic and dynamic performances.
Hua, Lei; Quan, Chanqin
2016-01-01
The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performances strongly depend on the handcraft features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) only takes the sdp and word embedding as input and (2) could avoid bias from feature selection by using CNN. We performed experiments on standard Aimed and BioInfer datasets, and the experimental results demonstrated that our approach outperformed state-of-the-art kernel based methods. In particular, by tracking the sdpCNN model, we find that sdpCNN could extract key features automatically and it is verified that pretrained word embedding is crucial in PPI task. PMID:27493967
Quan, Chanqin
2016-01-01
The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performances strongly depend on the handcraft features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) only takes the sdp and word embedding as input and (2) could avoid bias from feature selection by using CNN. We performed experiments on standard Aimed and BioInfer datasets, and the experimental results demonstrated that our approach outperformed state-of-the-art kernel based methods. In particular, by tracking the sdpCNN model, we find that sdpCNN could extract key features automatically and it is verified that pretrained word embedding is crucial in PPI task. PMID:27493967
NASA Astrophysics Data System (ADS)
Ramadan, Omar Salameh
2010-03-01
Accurate and unconditionally stable finite difference time domain (FDTD) algorithm is presented for modeling electromagnetic wave propagation in double-negative (DNG) meta-material domains. The proposed algorithm is based on incorporating the Bilinear transformation technique into the FDTD implementations of Maxwell’s equations. The stability of the proposed approach is studied by combining the von Neumann method with the Routh-Huwitz criterion and it has been observed that the proposed algorithm is free from the Courant-Friedrichs-Lewy (CFL) stability limit of the conventional FDTD scheme. Furthermore, the proposed algorithm is incorporated with the split-step FDTD scheme to model two-dimensional problems. Numerical examples carried out in one and two dimensional domains are included to show the validity of the proposed algorithm.
Energy Science and Technology Software Center (ESTSC)
2007-07-09
Version 02 PRECO-2006 is a two-component exciton model code for the calculation of double differential cross sections of light particle nuclear reactions. PRECO calculates the emission of light particles (A = 1 to 4) from nuclear reactions induced by light particles on a wide variety of target nuclei. Their distribution in both energy and angle is calculated. Since it currently only considers the emission of up to two particles in any given reaction, it ismore » most useful for incident energies of 14 to 30 MeV when used as a stand-alone code. However, the preequilibrium calculations are valid up to at least around 100 MeV, and these can be used as input for more complete evaporation calculations, such as are performed in a Hauser-Feshbach model code. Finally, the production cross sections for specific product nuclides can be obtained« less
A double-layer based model of ion confinement in electron cyclotron resonance ion source
Mascali, D. Neri, L.; Celona, L.; Castro, G.; Gammino, S.; Ciavola, G.; Torrisi, G.; Università Mediterranea di Reggio Calabria, Dipartimento di Ingegneria dell’Informazione, delle Infrastrutture e dell’Energia Sostenibile, Via Graziella, I-89100 Reggio Calabria ; Sorbello, G.; Università degli Studi di Catania, Dipartimento di Ingegneria Elettrica Elettronica ed Informatica, Viale Andrea Doria 6, 95125 Catania
2014-02-15
The paper proposes a new model of ion confinement in ECRIS, which can be easily generalized to any magnetic configuration characterized by closed magnetic surfaces. Traditionally, ion confinement in B-min configurations is ascribed to a negative potential dip due to superhot electrons, adiabatically confined by the magneto-static field. However, kinetic simulations including RF heating affected by cavity modes structures indicate that high energy electrons populate just a thin slab overlapping the ECR layer, while their density drops down of more than one order of magnitude outside. Ions, instead, diffuse across the electron layer due to their high collisionality. This is the proper physical condition to establish a double-layer (DL) configuration which self-consistently originates a potential barrier; this “barrier” confines the ions inside the plasma core surrounded by the ECR surface. The paper will describe a simplified ion confinement model based on plasma density non-homogeneity and DL formation.
Explicit model for direct tunneling current in double-gate MOSFETs through a dielectric stack
NASA Astrophysics Data System (ADS)
Chaves, Ferney; Jiménez, David; Suñé, Jordi
2012-10-01
In this paper, we present an explicit compact quantum model for the direct tunneling current through dual layer SiO2/high-K dielectrics in Double Gate (DG) structures. Specifically, an explicit closed-form expression is proposed, useful to study the impact of dielectric constants and band offsets in determining the gate leakage, allowing to identify materials to construct these devices, and useful for the fast evaluation of the gate leakage in the context of electrical circuit simulators. A comparison with self-consistent numerical solution of Schrödinger-Poisson (SP) equations has been performed to demonstrate the accuracy of the model. Finally, a benchmarking test of different gate stacks have been proposed searching to fulfill the gate tunneling limits as projected by the International Technology Roadmap for Semiconductors.
NASA Astrophysics Data System (ADS)
Lachaume, R.; Berger, J.-P.
2012-07-01
Bandwidth smearing is a chromatic aberration due to the finite frequency bandwidth. In long-baseline optical interferometry terms, it is when the angular extension of the source is greater than the coherence length of the interferogram. As a consequence, separated parts of the source will contribute to fringe packets that are not fully overlapping; it is a transition from the classical interferometric regime to a double or multiple fringe packet. While studied in radio interferometry, there has been little work on the matter in the optical, where observables are measured and derived in a different manner, and are more strongly impacted by the turbulent atmosphere. We provide here the formalism and a set of usable equations to model and correct for the impact of smearing on the fringe contrast and phase, with the case of multiple stellar systems in mind. The atmosphere is briefly modeled and discussed.
Modeling nitrogen and water management effects in a wheat-maize double-cropping system.
Fang, Q; Ma, L; Yu, Q; Malone, R W; Saseendran, S A; Ahuja, L R
2008-01-01
Excessive N and water use in agriculture causes environmental degradation and can potentially jeopardize the sustainability of the system. A field study was conducted from 2000 to 2002 to study the effects of four N treatments (0, 100, 200, and 300 kg N ha(-1) per crop) on a wheat (Triticum aestivum L.) and maize (Zea mays L.) double cropping system under 70 +/- 15% field capacity in the North China Plain (NCP). The root zone water quality model (RZWQM), with the crop estimation through resource and environment synthesis (CERES) plant growth modules incorporated, was evaluated for its simulation of crop production, soil water, and N leaching in the double cropping system. Soil water content, biomass, and grain yield were better simulated with normalized root mean square errors (NRMSE, RMSE divided by mean observed value) from 0.11 to 0.15 than soil NO(3)-N and plant N uptake that had NRMSE from 0.19 to 0.43 across these treatments. The long-term simulation with historical weather data showed that, at 200 kg N ha(-1) per crop application rate, auto-irrigation triggered at 50% of the field capacity and recharged to 60% field capacity in the 0- to 50-cm soil profile were adequate for obtaining acceptable yield levels in this intensified double cropping system. Results also showed potential savings of more than 30% of the current N application rates per crop from 300 to 200 kg N ha(-1), which could reduce about 60% of the N leaching without compromising crop yields. PMID:18948476
A new hybrid double divisor ratio spectra method for the analysis of ternary mixtures
NASA Astrophysics Data System (ADS)
Youssef, Rasha M.; Maher, Hadir M.
2008-10-01
A new spectrophotometric method was developed for the simultaneous determination of ternary mixtures, without prior separation steps. This method is based on convolution of the double divisor ratio spectra, obtained by dividing the absorption spectrum of the ternary mixture by a standard spectrum of two of the three compounds in the mixture, using combined trigonometric Fourier functions. The magnitude of the Fourier function coefficients, at either maximum or minimum points, is related to the concentration of each drug in the mixture. The mathematical explanation of the procedure is illustrated. The method was applied for the assay of a model mixture consisting of isoniazid (ISN), rifampicin (RIF) and pyrazinamide (PYZ) in synthetic mixtures, commercial tablets and human urine samples. The developed method was compared with the double divisor ratio spectra derivative method (DDRD) and derivative ratio spectra-zero-crossing method (DRSZ). Linearity, validation, accuracy, precision, limits of detection, limits of quantitation, and other aspects of analytical validation are included in the text.
Tectonic and petrologic evolution of the Western Mediterranean: the double polarity subduction model
NASA Astrophysics Data System (ADS)
Melchiorre, Massimiliano; Vergés, Jaume; Fernàndez, Manel; Torné, Montserrat; Casciello, Emilio
2016-04-01
The geochemical composition of the mantle beneath the Mediterranean area is extremely heterogeneous. This feature results in volcanic products whose geochemical features in some cases do not correspond to the geodynamic environment in which they are sampled and that is observed at present day. The subduction-related models that have been developed during the last decades to explain the evolution of the Western Mediterranean are mainly based on geologic and seismologic evidences, as well as petrography and age of exhumation of the metamorphic units that compose the inner parts of the different arcs. Except few cases, most of these models are poorly constrained from a petrologic point of view. Usually the volcanic activity that affected the Mediterranean area since Oligocene has been only used as a corollary, and not as a key constrain. This choice is strictly related to the great geochemical variability of the volcanic products erupted in the Western Mediterranean, due to events of long-term recycling affecting the mantle beneath the Mediterranean since the Variscan Orogeny, together with depletion episodes due to partial melting. We consider an evolutionary scenario for the Western Mediterranean based on a double polarity subduction model according to which two opposite slabs separated by a transform fault of the original Jurassic rift operated beneath the Western and Central Mediterranean. Our aim has been to reconstruct the evolution of the Western Mediterranean since the Oligocene considering the volcanic activity that affected this area since ~30 Ma and supporting the double polarity subduction model with the petrology of the erupted rocks.
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy. PMID:24710398
A Hybrid Double-Layer Master-Slave Model For Multicore-Node Clusters
NASA Astrophysics Data System (ADS)
Liu, Gang; Schmider, Hartmut; Edgecombe, Kenneth E.
2012-10-01
The Double-Layer Master-Slave Model (DMSM) is a suitable hybrid model for executing a workload that consists of multiple independent tasks of varying length on a cluster consisting of multicore nodes. In this model, groups of individual tasks are first deployed to the cluster nodes through an MPI based Master-Slave model. Then, each group is processed by multiple threads on the node through an OpenMP based All-Slave approach. The lack of thread safety of most MPI libraries has to be addressed by a judicious use of OpenMP critical regions and locks. The HPCVL DMSM Library implements this model in Fortran and C. It requires a minimum of user input to set up the framework for the model and to define the individual tasks. Optionally, it supports the dynamic distribution of task-related data and the collection of results at runtime. This library is freely available as source code. Here, we outline the working principles of the library and on a few examples demonstrate its capability to efficiently distribute a workload on a distributed-memory cluster with shared-memory nodes.
NASA Astrophysics Data System (ADS)
>Oon Kheng Heong,
2013-06-01
There are various types of UWB antennas can be used to scavenge energy from the air and one of them is the printed disc monopole antenna. One of the new challenges imposed on ultra wideband is the design of a generalized antenna circuit model. It is developed in order to extract the inductance and capacitance values of the UWB antennas. In this research work, the developed circuit model can be used to represent the rectangular printed disc monopole antenna with double steps. The antenna structure is simulated with CST Microwave Studio, while the circuit model is simulated with AWR Microwave Office. In order to ensure the simulation result from the circuit model is accurate, the circuit model is also simulated using Mathlab program. The developed circuit model is found to be able to depict the actual UWB antenna. Energy harvesting from environmental wirelessly is an emerging method, which forms a promising alternative to existing energy scavenging system. The developed UWB can be used to scavenge wideband energy from electromagnetic wave present in the environment.
Tsehaye, Iyob; Jones, Michael L.; Irwin, Brian J.; Fielder, David G.; Breck, James E.; Luukkonen, David R.
2015-01-01
The proliferation of double-crested cormorants (DCCOs; Phalacrocorax auritus) in North America has raised concerns over their potential negative impacts on game, cultured and forage fishes, island and terrestrial resources, and other colonial water birds, leading to increased public demands to reduce their abundance. By combining fish surplus production and bird functional feeding response models, we developed a deterministic predictive model representing bird–fish interactions to inform an adaptive management process for the control of DCCOs in multiple colonies in Michigan. Comparisons of model predictions with observations of changes in DCCO numbers under management measures implemented from 2004 to 2012 suggested that our relatively simple model was able to accurately reconstruct past DCCO population dynamics. These comparisons helped discriminate among alternative parameterizations of demographic processes that were poorly known, especially site fidelity. Using sensitivity analysis, we also identified remaining critical uncertainties (mainly in the spatial distributions of fish vs. DCCO feeding areas) that can be used to prioritize future research and monitoring needs. Model forecasts suggested that continuation of existing control efforts would be sufficient to achieve long-term DCCO control targets in Michigan and that DCCO control may be necessary to achieve management goals for some DCCO-impacted fisheries in the state. Finally, our model can be extended by accounting for parametric or ecological uncertainty and including more complex assumptions on DCCO–fish interactions as part of the adaptive management process.
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Liu, D. H.
1981-01-01
The stress distribution in two hole connectors in a double lap joint configuration was studied. The following steps are described: (1) fabrication of photoelastic models of double lap double hole joints designed to determine the stresses in the inner lap; (2) assessment of the effects of joint geometry on the stresses in the inner lap; and (3) quantification of differences in the stresses near the two holes. The two holes were on the centerline of the joint and the joints were loaded in tension, parallel to the centerline. Acrylic slip fit pins through the holes served as fasteners. Two dimensional transmission photoelastic models were fabricated by using transparent acrylic outer laps and a photoelastic model material for the inner laps. It is concluded that the photoelastic fringe patterns which are visible when the models are loaded are due almost entirely to stresses in the inner lap.
Double point source W-phase inversion: Real-time implementation and automated model selection
NASA Astrophysics Data System (ADS)
Nealy, Jennifer L.; Hayes, Gavin P.
2015-12-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
Communication: double-hybrid functionals from adiabatic-connection: the QIDH model.
Brémond, Éric; Sancho-García, Juan Carlos; Pérez-Jiménez, Ángel José; Adamo, Carlo
2014-07-21
A new approach stemming from the adiabatic-connection (AC) formalism is proposed to derive parameter-free double-hybrid (DH) exchange-correlation functionals. It is based on a quadratic form that models the integrand of the coupling parameter, whose components are chosen to satisfy several well-known limiting conditions. Its integration leads to DHs containing a single parameter controlling the amount of exact exchange, which is determined by requiring it to depend on the weight of the MP2 correlation contribution. Two new parameter-free DHs functionals are derived in this way, by incorporating the non-empirical PBE and TPSS functionals in the underlying expression. Their extensive testing using the GMTKN30 benchmark indicates that they are in competition with state-of-the-art DHs, yet providing much better self-interaction errors and opening a new avenue towards the design of accurate double-hybrid exchange-correlation functionals departing from the AC integrand. PMID:25053294
Deformed shell model results for neutrinoless double beta decay of nuclei in A = 60 - 90 region
NASA Astrophysics Data System (ADS)
Sahu, R.; Kota, V. K. B.
2015-03-01
Nuclear transition matrix elements (NTME) for the neutrinoless double beta decay (Oνββ or OνDBD) of 70Zn, 80Se and 82Se nuclei are calculated within the framework of the deformed shell model (DSM) based on Hartree-Fock (HF) states. For 70Zn, jj44b interaction in 2p3/2, 1f5/2, 2p1/2 and 1g9/2 space with 56Ni as the core is employed. However, for 80Se and 82Se nuclei, a modified Kuo interaction with the above core and model space are employed. Most of our calculations in this region were performed with this effective interaction. However, jj44b interaction has been found to be better for 70Zn. The above model space was used in many recent shell model (SM) and interacting boson model (IBM) calculations for nuclei in this region. After ensuring that DSM gives good description of the spectroscopic properties of low-lying levels in these three nuclei considered, the NTME are calculated. The deduced half-lives with these NTME, assuming neutrino mass is 1 eV, are 1.1 × 1026, 2.3 × 1027 and 2.2 × 1024 yr for 70Zn, 80Se and 82Se, respectively.
A 2-D semi-analytical model of double-gate tunnel field-effect transistor
NASA Astrophysics Data System (ADS)
Huifang, Xu; Yuehua, Dai; Ning, Li; Jianbin, Xu
2015-05-01
A 2-D semi-analytical model of double gate (DG) tunneling field-effect transistor (TFET) is proposed. By aid of introducing two rectangular sources located in the gate dielectric layer and the channel, the 2-D Poisson equation is solved by using a semi-analytical method combined with an eigenfunction expansion method. The expression of the surface potential is obtained, which is a special function for the infinite series expressions. The influence of the mobile charges on the potential profile is taken into account in the proposed model. On the basis of the potential profile, the shortest tunneling length and the average electrical field can be derived, and the drain current is then constructed by using Kane's model. In particular, the changes of the tunneling parameters Ak and Bk influenced by the drain—source voltage are also incorporated in the predicted model. The proposed model shows a good agreement with TCAD simulation results under different drain—source voltages, silicon film thicknesses, gate dielectric layer thicknesses, and gate dielectric layer constants. Therefore, it is useful to optimize the DG TFET and this provides a physical insight for circuit level design. Project supported by the National Natural Science Foundation of China (No. 61376106) and the Graduate Innovation Fund of Anhui University.
Shin, Hoo-Chang; Roth, Holger R; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel; Summers, Ronald M
2016-05-01
Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks. PMID:26886976
A GENERAL CIRCULATION MODEL FOR GASEOUS EXOPLANETS WITH DOUBLE-GRAY RADIATIVE TRANSFER
Rauscher, Emily; Menou, Kristen
2012-05-10
We present a new version of our code for modeling the atmospheric circulation on gaseous exoplanets, now employing a 'double-gray' radiative transfer scheme, which self-consistently solves for fluxes and heating throughout the atmosphere, including the emerging (observable) infrared flux. We separate the radiation into infrared and optical components, each with its own absorption coefficient, and solve standard two-stream radiative transfer equations. We use a constant optical absorption coefficient, while the infrared coefficient can scale as a power law with pressure; however, for simplicity, the results shown in this paper use a constant infrared coefficient. Here we describe our new code in detail and demonstrate its utility by presenting a generic hot Jupiter model. We discuss issues related to modeling the deepest pressures of the atmosphere and describe our use of the diffusion approximation for radiative fluxes at high optical depths. In addition, we present new models using a simple form for magnetic drag on the atmosphere. We calculate emitted thermal phase curves and find that our drag-free model has the brightest region of the atmosphere offset by {approx}12 Degree-Sign from the substellar point and a minimum flux that is 17% of the maximum, while the model with the strongest magnetic drag has an offset of only {approx}2 Degree-Sign and a ratio of 13%. Finally, we calculate rates of numerical loss of kinetic energy at {approx}15% for every model except for our strong-drag model, where there is no measurable loss; we speculate that this is due to the much decreased wind speeds in that model.
Using convolutional decoding to improve time delay and phase estimation in digital communications
Ormesher, Richard C.; Mason, John J.
2010-01-26
The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.
There is no MacWilliams identity for convolutional codes. [transmission gain comparison
NASA Technical Reports Server (NTRS)
Shearer, J. B.; Mceliece, R. J.
1977-01-01
An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
NASA Astrophysics Data System (ADS)
Devianto, Dodi
2016-02-01
It is constructed convolution of generated random variable from independent and identically exponential distribution with stabilizer constant. The characteristic function of this distribution is obtained by using Laplace-Stieltjes transform. The uniform continuity property of characteristic function from this convolution is obtained by using analytical methods as basic properties.
NASA Astrophysics Data System (ADS)
Sammons, Daniel; Winfree, William P.; Burke, Eric; Ji, Shuiwang
2016-02-01
Nondestructive evaluation (NDE) utilizes a variety of techniques to inspect various materials for defects without causing changes to the material. X-ray computed tomography (CT) produces large volumes of three dimensional image data. Using the task of identifying delaminations in carbon fiber reinforced polymer (CFRP) composite CT, this work shows that it is possible to automate the analysis of these large volumes of CT data using a machine learning model known as a convolutional neural network (CNN). Further, tests on simulated data sets show that with a robust set of experimental data, it may be possible to go beyond just identification and instead accurately characterize the size and shape of the delaminations with CNNs.
Imaging in scattering media using correlation image sensors and sparse convolutional coding.
Heide, Felix; Xiao, Lei; Kolb, Andreas; Hullin, Matthias B; Heidrich, Wolfgang
2014-10-20
Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the sensor data from correlation sensors can be used can be used to analyze the light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key of our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and the derivation of a new physically-motivated model for transient images with drastically improved sparsity. PMID:25401666
An Efficient Robust Eye Localization by Learning the Convolution Distribution Using Eye Template
Li, Xuan; Dou, Yong; Niu, Xin; Xu, Jiaqing; Xiao, Ruorong
2015-01-01
Eye localization is a fundamental process in many facial analyses. In practical use, it is often challenged by illumination, head pose, facial expression, occlusion, and other factors. It remains great difficulty to achieve high accuracy with short prediction time and low training cost at the same time. This paper presents a novel eye localization approach which explores only one-layer convolution map by eye template using a BP network. Results showed that the proposed method is robust to handle many difficult situations. In experiments, accuracy of 98% and 96%, respectively, on the BioID and LFPW test sets could be achieved in 10 fps prediction rate with only 15-minute training cost. In comparison with other robust models, the proposed method could obtain similar best results with greatly reduced training time and high prediction speed. PMID:26504460
Double proton transfer dynamics of model DNA base pairs in the condensed phase
Kwon, Oh-Hoon; Zewail, Ahmed H.
2007-01-01
The dynamics of excited-state double proton transfer of model DNA base pairs, 7-azaindole dimers, is reported using femtosecond fluorescence spectroscopy. To elucidate the nature of the transfer in the condensed phase, here we examine variation of solvent polarity and viscosity, solute concentration, and isotopic fractionation. The rate of proton transfer is found to be significantly dependent on polarity and on the isotopic composition in the pair. Consistent with a stepwise mechanism, the results support the presence of an ionic intermediate species which forms on the femtosecond time scale and decays to the final tautomeric form on the picosecond time scale. We discuss the results in relation to the molecular motions involved and comment on recent claims of concerted transfer in the condensed phase. The nonconcerted mechanism is in agreement with previous isolated-molecule femtosecond dynamics and is also consistent with the most-recent high-level theoretical study on the same pair. PMID:17502610
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.
Full coupled cluster singles, doubles and triples model for the description of electron correlation
Hoffmann, M.R.
1984-10-01
Equations for the determination of the cluster coefficients in a full coupled cluster theory involving single, double and triple cluster operators with respect to an independent particle reference, expressible as a single determinant of spin-orbitals, are derived. The resulting wave operator is full, or untruncated, consistant with the choice of cluster operator truncation and the requirements of the connected cluster theorem. A time-independent diagrammatic approach, based on second quantization and the Wick theorem, is employed. Final equations are presented that avoid the construction of rank three intermediary tensors. The model is seen to be a computationally viable, size-extensive, high-level description of electron correlation in small polyatomic molecules.
S-model calculations for high-energy-electron-impact double ionization of helium
NASA Astrophysics Data System (ADS)
Gasaneo, G.; Mitnik, D. M.; Randazzo, J. M.; Ancarani, L. U.; Colavecchia, F. D.
2013-04-01
In this paper the double ionization of helium by high-energy electron impact is studied. The corresponding four-body Schrödinger equation is transformed into a set of driven equations containing successive orders in the projectile-target interaction. The transition amplitude obtained from the asymptotic limit of the first-order solution is shown to be equivalent to the familiar first Born approximation. The first-order driven equation is solved within a generalized Sturmian approach for an S-wave (e,3e) model process with high incident energy and small momentum transfer corresponding to published measurements. Two independent numerical implementations, one using spherical and the other hyperspherical coordinates, yield mutual agreement. From our ab initio solution, the transition amplitude is extracted, and single differential cross sections are calculated and could be taken as benchmark values to test other numerical methods in a previously unexplored energy domain.
A novel double loop control model design for chemical unstable processes.
Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He
2014-03-01
In this manuscript, based on Smith predictor control scheme for unstable process in industry, an improved double loop control model is proposed for chemical unstable processes. Inner loop is to stabilize integrating the unstable process and transform the original process to first-order plus pure dead-time dynamic stable process. Outer loop is to enhance the performance of set point response. Disturbance controller is designed to enhance the performance of disturbance response. The improved control system is simple with exact physical meaning. The characteristic equation is easy to realize stabilization. Three controllers are separately design in the improved scheme. It is easy to design each controller and good control performance for the respective closed-loop transfer function separately. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods. PMID:24309506
Suzuki, Yasuyuki; Nomura, Taishin; Morasso, Pietro
2011-01-01
Recent debate about neural mechanisms for stabilizing human upright quiet stance focuses on whether the active and time delay neural feedback control generating muscle torque is continuous or intermittent. A single inverted pendulum controlled by the active torque actuating the ankle joint has often been used for the debate on the presumption of well-known ankle strategy hypothesis claiming that the upright quiet stance can be stabilized mostly by the ankle torque. However, detailed measurements are showing that the hip joint angle exhibits amount of fluctuations comparable with the ankle joint angle during natural postural sway. Here we analyze a double inverted pendulum model during human quiet stance to demonstrate that the conventional proportional and derivative delay feedback control, i.e., the continuous delay PD control with gains in the physiologically plausible range is far from adequate as the neural mechanism for stabilizing human upright quiet stance. PMID:22256061
Scattering theory for the radial H˙1/2-critical wave equation with a cubic convolution
NASA Astrophysics Data System (ADS)
Miao, Changxing; Zhang, Junyong; Zheng, Jiqiang
2015-12-01
In this paper, we study the global well-posedness and scattering for the wave equation with a cubic convolution ∂t2 u - Δu = ± (| x | - 3 *| u | 2) u in dimensions d ≥ 4. We prove that if the radial solution u with life-span I obeys (u, ut) ∈ Lt∞ (I ; H˙x 1 / 2 (Rd) × H˙x - 1 / 2 (Rd)), then u is global and scatters. By the strategy derived from concentration compactness, we show that the proof of the global well-posedness and scattering is reduced to disprove the existence of two scenarios: soliton-like solution and high to low frequency cascade. Making use of the No-waste Duhamel formula and double Duhamel trick, we deduce that these two scenarios enjoy the additional regularity by the bootstrap argument of [7]. This together with virial analysis implies the energy of such two scenarios is zero and so we get a contradiction.
The effect of whitening transformation on pooling operations in convolutional autoencoders
NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua
2015-12-01
Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the pre-processing step, whitening transformation has widely been adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation to reduce the resolution of feature maps and achieve spatial invariance in convolutional neural networks. Conventionally, pooling methods are mainly determined empirically in most previous work. Therefore, our main purpose is to study the relationship between whitening processing and pooling operations in convolutional autoencoders for image classification. We propose an adaptive pooling approach based on the concepts of information entropy to test the effect of whitening on pooling in different conditions. Experimental results on benchmark datasets indicate that the performance of pooling strategies is associated with the distribution of feature activations, which can be affected by whitening processing. This provides guidance for the selection of pooling methods in convolutional autoencoders and other convolutional neural networks.
Generalized Viterbi algorithms for error detection with convolutional codes
NASA Astrophysics Data System (ADS)
Seshadri, N.; Sundberg, C.-E. W.
Presented are two generalized Viterbi algorithms (GVAs) for the decoding of convolutional codes. They are a parallel algorithm that simultaneously identifies the L best estimates of the transmitted sequence, and a serial algorithm that identifies the lth best estimate using the knowledge about the previously found l-1 estimates. These algorithms are applied to combined speech and channel coding systems, concatenated codes, trellis-coded modulation, partial response (continuous-phase modulation), and hybrid ARQ (automatic repeat request) schemes. As an example, for a concatenated code more than 2 dB is gained by the use of the GVA with L = 3 over the Viterbi algorithm for block error rates less than 10-2. The channel is a Rayleigh fading channel.
Tomography by iterative convolution - Empirical study and application to interferometry
NASA Technical Reports Server (NTRS)
Vest, C. M.; Prikryl, I.
1984-01-01
An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
Plane-wave decomposition by spherical-convolution microphone array
NASA Astrophysics Data System (ADS)
Rafaely, Boaz; Park, Munhum
2001-05-01
Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.
Visualization of vasculature with convolution surfaces: method, validation and evaluation.
Oeltze, Steffen; Preim, Bernhard
2005-04-01
We present a method for visualizing vasculature based on clinical computed tomography or magnetic resonance data. The vessel skeleton as well as the diameter information per voxel serve as input. Our method adheres to these data, while producing smooth transitions at branchings and closed, rounded ends by means of convolution surfaces. We examine the filter design with respect to irritating bulges, unwanted blending and the correct visualization of the vessel diameter. The method has been applied to a large variety of anatomic trees. We discuss the validation of the method by means of a comparison to other visualization methods. Surface distance measures are carried out to perform a quantitative validation. Furthermore, we present the evaluation of the method which has been accomplished on the basis of a survey by 11 radiologists and surgeons. PMID:15822811
Finding the complete path and weight enumerators of convolutional codes
NASA Technical Reports Server (NTRS)
Onyszchuk, I.
1990-01-01
A method for obtaining the complete path enumerator T(D, L, I) of a convolutional code is described. A system of algebraic equations is solved, using a new algorithm for computing determinants, to obtain T(D, L, I) for the (7,1/2) NASA standard code. Generating functions, derived from T(D, L, I) are used to upper bound Viterbi decoder error rates. This technique is currently feasible for constraint length K less than 10 codes. A practical, fast algorithm is presented for computing the leading nonzero coefficients of the generating functions used to bound the performance of constraint length K less than 20 codes. Code profiles with about 50 nonzero coefficients are obtained with this algorithm for the experimental K = 15, rate 1/4, code in the Galileo mission and for the proposed K = 15, rate 1/6, 2-dB code.
Drug-Drug Interaction Extraction via Convolutional Neural Networks
Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong
2016-01-01
Drug-drug interaction (DDI) extraction as a typical relation extraction task in natural language processing (NLP) has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method which almost does not need manually defined features, has exhibited great potential for many NLP tasks. It is worth employing CNN for DDI extraction, which has never been investigated. We proposed a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best performing method by 2.75%. PMID:26941831
Highly parallel vector visualization using line integral convolution
Cabral, B.; Leedom, C.
1995-12-01
Line Integral Convolution (LIC) is an effective imaging operator for visualizing large vector fields. It works by blurring an input image along local vector field streamlines yielding an output image. LIC is highly parallelizable because it uses only local read-sharing of input data and no write-sharing of output data. Both coarse- and fine-grained implementations have been developed. The coarse-grained implementation uses a straightforward row-tiling of the vector field to parcel out work to multiple CPUs. The fine-grained implementation uses a series of image warps and sums to compute the LIC algorithm across the entire vector field at once. This is accomplished by novel use of high-performance graphics hardware texture mapping and accumulation buffers.
Enhanced Line Integral Convolution with Flow Feature Detection
NASA Technical Reports Server (NTRS)
Lane, David; Okada, Arthur
1996-01-01
The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.
Deep convolutional neural networks for ATR from SAR imagery
NASA Astrophysics Data System (ADS)
Morgan, David A. E.
2015-05-01
Deep architectures for classification and representation learning have recently attracted significant attention within academia and industry, with many impressive results across a diverse collection of problem sets. In this work we consider the specific application of Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) data from the MSTAR public release data set. The classification performance achieved using a Deep Convolutional Neural Network (CNN) on this data set was found to be competitive with existing methods considered to be state-of-the-art. Unlike most existing algorithms, this approach can learn discriminative feature sets directly from training data instead of requiring pre-specification or pre-selection by a human designer. We show how this property can be exploited to efficiently adapt an existing classifier to recognise a previously unseen target and discuss potential practical applications.
Invariant Descriptor Learning Using a Siamese Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Chen, L.; Rottensteiner, F.; Heipke, C.
2016-06-01
In this paper we describe learning of a descriptor based on the Siamese Convolutional Neural Network (CNN) architecture and evaluate our results on a standard patch comparison dataset. The descriptor learning architecture is composed of an input module, a Siamese CNN descriptor module and a cost computation module that is based on the L2 Norm. The cost function we use pulls the descriptors of matching patches close to each other in feature space while pushing the descriptors for non-matching pairs away from each other. Compared to related work, we optimize the training parameters by combining a moving average strategy for gradients and Nesterov's Accelerated Gradient. Experiments show that our learned descriptor reaches a good performance and achieves state-of-art results in terms of the false positive rate at a 95 % recall rate on standard benchmark datasets.
Asymptotic expansions of Mellin convolution integrals: An oscillatory case
NASA Astrophysics Data System (ADS)
López, José L.; Pagola, Pedro
2010-01-01
In a recent paper [J.L. López, Asymptotic expansions of Mellin convolution integrals, SIAM Rev. 50 (2) (2008) 275-293], we have presented a new, very general and simple method for deriving asymptotic expansions of for small x. It contains Watson's Lemma and other classical methods, Mellin transform techniques, McClure and Wong's distributional approach and the method of analytic continuation used in this approach as particular cases. In this paper we generalize that idea to the case of oscillatory kernels, that is, to integrals of the form , with c[set membership, variant]R, and we give a method as simple as the one given in the above cited reference for the case c=0. We show that McClure and Wong's distributional approach for oscillatory kernels and the summability method for oscillatory integrals are particular cases of this method. Some examples are given as illustration.
Convolutional Neural Networks for patient-specific ECG classification.
Kiranyaz, Serkan; Ince, Turker; Hamila, Ridha; Gabbouj, Moncef
2015-08-01
We propose a fast and accurate patient-specific electrocardiogram (ECG) classification and monitoring system using an adaptive implementation of 1D Convolutional Neural Networks (CNNs) that can fuse feature extraction and classification into a unified learner. In this way, a dedicated CNN will be trained for each patient by using relatively small common and patient-specific training data and thus it can also be used to classify long ECG records such as Holter registers in a fast and accurate manner. Alternatively, such a solution can conveniently be used for real-time ECG monitoring and early alert system on a light-weight wearable device. The experimental results demonstrate that the proposed system achieves a superior classification performance for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB). PMID:26736826
Drug-Drug Interaction Extraction via Convolutional Neural Networks.
Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong
2016-01-01
Drug-drug interaction (DDI) extraction as a typical relation extraction task in natural language processing (NLP) has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method which almost does not need manually defined features, has exhibited great potential for many NLP tasks. It is worth employing CNN for DDI extraction, which has never been investigated. We proposed a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best performing method by 2.75%. PMID:26941831
Modeling split gate tunnel barriers in lateral double top gated Si-MOS nanostructures
NASA Astrophysics Data System (ADS)
Shirkhorshidian, Amir; Bishop, Nathaniel; Young, Ralph; Wendt, Joel; Lilly, Michael; Carroll, Malcolm
2012-02-01
Reliable interpretation of quantum dot and donor transport experiments depends critically on understanding the tunnel barriers separating the localized electron state from the 2DEG regions which serve as source and drain. We analyze transport measurements through split gate point contacts, defined in a double gate enhancement mode Si-MOS device structure. We use a square barrier WKB model which accounts for barrier height dependence on applied voltage. This constant interaction model is found to produce a self-consistent characterization of barrier height and width over a wide range of applied source-drain and gate bias. The model produces similar results for many different split gate structures. We discuss this models potential for mapping between experiment and barrier simulations. This work was performed, in part, at the Center for Integrated Nanotechnologies, a U.S. DOE, Office of Basic Energy Sciences user facility. The work was supported by the Sandia National Laboratories Directed Research and Development Program. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Prediction of orbiter RSI tile gap heating ratios from NASA/Ames double wedge model test
NASA Technical Reports Server (NTRS)
1978-01-01
In-depth gap heating ratios for Orbiter RSI tile sidewalls were predicted based on near steady state temperature measurements obtained from double wedge model tests. An analysis was performed to derive gap heating ratios which would result in the best fit of test data; provide an assessment of open gap response, and supply the definition of gap filler requirements on the Orbiter. A comparison was made of these heating ratios with previously derived ratios in order to verify the extrapolation of the wing glove data to Orbiter flight conditions. The analysis was performed with the Rockwell TPS Multidimensional Heat Conduction Program for a 3-D, 2.0-inch thick flat RSI tile with 255 nodal points. The data from 14 tests was used to correlate with the analysis. The results show that the best-fit heating ratios at the station farthest upstream on the model for most gap depths were less than the extrapolated values of the wing glove model heating ratios. For the station farthest downstream on the model, the baseline heating ratios adequately predicted or over-predicted the test data.
Self-Gravitating Eccentric Disk Models for the Double Nucleus of M31
NASA Astrophysics Data System (ADS)
Salow, Robert M.; Statler, Thomas S.
2004-08-01
We present new dynamical models of weakly self-gravitating, finite dispersion eccentric stellar disks around central black holes for the double nucleus of M31. The disk is fixed in a frame rotating at constant precession speed and is populated by stars on quasi-periodic orbits whose parents are numerically integrated periodic orbits in the total potential. A distribution of quasi-periodic orbits about a given parent is approximated by a distribution of Kepler orbits dispersed in eccentricity and orientation, using an approximate phase-space distribution function written in terms of the integrals of motion in the Kepler problem. We use these models, along with an optimization routine, to fit available published kinematics and photometry in the inner 2" of the nucleus. A grid of 24 best-fit models is computed to accurately constrain the mass of the central black hole and nuclear disk parameters. We find that the supermassive black hole in M31 has mass MBH=5.62+/-0.66×107 Msolar, which is consistent with the observed correlation between the central black hole mass and the velocity dispersion of its host spheroid. Our models precess rapidly, at Ω=36.5+/-4.2 km s-1 pc-1, and possess a characteristic radial eccentricity distribution, which gives rise to multimodal line-of-sight velocity distributions along lines of sight near the black hole. These features can be used as sensitive discriminants of disk structure.
Resampling of data between arbitrary grids using convolution interpolation.
Rasche, V; Proksa, R; Sinkus, R; Börnert, P; Eggers, H
1999-05-01
For certain medical applications resampling of data is required. In magnetic resonance tomography (MRT) or computer tomography (CT), e.g., data may be sampled on nonrectilinear grids in the Fourier domain. For the image reconstruction a convolution-interpolation algorithm, often called gridding, can be applied for resampling of the data onto a rectilinear grid. Resampling of data from a rectilinear onto a nonrectilinear grid are needed, e.g., if projections of a given rectilinear data set are to be obtained. In this paper we introduce the application of the convolution interpolation for resampling of data from one arbitrary grid onto another. The basic algorithm can be split into two steps. First, the data are resampled from the arbitrary input grid onto a rectilinear grid and second, the rectilinear data is resampled onto the arbitrary output grid. Furthermore, we like to introduce a new technique to derive the sampling density function needed for the first step of our algorithm. For fast, sampling-pattern-independent determination of the sampling density function the Voronoi diagram of the sample distribution is calculated. The volume of the Voronoi cell around each sample is used as a measure for the sampling density. It is shown that the introduced resampling technique allows fast resampling of data between arbitrary grids. Furthermore, it is shown that the suggested approach to derive the sampling density function is suitable even for arbitrary sampling patterns. Examples are given in which the proposed technique has been applied for the reconstruction of data acquired along spiral, radial, and arbitrary trajectories and for the fast calculation of projections of a given rectilinearly sampled image. PMID:10416800
Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.
He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian
2015-09-01
Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition. PMID:26353135
Yang, Zhixin; Wang, Shaowei; Zhao, Moli; Li, Shucai; Zhang, Qiangyong
2013-01-01
The onset of double diffusive convection in a viscoelastic fluid-saturated porous layer is studied when the fluid and solid phase are not in local thermal equilibrium. The modified Darcy model is used for the momentum equation and a two-field model is used for energy equation each representing the fluid and solid phases separately. The effect of thermal non-equilibrium on the onset of double diffusive convection is discussed. The critical Rayleigh number and the corresponding wave number for the exchange of stability and over-stability are obtained, and the onset criterion for stationary and oscillatory convection is derived analytically and discussed numerically. PMID:24312193
NASA Astrophysics Data System (ADS)
Medina, Tait Runnfeldt
The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that if not adequately recognized and properly addressed undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three commonly used measurement models in cross-national analysis---summative scale, pooled factor model, and multiple-group factor model with measurement invariance. Using the orienting lens of the double tension I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models---multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014)---are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and
NASA Astrophysics Data System (ADS)
Chobanyan, E.; Ilić, M. M.; Notaroš, B. M.
2015-05-01
A novel double-higher-order entire-domain volume integral equation (VIE) technique for efficient analysis of electromagnetic structures with continuously inhomogeneous dielectric materials is presented. The technique takes advantage of large curved hexahedral discretization elements—enabled by double-higher-order modeling (higher-order modeling of both the geometry and the current)—in applications involving highly inhomogeneous dielectric bodies. Lagrange-type modeling of an arbitrary continuous variation of the equivalent complex permittivity of the dielectric throughout each VIE geometrical element is implemented, in place of piecewise homogeneous approximate models of the inhomogeneous structures. The technique combines the features of the previous double-higher-order piecewise homogeneous VIE method and continuously inhomogeneous finite element method (FEM). This appears to be the first implementation and demonstration of a VIE method with double-higher-order discretization elements and conformal modeling of inhomogeneous dielectric materials embedded within elements that are also higher (arbitrary) order (with arbitrary material-representation orders within each curved and large VIE element). The new technique is validated and evaluated by comparisons with a continuously inhomogeneous double-higher-order FEM technique, a piecewise homogeneous version of the double-higher-order VIE technique, and a commercial piecewise homogeneous FEM code. The examples include two real-world applications involving continuously inhomogeneous permittivity profiles: scattering from an egg-shaped melting hailstone and near-field analysis of a Luneburg lens, illuminated by a corrugated horn antenna. The results show that the new technique is more efficient and ensures considerable reductions in the number of unknowns and computational time when compared to the three alternative approaches.
Global/Regional Integrated Model System (GRIMs): Double Fourier Series (DFS) Dynamical Core
NASA Astrophysics Data System (ADS)
Koo, M.; Hong, S.
2013-12-01
A multi-scale atmospheric/oceanic model system with unified physics, the Global/Regional Integrated Model system (GRIMs) has been created for use in numerical weather prediction, seasonal simulations, and climate research projects, from global to regional scales. It includes not only the model code, but also the test cases and scripts. The model system is developed and practiced by taking advantage of both operational and research applications. We outlines the history of GRIMs, its current applications, and plans for future development, providing a summary useful to present and future users. In addition to the traditional spherical harmonics (SPH) dynamical core, a new spectral method with a double Fourier series (DFS) is available in the GRIMs (Table 1). The new DFS dynamical core with full physics is evaluated against the SPH dynamical core in terms of short-range forecast capability for a heavy rainfall event and seasonal simulation framework. Comparison of the two dynamical cores demonstrates that the new DFS dynamical core exhibits performance comparable to the SPH in terms of simulated climatology accuracy and the forecast of a heavy rainfall event. Most importantly, the DFS algorithm guarantees improved computational efficiency in the cluster computer as the model resolution increases, which is consistent with theoretical values computed from the dry primitive equation model framework of Cheong (Fig. 1). The current study shows that, at higher resolutions, the DFS approach can be a competitive dynamical core because the DFS algorithm provides the advantages of both the spectral method for high numerical accuracy and the grid-point method for high performance computing in the aspect of computational cost. GRIMs dynamical cores
Double Cluster Heads Model for Secure and Accurate Data Fusion in Wireless Sensor Networks
Fu, Jun-Song; Liu, Yun
2015-01-01
Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Different from traditional clustering models in WSNs, two cluster heads are selected after clustering for each cluster based on the reputation and trust system and they perform data fusion independently of each other. Then, the results are sent to the base station where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds the threshold preset by the users, the cluster heads will be added to blacklist, and the cluster heads must be reelected by the sensor nodes in a cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which can help us to identify and delete the compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performed very well in data fusion security and accuracy. PMID:25608211
A unified model of coupled arc plasma and weld pool for double electrodes TIG welding
NASA Astrophysics Data System (ADS)
Wang, Xinxin; Fan, Ding; Huang, Jiankang; Huang, Yong
2014-07-01
A three-dimensional model containing tungsten electrodes, arc plasma and a weld pool is presented for double electrodes tungsten inert gas welding. The model is validated by available experimental data. The distributions of temperature, velocity and pressure of the coupled arc plasma are investigated. The current density, heat flux and shear stress over the weld pool are highlighted. The weld pool dynamic is described by taking into account buoyance, Lorentz force, surface tension and plasma drag force. The turbulent effect in the weld pool is also considered. It is found that the temperature and velocity distributions of the coupled arc are not rotationally symmetrical. A similar property is also shown by the arc pressure, current density and heat flux at the anode surface. The surface tension gradient is much larger than the plasma drag force and dominates the convective pattern in the weld pool, thus determining the weld penetration. The anodic heat flux and plasma drag force, as well as the surface tension gradient over the weld pool, determine the weld shape and size. In addition, provided the welding current through one electrode increases and that through the other decreases, keeping the total current unchanged, the coupled arc behaviour and weld pool dynamic change significantly, while the weld shape and size show little change. The results demonstrate the necessity of a unified model in the study of the arc plasma and weld pool.
Double-Porosity Models for a Fissured Groundwater Reservoir With Fracture Skin
NASA Astrophysics Data System (ADS)
Moench, Allen F.
1984-07-01
Theories of flow to a well in a double-porosity groundwater reservoir are modified to incorporate effects of a thin layer of low-permeability material or fracture skin that may be present at fracture-block interfaces as a result of mineral deposition or alteration. The commonly used theory for flow in double- porosity formations that is based upon the assumption of pseudo-steady state block-to-fissure flow is shown to be a special case of the theory presented in this paper. The latter is based on the assumption of transient block-to-fissure flow with fracture skin. Under conditions where fracture skin has a hydraulic conductivity that is less than that of the matrix rock, it may be assumed to impede the interchange of fluid between the fissures and blocks. Resistance to flow at fracture-block interfaces tends to reduce spatial variation of hydraulic head gradients within the blocks. This provides theoretical justification for neglecting the divergence of flow in the blocks as required by the pseudo-steady state flow model. Coupled boundary value problems for flow to a well discharging at a constant rate were solved in the Laplace domain. Both slab-shaped and sphere-shaped blocks were considered, as were effects of well bore storage and well bore skin. Results obtained by numerical inversion were used to construct dimensionless-type curves that were applied to well test data, for a pumped well and for an observation well, from the fractured volcanic rock terrane of the Nevada Test Site.
NASA Astrophysics Data System (ADS)
Gallmeier, F. X.; Iverson, E. B.; Lu, W.; Baxter, D. V.; Muhrer, G.; Ansell, S.
2016-04-01
Neutron transport simulation codes are indispensable tools for the design and construction of modern neutron scattering facilities and instrumentation. Recently, it has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modeled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4, and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential phenomena for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. We have also generated silicon scattering kernels for single crystals of definable orientation. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cut-off from locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon and void layers. Finally we simulated the convoluted moderator experiments described by Iverson et al. and found satisfactory agreement between the measurements and the simulations performed with the tools we have developed.
Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks
Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni
2015-01-01
Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient’s response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a “radiomics” approach whereby a large amount of quantitative features is automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models. PMID:26355298
Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni
2015-01-01
Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient's response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a "radiomics" approach whereby a large amount of quantitative features is automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models. PMID:26355298
Symmetry-adapted digital modeling II. The double-helix B-DNA.
Janner, A
2016-05-01
The positions of phosphorus in B-DNA have the remarkable property of occurring (in axial projection) at well defined points in the three-dimensional space of a projected five-dimensional decagonal lattice, subdividing according to the golden mean ratio τ:1:τ [with τ = (1+\\sqrt {5})/2] the edges of an enclosing decagon. The corresponding planar integral indices n1, n2, n3, n4 (which are lattice point coordinates) are extended to include the axial index n5 as well, defined for each P position of the double helix with respect to the single decagonal lattice ΛP(aP, cP) with aP = 2.222 Å and cP = 0.676 Å. A finer decagonal lattice Λ(a, c), with a = aP/6 and c = cP, together with a selection of lattice points for each nucleotide with a given indexed P position (so as to define a discrete set in three dimensions) permits the indexing of the atomic positions of the B-DNA d(AGTCAGTCAG) derived by M. J. P. van Dongen. This is done for both DNA strands and the single lattice Λ. Considered first is the sugar-phosphate subsystem, and then each nucleobase guanine, adenine, cytosine and thymine. One gets in this way a digital modeling of d(AGTCAGTCAG) in a one-to-one correspondence between atomic and indexed positions and a maximal deviation of about 0.6 Å (for the value of the lattice parameters given above). It is shown how to get a digital modeling of the B-DNA double helix for any given code. Finally, a short discussion indicates how this procedure can be extended to derive coarse-grained B-DNA models. An example is given with a reduction factor of about 2 in the number of atomic positions. A few remarks about the wider interest of this investigation and possible future developments conclude the paper. PMID:27126108
Neutrinoless double-β decay of 48Ca in the shell model: Closure versus nonclosure approximation
NASA Astrophysics Data System (ADS)
Sen'kov, R. A.; Horoi, M.
2013-12-01
Neutrinoless double-β decay (0νββ) is a unique process that could reveal physics beyond the Standard Model. Essential ingredients in the analysis of 0νββ rates are the associated nuclear matrix elements. Most of the approaches used to calculate these matrix elements rely on the closure approximation. Here we analyze the light neutrino-exchange matrix elements of 48Ca 0νββ decay and test the closure approximation in a shell-model approach. We calculate the 0νββ nuclear matrix elements for 48Ca using both the closure approximation and a nonclosure approach, and we estimate the uncertainties associated with the closure approximation. We demonstrate that the nonclosure approach has excellent convergence properties which allow us to avoid unmanageable computational cost. Combining the nonclosure and closure approaches we propose a new method of calculation for 0νββ decay rates which can be applied to the 0νββ decay rates of heavy nuclei, such as 76Ge or 82Se.
Osmotic pressure of ionic liquids in an electric double layer: Prediction based on a continuum model
NASA Astrophysics Data System (ADS)
Moon, Gi Jong; Ahn, Myung Mo; Kang, In Seok
2015-12-01
An analysis has been performed for the osmotic pressure of ionic liquids in the electric double layer (EDL). By using the electromechanical approach, we first derive a differential equation that is valid for computing the osmotic pressure in the continuum limit of any incompressible fluid in EDL. Then a specific model for ionic liquids proposed by Bazant et al. [M. Z. Bazant, B. D. Storey, and A. A. Kornyshev, Phys. Rev. Lett. 106, 046102 (2011), 10.1103/PhysRevLett.106.046102] is adopted for more detailed computation of the osmotic pressure. Ionic liquids are characterized by the correlation and the steric effects of ions and their effects are analyzed. In the low voltage cases, the correlation effect is dominant and the problem becomes linear. For this low voltage limit, a closed form formula is derived for predicting the osmotic pressure in EDL with no overlapping. It is found that the osmotic pressure decreases as the correlation effect increases. The osmotic pressures at the nanoslit surface and nanoslit centerline are also obtained for the low voltage limit. For the cases of moderately high voltage with high correlation factor, approximate formulas are derived for estimating osmotic pressure values based on the concept of a condensed layer near the electrode. In order to corroborate the results predicted by analytical studies, the full nonlinear model has been solved numerically.
Probing flavor models with ^{ {76}}Ge-based experiments on neutrinoless double-β decay
NASA Astrophysics Data System (ADS)
Agostini, Matteo; Merle, Alexander; Zuber, Kai
2016-04-01
The physics impact of a staged approach for double-β decay experiments based on ^{ {76}}Ge is studied. The scenario considered relies on realistic time schedules envisioned by the Gerda and the Majorana collaborations, which are jointly working towards the realization of a future larger scale ^{ {76}}Ge experiment. Intermediate stages of the experiments are conceived to perform quasi background-free measurements, and different data sets can be reliably combined to maximize the physics outcome. The sensitivity for such a global analysis is presented, with focus on how neutrino flavor models can be probed already with preliminary phases of the experiments. The synergy between theory and experiment yields strong benefits for both sides: the model predictions can be used to sensibly plan the experimental stages, and results from intermediate stages can be used to constrain whole groups of theoretical scenarios. This strategy clearly generates added value to the experimental efforts, while at the same time it allows to achieve valuable physics results as early as possible.
Spin excitation spectra of iron-based superconductors from the degenerate double-exchange model
NASA Astrophysics Data System (ADS)
Leong, Zhidong; Lee, Wei-Cheng; Lv, Weicheng; Phillips, Philip
2014-03-01
Using a degenerate double-exchange model, we investigate the spin excitation spectra of iron pnictides. The model consists of local spin moments on each Fe site as well as itinerant electrons from the degenerate dxz and dyz orbitals. The local moments interact with each other through antiferromagnetic J1-J2 Heisenberg interactions, and they couple to the itinerant electrons through a ferromagnetic Hund's coupling. We employ the fermionic spinon representation for the local moments and perform a generalized RPA calculation on both spinons and itinerant electrons. We find that in the (π,0) magnetically-ordered state, the spin-wave excitation at (π, π) is pushed to a higher energy due to the presence of itinerant electrons, which is consistent with the previous study using Holstein-Primakoff transformation. In the non-ordered state, the particle-hole continuum keeps the collective spin excitation near (π, π) at a higher energy even without any C4 symmetry breaking. The implications for the recent neutron scattering measurement at high temperature will be discussed.
Model for suturing of Superior and Churchill plates: An example of double indentation tectonics
NASA Astrophysics Data System (ADS)
Gibb, R. A.
1983-07-01
Recent gravity surveys in eastern and southern Hudson Bay, Canada, have revealed, for the first time, the gravity anomaly pattern over the complete length of the proposed circum-Superior suture. A symmetrical distribution of linear, positive anomalies near the southern and eastern perimeters of Hudson Bay suggests a model in which suturing of Superior and Churchill protoplates was accomplished by subduction of oceanic lithosphere and by progressive double indentation of the rigid-plastic Churchill craton by the Thompson and Ungava salients of the rigid Superior protocontinent. Suturing was initiated at the Thompson salient with extrusion of Churchill material laterally along strike-slip faults into the Hudson Bay embayment. With continued subduction, indentation of the Churchill craton by the Ungava salient commenced, so that Churchill material was now extruded from two directions to fill the embayment of Hudson Bay. Following complete suturing of the Hudson Bay embayment, the motion of the Superior plate relative to the Churchill may have changed by about 90° E to facilitate complete closure of the predecessor of the Labrador Sea. The pattern of faulting and other major structural elements of northern Saskatchewan-Manitoba can be interpreted in terms of the proposed analogue model of plane indentation. The regional faults and their senses of motion correspond generally to that predicted by the theoretical pattern of slip lines associated with a wedge-shaped indenter.
Indo-Pacific ENSO modes in a double-basin Zebiak-Cane model
NASA Astrophysics Data System (ADS)
Wieners, Claudia; de Ruijter, Will; Dijkstra, Henk
2016-04-01
We study Indo-Pacific interactions on ENSO timescales in a double-basin version of the Zebiak-Cane ENSO model, employing both time integrations and bifurcation analysis (continuation methods). The model contains two oceans (the Indian and Pacific Ocean) separated by a meridional wall. Interaction between the basins is possible via the atmosphere overlaying both basins. We focus on the effect of the Indian Ocean (both its mean state and its variability) on ENSO stability. In addition, inspired by analysis of observational data (Wieners et al, Coherent tropical Indo-Pacific interannual climate variability, in review), we investigate the effect of state-dependent atmospheric noise. Preliminary results include the following: 1) The background state of the Indian Ocean stabilises the Pacific ENSO (i.e. the Hopf bifurcation is shifted to higher values of the SST-atmosphere coupling), 2) the West Pacific cooling (warming) co-occurring with El Niño (La Niña) is essential to simulate the phase relations between Pacific and Indian SST anomalies, 3) a non-linear atmosphere is needed to simulate the effect of the Indian Ocean variability onto the Pacific ENSO that is suggested by observations.
Glas, Julia; Dümcke, Sebastian; Zacher, Benedikt; Poron, Don; Gagneur, Julien; Tresch, Achim
2016-03-18
Hidden Markov models (HMMs) have been extensively used to dissect the genome into functionally distinct regions using data such as RNA expression or DNA binding measurements. It is a challenge to disentangle processes occurring on complementary strands of the same genomic region. We present the double-stranded HMM (dsHMM), a model for the strand-specific analysis of genomic processes. We applied dsHMM to yeast using strand specific transcription data, nucleosome data, and protein binding data for a set of 11 factors associated with the regulation of transcription.The resulting annotation recovers the mRNA transcription cycle (initiation, elongation, termination) while correctly predicting strand-specificity and directionality of the transcription process. We find that pre-initiation complex formation is an essentially undirected process, giving rise to a large number of bidirectional promoters and to pervasive antisense transcription. Notably, 12% of all transcriptionally active positions showed simultaneous activity on both strands. Furthermore, dsHMM reveals that antisense transcription is specifically suppressed by Nrd1, a yeast termination factor. PMID:26578558
A corrosive oesophageal burn model in rats: Double-lumen central venous catheter usage
Bakan, Vedat; Çıralık, Harun; Kartal, Seyfi
2015-01-01
Background: We aimed to create a new and less invasive experimental corrosive oesophageal burn model using a catheter without a gastric puncture (gastrotomy). Materials and Methods: We conducted the study with two groups composed of 8 male rats. The experimental oesophageal burn was established by the application of 10% sodium hydroxide to the distal oesophagus under a pressure of 20 cmH2O, via 5-F double-lumen central venous catheter without a gastrotomy. The control group was given 0.9% sodium chloride. All rats were killed 24 h after administration of NaOH or 0.9% NaCl. Histologic damage to oesophageal tissue was scored by a single pathologist blind to groups. Results: The rats in the control group were observed to have no pathological changes. Corrosive oesophagitis (tissue congestion, oedema, inflammation, ulcer and necrosis) was observed in rats exposed to NaOH. Conclusion: We believe that an experimental corrosive oesophageal burn can safely be created under same hydrostatic pressure without a gastric puncture using this model. PMID:26712289
Otsuki, Yosuke; Bui Minh, Nhat; Ohtake, Hiroshi; Watanabe, Go; Matsuzawa, Teruo
2013-01-01
Double aortic aneurysm (DAA) falls under the category of multiple aortic aneurysms. Repair is generally done through staged surgery due to low invasiveness. In this approach, one aneurysm is cured per operation. Therefore, two operations are required for DAA. However, post-first-surgery rupture cases have been reported. Although the problems involved with managing staged surgery have been discussed for more than 30 years, investigation from a hemodynamic perspective has not been attempted. Hence, this is the first computational fluid dynamics approach to the DAA problem. Three idealized geometries were prepared: presurgery, thoracic aortic aneurysm (TAA) cured, and abdominal aortic aneurysm (AAA) cured. By applying identical boundary conditions for flow rate and pressure, the Navier-Stokes equation and continuity equations were solved under the Newtonian fluid assumption. Average pressure in TAA was increased by AAA repair. On the other hand, average pressure in AAA was decreased after TAA repair. Average wall shear stress was decreased at the peak in post-first-surgery models. However, the wave profile of TAA average wall shear stress was changed in the late systole phase after AAA repair. Since the average wall shear stress in the post-first-surgery models decreased and pressure at TAA after AAA repair increased, the TAA might be treated first to prevent rupture. PMID:24348172
NASA Astrophysics Data System (ADS)
Li, Yonglong; Li, Zushu; Li, Jun; Wang, Niu
2007-12-01
Using double loop DC motor drive system (DLM) of the RoboCup middle size robots as research subject, the model of DLM has been reduced to a simple state space one by the "quasi-equivalent" modeling method, based on the characteristic analysis of the system. Further, the parameters of the model can be exactly identified applying the improved genetic algorithm. The results of compared experiment proved that using this model and identification ways can get a reasonably structural and high parameters precision model. The model can describe the DLM to design the control system for robot soccer.
Generating double knockout mice to model genetic intervention for diabetic cardiomyopathy in humans.
Chavali, Vishalakshi; Nandi, Shyam Sundar; Singh, Shree Ram; Mishra, Paras Kumar
2014-01-01
Diabetes is a rapidly increasing disease that enhances the chances of heart failure twofold to fourfold (as compared to age and sex matched nondiabetics) and becomes a leading cause of morbidity and mortality. There are two broad classifications of diabetes: type1 diabetes (T1D) and type2 diabetes (T2D). Several mice models mimic both T1D and T2D in humans. However, the genetic intervention to ameliorate diabetic cardiomyopathy in these mice often requires creating double knockout (DKO). In order to assess the therapeutic potential of a gene, that specific gene is either overexpressed (transgenic expression) or abrogated (knockout) in the diabetic mice. If the genetic mice model for diabetes is used, it is necessary to create DKO with transgenic/knockout of the target gene to investigate the specific role of that gene in pathological cardiac remodeling in diabetics. One of the important genes involved in extracellular matrix (ECM) remodeling in diabetes is matrix metalloproteinase-9 (Mmp9). Mmp9 is a collagenase that remains latent in healthy hearts but induced in diabetic hearts. Activated Mmp9 degrades extracellular matrix (ECM) and increases matrix turnover causing cardiac fibrosis that leads to heart failure. Insulin2 mutant (Ins2+/-) Akita is a genetic model for T1D that becomes diabetic spontaneously at the age of 3-4 weeks and show robust hyperglycemia at the age of 10-12 weeks. It is a chronic model of T1D. In Ins2+/- Akita, Mmp9 is induced. To investigate the specific role of Mmp9 in diabetic hearts, it is necessary to create diabetic mice where Mmp9 gene is deleted. Here, we describe the method to generate Ins2+/-/Mmp9-/- (DKO) mice to determine whether the abrogation of Mmp9 ameliorates diabetic cardiomyopathy. PMID:25064116
Branz, Tanja; Faessler, Amand; Gutsche, Thomas; Lyubovitskij, Valery E.; Oexl, Bettina; Ivanov, Mikhail A.; Koerner, Juergen G.
2010-06-01
We study flavor-conserving radiative decays of double-heavy baryons using a manifestly Lorentz covariant constituent three-quark model. Decay rates are calculated and compared to each other in the full theory, keeping masses finite, and also in the heavy quark limit. We discuss in some detail hyperfine mixing effects.
ERIC Educational Resources Information Center
Pakenham, Kenneth I.; Samios, Christina; Sofronoff, Kate
2005-01-01
The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between…
The Double ABCX Model of Adaptation in Racially Diverse Families with a School-Age Child with Autism
ERIC Educational Resources Information Center
Manning, Margaret M.; Wainwright, Laurel; Bennett, Jillian
2011-01-01
In this study, the Double ABCX model of family adaptation was used to explore the impact of severity of autism symptoms, behavior problems, social support, religious coping, and reframing, on outcomes related to family functioning and parental distress. The sample included self-report measures collected from 195 families raising school-age…
NASA Astrophysics Data System (ADS)
Yadir, S.; Assal, S.; El Rhassouli, A.; Sidki, M.; Benhmida, M.
2013-11-01
In this paper, we propose and apply a new technique for extracting physical parameters of solar cell double exponential model with two ideality factor constants (DECM) from illuminated current-voltage (I-V) experimental characteristics. The equivalent circuit of solar cell includes two constant diodes ideality factors (n1 = 1, n2 = 2) with two saturation currents I0D and I0R, a current generator intensity Iph, a series resistor RS and a conductance GP. A set of current-voltage characteristics are generated by injecting various RS values in the characteristic equation. Using the area error rate ("%ΔArea,") between the experimental and extracted (I-V) characteristics, the value of RS is deduced as the minimum of this error. The obtained results show a good agreement with the experimental characteristics measured on a commercial polycrystalline solar cell.
Double-porosity rock model and squirt flow in the laboratory frequency band
NASA Astrophysics Data System (ADS)
Ba, Jing; Cao, Hong; Yao, Fengchang; Nie, Jianxin; Yang, Huizhu
2008-12-01
Biot theory research has been extended to the multi-scale heterogeneity in actual rocks. Focused on laboratory frequency bandwidth studies, we discuss the relationships between double-porosity and BISQ wave equations, analytically derive the degeneration method for double-porosity’s return to BISQ, and give three necessary conditions which the degeneration must satisfy. By introducing dynamic permeability and tortuosity theory, a full set of dynamic double-porosity wave equations are derived. A narrow band approximation is made to simplify the numerical simulation for dynamic double-porosity wavefields. Finally, the pseudo-spectral method is used for wave simulation within the laboratory frequency band (50 kHz). Numerical results have proved the feasibility for dynamic double-porosity’s description of squirt flow and the validity of the quasi-static approximation method.
Quang, Daniel; Xie, Xiaohui
2016-01-01
Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory ‘grammar’ to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ. PMID:27084946
Quang, Daniel; Xie, Xiaohui
2016-06-20
Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ. PMID:27084946
NASA Technical Reports Server (NTRS)
Reichelt, Mark
1993-01-01
In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
JACKSON VL
2011-08-31
The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and celtification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP) Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scaleup issues, study operational parameters, and predict mixing performance at full-scale.
NASA Astrophysics Data System (ADS)
Richter, Anke; Brendler, Vinzenz; Nebelung, Cordula
2005-06-01
The paper presents examples illustrating the current blind predictive capabilities of the diffuse double layer model (DDLM) as the model requiring the smallest set of parameters and thus being most suitable for substituting even more empiric sorption approaches such as distribution coefficients K D. The general strategy for the selection of numerical data are discussed. Based on the information about the minerals compiled in the sorption database RES 3T (Rossendorf Expert System for Surface and Sorption Thermodynamics), first a set of relevant surface species is generated. Then relevant surface complexation parameters are taken from RES 3T: the binding site density for the minerals, the surface protolysis constants, and the stability constants for all relevant surface complexes. To be able to compare and average thermodynamic constants originating from different sources, a normalization concept is applied. Our demonstration is based on a blind prediction exercise, i.e., the goal was not to provide optimal fits. The system considered is Cu(II) sorption onto goethite. The predictions are compared with raw data from three independent experimental investigations. The calculations were performed with the FITEQL 3.2 code. In most cases the model predictions represented the experimental sorption values for the sorbed amount of Cu(II), expressed as conventional distribution coefficients, within one order of magnitude or better. We conclude that the application of DDLM can indeed be used for estimating distribution coefficients for contaminants in well defined mineral systems. A stepwise strategy of species selection, data collection, normalization, and averaging is outlined. The SCM database so far assembled within the RES 3T project is able to provide the parameter sets.
Enhanced Climatic Warming in the Tibetan Plateau Due to Double CO2: A Model Study
NASA Technical Reports Server (NTRS)
Chen, Baode; Chao, Winston C.; Liu, Xiao-Dong; Lau, William K. M. (Technical Monitor)
2001-01-01
The NCAR (National Center for Atmospheric Research) regional climate model (RegCM2) with time-dependent lateral meteorological fields provided by a 130-year transient increasing CO2 simulation of the NCAR Climate System Model (CSM) has been used to investigate the mechanism of enhanced ground temperature warming over the TP (Tibetan Plateau). From our model results, a remarkable tendency of warming increasing with elevation is found for the winter season, and elevation dependency of warming is not clearly recognized in the summer season. This simulated feature of elevation dependency of ground temperature is consistent with observations. Based on an analysis of surface energy budget, the short wave solar radiation absorbed at the surface plus downward long wave flux reaching the surface shows a strong elevation dependency, and is mostly responsible for enhanced surface warming over the TP. At lower elevations, the precipitation forced by topography is enhanced due to an increase in water vapor supply resulted from a warming in the atmosphere induced by doubling CO2. This precipitation enhancement must be associated with an increase in clouds, which results in a decline in solar flux reaching surface. At higher elevations, large snow depletion is detected in the 2xCO2run. It leads to a decrease in albedo, therefore more solar flux is absorbed at the surface. On the other hand, much more uniform increase in downward long wave flux reaching the surface is found. The combination of these effects (i.e. decrease in solar flux at lower elevations, increase in solar flux at higher elevation and more uniform increase in downward long wave flux) results in elevation dependency of enhanced ground temperature warming over the TP.
Wang, Dongfang; Gao, Guodong; Plunkett, Mark; Zhao, Guangfeng; Topaz, Stephen; Ballard-Croft, Cherry; Zwischenberger, Joseph B.
2014-01-01
Objective The AvalonElite™ double lumen cannula (DLC) can provide effective cavopulmonary assistance (CPA) in a Fontan (TCPC) sheep model, but it requires strict alignment. The objective was to fabricate and test a newly designed paired umbrellas DLC without alignment requirement. Methods The paired membrane umbrellas were designed on the DLC to bracket infusion blood flow toward the pulmonary artery. Two umbrellas were attached, one 4 cm above and one 4 cm below infusion opening. Umbrellas were temporarily wrapped and glued to DLC body to facilitate insertion. A TCPC mock loop was used to test CPA performance and reliability with DLC rotation and displacement. The paired umbrella DLC was also tested in a TCPC adult sheep model (n=6). Results The bench test showed up to 4.5 l/min pumping flow and about 90% pumping flow efficiency at 360° rotation and 8 cm displacement of DLC. The TCPC model with compromised hemodynamics was successfully created in all 6 sheep. The CPA DLC with paired umbrellas was smoothly inserted into SVC and extracardiac conduit in all sheep. At 3.5–4.0 l/min pump flow, the sABP and CVP returned to normal baseline and remained stable throughout 90 min experiment, demonstrating effective CPA support. DLC Rotation and displacement did not affect performance. Autopsy revealed well opened and positioned paired umbrellas, and DLCs were easily removed from RJV. Conclusions Our DLC with paired umbrellas is easy to insert and remove. The paired umbrellas eliminated the strict alignment requirement and assured consistent CPA performance. (245 Words) PMID:24930609
Double Roles of Macrophages in Human Neuroimmune Diseases and Their Animal Models
Fan, Xueli; Zhang, Hongliang; Cheng, Yun; Jiang, Xinmei; Zhu, Jie
2016-01-01
Macrophages are important immune cells of the innate immune system that are involved in organ-specific homeostasis and contribute to both pathology and resolution of diseases including infections, cancer, obesity, atherosclerosis, and autoimmune disorders. Multiple lines of evidence point to macrophages as a remarkably heterogeneous cell type. Different phenotypes of macrophages exert either proinflammatory or anti-inflammatory roles depending on the cytokines and other mediators that they are exposed to in the local microenvironment. Proinflammatory macrophages secrete detrimental molecules to induce disease development, while anti-inflammatory macrophages produce beneficial mediators to promote disease recovery. The conversion of the phenotypes of macrophages can regulate the initiation, development, and recovery of autoimmune diseases. Human neuroimmune diseases majorly include multiple sclerosis (MS), neuromyelitis optica (NMO), myasthenia gravis (MG), and Guillain-Barré syndrome (GBS) and macrophages contribute to the pathogenesis of these neuroimmune diseases. In this review, we summarize the double roles of macrophage in neuroimmune diseases and their animal models to further explore the mechanisms of macrophages involved in the pathogenesis of these disorders, which may provide a potential therapeutic approach for these disorders in the future. PMID:27034594
Nilson, R.H.; Lie, K.H. )
1987-12-01
A double-porosity model is used to describe the oscillatory gas motion and associated contaminant transport induced by cyclical variations in the barometric pressure at the surface of a fractured porous medium. Flow along the fractures and within the permeable matrix blocks is locally one-dimensional. The interaction between fractures and blocks includes the Darcian seepage of fluid as well as the Fickian diffusion of contaminant. To guard against artificial numerical diffusion, the FRAM filtering remedy and methodology of Chapman is used in calculating the advective fluxes along fractures and within blocks. The entire system of equations, including the fracture/matrix interaction terms, is solved by a largely implicit non-computational time step is large compared to the cross-block transit time of Darcian pressure waves. The numerical accuracy is tested by comparison with exact solutions for oscillatory and unidirectional flows, some of which include Darcian seepage or Fickian diffusion interaction between the fracture and the matrix. The method is used to estimate the rate of transport of radioactive gases through the rubblized chimney produced by an underground nuclear explosion.
Modeling and Control of a Double-effect Absorption Refrigerating Machine
NASA Astrophysics Data System (ADS)
Hihara, Eiji; Yamamoto, Yuuji; Saito, Takamoto; Nagaoka, Yoshikazu; Nishiyama, Noriyuki
For the purpose of impoving the response to cooling load variations and the part load characteristics, the optimal operation of a double-effect absorption refrigerating machine was investigated. The test machine was designed to be able to control energy input and weak solution flow rate continuously. It is composed of a gas-fired high-temperature generator, a separator, a low-temperature generator, an absorber, a condenser, an evaporator, and high- and low-temperature heat exchangers. The working fluid is Lithium Bromide and water solution. The standard output is 80 kW. Based on the experimental data, a simulation model of the static characteristics was developed. The experiments and simulation analysis indicate that there is an optimal weak solution flow rate which maximizes the coefficient of performance under any given cooling load condition. The optimal condition is closely related to the refrigerant steam flow rate flowing from the separator to the high temperature heat exchanger with the medium solution. The heat transfer performance of heat exchangers in the components influences the COP. The change in the overall heat transfer coefficient of absorber has much effect on the COP compared to other components.
Estimates of frequency-dependent compressibility from a quasistatic double-porosity model
Berryman, J. G.; Wang, H. F.
1998-09-16
Gassmann's relationship between the drained and undrained bulk modulus of a porous medium is often used to relate the dry bulk modulus to the saturated bulk modulus for elastic waves, because the compressibility of air is considered so high that the dry rock behaves in a drained fashion and the frequency of elastic waves is considered so high that the saturated rock behaves in an undrained fashion. The bulk modulus calculated from ultrasonic velocities, however, often does not match the Gassmann prediction. Mavko and Jizba examined how local flow effects and unequilibrated pore pressures can lead to greater stiffnesses. Their conceptual model consists of a distribution of porosities obtained from the strain-versus-confining-pressure behavior. Stiff pores that close at higher confining pressures are considered to remain undrained (unrelaxed) while soft pores drain even for high-frequency stress changes. If the pore shape distribution is bimodal, then the rock approximately satisfies the assumptions of a double-porosity, poroelastic material. Berryman and Wang [1995] established linear constitutive equations and identified four different time scales of ow behavior: (1) totally drained, (2) soft pores are drained but stiff pores are undrained, (3) soft and stiff pores are locally equilibrated, but undrained beyond the grain scale, and (4) both soft and stiff pores are undrained. The relative magnitudes of the four associated bulk moduli will be examined for all four moduli and illustrated for several sandstones.
``Squishy capacitor'' model for electrical double layers and the stability of charged interfaces
NASA Astrophysics Data System (ADS)
Partenskii, Michael B.; Jordan, Peter C.
2009-07-01
Negative capacitance (NC), predicted by various electrical double layer (EDL) theories, is critically reviewed. Physically possible for individual components of the EDL, the compact or diffuse layer, it is strictly prohibited for the whole EDL or for an electrochemical cell with two electrodes. However, NC is allowed for the artificial conditions of σ control, where an EDL is described by the equilibrium electric response of electrolyte to a field of fixed, and typically uniform, surface charge-density distributions, σ . The contradiction is only apparent; in fact local σ cannot be set independently, but is established by the equilibrium response to physically controllable variables, i.e., applied voltage ϕ ( ϕ control) or total surface charge q ( q control). NC predictions in studies based on σ control signify potential instabilities and phase transitions for physically realizable conditions. Building on our previous study of ϕ control [M. B. Partenskii and P. C. Jordan, Phys. Rev. E 77, 061117 (2008)], here we analyze critical behavior under q control, clarifying the basic picture using an exactly solvable “squishy capacitor” toy model. We find that ϕ can change discontinuously in the presence of a lateral transition, specify stability conditions for an electrochemical cell, analyze the origin of the EDL’s critical point in terms of compact and diffuse serial contributions, and discuss perspectives and challenges for theoretical studies not limited by σ control.
Chmely, S. C.; McKinney, K. A.; Lawrence, K. R.; Sturgeon, M.; Katahira, R.; Beckham, G. T.
2013-01-01
Lignin is an underutilized value stream in current biomass conversion technologies because there exist no economic and technically feasible routes for lignin depolymerization and upgrading. Base-catalyzed deconstruction (BCD) has been applied for lignin depolymerization (e.g., the Kraft process) in the pulp and paper industry for more than a century using aqueous-phase media. However, these efforts require treatment to neutralize the resulting streams, which adds significantly to the cost of lignin deconstruction. To circumvent the need for downstream treatment, here we report recent advances in the synthesis of layered double hydroxide and metal oxide catalysts to be applied to the BCD of lignin. These catalysts may prove more cost-effective than liquid-phase, non-recyclable base, and their use obviates downstream processing steps such as neutralization. Synthetic procedures for various transition-metal containing catalysts, detailed kinetics measurements using lignin model compounds, and results of the application of these catalysts to biomass-derived lignin will be presented.
An in vivo model of double-unit cord blood transplantation that correlates with clinical engraftment
Eldjerou, Lamis K.; Chaudhury, Sonali; Baisre-de Leon, Ada; He, Mai; Arcila, Maria E.; Heller, Glenn; O'Reilly, Richard J.; Moore, Malcolm A.
2010-01-01
Double-unit cord blood transplantation (DCBT) appears to enhance engraftment despite sustained hematopoiesis usually being derived from a single unit. To investigate DCBT biology, in vitro and murine models were established using cells from 39 patient grafts. Mononuclear cells (MNCs) and CD34+ cells from each unit alone and in DCB combination were assessed for colony-forming cell and cobblestone area-forming cell potential, and multilineage engraftment in NOD/SCID/IL2R-γnull mice. In DCB assays, the contribution of each unit was measured by quantitative short tandem repeat region analysis. There was no correlation between colony-forming cell (n = 10) or cobblestone area-forming cell (n = 9) numbers and clinical engraftment, and both units contributed to DCB cocultures. In MNC transplantations in NOD/SCID/IL2R-γnull mice, each unit engrafted alone, but MNC DCBT demonstrated single-unit dominance that correlated with clinical engraftment in 18 of 21 cases (86%, P < .001). In contrast, unit dominance and clinical correlation were lost with CD34+ DCBT (n = 11). However, add-back of CD34− to CD34+ cells (n = 20) restored single-unit dominance with the dominant unit correlating not with clinical engraftment but also with the origin of the CD34− cells in all experiments. Thus, unit dominance is an in vivo phenomenon probably associated with a graft-versus-graft immune interaction mediated by CD34− cells. PMID:20587781
MULTI-DIMENSIONAL MODELS FOR DOUBLE DETONATION IN SUB-CHANDRASEKHAR MASS WHITE DWARFS
Moll, R.; Woosley, S. E.
2013-09-10
Using two-dimensional and three-dimensional simulations, we study the ''robustness'' of the double detonation scenario for Type Ia supernovae, in which a detonation in the helium shell of a carbon-oxygen white dwarf induces a secondary detonation in the underlying core. We find that a helium detonation cannot easily descend into the core unless it commences (artificially) well above the hottest layer calculated for the helium shell in current presupernova models. Compressional waves induced by the sliding helium detonation, however, robustly generate hot spots which trigger a detonation in the core. Our simulations show that this is true even for non-axisymmetric initial conditions. If the helium is ignited at multiple points, then the internal waves can pass through one another or be reflected, but this added complexity does not defeat the generation of the hot spot. The ignition of very low-mass helium shells depends on whether a thermonuclear runaway can simultaneously commence in a sufficiently large region.
Bailly, Lucie; Henrich, Nathalie; Pelorson, Xavier
2010-05-01
Occurrences of period-doubling are found in human phonation, in particular for pathological and some singing phonations such as Sardinian A Tenore Bassu vocal performance. The combined vibration of the vocal folds and the ventricular folds has been observed during the production of such low pitch bass-type sound. The present study aims to characterize the physiological correlates of this acoustical production and to provide a better understanding of the physical interaction between ventricular fold vibration and vocal fold self-sustained oscillation. The vibratory properties of the vocal folds and the ventricular folds during phonation produced by a professional singer are analyzed by means of acoustical and electroglottographic signals and by synchronized glottal images obtained by high-speed cinematography. The periodic variation in glottal cycle duration and the effect of ventricular fold closing on glottal closing time are demonstrated. Using the detected glottal and ventricular areas, the aerodynamic behavior of the laryngeal system is simulated using a simplified physical modeling previously validated in vitro using a larynx replica. An estimate of the ventricular aperture extracted from the in vivo data allows a theoretical prediction of the glottal aperture. The in vivo measurements of the glottal aperture are then compared to the simulated estimations. PMID:21117769
Method for Veterbi decoding of large constraint length convolutional codes
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)
1988-01-01
A new method of Viterbi decoding of convolutional codes lends itself to a pipline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals where N is a number, and K is constraint length. The selected path at the end of each NK interval is then selected from the last entry in the array. A trace-back method is used for returning to the beginning of the selected path back, i.e., to the first time unit of the interval NK to read out the stored branch metrics of the selected path which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that constraint length K in not too small. The advantage is that for a long message, it is not necessary to provide a large memory to store the trellis derived information until the end of the message to select the path that is to be decoded; the selection is made at the end of every NK time unit, thus decoding a long message in successive blocks.
Deep convolutional neural networks for classifying GPR B-scans
NASA Astrophysics Data System (ADS)
Besaw, Lance E.; Stimac, Philip J.
2015-05-01
Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.
Toward an optimal convolutional neural network for traffic sign recognition
NASA Astrophysics Data System (ADS)
Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec
2015-12-01
Convolutional Neural Networks (CNN) beat the human performance on German Traffic Sign Benchmark competition. Both the winner and the runner-up teams trained CNNs to recognize 43 traffic signs. However, both networks are not computationally efficient since they have many free parameters and they use highly computational activation functions. In this paper, we propose a new architecture that reduces the number of the parameters 27% and 22% compared with the two networks. Furthermore, our network uses Leaky Rectified Linear Units (ReLU) as the activation function that only needs a few operations to produce the result. Specifically, compared with the hyperbolic tangent and rectified sigmoid activation functions utilized in the two networks, Leaky ReLU needs only one multiplication operation which makes it computationally much more efficient than the two other functions. Our experiments on the Gertman Traffic Sign Benchmark dataset shows 0:6% improvement on the best reported classification accuracy while it reduces the overall number of parameters 85% compared with the winner network in the competition.
Multi-modal vertebrae recognition using Transformed Deep Convolution Network.
Cai, Yunliang; Landis, Mark; Laidley, David T; Kornecki, Anat; Lum, Andrea; Li, Shuo
2016-07-01
Automatic vertebra recognition, including the identification of vertebra locations and naming in multiple image modalities, are highly demanded in spinal clinical diagnoses where large amount of imaging data from various of modalities are frequently and interchangeably used. However, the recognition is challenging due to the variations of MR/CT appearances or shape/pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called Transformed Deep Convolution Network (TDCN). This new architecture can unsupervisely fuse image features from different modalities and automatically rectify the pose of vertebra. The fusion of MR and CT image features improves the discriminativity of feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images from different contrast, resolution, protocols, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experiment results show that our method outperforms existing detection methods and provides a fully automatic location+naming+pose recognition for routine clinical practice. PMID:27104497
Remote Sensing Image Fusion with Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Zhong, Jinying; Yang, Bin; Huang, Guoyu; Zhong, Fei; Chen, Zhongze
2016-12-01
Remote sensing image fusion (RSIF) is referenced as restoring the high-resolution multispectral image from its corresponding low-resolution multispectral (LMS) image aided by the panchromatic (PAN) image. Most RSIF methods assume that the missing spatial details of the LMS image can be obtained from the high resolution PAN image. However, the distortions would be produced due to the much difference between the structural component of LMS image and that of PAN image. Actually, the LMS image can fully utilize its spatial details to improve the resolution. In this paper, a novel two-stage RSIF algorithm is proposed, which makes full use of both spatial details and spectral information of the LMS image itself. In the first stage, the convolutional neural network based super-resolution is used to increase the spatial resolution of the LMS image. In the second stage, Gram-Schmidt transform is employed to fuse the enhanced MS and the PAN images for further improvement the resolution of MS image. Since the spatial resolution enhancement in the first stage, the spectral distortions in the fused image would be decreased in evidence. Moreover, the spatial details can be preserved to construct the fused images. The QuickBird satellite source images are used to test the performances of the proposed method. The experimental results demonstrate that the proposed method can achieve better spatial details and spectral information simultaneously compared with other well-known methods.
A discrete convolution kernel for No-DC MRI
NASA Astrophysics Data System (ADS)
Zeng, Gengsheng L.; Li, Ya
2015-08-01
An analytical inversion formula for the exponential Radon transform with an imaginary attenuation coefficient was developed in 2007 (2007 Inverse Problems 23 1963-71). The inversion formula in that paper suggested that it is possible to obtain an exact MRI (magnetic resonance imaging) image without acquiring low-frequency data. However, this un-measured low-frequency region (ULFR) in the k-space (which is the two-dimensional Fourier transform space in MRI terminology) must be very small. This current paper derives a FBP (filtered backprojection) algorithm based on You’s formula by suggesting a practical discrete convolution kernel. A point spread function is derived for this FBP algorithm. It is demonstrated that the derived FBP algorithm can have a larger ULFR than that in the 2007 paper. The significance of this paper is that we present a closed-form reconstruction algorithm for a special case of under-sampled MRI data. Usually, under-sampled MRI data requires iterative (instead of analytical) algorithms with L1-norm or total variation norm to reconstruct the image.
Adapting line integral convolution for fabricating artistic virtual environment
NASA Astrophysics Data System (ADS)
Lee, Jiunn-Shyan; Wang, Chung-Ming
2003-04-01
Vector field occurs not only extensively in scientific applications but also in treasured art such as sculptures and paintings. Artist depicts our natural environment stressing valued directional feature besides color and shape information. Line integral convolution (LIC), developed for imaging vector field in scientific visualization, has potential of producing directional image. In this paper we present several techniques of exploring LIC techniques to generate impressionistic images forming artistic virtual environment. We take advantage of directional information given by a photograph, and incorporate many investigations to the work including non-photorealistic shading technique and statistical detail control. In particular, the non-photorealistic shading technique blends cool and warm colors into the photograph to imitate artists painting convention. Besides, we adopt statistical technique controlling integral length according to image variance to preserve details. Furthermore, we also propose method for generating a series of mip-maps, which revealing constant strokes under multi-resolution viewing and achieving frame coherence in an interactive walkthrough system. The experimental results show merits of emulating satisfyingly and computing efficiently, as a consequence, relying on the proposed technique successfully fabricates a wide category of non-photorealistic rendering (NPR) application such as interactive virtual environment with artistic perception.
Cell osmotic water permeability of isolated rabbit proximal convoluted tubules.
Carpi-Medina, P; González, E; Whittembury, G
1983-05-01
Cell osmotic water permeability, Pcos, of the peritubular aspect of the proximal convoluted tubule (PCT) was measured from the time course of cell volume changes subsequent to the sudden imposition of an osmotic gradient, delta Cio, across the cell membrane of PCT that had been dissected and mounted in a chamber. The possibilities of artifact were minimized. The bath was vigorously stirred, the solutions could be 95% changed within 0.1 s, and small osmotic gradients (10-20 mosM) were used. Thus, the osmotically induced water flow was a linear function of delta Cio and the effect of the 70-microns-thick unstirred layers was negligible. In addition, data were extrapolated to delta Cio = 0. Pcos for PCT was 41.6 (+/- 3.5) X 10(-4) cm3 X s-1 X osM-1 per cm2 of peritubular basal area. The standing gradient osmotic theory for transcellular osmosis is incompatible with this value. Published values for Pcos of PST are 25.1 X 10(-4), and for the transepithelial permeability Peos values are 64 X 10(-4) for PCT and 94 X 10(-4) for PST, in the same units. These results indicate that there is room for paracellular water flow in both nephron segments and that the magnitude of the transcellular and paracellular water flows may vary from one segment of the proximal tubule to another. PMID:6846543
Toward Content Based Image Retrieval with Deep Convolutional Neural Networks
Sklan, Judah E.S.; Plassard, Andrew J.; Fabbri, Daniel; Landman, Bennett A.
2015-01-01
Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep Convolutional Neural Networks (dCNN), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing dimensionality of an input scaled to 128×128 to an output encoded layer of 4×384, (2) trained the network using back-propagation 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This prelimainry effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques. PMID:25914507
Turbo-decoding of a convolutionally encoded OCDMA system
NASA Astrophysics Data System (ADS)
Efinger, Daniel; Fritsch, Robert
2005-02-01
We present a novel multiple access scheme for Passive Optical Networks (PON) based on optical Code Division Multiple Access (OCDMA). Di erent from existing proposals for implementing OCDMA, we replaced the predominating orthogonal or weakly correlated signature codes (e.g. Walsh-Hadamard codes (WHC)) by convolutional codes. Thus CDMA user separation and forward error correction (FEC) are combined. The transmission of the coded bits over the multiple access fiber is carried through optical BPSK. This requires electrical field strength detection rather than direct detection (DD) at the receiver end. Since orthogonality gets lost, we have to employ a multiuser receiver to overcome the inherently strong correlation. Computational complexity of multiuser detection is the major challenge and we show how complexity can be reduced by applying the turbo principle known from soft-decoding of concatenated codes. The convergence behavior of the iterative multiuser receiver is investigated by means of extrinsic information transfer charts (EXIT-chart). Finally, we present simulation results of bit error ratio (BER) vs. signal-to-noise ratio (SNR) including a standard single mode fiber in order to demonstrate the superior performance of the proposed scheme compared to those using orthogonal spreading techniques.
A deep convolutional neural network for recognizing foods
NASA Astrophysics Data System (ADS)
Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec
2015-12-01
Controlling the food intake is an efficient way that each person can undertake to tackle the obesity problem in countries worldwide. This is achievable by developing a smartphone application that is able to recognize foods and compute their calories. State-of-art methods are chiefly based on hand-crafted feature extraction methods such as HOG and Gabor. Recent advances in large-scale object recognition datasets such as ImageNet have revealed that deep Convolutional Neural Networks (CNN) possess more representation power than the hand-crafted features. The main challenge with CNNs is to find the appropriate architecture for each problem. In this paper, we propose a deep CNN which consists of 769; 988 parameters. Our experiments show that the proposed CNN outperforms the state-of-art methods and improves the best result of traditional methods 17%. Moreover, using an ensemble of two CNNs that have been trained two different times, we are able to improve the classification performance 21:5%.
An equilibrium double-twist model for the radial structure of collagen fibrils.
Brown, Aidan I; Kreplak, Laurent; Rutenberg, Andrew D
2014-11-14
Mammalian tissues contain networks and ordered arrays of collagen fibrils originating from the periodic self-assembly of helical 300 nm long tropocollagen complexes. The fibril radius is typically between 25 to 250 nm, and tropocollagen at the surface appears to exhibit a characteristic twist-angle with respect to the fibril axis. Similar fibril radii and twist-angles at the surface are observed in vitro, suggesting that these features are controlled by a similar self-assembly process. In this work, we propose a physical mechanism of equilibrium radius control for collagen fibrils based on a radially varying double-twist alignment of tropocollagen within a collagen fibril. The free-energy of alignment is similar to that of liquid crystalline blue phases, and we employ an analytic Euler-Lagrange and numerical free energy minimization to determine the twist-angle between the molecular axis and the fibril axis along the radial direction. Competition between the different elastic energy components, together with a surface energy, determines the equilibrium radius and twist-angle at the fibril surface. A simplified model with a twist-angle that is linear with radius is a reasonable approximation in some parameter regimes, and explains a power-law dependence of radius and twist-angle at the surface as parameters are varied. Fibril radius and twist-angle at the surface corresponding to an equilibrium free-energy minimum are consistent with existing experimental measurements of collagen fibrils. Remarkably, in the experimental regime, all of our model parameters are important for controlling equilibrium structural parameters of collagen fibrils. PMID:25238208
Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-01-01
The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy. PMID:26734567
Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model.
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-01-01
The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy. PMID:26734567
Convoluted nozzle design for the RL10 derivative 2B engine
NASA Technical Reports Server (NTRS)
1985-01-01
The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to show the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two position nozzle were studied including a hydrogen dump cooled metal nozzle and radiation cooled nozzles of refractory metal and carbon/carbon composite construction respectively.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.
2014-01-01
This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.
Convolutions of Hilbert Modular Forms and Their Non-Archimedean Analogues
NASA Astrophysics Data System (ADS)
Panchishkin, A. A.
1989-02-01
The author constructs non-Archimedean analytic functions which interpolate special values of the convolution of two Hilbert cusp forms on a product of complex upper half-planes.Bibliography: 15 titles.
Probing new physics models of neutrinoless double beta decay with SuperNEMO
NASA Astrophysics Data System (ADS)
Arnold, R.; Augier, C.; Baker, J.; Barabash, A. S.; Basharina-Freshville, A.; Bongrand, M.; Brudanin, V.; Caffrey, A. J.; Cebrián, S.; Chapon, A.; Chauveau, E.; Dafni, T.; Deppisch, F. F.; Diaz, J.; Durand, D.; Egorov, V.; Evans, J. J.; Flack, R.; Fushima, K.-I.; Irastorza, I. García; Garrido, X.; Gómez, H.; Guillon, B.; Holin, A.; Holy, K.; Horkley, J. J.; Hubert, P.; Hugon, C.; Iguaz, F. J.; Ishihara, N.; Jackson, C. M.; Jullian, S.; Kauer, M.; Kochetov, O.; Konovalov, S. I.; Kovalenko, V.; Lamhamdi, T.; Lang, K.; Lutter, G.; Luzón, G.; Mamedov, F.; Marquet, C.; Mauger, F.; Monrabal, F.; Nachab, A.; Nasteva, I.; Nemchenok, I.; Nguyen, C. H.; Nomachi, M.; Nova, F.; Ohsumi, H.; Pahlka, R. B.; Perrot, F.; Piquemal, F.; Povinec, P. P.; Richards, B.; Ricol, J. S.; Riddle, C. L.; Rodríguez, A.; Saakyan, R.; Sarazin, X.; Sedgbeer, J. K.; Serra, L.; Shitov, Y.; Simard, L.; Šimkovic, F.; Söldner-Rembold, S.; Štekl, I.; Sutton, C. S.; Tamagawa, Y.; Thomas, J.; Timkin, V.; Tretyak, V.; Tretyak, V. I.; Umatov, V. I.; Vanyushin, I. A.; Vasiliev, R.; Vasiliev, V.; Vorobel, V.; Waters, D.; Yahlali, N.; Žukauskas, A.
2010-12-01
The possibility to probe new physics scenarios of light Majorana neutrino exchange and right-handed currents at the planned next generation neutrinoless double β decay experiment SuperNEMO is discussed. Its ability to study different isotopes and track the outgoing electrons provides the means to discriminate different underlying mechanisms for the neutrinoless double β decay by measuring the decay half-life and the electron angular and energy distributions.
Sannino, Annalisa
2016-03-01
This study explores what human conduct looks like when research embraces uncertainty and distance itself from the dominant methodological demands of control and predictability. The context is the waiting experiment originally designed in Kurt Lewin's research group, discussed by Vygotsky as an instance among a range of experiments related to his notion of double stimulation. Little attention has been paid to this experiment, despite its great heuristic potential for charting the terrain of uncertainty and agency in experimental settings. Behind the notion of double stimulation lays Vygotsky's distinctive view of human beings' ability to intentionally shape their actions. Accordingly, human beings in situations of uncertainty and cognitive incongruity can rely on artifacts which serve the function of auxiliary motives and which help them undertake volitional actions. A double stimulation model depicting how such actions emerge is tested in a waiting experiment conducted with collectives, in contrast with a previous waiting experiment conducted with individuals. The model, validated in the waiting experiment with individual participants, applies only to a limited extent to the collectives. The analysis shows the extent to which double stimulation takes place in the waiting experiment with collectives, the differences between the two experiments, and what implications can be drawn for an expanded view on experiments. PMID:26318436
NASA Astrophysics Data System (ADS)
Qian, Shan-Jie
2015-05-01
The mechanism of formation for double-peaked optical outbursts observed in blazar OJ 287 is studied. It is shown that they could be explained in terms of a lighthouse effect for superluminal optical knots ejected from the center of the galaxy that move along helical magnetic fields. It is assumed that the orbital motion of the secondary black hole in the supermassive binary black hole system induces the 12-year quasi-periodicity in major optical outbursts by the interaction with the disk around the primary black hole. This interaction between the secondary black hole and the disk of the primary black hole (e.g. tidal effects or magnetic coupling) excites or injects plasmons (or relativistic plasmas plus magnetic field) into the jet which form superluminal knots. These knots are assumed to move along helical magnetic field lines to produce the optical double-peaked outbursts by the lighthouse effect. The four double-peaked outbursts observed in 1972, 1983, 1995 and 2005 are simulated using this model. It is shown that such lighthouse models are quite plausible and feasible for fitting the double-flaring behavior of the outbursts. The main requirement may be that in OJ 287 there exists a rather long (~40-60 pc) highly collimated zone, where the lighthouse effect occurs.
NASA Astrophysics Data System (ADS)
Cullen, John M.; Zerner, Michael C.
1982-10-01
From the diagrammatic construction of the full coupled-cluster theory of all single and double excitations, a linearized theory, a direct configuration interaction theory (CISD), a CEPA-like theory, and a linked singles and doubles (LSD) theory are separated. These theories are then compared with one another, with the results from full fourth-order perturbation theory, and with exact results when available. The LSD model, corresponding to the removal of unlinked terms of the CISD, and its spin adapted version, appear most accurate in Pariser-Parr-Pople studies where the exact numbers are known. Examples within the localized bond model are given indicating that this model is also the most successful of those examined in generating not only the basis set correlation, but the necessary delocalization and polarization required to correct for the zeroth-order local description.
Kromer, M.; Sim, S. A.; Fink, M.; Roepke, F. K.; Seitenzahl, I. R.; Hillebrandt, W.
2010-08-20
In the double-detonation scenario for Type Ia supernovae, it is suggested that a detonation initiates in a shell of helium-rich material accreted from a companion star by a sub-Chandrasekhar-mass white dwarf. This shell detonation drives a shock front into the carbon-oxygen white dwarf that triggers a secondary detonation in the core. The core detonation results in a complete disruption of the white dwarf. Earlier studies concluded that this scenario has difficulties in accounting for the observed properties of Type Ia supernovae since the explosion ejecta are surrounded by the products of explosive helium burning in the shell. Recently, however, it was proposed that detonations might be possible for much less massive helium shells than previously assumed (Bildsten et al.). Moreover, it was shown that even detonations of these minimum helium shell masses robustly trigger detonations of the carbon-oxygen core (Fink et al.). Therefore, it is possible that the impact of the helium layer on observables is less than previously thought. Here, we present time-dependent multi-wavelength radiative transfer calculations for models with minimum helium shell mass and derive synthetic observables for both the optical and {gamma}-ray spectral regions. These differ strongly from those found in earlier simulations of sub-Chandrasekhar-mass explosions in which more massive helium shells were considered. Our models predict light curves that cover both the range of brightnesses and the rise and decline times of observed Type Ia supernovae. However, their colors and spectra do not match the observations. In particular, their B - V colors are generally too red. We show that this discrepancy is mainly due to the composition of the burning products of the helium shell of the Fink et al. models which contain significant amounts of titanium and chromium. Using a toy model, we also show that the burning products of the helium shell depend crucially on its initial composition. This leads us
Sivachenko, Anna; Gordon, Hannah B.; Kimball, Suzanne S.; Gavin, Erin J.; Bonkowsky, Joshua L.; Letsou, Anthea
2016-01-01
ABSTRACT Debilitating neurodegenerative conditions with metabolic origins affect millions of individuals worldwide. Still, for most of these neurometabolic disorders there are neither cures nor disease-modifying therapies, and novel animal models are needed for elucidation of disease pathology and identification of potential therapeutic agents. To date, metabolic neurodegenerative disease has been modeled in animals with only limited success, in part because existing models constitute analyses of single mutants and have thus overlooked potential redundancy within metabolic gene pathways associated with disease. Here, we present the first analysis of a very-long-chain acyl-CoA synthetase (ACS) double mutant. We show that the Drosophila bubblegum (bgm) and double bubble (dbb) genes have overlapping functions, and that the consequences of double knockout of both bubblegum and double bubble in the fly brain are profound, affecting behavior and brain morphology, and providing the best paradigm to date for an animal model of adrenoleukodystrophy (ALD), a fatal childhood neurodegenerative disease associated with the accumulation of very-long-chain fatty acids. Using this more fully penetrant model of disease to interrogate brain morphology at the level of electron microscopy, we show that dysregulation of fatty acid metabolism via disruption of ACS function in vivo is causal of neurodegenerative pathologies that are evident in both neuronal cells and their supporting cell populations, and leads ultimately to lytic cell death in affected areas of the brain. Finally, in an extension of our model system to the study of human disease, we describe our identification of an individual with leukodystrophy who harbors a rare mutation in SLC27a6 (encoding a very-long-chain ACS), a human homolog of bgm and dbb. PMID:26893370
NASA Astrophysics Data System (ADS)
Gratia, Pierre; Hu, Wayne; Joyce, Austin; Ribeiro, Raquel H.
2016-06-01
Attempts to modify gravity in the infrared typically require a screening mechanism to ensure consistency with local tests of gravity. These screening mechanisms fit into three broad classes; we investigate theories which are capable of exhibiting more than one type of screening. Specifically, we focus on a simple model which exhibits both Vainshtein and kinetic screening. We point out that due to the two characteristic length scales in the problem, the type of screening that dominates depends on the mass of the sourcing object, allowing for different phenomenology at different scales. We consider embedding this double screening phenomenology in a broader cosmological scenario and show that the simplest examples that exhibit double screening are radiatively stable.
Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes
Houshmand, Monireh; Hosseini-Khayat, Saied
2011-02-15
Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a ''pearl-necklace'' encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
Convolutional neural networks for P300 detection with application to brain-computer interfaces.
Cecotti, Hubert; Gräser, Axel
2011-03-01
A Brain-Computer Interface (BCI) is a specific type of human-computer interface that enables the direct communication between human and computers by analyzing brain measurements. Oddball paradigms are used in BCI to generate event-related potentials (ERPs), like the P300 wave, on targets selected by the user. A P300 speller is based on this principle, where the detection of P300 waves allows the user to write characters. The P300 speller is composed of two classification problems. The first classification is to detect the presence of a P300 in the electroencephalogram (EEG). The second one corresponds to the combination of different P300 responses for determining the right character to spell. A new method for the detection of P300 waves is presented. This model is based on a convolutional neural network (CNN). The topology of the network is adapted to the detection of P300 waves in the time domain. Seven classifiers based on the CNN are proposed: four single classifiers with different features set and three multiclassifiers. These models are tested and compared on the Data set II of the third BCI competition. The best result is obtained with a multiclassifier solution with a recognition rate of 95.5 percent, without channel selection before the classification. The proposed approach provides also a new way for analyzing brain activities due to the receptive field of the CNN models. PMID:20567055
NASA Astrophysics Data System (ADS)
Qiu, Linjing; Liu, Xiaodong
2016-04-01
Increases in the atmospheric CO2 concentration affect both the global climate and plant metabolism, particularly for high-altitude ecosystems. Because of the limitations of field experiments, it is difficult to evaluate the responses of vegetation to CO2 increases and separate the effects of CO2 and associated climate change using direct observations at a regional scale. Here, we used the Community Earth System Model (CESM, version 1.0.4) to examine these effects. Initiated from bare ground, we simulated the vegetation composition and productivity under two CO2 concentrations (367 and 734 ppm) and associated climate conditions to separate the comparative contributions of doubled CO2 and CO2-induced climate change to the vegetation dynamics on the Tibetan Plateau (TP). The results revealed whether the individual effect of doubled CO2 and its induced climate change or their combined effects caused a decrease in the foliage projective cover (FPC) of C3 arctic grass on the TP. Both doubled CO2 and climate change had a positive effect on the FPC of the temperate and tropical tree plant functional types (PFTs) on the TP, but doubled CO2 led to FPC decreases of C4 grass and broadleaf deciduous shrubs, whereas the climate change resulted in FPC decrease in C3 non-arctic grass and boreal needleleaf evergreen trees. Although the combination of the doubled CO2 and associated climate change increased the area-averaged leaf area index (LAI), the effect of doubled CO2 on the LAI increase (95 %) was larger than the effect of CO2-induced climate change (5 %). Similarly, the simulated gross primary productivity (GPP) and net primary productivity (NPP) were primarily sensitive to the doubled CO2, compared with the CO2-induced climate change, which alone increased the regional GPP and NPP by 251.22 and 87.79 g C m-2 year-1, respectively. Regionally, the vegetation response was most noticeable in the south-eastern TP. Although both doubled CO2 and associated climate change had a
Digital Elevation Models Aid the Analysis of Double Layered Ejecta (DLE) Impact Craters on Mars
NASA Astrophysics Data System (ADS)
Mouginis-Mark, P. J.; Boyce, J. M.; Garbeil, H.
2014-12-01
Considerable debate has recently taken place concerning the origin of the inner and outer ejecta layers of double layered ejecta (DLE) craters on Mars. For craters in the diameter range ~10 to ~25 km, the inner ejecta layer of DLE craters displays characteristic grooves extending from the rim crest, and has led investigators to propose three hypotheses for their formation: (1) deposition of the primary ejecta and subsequent surface scouring by either atmospheric vortices or a base surge; (2) emplacement through a landslide of the near-rim crest ejecta; and (3) instabilities (similar to Gortler vortices) generated by high flow-rate, and high granular temperatures. Critical to resolving between these models is the topographic expression of both the ejecta layer and the groove geometry. To address this problem, we have made several digital elevation models (DEMs) from CTX and HiRISE stereo pairs using the Ames Stereo Pipeline at scales of 24 m/pixel and 1 m/pixel, respectively. These DEMs allow several key observations to be made that bear directly upon the origin of the grooves associated with DLE craters: (1) Grooves formed on the sloping ejecta layer surfaces right up to the preserved crater rim; (2) There is clear evidence that grooves traverse the topographic boundary between the inner and outer ejecta layers; and (3) There are at least two different sets of radial grooves, with smaller grooves imprinted upon the larger grooves. There are "deep-wide" grooves that have a width of ~200 m and a depth of ~10 m, and there are "shallow-narrow" grooves with a width of <50 m and depth <5 m. These two scales of grooves are not consistent with their formation analogous to a landslide. Two different sets of grooves would imply that, simultaneously, two different depths to the flow would have to exist if the grooves were formed by shear within the flow, something that is not physically possible. All three observations can only be consistent with a model of groove formation
Gopishankar, N; Bisht, R K
2014-06-01
Purpose: To perform dosimetric evaluation of convolution algorithm in Gamma Knife (Perfexion Model) using solid acrylic anthropomorphic phantom. Methods: An in-house developed acrylic phantom with ion chamber insert was used for this purpose. The middle insert was designed to fit ion chamber from top(head) as well as from bottom(neck) of the phantom, henceforth measurement done at two different positions simultaneously. Leksell frame fixed to phantom simulated patient treatment. Prior to dosimetric study, hounsfield units and electron density of acrylic material were incorporated into the calibration curve in the TPS for convolution algorithm calculation. A CT scan of phantom with ion chamber (PTW Freiberg, 0.125cc) was obtained with following scanning parameters: Tube voltage-110kV, Slice thickness-1mm and FOV-240mm. Three separate single shot plans were generated in LGP TPS (Version 10.1.) with collimators 16mm, 8mm and 4mm respectively for both ion chamber positions. Both TMR10 and Convolution algorithm based planning (CABP) were used for dose calculation. A dose of 6Gy at 100% isodose was prescribed at centre of ion chamber visible in the CT scan. The phantom with ion chamber was positioned in the treatment couch for dose delivery. Results: The ion chamber measured dose was 5.98Gy for 16mm collimator shot plan with less than 1% deviation for convolution algorithm whereas with TMR10 measured dose was 5.6Gy. For 8mm and 4mm collimator plan merely a dose of 3.86Gy and 2.18Gy respectively were delivered at TPS calculated time for CABP. Conclusion: CABP is expected to perform accurate prediction of time for dose delivery for all collimators, but significant variation in measured dose was observed for 8mm and 4mm collimator which may be due collimator size effect. Effect of metal artifacts caused by pins and frame on the CT scan also may have role in misinterpreting CABP. The study carried out requires further investigation.
The electronic states of a double carbon vacancy defect in pyrene: a model study for graphene.
Machado, Francisco B C; Aquino, Adélia J A; Lischka, Hans
2015-05-21
The electronic states occurring in a double vacancy defect for graphene nanoribbons have been calculated in detail based on a pyrene model. Extended ab initio calculations using the MR configuration interaction (MRCI) method have been performed to describe in a balanced way the manifold of electronic states derived from the dangling bonds created by initial removal of two neighboring carbon atoms from the graphene network. In total, this study took into account the characterization of 16 electronic states (eight singlets and eight triplets) considering unrelaxed and relaxed defect structures. The ground state was found to be of (1)Ag character with around 50% closed shell character. The geometry optimization process leads to the formation of two five-membered rings in a pentagon-octagon-pentagon (5-8-5) structure. The closed shell character increases thereby to ∼70%; the analysis of unpaired density shows only small contributions confirming the chemical stability of that entity. For the unrelaxed structure the first five excited states ((3)B3g, (3)B2u, (3)B1u, (3)Au and (1)Au) are separated from the ground state by less than 2.5 eV. For comparison, unrestricted density functional theory (DFT) calculations using several types of functionals have been performed within different symmetry subspaces defined by the open shell orbitals. Comparison with the MRCI results gave good agreement in terms of finding the (1)Ag state as a ground state and in assigning the lowest excited states. Linear interpolation curves between the unrelaxed and relaxed defect structures also showed good agreement between the two classes of methods opening up the possibilities of using extended nanoflakes for multistate investigations at the DFT level. PMID:25905682
Deep convolutional networks for pancreas segmentation in CT imaging
NASA Astrophysics Data System (ADS)
Roth, Holger R.; Farag, Amal; Lu, Le; Turkbey, Evrim B.; Summers, Ronald M.
2015-03-01
Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving high accuracies when compared to state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that used classification features are trained directly from the imaging data. We present a fully-automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated, using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger 0.5 are retained. These retained superpixels serve as a highly sensitive initial input of the pancreas and its surroundings to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve maximum Dice scores of an average 68% +/- 10% (range, 43-80%) in testing. This shows promise for accurate pancreas segmentation, using a deep learning approach and compares favorably to state-of-the-art methods.
A convolution-superposition dose calculation engine for GPUs
Hissoiny, Sami; Ozell, Benoit; Despres, Philippe
2010-03-15
Purpose: Graphic processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground-up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single threaded central processing unit (CPU) mode, resulting in approximatively 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x or 0.58 versus 16.6 s per beam in single threaded mode. An acceleration factor of 46x has been obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions also have been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results potentially can have a significant impact on complex dose delivery techniques requiring intensive dose calculations such as intensity-modulated radiation therapy (IMRT) and arc therapy. They also are relevant for adaptive radiation therapy where dose results must be obtained rapidly.
Black, Dolores A.; Robinson, William H.; Limbrick, Daniel B.; Black, Jeffrey D.; Wilcox, Ian Z.
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. An accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. A small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. Furthermore, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; Limbrick, Daniel B.; Black, Jeffrey D.
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Likewise, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; Limbrick, Daniel B.; Black, Jeffrey D.
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Likewise, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventionalmore » model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.« less
NASA Astrophysics Data System (ADS)
Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž
2015-03-01
The paper presents a computationally efficient method for solving the time dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step function boundary value problem. This approach enables modelling concentration distribution in the granular particles for arbitrary time dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and Padé approximation at the same accuracy of the results. It is also demonstrated that all three addressed methods feature higher accuracy compared to the quasi-steady polynomial approaches when applied to simulate the current densities variations typical for mobile/automotive applications. The proposed approach can thus be considered as one of the key innovative methods enabling real-time capability of the multi particle electrochemical battery models featuring spatial and temporal resolved particle concentration profiles.
Wang, Hailong; Ho, Derek Y H; Lawton, Wayne; Wang, Jiao; Gong, Jiangbin
2013-11-01
Recent studies have established that, in addition to the well-known kicked-Harper model (KHM), an on-resonance double-kicked rotor (ORDKR) model also has Hofstadter's butterfly Floquet spectrum, with strong resemblance to the standard Hofstadter spectrum that is a paradigm in studies of the integer quantum Hall effect. Earlier it was shown that the quasienergy spectra of these two dynamical models (i) can exactly overlap with each other if an effective Planck constant takes irrational multiples of 2π and (ii) will be different if the same parameter takes rational multiples of 2π. This work makes detailed comparisons between these two models, with an effective Planck constant given by 2πM/N, where M and N are coprime and odd integers. It is found that the ORDKR spectrum (with two periodic kicking sequences having the same kick strength) has one flat band and N-1 nonflat bands with the largest bandwidth decaying in a power law as ~K(N+2), where K is a kick strength parameter. The existence of a flat band is strictly proven and the power-law scaling, numerically checked for a number of cases, is also analytically proven for a three-band case. By contrast, the KHM does not have any flat band and its bandwidths scale linearly with K. This is shown to result in dramatic differences in dynamical behavior, such as transient (but extremely long) dynamical localization in ORDKR, which is absent in the KHM. Finally, we show that despite these differences, there exist simple extensions of the KHM and ORDKR model (upon introducing an additional periodic phase parameter) such that the resulting extended KHM and ORDKR model are actually topologically equivalent, i.e., they yield exactly the same Floquet-band Chern numbers and display topological phase transitions at the same kick strengths. A theoretical derivation of this topological equivalence is provided. These results are also of interest to our current understanding of quantum-classical correspondence considering that
Cho, Edward Namkyu; Shin, Yong Hyeon; Yun, Ilgu
2014-11-07
A compact quantum correction model for a symmetric double gate (DG) metal-oxide-semiconductor field-effect transistor (MOSFET) is investigated. The compact quantum correction model is proposed from the concepts of the threshold voltage shift (ΔV{sub TH}{sup QM}) and the gate capacitance (C{sub g}) degradation. First of all, ΔV{sub TH}{sup QM} induced by quantum mechanical (QM) effects is modeled. The C{sub g} degradation is then modeled by introducing the inversion layer centroid. With ΔV{sub TH}{sup QM} and the C{sub g} degradation, the QM effects are implemented in previously reported classical model and a comparison between the proposed quantum correction model and numerical simulation results is presented. Based on the results, the proposed quantum correction model can be applicable to the compact model of DG MOSFET.
NASA Technical Reports Server (NTRS)
Sue, M. K.
1981-01-01
Models to characterize the behavior of the Deep Space Network (DSN) Receiving System in the presence of a radio frequency interference (RFI) are considered. A simple method to evaluate the telemetry degradation due to the presence of a CW RFI near the carrier frequency for the DSN Block 4 Receiving System using the maximum likelihood convolutional decoding assembly is presented. Analytical and experimental results are given.
NASA Technical Reports Server (NTRS)
Kuan, Gary M.; Dekens, Frank G.
2006-01-01
The Space Interferometry Mission (SIM) is a microarcsecond interferometric space telescope that requires picometer level precision measurements of its truss and interferometer baselines. Single-gauge metrology errors due to non-ideal physical characteristics of corner cubes reduce the angular measurement capability of the science instrument. Specifically, the non-common vertex error (NCVE) of a shared vertex, double corner cube introduces micrometer level single-gauge errors in addition to errors due to dihedral angles and reflection phase shifts. A modified SIM Kite Testbed containing an articulating double corner cube is modeled and the results are compared to the experimental testbed data. The results confirm modeling capability and viability of calibration techniques.
NASA Astrophysics Data System (ADS)
Buchstaber, V. M.; Tertychnyi, S. I.
2015-03-01
This work is a continuation of research on a first-order nonlinear differential equation applied in the overshunted model of the Josephson junction. The approach is based on the relation between this equation and the double confluent Heun equation, which is a second-order linear homogeneous equation with two irregular singular points. We describe the conditions on the equation parameters under which its general solution is an analytic function on the Riemann sphere except at 0 and ∞. We construct an explicit basis of the solution space. One of the functions in this basis is regular everywhere except 0, and the other is regular everywhere except ∞. We show that in the framework of the RSJ model of Josephson junction dynamics, the described situation corresponds to the condition that the Shapiro step vanishes if all the solutions of the double confluent Heun equation are single-valued on the Riemann sphere without 0 and ∞.
Galvão, Tiago L P; Neves, Cristina S; Caetano, Ana P F; Maia, Frederico; Mata, Diogo; Malheiro, Eliana; Ferreira, Maria J; Bastos, Alexandre C; Salak, Andrei N; Gomes, José R B; Tedim, João; Ferreira, Mário G S
2016-04-15
Zinc-aluminum layered double hydroxides with nitrate intercalated (Zn(n)Al-NO3, n=Zn/Al) is an intermediate material for the intercalation of different functional molecules used in a wide range of industrial applications. The synthesis of Zn(2)Al-NO3 was investigated considering the time and temperature of hydrothermal treatment. By examining the crystallite size in two different directions, hydrodynamic particle size, morphology, crystal structure and chemical species in solution, it was possible to understand the crystallization and dissolution processes involved in the mechanisms of crystallite and particle growth. In addition, hydrogeochemical modeling rendered insights on the speciation of different metal cations in solution. Therefore, this tool can be a promising solution to model and optimize the synthesis of layered double hydroxide-based materials for industrial applications. PMID:26828278
Davy, John L
2010-02-01
This paper presents a revised theory for predicting the sound insulation of double leaf cavity walls that removes an approximation, which is usually made when deriving the sound insulation of a double leaf cavity wall above the critical frequencies of the wall leaves due to the airborne transmission across the wall cavity. This revised theory is also used as a correction below the critical frequencies of the wall leaves instead of a correction due to Sewell [(1970). J. Sound Vib. 12, 21-32]. It is found necessary to include the "stud" borne transmission of the window frames when modeling wide air gap double glazed windows. A minimum value of stud transmission is introduced for use with resilient connections such as steel studs. Empirical equations are derived for predicting the effective sound absorption coefficient of wall cavities without sound absorbing material. The theory is compared with experimental results for double glazed windows and gypsum plasterboard cavity walls with and without sound absorbing material in their cavities. The overall mean, standard deviation, maximum, and minimum of the differences between experiment and theory are -0.6 dB, 3.1 dB, 10.9 dB at 1250 Hz, and -14.9 dB at 160 Hz, respectively. PMID:20136207
NASA Astrophysics Data System (ADS)
Capuano, Paolo; De Lauro, Enza; De Martino, Salvatore; Falanga, Mariarosaria; Petrosino, Simona
2015-04-01
One of the main challenge in volcano-seismological literature is to locate and characterize the source of volcano/tectonic seismic activity. This passes through the identification at least of the onset of the main phases, i.e. the body waves. Many efforts have been made to solve the problem of a clear separation of P and S phases both from a theoretical point of view and developing numerical algorithms suitable for specific cases (see, e.g., Küperkoch et al., 2012). Recently, a robust automatic procedure has been implemented for extracting the prominent seismic waveforms from continuously recorded signals and thus allowing for picking the main phases. The intuitive notion of maximum non-gaussianity is achieved adopting techniques which involve higher-order statistics in frequency domain., i.e, the Convolutive Independent Component Analysis (CICA). This technique is successful in the case of the blind source separation of convolutive mixtures. In seismological framework, indeed, seismic signals are thought as the convolution of a source function with path, site and the instrument response. In addition, time-delayed versions of the same source exist, due to multipath propagation typically caused by reverberations from some obstacle. In this work, we focus on the Volcano Tectonic (VT) activity at Campi Flegrei Caldera (Italy) during the 2006 ground uplift (Ciaramella et al., 2011). The activity was characterized approximately by 300 low-magnitude VT earthquakes (Md < 2; for the definition of duration magnitude, see Petrosino et al. 2008). Most of them were concentrated in distinct seismic sequences with hypocenters mainly clustered beneath the Solfatara-Accademia area, at depths ranging between 1 and 4 km b.s.l.. The obtained results show the clear separation of P and S phases: the technique not only allows the identification of the S-P time delay giving the timing of both phases but also provides the independent waveforms of the P and S phases. This is an enormous
NASA Astrophysics Data System (ADS)
Capuano, P.; De Lauro, E.; De Martino, S.; Falanga, M.
2016-04-01
This work is devoted to the analysis of seismic signals continuously recorded at Campi Flegrei Caldera (Italy) during the entire year 2006. The radiation pattern associated with the Long-Period energy release is investigated. We adopt an innovative Independent Component Analysis algorithm for convolutive seismic series adapted and improved to give automatic procedures for detecting seismic events often buried in the high-level ambient noise. The extracted waveforms characterized by an improved signal-to-noise ratio allows the recognition of Long-Period precursors, evidencing that the seismic activity accompanying the mini-uplift crisis (in 2006), which climaxed in the three days from 26-28 October, had already started at the beginning of the month of October and lasted until mid of November. Hence, a more complete seismic catalog is then provided which can be used to properly quantify the seismic energy release. To better ground our results, we first check the robustness of the method by comparing it with other blind source separation methods based on higher order statistics; secondly, we reconstruct the radiation patterns of the extracted Long-Period events in order to link the individuated signals directly to the sources. We take advantage from Convolutive Independent Component Analysis that provides basic signals along the three directions of motion so that a direct polarization analysis can be performed with no other filtering procedures. We show that the extracted signals are mainly composed of P waves with radial polarization pointing to the seismic source of the main LP swarm, i.e. a small area in the Solfatara, also in the case of the small-events, that both precede and follow the main activity. From a dynamical point of view, they can be described by two degrees of freedom, indicating a low-level of complexity associated with the vibrations from a superficial hydrothermal system. Our results allow us to move towards a full description of the complexity of
Double-porosity models for a fissured groundwater reservoir with fracture skin.
Moench, A.F.
1984-01-01
Theories of flow to a well in a double-porosity groundwater reservoir are modified to incorporate effects of a thin layer of low-permeability material or fracture skin that may be present at fracture-block interfaces as a result of mineral deposition or alteration. The commonly used theory for flow in double-porosity formations that is based upon the assumption of pseudo-steady state block-to-fissure flow is shown to be a special case of the theory presented in this paper. The latter is based on the assumption of transient block-to-fissure flow with fracture skin.-from Author
Black, Dolores A.; Robinson, William H.; Limbrick, Daniel B.; Black, Jeffrey D.; Wilcox, Ian Z.
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. An accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional modelmore » based on one double-exponential source can be incomplete. A small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. Furthermore, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.« less
NASA Astrophysics Data System (ADS)
Mackay, R. M.; Khalil, M. A. K.
1995-10-01
The zonally averaged response of the Global Change Research Center two-dimensional (2-D) statistical dynamical climate model (GCRC 2-D SDCM) to a doubling of atmospheric carbon dioxide (350 parts per million by volume (ppmv) to 700 ppmv) is reported. The model solves the two-dimensional primitive equations in finite difference form (mass continuity, Newton's second law, and the first law of thermodynamics) for the prognostic variables: zonal mean density, zonal mean zonal velocity, zonal mean meridional velocity, and zonal mean temperature on a grid that has 18 nodes in latitude and 9 vertical nodes (plus the surface). The equation of state, p=ρRT, and an assumed hydrostatic atmosphere, Delta;p=-ρgΔz, are used to diagnostically calculate the zonal mean pressure and vertical velocity for each grid node, and the moisture balance equation is used to estimate the precipitation rate. The model includes seasonal variations in solar intensity, including the effects of eccentricity, and has observed land and ocean fractions set for each zone. Seasonally varying values of cloud amounts, relative humidity profiles, ozone, and sea ice are all prescribed in the model. Equator to pole ocean heat transport is simulated in the model by turbulent diffusion. The change in global mean annual surface air temperature due to a doubling of atmospheric CO2 in the 2-D model is 1.61 K, which is close to that simulated by the one-dimensional (1-D) radiative convective model (RCM) which is at the heart of the 2-D model radiation code (1.67 K for the moist adiabatic lapse rate assumption in 1-D RCM). We find that the change in temperature structure of the model atmosphere has many of the characteristics common to General Circulation Models, including amplified warming at the poles and the upper tropical troposphere, and stratospheric cooling. Because of the potential importance of atmospheric circulation feedbacks on climate change, we have also investigated the response of the zonal wind
NASA Astrophysics Data System (ADS)
Cruz-Roa, Angel; Arévalo, John; Judkins, Alexander; Madabhushi, Anant; González, Fabio
2015-12-01
Convolutional neural networks (CNN) have been very successful at addressing different computer vision tasks thanks to their ability to learn image representations directly from large amounts of labeled data. Features learned from a dataset can be used to represent images from a different dataset via an approach called transfer learning. In this paper we apply transfer learning to the challenging task of medulloblastoma tumor differentiation. We compare two different CNN models which were previously trained in two different domains (natural and histopathology images). The first CNN is a state-of-the-art approach in computer vision, a large and deep CNN with 16-layers, Visual Geometry Group (VGG) CNN. The second (IBCa-CNN) is a 2-layer CNN trained for invasive breast cancer tumor classification. Both CNNs are used as visual feature extractors of histopathology image regions of anaplastic and non-anaplastic medulloblastoma tumor from digitized whole-slide images. The features from the two models are used, separately, to train a softmax classifier to discriminate between anaplastic and non-anaplastic medulloblastoma image regions. Experimental results show that the transfer learning approach produce competitive results in comparison with the state of the art approaches for IBCa detection. Results also show that features extracted from the IBCa-CNN have better performance in comparison with features extracted from the VGG-CNN. The former obtains 89.8% while the latter obtains 76.6% in terms of average accuracy.
ERIC Educational Resources Information Center
Jaubert, Jean-Noël; Privat, Romain
2014-01-01
The double-tangent construction of coexisting phases is an elegant approach to visualize all the multiphase binary systems that satisfy the equality of chemical potentials and to select the stable state. In this paper, we show how to perform the double-tangent construction of coexisting phases for binary systems modeled with the gamma-phi…
Ledreux, Aurélie; Boger, Heather A; Hinson, Vanessa K; Cantwell, Kelsey; Granholm, Ann-Charlotte
2016-01-15
The anti-Parkinsonian drug rasagiline is a selective, irreversible inhibitor of monoamine oxidase and is used in the treatment of Parkinson׳s disease (PD). Its postulated neuroprotective effects may be attributed to MAO inhibition, or to its propargylamine moiety. The major metabolite of rasagiline, aminoindan, has shown promising neuroprotective properties in vitro but there is a paucity of studies investigating in vivo effects of this compound. Therefore, we examined neuroprotective effects of rasagiline and its metabolite aminoindan in a double lesion model of PD. Male Fisher 344 rats received i.p. injections of the noradrenergic neurotoxin DSP-4 and intra-striatal stereotaxic microinjections of the dopamine neurotoxin 6-OHDA. Saline, rasagiline or aminoindan (3mg/kg/day s.c.) were delivered via Alzet minipumps for 4 weeks. Rats were then tested for spontaneous locomotion and a novel object recognition task. Following behavioral testing, brain tissue was processed for ELISA measurements of growth factors and immunohistochemistry. Double-lesioned rats treated with rasagiline or aminoindan had reduced behavioral deficits, both in motor and cognitive tasks compared to saline-treated double-lesioned rats. BDNF levels were significantly increased in the hippocampus and striatum of the rasagiline- and aminoindan-lesioned groups compared to the saline-treated lesioned group. Double-lesioned rats treated with rasagiline or aminoindan exhibited a sparing in the mitochondrial marker Hsp60, suggesting mitochondrial involvement in neuroprotection. Tyrosine hydroxylase (TH) immunohistochemistry revealed a sparing of TH-immunoreactive terminals in double-lesioned rats treated with rasagiline or aminoindan in the striatum, hippocampus, and substantia nigra. These data provide evidence of neuroprotection by aminoindan and rasagiline via their ability to enhance BDNF levels. PMID:26607251
A statistical model for QTL mapping in polysomic autotetraploids underlying double reduction
Technology Transfer Automated Retrieval System (TEKTRAN)
Technical Abstract: As a group of economically important species, linkage mapping of polysomic autotetraploids, including potato, sugarcane and rose, is difficult to conduct due to their unique meiotic property of double reduction that allows sister chromatids to enter into the same gamete. We desc...
Isami, Shuhei; Sakamoto, Naoaki; Nishimori, Hiraku; Awazu, Akinori
2015-01-01
Simple elastic network models of DNA were developed to reveal the structure-dynamics relationships for several nucleotide sequences. First, we propose a simple all-atom elastic network model of DNA that can explain the profiles of temperature factors for several crystal structures of DNA. Second, we propose a coarse-grained elastic network model of DNA, where each nucleotide is described only by one node. This model could effectively reproduce the detailed dynamics obtained with the all-atom elastic network model according to the sequence-dependent geometry. Through normal-mode analysis for the coarse-grained elastic network model, we exhaustively analyzed the dynamic features of a large number of long DNA sequences, approximately ∼150 bp in length. These analyses revealed positive correlations between the nucleosome-forming abilities and the inter-strand fluctuation strength of double-stranded DNA for several DNA sequences. PMID:26624614
NASA Astrophysics Data System (ADS)
Kano, Shinya; Maeda, Kosuke; Tanaka, Daisuke; Sakamoto, Masanori; Teranishi, Toshiharu; Majima, Yutaka
2015-10-01
We present the analysis of chemically assembled double-dot single-electron transistors using orthodox model considering offset charges. First, we fabricate chemically assembled single-electron transistors (SETs) consisting of two Au nanoparticles between electroless Au-plated nanogap electrodes. Then, extraordinary stable Coulomb diamonds in the double-dot SETs are analyzed using the orthodox model, by considering offset charges on the respective quantum dots. We determine the equivalent circuit parameters from Coulomb diamonds and drain current vs. drain voltage curves of the SETs. The accuracies of the capacitances and offset charges on the quantum dots are within ±10%, and ±0.04e (where e is the elementary charge), respectively. The parameters can be explained by the geometrical structures of the SETs observed using scanning electron microscopy images. Using this approach, we are able to understand the spatial characteristics of the double quantum dots, such as the relative distance from the gate electrode and the conditions for adsorption between the nanogap electrodes.
Kano, Shinya; Maeda, Kosuke; Majima, Yutaka; Tanaka, Daisuke; Sakamoto, Masanori; Teranishi, Toshiharu
2015-10-07
We present the analysis of chemically assembled double-dot single-electron transistors using orthodox model considering offset charges. First, we fabricate chemically assembled single-electron transistors (SETs) consisting of two Au nanoparticles between electroless Au-plated nanogap electrodes. Then, extraordinary stable Coulomb diamonds in the double-dot SETs are analyzed using the orthodox model, by considering offset charges on the respective quantum dots. We determine the equivalent circuit parameters from Coulomb diamonds and drain current vs. drain voltage curves of the SETs. The accuracies of the capacitances and offset charges on the quantum dots are within ±10%, and ±0.04e (where e is the elementary charge), respectively. The parameters can be explained by the geometrical structures of the SETs observed using scanning electron microscopy images. Using this approach, we are able to understand the spatial characteristics of the double quantum dots, such as the relative distance from the gate electrode and the conditions for adsorption between the nanogap electrodes.
NASA Astrophysics Data System (ADS)
Kanemura, Shinya; Kaneta, Kunio; Machida, Naoki; Odori, Shinya; Shindou, Tetsuo
2016-07-01
In the composite Higgs models, originally proposed by Georgi and Kaplan, the Higgs boson is a pseudo Nambu-Goldstone boson (pNGB) of spontaneous breaking of a global symmetry. In the minimal version of such models, global SO(5) symmetry is spontaneously broken to SO(4), and the pNGBs form an isospin doublet field, which corresponds to the Higgs doublet in the Standard Model (SM). Predicted coupling constants of the Higgs boson can in general deviate from the SM predictions, depending on the compositeness parameter. The deviation pattern is determined also by the detail of the matter sector. We comprehensively study how the model can be tested via measuring single and double production processes of the Higgs boson at the LHC and future electron-positron colliders. The possibility to distinguish the matter sector among the minimal composite Higgs models is also discussed. In addition, we point out differences in the cross section of double Higgs boson production from the prediction in other new physics models.
NASA Astrophysics Data System (ADS)
Civitarese, O.; Suhonen, J.; Zuber, K.
2015-07-01
The minimal extension of the standard model of electroweak interactions allows for massive neutrinos, a massive right-handed boson WR, and a left-right mixing angle ζ. While an estimate of the light (electron) neutrino can be extracted from the non-observation of the neutrinoless double beta decay, the limits on the mixing angle and the mass of the righthanded (RH) boson may be extracted from a combined analysis of the double beta decay measurements (GERDA, EXO-200 and KamLAND-Zen collaborations) and ATLAS data on the two-jets two-leptons signals following the excitation of a virtual RH boson mediated by a heavy-mass neutrino. In this work we shall compare results of both types of experiments, and show that the estimates are not in tension.
Giugno, Lorena; Lavazza, Antonio; Reiter, Russel J.; Rodella, Luigi Fabrizio; Rezzani, Rita
2014-01-01
Obesity is a common and complex health problem, which impacts crucial organs; it is also considered an independent risk factor for chronic kidney disease. Few studies have analyzed the consequence of obesity in the renal proximal convoluted tubules, which are the major tubules involved in reabsorptive processes. For optimal performance of the kidney, energy is primarily provided by mitochondria. Melatonin, an indoleamine and antioxidant, has been identified in mitochondria, and there is considerable evidence regarding its essential role in the prevention of oxidative mitochondrial damage. In this study we evaluated the mechanism(s) of mitochondrial alterations in an animal model of obesity (ob/ob mice) and describe the beneficial effects of melatonin treatment on mitochondrial morphology and dynamics as influenced by mitofusin-2 and the intrinsic apoptotic cascade. Melatonin dissolved in 1% ethanol was added to the drinking water from postnatal week 5–13; the calculated dose of melatonin intake was 100 mg/kg body weight/day. Compared to control mice, obesity-related morphological alterations were apparent in the proximal tubules which contained round mitochondria with irregular, short cristae and cells with elevated apoptotic index. Melatonin supplementation in obese mice changed mitochondria shape and cristae organization of proximal tubules, enhanced mitofusin-2 expression, which in turn modulated the progression of the mitochondria-driven intrinsic apoptotic pathway. These changes possibly aid in reducing renal failure. The melatonin-mediated changes indicate its potential protective use against renal morphological damage and dysfunction associated with obesity and metabolic disease. PMID:25347680
Convolution effect on TCR log response curve and the correction method for it
NASA Astrophysics Data System (ADS)
Chen, Q.; Liu, L. J.; Gao, J.
2016-09-01
Through-casing resistivity (TCR) logging has been successfully used in production wells for the dynamic monitoring of oil pools and the distribution of the residual oil, but its vertical resolution has limited its efficiency in identification of thin beds. The vertical resolution is limited by the distortion phenomenon of vertical response of TCR logging. The distortion phenomenon was studied in this work. It was found that the vertical response curve of TCR logging is the convolution of the true formation resistivity and the convolution function of TCR logging tool. Due to the effect of convolution, the measurement error at thin beds can reach 30% or even bigger. Thus the information of thin bed might be covered up very likely. The convolution function of TCR logging tool was obtained in both continuous and discrete way in this work. Through modified Lyle-Kalman deconvolution method, the true formation resistivity can be optimally estimated, so this inverse algorithm can correct the error caused by the convolution effect. Thus it can improve the vertical resolution of TCR logging tool for identification of thin beds.
Iterative sinc-convolution method for solving planar D-bar equation with application to EIT.
Abbasi, Mahdi; Naghsh-Nilchi, Ahmad-Reza
2012-08-01
The numerical solution of D-bar integral equations is the key in inverse scattering solution of many complex problems in science and engineering including conductivity imaging. Recently, a couple of methodologies were considered for the numerical solution of D-bar integral equation, namely product integrals and multigrid. The first one involves high computational complexity and other one has low convergence rate disadvantages. In this paper, a new and efficient sinc-convolution algorithm is introduced to solve the two-dimensional D-bar integral equation to overcome both of these disadvantages and to resolve the singularity problem not tackled before effectively. The method of sinc-convolution is based on using collocation to replace multidimensional convolution-form integrals- including the two-dimensional D-bar integral equations - by a system of algebraic equations. Separation of variables in the proposed method allows elimination of the formulation of the huge full matrices and therefore reduces the computational complexity drastically. In addition, the sinc-convolution method converges exponentially with a convergence rate of O(e-cN). Simulation results on solving a test electrical impedance tomography problem confirm the efficiency of the proposed sinc-convolution-based algorithm. PMID:25099566
A one-parameter family of transforms, linearizing convolution laws for probability distributions
NASA Astrophysics Data System (ADS)
Nica, Alexandru
1995-03-01
We study a family of transforms, depending on a parameter q∈[0,1], which interpolate (in an algebraic framework) between a relative (namely: - iz(log ℱ(·)) '(-iz)) of the logarithm of the Fourier transform for probability distributions, and its free analogue constructed by D. Voiculescu ([16, 17]). The classical case corresponds to q=1, and the free one to q=0. We describe these interpolated transforms: (a) in terms of partitions of finite sets, and their crossings; (b) in terms of weighted shifts; (c) by a matrix equation related to the method of Stieltjes for expanding continued J-fractions as power series. The main result of the paper is that all these descriptions, which extend basic approaches used for q=0 and/or q=1, remain equivalent for arbitrary q∈[0, 1]. We discuss a couple of basic properties of the convolution laws (for probability distributions) which are linearized by the considered family of transforms (these convolution laws interpolate between the usual convolution — at q=1, and the free convolution introduced by Voiculescu — at q=0). In particular, we note that description (c) mentioned in the preceding paragraph gives an insight of why the central limit law for the interpolated convolution has to do with the q-continuous Hermite orthogonal polynomials.
NASA Astrophysics Data System (ADS)
Liu, Qing; He, Ya-Ling
2015-11-01
In this paper, a double multiple-relaxation-time lattice Boltzmann model is developed for simulating transient solid-liquid phase change problems in porous media at the representative elementary volume scale. The model uses two different multiple-relaxation-time lattice Boltzmann equations, one for the flow field and the other for the temperature field with nonlinear latent heat source term. The model is based on the generalized non-Darcy formulation, and the solid-liquid interface is traced through the liquid fraction which is determined by the enthalpy-based method. The present model is validated by numerical simulations of conduction melting in a semi-infinite space, solidification in a semi-infinite corner, and convection melting in a square cavity filled with porous media. The numerical results demonstrate the efficiency and accuracy of the present model for simulating transient solid-liquid phase change problems in porous media.
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Chen, Baode; Einaudi, Franco (Technical Monitor)
2001-01-01
It has been known for more than a decade that an aqua-planet model with globally uniform sea surface temperature and solar insolation angle can generate ITCZ (intertropical convergence zone). Previous studies have shown that the ITCZ under such model settings can be changed between a single ITCZ over the equator and a double ITCZ straddling the equator through one of several measures. These measures include switching to a different cumulus parameterization scheme, changes within the cumulus parameterization scheme, and changes in other aspects of the model design such as horizontal resolution. In this paper an interpretation for these findings is offered. The latitudinal location of the ITCZ is the latitude where the balance of two types of attraction on the ITCZ, both due to earth's rotation, exists. The first type is equator-ward and is directly related to the earth's rotation and thus not sensitive to model design changes. The second type is poleward and is related to the convective circulation and thus is sensitive to model design changes. Due to the shape of the attractors, the balance of the two types of attractions is reached either at the equator or more than 10 degrees away from the equator. The former case results in a single ITCZ over the equator and the latter case a double ITCZ straddling the equator.
NASA Technical Reports Server (NTRS)
Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.
1997-01-01
Melt convection, along with species diffusion and segregation on the solidification interface are the primary factors responsible for species redistribution during HgCdTe crystal growth from the melt. As no direct information about convection velocity is available, numerical modeling is a logical approach to estimate convection. Furthermore influence of microgravity level, double-diffusion and material properties should be taken into account. In the present study, HgCdTe is considered as a binary alloy with melting temperature available from a phase diagram. The numerical model of convection and solidification of binary alloy is based on the general equations of heat and mass transfer in two-dimensional region. Mathematical modeling of binary alloy solidification is still a challenging numericial problem. A Rigorous mathematical approach to this problem is available only when convection is not considered at all. The proposed numerical model was developed using the finite element code FIDAP. In the present study, the numerical model is used to consider thermal, solutal convection and a double diffusion source of mass transport.
Modeling double pulsing of ion beams for HEDP target heating experiments
NASA Astrophysics Data System (ADS)
Veitzer, Seth; Barnard, John; Stoltz, Peter; Henestroza, Enrique
2008-04-01
Recent research on direct drive targets using heavy ion beams suggests optimal coupling will occur when the energy of the ions increases over the course of the pulse. In order to experimentally explore issues involving the interaction of the beam with the outflowing blowoff from the target, double pulse experiments have been proposed whereby a first pulse heats a planar target producing an outflow of material, and a second pulse (˜10 ns later) of higher ion energy (and hence larger projected range) interacts with this outflow before reaching and further heating the target. We report here results for simulations of double pulsing experiments using HYDRA for beam and target parameters relevant to the proposed Neutralized Drift Compression Experiment (NDCX) II at LBNL.
Structural optimization and model fabrication of a double-ring deployable antenna truss
NASA Astrophysics Data System (ADS)
Dai, Lu; Guan, Fuling; Guest, James K.
2014-02-01
This paper explores the design of a new type of deployable antenna system composed of a double-ring deployable truss, prestressed cable nets, and a metallic reflector mesh. The primary novelty is the double-ring deployable truss, which is found to significantly enhance the stiffness of the entire antenna over single-ring systems with relatively low mass gain. Structural optimization was used to minimize the system mass subject to constraints on system stiffness and member section availability. Both genetic algorithms (GA) and gradient-based optimizers are employed. The optimized system results were obtained and incorporated into a 4.2-m scaled system prototype, which was then experimentally tested for dynamic properties. Practical considerations such as the maximum number of truss sides and their effects on system performances were also discussed.
NASA Astrophysics Data System (ADS)
Yan, Liang; Li, Wei; Jiao, Zongxia; Chen, I.-Ming
2015-12-01
The space utilization of linear switched reluctance machine is relatively low, which unavoidably constrains the improvement of system output performance. The objective of this paper is to propose a novel tubular linear switched reluctance motor with double excitation windings. The employment of double excitation helps to increase the electromagnetic force of the system. Furthermore, the installation of windings on both stator and mover can make the structure more compact and increase the system force density. The design concept and operating principle are presented. Following that, the major structure parameters of the system are determined. Subsequently, electromagnetic force and reluctance are formulated analytically based on equivalent magnetic circuits, and the result is validated with numerical computation. Then, a research prototype is developed, and experiments are conducted on the system output performance. It shows that the proposed design of electric linear machine can achieve higher thrust force compared with conventional linear switched reluctance machines.
Yan, Liang; Li, Wei; Jiao, Zongxia; Chen, I-Ming
2015-12-01
The space utilization of linear switched reluctance machine is relatively low, which unavoidably constrains the improvement of system output performance. The objective of this paper is to propose a novel tubular linear switched reluctance motor with double excitation windings. The employment of double excitation helps to increase the electromagnetic force of the system. Furthermore, the installation of windings on both stator and mover can make the structure more compact and increase the system force density. The design concept and operating principle are presented. Following that, the major structure parameters of the system are determined. Subsequently, electromagnetic force and reluctance are formulated analytically based on equivalent magnetic circuits, and the result is validated with numerical computation. Then, a research prototype is developed, and experiments are conducted on the system output performance. It shows that the proposed design of electric linear machine can achieve higher thrust force compared with conventional linear switched reluctance machines. PMID:26724063
Thermodynamic modelling of a double-effect LiBr-H2O absorption refrigeration cycle
NASA Astrophysics Data System (ADS)
Iranmanesh, A.; Mehrabian, M. A.
2012-12-01
The goal of this paper is to estimate the conductance of components required to achieve the approach temperatures, and gain insights into a double-effect absorption chiller using LiBr-H2O solution as the working fluid. An in-house computer program is developed to simulate the cycle. Conductance of all components is evaluated based on the approach temperatures assumed as input parameters. The effect of input data on the cycle performance and the exergetic efficiency are investigated.
Age, double porosity, and simple reaction modifications for the MOC3D ground-water transport model
Goode, Daniel J.
1999-01-01
This report documents modifications for the MOC3D ground-water transport model to simulate (a) ground-water age transport; (b) double-porosity exchange; and (c) simple but flexible retardation, decay, and zero-order growth reactions. These modifications are incorporated in MOC3D version 3.0. MOC3D simulates the transport of a single solute using the method-ofcharacteristics numerical procedure. The age of ground water, that is the time since recharge to the saturated zone, can be simulated using the transport model with an additional source term of unit strength, corresponding to the rate of aging. The output concentrations of the model are in this case the ages at all locations in the model. Double porosity generally refers to a separate immobilewater phase within the aquifer that does not contribute to ground-water flow but can affect solute transport through diffusive exchange. The solute mass exchange rate between the flowing water in the aquifer and the immobile-water phase is the product of the concentration difference between the two phases and a linear exchange coefficient. Conceptually, double porosity can approximate the effects of dead-end pores in a granular porous media, or matrix diffusion in a fractured-rock aquifer. Options are provided for decay and zero-order growth reactions within the immobilewater phase. The simple reaction terms here extend the original model, which included decay and retardation. With these extensions, (a) the retardation factor can vary spatially within each model layer, (b) the decay rate coefficient can vary spatially within each model layer and can be different for the dissolved and sorbed phases, and (c) a zero-order growth reaction is added that can vary spatially and can be different in the dissolved and sorbed phases. The decay and growth reaction terms also can change in time to account for changing geochemical conditions during transport. The report includes a description of the theoretical basis of the model, a
Theory of wave propagation in partially saturated double-porosity rocks: a triple-layer patchy model
NASA Astrophysics Data System (ADS)
Sun, Weitao; Ba, Jing; Carcione, José M.
2016-04-01
Wave-induced local fluid flow is known as a key mechanism to explain the intrinsic wave dissipation in fluid-saturated rocks. Understanding the relationship between the acoustic properties of rocks and fluid patch distributions is important to interpret the observed seismic wave phenomena. A triple-layer patchy (TLP) model is proposed to describe the P-wave dissipation process in a double-porosity media saturated with two immiscible fluids. The double-porosity rock consists of a solid matrix with unique host porosity and inclusions which contain the second type of pores. Two immiscible fluids are considered in concentric spherical patches, where the inner pocket and the outer sphere are saturated with different fluids. The kinetic and dissipation energy functions of local fluid flow (LFF) in the inner pocket are formulated through oscillations in spherical coordinates. The wave propagation equations of the TLP model are based on Biot's theory and the corresponding Lagrangian equations. The P-wave dispersion and attenuation caused by the Biot friction mechanism and the local fluid flow (related to the pore structure and the fluid distribution) are obtained by a plane-wave analysis from the Christoffel equations. Numerical examples and laboratory measurements indicate that P-wave dispersion and attenuation are significantly influenced by the spatial distributions of both, the solid heterogeneity and the fluid saturation distribution. The TLP model is in reasonably good agreement with White's and Johnson's models. However, differences in phase velocity suggest that the heterogeneities associated with double-porosity and dual-fluid distribution should be taken into account when describing the P-wave dispersion and attenuation in partially saturated rocks.
NASA Astrophysics Data System (ADS)
Hawcroft, Matt; Haywood, Jim M.; Collins, Mat; Jones, Andy; Jones, Anthony C.; Stephens, Graeme
2016-06-01
A causal link has been invoked between inter-hemispheric albedo, cross-equatorial energy transport and the double-Intertropical Convergence Zone (ITCZ) bias in climate models. Southern Ocean cloud biases are a major determinant of inter-hemispheric albedo biases in many models, including HadGEM2-ES, a fully coupled model with a dynamical ocean. In this study, targeted albedo corrections are applied in the Southern Ocean to explore the dynamical response to artificially reducing these biases. The Southern Hemisphere jet increases in strength in response to the increased tropical-extratropical temperature gradient, with increased energy transport into the mid-latitudes in the atmosphere, but no improvement is observed in the double-ITCZ bias or atmospheric cross-equatorial energy transport, a finding which supports other recent work. The majority of the adjustment in energy transport in the tropics is achieved in the ocean, with the response further limited to the Pacific Ocean. As a result, the frequently argued teleconnection between the Southern Ocean and tropical precipitation biases is muted. Further experiments in which tropical longwave biases are also reduced do not yield improvement in the representation of the tropical atmosphere. These results suggest that the dramatic improvements in tropical precipitation that have been shown in previous studies may be a function of the lack of dynamical ocean and/or the simplified hemispheric albedo bias corrections applied in that work. It further suggests that efforts to correct the double ITCZ problem in coupled models that focus on large-scale energetic controls will prove fruitless without improvements in the representation of atmospheric processes.
Parker, M. M.; Court, D. A.; Preiter, K.; Belfort, M.
1996-01-01
Many group I introns encode endonucleases that promote intron homing by initiating a double-strand break-mediated homologous recombination event. A td intron-phage λ model system was developed to analyze exon homology effects on intron homing and determine the role of the λ 5'-3' exonuclease complex (Redαβ) in the repair event. Efficient intron homing depended on exon lengths in the 35- to 50-bp range, although homing levels remained significantly elevated above nonbreak-mediated recombination with as little as 10 bp of flanking homology. Although precise intron insertion was demonstrated with extremely limiting exon homology, the complete absence of one exon produced illegitimate events on the side of heterology. Interestingly, intron inheritance was unaffected by the presence of extensive heterology at the double-strand break in wild-type λ, provided that sufficient homology between donor and recipient was present distal to the heterologous sequences. However, these events involving heterologous ends were absolutely dependent on an intact Red exonuclease system. Together these results indicate that heterologous sequences can participate in double-strand break-mediated repair and imply that intron transposition to heteroallelic sites might occur at break sites within regions of limited or no homology. PMID:8807281
A Novel Method of Fabricating Convoluted Shaped Electrode Arrays for Neural and Retinal Prostheses
Bhandari, R.; Negi, S.; Rieth, L.; Normann, R. A.; Solzbacher, F.
2008-01-01
A novel fabrication technique has been developed for creating high density (6.25 electrodes/mm2), out of plane, high aspect ratio silicon-based convoluted microelectrode arrays for neural and retinal prostheses. The convoluted shape of the surface defined by the tips of the electrodes could compliment the curved surfaces of peripheral nerves and the cortex, and in the case of retina, its spherical geometry. The geometry of these electrode arrays has the potential to facilitate implantation in the nerve fascicles and to physically stabilize it against displacement after insertion. This report presents a unique combination of variable depth dicing and wet isotropic etching for the fabrication of a variety of convoluted neural array geometries. Also, a method of deinsulating the electrode tips using photoresist as a mask and the limitations of this technique on uniformity are discussed. PMID:19122774
Weighing classes and streams: toward better methods for two-stream convolutional networks
NASA Astrophysics Data System (ADS)
Kim, Hoseong; Uh, Youngjung; Ko, Seunghyeon; Byun, Hyeran
2016-05-01
The emergence of two-stream convolutional networks has boosted the performance of action recognition by concurrently extracting appearance and motion features from videos. However, most existing approaches simply combine the features by averaging the prediction scores from each recognition stream without realizing that some classes favor greater weight for appearance than motion. We propose a fusion method of two-stream convolutional networks for action recognition by introducing objective functions of weights with two assumptions: (1) the scores from streams do not weigh the same and (2) the weights vary across different classes. We evaluate our method by extensive experiments on UCF101, HMDB51, and Hollywood2 datasets in the context of action recognition. The results show that the proposed approach outperforms the standard two-stream convolutional networks by a large margin (5.7%, 4.8%, and 3.6%) on UCF101, HMDB51, and Hollywood2 datasets, respectively.
Improving Ship Detection with Polarimetric SAR based on Convolution between Co-polarization Channels
Li, Haiyan; He, Yijun; Wang, Wenguang
2009-01-01
The convolution between co-polarization amplitude only data is studied to improve ship detection performance. The different statistical behaviors of ships and surrounding ocean are characterized a by two-dimensional convolution function (2D-CF) between different polarization channels. The convolution value of the ocean decreases relative to initial data, while that of ships increases. Therefore the contrast of ships to ocean is increased. The opposite variation trend of ocean and ships can distinguish the high intensity ocean clutter from ships' signatures. The new criterion can generally avoid mistaken detection by a constant false alarm rate detector. Our new ship detector is compared with other polarimetric approaches, and the results confirm the robustness of the proposed method. PMID:22399964
NASA Astrophysics Data System (ADS)
Shao, Liguo; Xu, Ye; Huang, Guohe
2014-12-01
In this study, an inexact double-sided fuzzy-random-chance-constrained programming (IDSFRCCP) model was developed for supporting air quality management of the Nanshan District of Shenzhen, China, under uncertainty. IDSFRCCP is an integrated model incorporating interval linear programming and double-sided fuzzy-random-chance-constrained programming models. It can express uncertain information as both fuzzy random variables and discrete intervals. The proposed model was solved based on the stochastic and fuzzy chance-constrained programming techniques and an interactive two-step algorithm. The air quality management system of Nanshan District, including one pollutant, six emission sources, six treatment technologies and four receptor sites, was used to demonstrate the applicability of the proposed method. The results indicated that the IDSFRCCP was capable of helping decision makers to analyse trade-offs between system cost and risk of constraint violation. The mid-range solutions tending to lower bounds with moderate αh and qi values were recommended as decision alternatives owing to their robust characteristics.
Fabrizio, Mary C.; Nichols, James D.; Hines, James E.; Swanson, Bruce L.; Schram, Stephen T.
1999-01-01
Data from mark-recapture studies are used to estimate population rates such as exploitation, survival, and growth. Many of these applications assume negligible tag loss, so tag shedding can be a significant problem. Various tag shedding models have been developed for use with data from double-tagging experiments, including models to estimate constant instantaneous rates, time-dependent rates, and type I and II shedding rates. In this study, we used conditional (on recaptures) multinomial models implemented using the program SURVIV (G.C. White. 1983. J. Wildl. Manage. 47: 716-728) to estimate tag shedding rates of lake trout (Salvelinus namaycush) and explore various potential sources of variation in these rates. We applied the models to data from several long-term double-tagging experiments with Lake Superior lake trout and estimated shedding rates for anchor tags in hatchery-reared and wild fish and for various tag types applied in these experiments. Estimates of annual tag retention rates for lake trout were fairly high (80-90%), but we found evidence (among wild fish only) that retention rates may be significantly lower in the first year due to type I losses. Annual retention rates for some tag types varied between male and female fish, but there was no consistent pattern across years. Our estimates of annual tag retention rates will be used in future studies of survival rates for these fish.
NASA Astrophysics Data System (ADS)
Gupta, R. P.; Banerjee, Malay; Chandra, Peeyush
2014-07-01
The present study investigates a prey predator type model for conservation of ecological resources through taxation with nonlinear harvesting. The model uses the harvesting function as proposed by Agnew (1979) [1] which accounts for the handling time of the catch and also the competition between standard vessels being utilized for harvesting of resources. In this paper we consider a three dimensional dynamic effort prey-predator model with Holling type-II functional response. The conditions for uniform persistence of the model have been derived. The existence and stability of bifurcating periodic solution through Hopf bifurcation have been examined for a particular set of parameter value. Using numerical examples it is shown that the system admits periodic, quasi-periodic and chaotic solutions. It is observed that the system exhibits periodic doubling route to chaos with respect to tax. Many forms of complexities such as chaotic bands (including periodic windows, period-doubling bifurcations, period-halving bifurcations and attractor crisis) and chaotic attractors have been observed. Sensitivity analysis is carried out and it is observed that the solutions are highly dependent to the initial conditions. Pontryagin's Maximum Principle has been used to obtain optimal tax policy to maximize the monetary social benefit as well as conservation of the ecosystem.
NASA Astrophysics Data System (ADS)
Ge, Ji; Liu, Hong-Gang; Su, Yong-Bo; Cao, Yu-Xiong; Jin, Zhi
2012-05-01
A physical model for scaling and optimizing InGaAs/InP double heterojunction bipolar transistors (DHBTs) based on hydrodynamic simulation is developed. The model is based on the hydrodynamic equation, which can accurately describe non-equilibrium conditions such as quasi-ballistic transport in the thin base and the velocity overshoot effect in the depleted collector. In addition, the model accounts for several physical effects such as bandgap narrowing, variable effective mass, and doping-dependent mobility at high fields. Good agreement between the measured and simulated values of cutoff frequency, ft, and maximum oscillation frequency, fmax, are achieved for lateral and vertical device scalings. It is shown that the model in this paper is appropriate for downscaling and designing InGaAs/InP DHBTs.
Punctured Parallel and Serial Concatenated Convolutional Codes for BPSK/QPSK Channels
NASA Technical Reports Server (NTRS)
Acikel, Omer Fatih
1999-01-01
As available bandwidth for communication applications becomes scarce, bandwidth-efficient modulation and coding schemes become ever important. Since their discovery in 1993, turbo codes (parallel concatenated convolutional codes) have been the center of the attention in the coding community because of their bit error rate performance near the Shannon limit. Serial concatenated convolutional codes have also been shown to be as powerful as turbo codes. In this dissertation, we introduce algorithms for designing bandwidth-efficient rate r = k/(k + 1),k = 2, 3,..., 16, parallel and rate 3/4, 7/8, and 15/16 serial concatenated convolutional codes via puncturing for BPSK/QPSK (Binary Phase Shift Keying/Quadrature Phase Shift Keying) channels. Both parallel and serial concatenated convolutional codes have initially, steep bit error rate versus signal-to-noise ratio slope (called the -"cliff region"). However, this steep slope changes to a moderate slope with increasing signal-to-noise ratio, where the slope is characterized by the weight spectrum of the code. The region after the cliff region is called the "error rate floor" which dominates the behavior of these codes in moderate to high signal-to-noise ratios. Our goal is to design high rate parallel and serial concatenated convolutional codes while minimizing the error rate floor effect. The design algorithm includes an interleaver enhancement procedure and finds the polynomial sets (only for parallel concatenated convolutional codes) and the puncturing schemes that achieve the lowest bit error rate performance around the floor for the code rates of interest.
Wang, Hainan; Thiele, Alexander; Pilon, Laurent
2013-11-15
This paper presents a generalized modified Poisson–Nernst–Planck (MPNP) model derived from first principles based on excess chemical potential and Langmuir activity coefficient to simulate electric double-layer dynamics in asymmetric electrolytes. The model accounts simultaneously for (1) asymmetric electrolytes with (2) multiple ion species, (3) finite ion sizes, and (4) Stern and diffuse layers along with Ohmic potential drop in the electrode. It was used to simulate cyclic voltammetry (CV) measurements for binary asymmetric electrolytes. The results demonstrated that the current density increased significantly with decreasing ion diameter and/or increasing valency |z_{i}| of either ion species. By contrast, the ion diffusion coefficients affected the CV curves and capacitance only at large scan rates. Dimensional analysis was also performed, and 11 dimensionless numbers were identified to govern the CV measurements of the electric double layer in binary asymmetric electrolytes between two identical planar electrodes of finite thickness. A self-similar behavior was identified for the electric double-layer integral capacitance estimated from CV measurement simulations. Two regimes were identified by comparing the half cycle period τ_{CV} and the “RC time scale” τ_{RC} corresponding to the characteristic time of ions’ electrodiffusion. For τ_{RC} ← τ_{CV}, quasi-equilibrium conditions prevailed and the capacitance was diffusion-independent while for τ_{RC} → τ_{CV}, the capacitance was diffusion-limited. The effect of the electrode was captured by the dimensionless electrode electrical conductivity representing the ratio of characteristic times associated with charge transport in the electrolyte and that in the electrode. The model developed here will be useful for simulating and designing various practical electrochemical, colloidal, and biological systems for a wide range of applications.
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
NASA Astrophysics Data System (ADS)
Zhang, Xue-Guang; Feng, Long-Long
2016-04-01
In this paper, we proposed an interesting method to test the dual supermassive black hole model for active galactic nucleus (AGN) with double-peaked narrow [O III] lines (double-peaked narrow emitters) through their broad optical Balmer line properties. Under the dual supermassive black hole model for double-peaked narrow emitters, we could expect statistically smaller virial black hole masses estimated by observed broad Balmer line properties than true black hole masses (total masses of central two black holes). Then, we compare the virial black hole masses between a sample of 37 double-peaked narrow emitters with broad Balmer lines and samples of Sloan Digital Sky Survey selected normal broad line AGN with single-peaked [O III] lines. However, we can find clearly statistically larger calculated virial black hole masses for the 37 broad line AGN with double-peaked [O III] lines than for samples of normal broad line AGN. Therefore, we give our conclusion that the dual supermassive black hole model is probably not statistically preferred to the double-peaked narrow emitters, and more efforts should be necessary to carefully find candidates for dual supermassive black holes by observed double-peaked narrow emission lines.
A white-box model of S-shaped and double S-shaped single-species population growth.
Kalmykov, Lev V; Kalmykov, Vyacheslav L
2015-01-01
Complex systems may be mechanistically modelled by white-box modeling with using logical deterministic individual-based cellular automata. Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on the first principles) and grey-box (mixtures of phenomenological and mechanistic models). Most basic ecological models are of black-box type, including Malthusian, Verhulst, Lotka-Volterra models. In black-box models, the individual-based (mechanistic) mechanisms of population dynamics remain hidden. Here we mechanistically model the S-shaped and double S-shaped population growth of vegetatively propagated rhizomatous lawn grasses. Using purely logical deterministic individual-based cellular automata we create a white-box model. From a general physical standpoint, the vegetative propagation of plants is an analogue of excitation propagation in excitable media. Using the Monte Carlo method, we investigate a role of different initial positioning of an individual in the habitat. We have investigated mechanisms of the single-species population growth limited by habitat size, intraspecific competition, regeneration time and fecundity of individuals in two types of boundary conditions and at two types of fecundity. Besides that, we have compared the S-shaped and J-shaped population growth. We consider this white-box modeling approach as a method of artificial intelligence which works as automatic hyper-logical inference from the first principles of the studied subject. This approach is perspective for direct mechanistic insights into nature of any complex systems. PMID:26038717
A white-box model of S-shaped and double S-shaped single-species population growth
Kalmykov, Lev V.
2015-01-01
Complex systems may be mechanistically modelled by white-box modeling with using logical deterministic individual-based cellular automata. Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on the first principles) and grey-box (mixtures of phenomenological and mechanistic models). Most basic ecological models are of black-box type, including Malthusian, Verhulst, Lotka–Volterra models. In black-box models, the individual-based (mechanistic) mechanisms of population dynamics remain hidden. Here we mechanistically model the S-shaped and double S-shaped population growth of vegetatively propagated rhizomatous lawn grasses. Using purely logical deterministic individual-based cellular automata we create a white-box model. From a general physical standpoint, the vegetative propagation of plants is an analogue of excitation propagation in excitable media. Using the Monte Carlo method, we investigate a role of different initial positioning of an individual in the habitat. We have investigated mechanisms of the single-species population growth limited by habitat size, intraspecific competition, regeneration time and fecundity of individuals in two types of boundary conditions and at two types of fecundity. Besides that, we have compared the S-shaped and J-shaped population growth. We consider this white-box modeling approach as a method of artificial intelligence which works as automatic hyper-logical inference from the first principles of the studied subject. This approach is perspective for direct mechanistic insights into nature of any complex systems. PMID:26038717
Testing of and model development for double-walled thermal tubular
Satchwell, R.M.; Johnson, L.A. Jr.
1992-08-01
Insulated tubular products have become essential for use in steam injection projects. In a steam injection project, steam is created at the surface by either steam boilers or generators. During this process, steam travels from a boiler through surface lines to the wellhead, down the wellbore to the sandface, and into the reservoir. For some projects to be an economic success, cost must be reduced and oil recoveries must be increased by reducing heat losses in the wellbore. With reduced heats losses, steam generation costs are lowered and higher quality steam can be injected into the formation. To address this need, work under this project consisted of the design and construction of a thermal flow loop, testing a double-walled tubular product that was manufactured by Inter-Mountain Pipe Company, and the development and verification of a thermal hydraulic numerical simulator for steam injection. Four different experimental configurations of the double-walled pipe were tested. These configurations included: (1) bare pipe case, (2) bare pipe case with an applied annular vacuum, (3) insulated annular pipe case, and (4) insulated annular pipe case with an applied annular vacuum. Both the pipe body and coupling were tested with each configuration. The results of the experimental tests showed that the Inter-Mountain Pipe Company double-walled pipe body achieved a 98 percent reduction in heat loss when insulation was applied to the annular portion of the pipe. The application of insulation to the annular portion of the coupling reduced the heat losses by only 6 percent. In tests that specified the use of a vacuum in the annular portion of the pipe, leaks were detected and the vacuum could not be held.
NASA Astrophysics Data System (ADS)
Zhu, Xiaoyong; Quan, Li; Chen, Yunyun; Liu, Guohai; Shen, Yue; Liu, Hui
2012-04-01
The concept of the memory motor is based on the fact that the magnetization level of the AlNiCo permanent magnet in the motor can be regulated by a temporary current pulse and memorized automatically. In this paper, a new type of memory motor is proposed, namely a flux mnemonic double salient motor drive, which is particularly attractive for electric vehicles. To accurately analyze the motor, an improved hysteresis model is employed in the time-stepping finite element method. Both simulation and experimental results are given to verify the validity of the new method.
Sources of DNA Double-Strand Breaks and Models of Recombinational DNA Repair
Mehta, Anuja; Haber, James E.
2014-01-01
DNA is subject to many endogenous and exogenous insults that impair DNA replication and proper chromosome segregation. DNA double-strand breaks (DSBs) are one of the most toxic of these lesions and must be repaired to preserve chromosomal integrity. Eukaryotes are equipped with several different, but related, repair mechanisms involving homologous recombination, including single-strand annealing, gene conversion, and break-induced replication. In this review, we highlight the chief sources of DSBs and crucial requirements for each of these repair processes, as well as the methods to identify and study intermediate steps in DSB repair by homologous recombination. PMID:25104768
NASA Astrophysics Data System (ADS)
Gwaltney, Steven R.; Sherrill, C. David; Head-Gordon, Martin; Krylov, Anna I.
2000-09-01
We present a general perturbative method for correcting a singles and doubles coupled-cluster energy. The coupled-cluster wave function is used to define a similarity-transformed Hamiltonian, which is partitioned into a zeroth-order part that the reference problem solves exactly plus a first-order perturbation. Standard perturbation theory through second-order provides the leading correction. Applied to the valence optimized doubles (VOD) approximation to the full-valence complete active space self-consistent field method, the second-order correction, which we call (2), captures dynamical correlation effects through external single, double, and semi-internal triple and quadruple substitutions. A factorization approximation reduces the cost of the quadruple substitutions to only sixth order in the size of the molecule. A series of numerical tests are presented showing that VOD(2) is stable and well-behaved provided that the VOD reference is also stable. The second-order correction is also general to standard unwindowed coupled-cluster energies such as the coupled-cluster singles and doubles (CCSD) method itself, and the equations presented here fully define the corresponding CCSD(2) energy.
Boore, David M.; Di Alessandro, Carola; Abrahamson, Norman A.
2014-01-01
The stochastic method of simulating ground motions requires the specification of the shape and scaling with magnitude of the source spectrum. The spectral models commonly used are either single-corner-frequency or double-corner-frequency models, but the latter have no flexibility to vary the high-frequency spectral levels for a specified seismic moment. Two generalized double-corner-frequency ω2 source spectral models are introduced, one in which two spectra are multiplied together, and another where they are added. Both models have a low-frequency dependence controlled by the seismic moment, and a high-frequency spectral level controlled by the seismic moment and a stress parameter. A wide range of spectral shapes can be obtained from these generalized spectral models, which makes them suitable for inversions of data to obtain spectral models that can be used in ground-motion simulations in situations where adequate data are not available for purely empirical determinations of ground motions, as in stable continental regions. As an example of the use of the generalized source spectral models, data from up to 40 stations from seven events, plus response spectra at two distances and two magnitudes from recent ground-motion prediction equations, were inverted to obtain the parameters controlling the spectral shapes, as well as a finite-fault factor that is used in point-source, stochastic-method simulations of ground motion. The fits to the data are comparable to or even better than those from finite-fault simulations, even for sites close to large earthquakes.
Rohmer, Thierry; Lang, Christina; Gärtner, Wolfgang; Hughes, Jon; Matysik, Jörg
2010-01-01
Difference patterns of (13)C NMR chemicals shifts for the protonation of a free model compound in organic solution, as reported in the literature (M. Stanek, K. Grubmayr [1998] Chem. Eur. J.4, 1653-1659), were compared with changes in the protonation state occurring during holophytochrome assembly from phycocyanobilin (PCB) and the apoprotein. Both processes induce identical changes in the NMR signals, indicating that the assembly process is linked to protonation of the chromophore, yielding a cationic cofactor in a heterogeneous, quasi-liquid protein environment. The identity of both difference patterns implies that the protonation of a model compound in solution causes a partial stretching of the geometry of the macrocycle as found in the protein. In fact, the similarity of the difference pattern within the bilin family for identical chemical transformations represents a basis for future theoretical analysis. On the other hand, the change of the (13)C NMR chemical shift pattern upon the Pr --> Pfr photoisomerization is very different to that of the free model compound upon ZZZ --> ZZE photoisomerization. Hence, the character of the double-bond isomerization in phytochrome is essentially different from that of a classical photoinduced double-bond isomerization, emphasizing the role of the protein environment in the modulation of this light-induced process. PMID:20492561
Yang, Kuo-Shu
2003-01-01
Maslow's theory of basic human needs is criticized with respect to two of its major aspects, unidimensional linearity and cross-cultural validity. To replace Maslow's linear theory, a revised Y model is proposed on the base of Y. Yu's original Y model. Arranged on the stem of the Y are Maslow's physiological needs (excluding sexual needs) and safety needs. Satisfaction of these needs is indispensable to genetic survival. On the left arm of the Y are interpersonal and belongingness needs, esteem needs, and the self-actualization need. The thoughts and behaviors required for the fulfillment of these needs lead to genetic expression. Lastly, on the right arm of the Y are sexual needs, childbearing needs, and parenting needs. The thoughts and behaviors entailed in the satisfaction of these needs result in genetic transmission. I contend that needs for genetic survival and transmission are universal and that needs for genetic expression are culture-bound. Two major varieties of culture-specific expression needs are distinguished for each of the three levels of needs on the left arm of the Y model. Collectivistic needs for interpersonal affiliation and belongingness, esteem, and self-actualization prevail in collectivist cultures like those found in East Asian countries. Individualistic needs are dominant in individualist cultures like those in North America and certain European nations. I construct two separate Y models, one for people in collectivist cultures and the other for those in individualist ones. In the first (the Yc model), the three levels of expression needs on the left arm are collectivistic in nature, whereas in the second (the Yi model), the three levels of needs on the left arm are individualistic in nature. Various forms of the double-Y model are formulated by conceptually combining the Yc and Yi models at the cross-cultural, crossgroup, and intra-individual levels. Research directions for testing the various aspects of the double-Y model are
A kinetic model of single-strand annealing for the repair of DNA double-strand breaks.
Taleei, Reza; Weinfeld, Michael; Nikjoo, Hooshang
2011-02-01
Ionising radiation induces different types of DNA damage, including single-strand breaks, double-strand breaks (DSB) and base damages. DSB are considered to be the most critical lesion to be repaired. The three main competitive pathways in the repair of DSB are non-homologous end joining (NHEJ), homologous recombination (HR) and single-strand annealing (SSA). SSA is a non-conservative repair pathway requiring direct repeat sequences for the repair process. In this work, a biochemical kinetic model is presented to describe the SSA repair pathway. The model consists of a system of non-linear ordinary differential equations describing the steps in the repair pathway. The reaction rates were estimated by comparing the model results with the experimental data for chicken DT40 cells exposed to 20 Gy of X-rays. The model successfully predicts the repair of the DT40 cells with the reaction rates derived from the 20-Gy X-ray experiment. The experimental data and the kinetic model show fast and slow DSB repair components. The half time and fractions of the slow and the fast components of the repair were compared for the model and the experiments. Mathematical and computational modelling in biology has played an important role in predicting biological mechanisms and stimulating future experimentation. The present model of SSA adds to the modelling of NHEJ and HR to provide a more complete description of DSB repair pathways. PMID:21183536
de Roos, Albert DG
2007-01-01
Background It is generally believed that life first evolved from single-stranded RNA (ssRNA) that both stored genetic information and catalyzed the reactions required for self-replication. Presentation of the hypothesis By modeling early genome evolution on the engineering paradigm design-by-contract, an alternative scenario is presented in which life started with the appearance of double-stranded RNA (dsRNA) as an informational storage molecule while catalytic single-stranded RNA was derived from this dsRNA template later in evolution. Testing the hypothesis It was investigated whether this scenario could be implemented mechanistically by starting with abiotic processes. Double-stranded RNA could be formed abiotically by hybridization of oligoribonucleotides that are subsequently non-enzymatically ligated into a double-stranded chain. Thermal cycling driven by the diurnal temperature cycles could then replicate this dsRNA when strands of dsRNA separate and later rehybridize and ligate to reform dsRNA. A temperature-dependent partial replication of specific regions of dsRNA could produce the first template-based generation of catalytic ssRNA, similar to the developmental gene transcription process. Replacement of these abiotic processes by enzymatic processes would guarantee functional continuity. Further transition from a dsRNA to a dsDNA world could be based on minor mutations in template and substrate recognition sites of an RNA polymerase and would leave all existing processes intact. Implications of the hypothesis Modeling evolution on a design pattern, the 'dsRNA first' hypothesis can provide an alternative mechanistic evolutionary scenario for the origin of our genome that preserves functional continuity. Reviewers This article was reviewed by Anthony Poole, Eugene Koonin and Eugene Shakhnovich PMID:17466073
Sharples, Adam P; Al-Shanti, Nasser; Lewis, Mark P; Stewart, Claire E
2011-12-01
Ageing skeletal muscle displays declines in size, strength, and functional capacity. Given the acknowledged role that the systemic environment plays in reduced regeneration (Conboy et al. [2005] Nature 433: 760-764), the role of resident satellite cells (termed myoblasts upon activation) is relatively dismissed, where, multiple cellular divisions in-vivo throughout the lifespan could also impact on muscular deterioration. Using a model of multiple population doublings (MPD) in-vitro thus provided a system in which to investigate the direct impact of extensive cell duplications on muscle cell behavior. C(2) C(12) mouse skeletal myoblasts (CON) were used fresh or following 58 population doublings (MPD). As a result of multiple divisions, reduced morphological and biochemical (creatine kinase, CK) differentiation were observed. Furthermore, MPD cells had significantly increased cells in the S and decreased cells in the G1 phases of the cell cycle versus CON, following serum withdrawal. These results suggest continued cycling rather than G1 exit and thus reduced differentiation (myotube atrophy) occurs in MPD muscle cells. These changes were underpinned by significant reductions in transcript expression of: IGF-I and myogenic regulatory factors (myoD and myogenin) together with elevated IGFBP5. Signaling studies showed that decreased differentiation in MPD was associated with decreased phosphorylation of Akt, and with later increased phosphorylation of JNK1/2. Chemical inhibition of JNK1/2 (SP600125) in MPD cells increased IGF-I expression (non-significantly), however, did not enhance differentiation. This study provides a potential model and molecular mechanisms for deterioration in differentiation capacity in skeletal muscle cells as a consequence of multiple population doublings that would potentially contribute to the ageing process. PMID:21826704
NASA Astrophysics Data System (ADS)
Zhang, Guo-Bao; Ma, Ruyun
2014-10-01
This paper is concerned with the traveling wave solutions and the spreading speeds for a nonlocal dispersal equation with convolution-type crossing-monostable nonlinearity, which is motivated by an age-structured population model with time delay. We first prove the existence of traveling wave solution with critical wave speed c = c*. By introducing two auxiliary monotone birth functions and using a fluctuation method, we further show that the number c = c* is also the spreading speed of the corresponding initial value problem with compact support. Then, the nonexistence of traveling wave solutions for c < c* is established. Finally, by means of the (technical) weighted energy method, we prove that the traveling wave with large speed is exponentially stable, when the initial perturbation around the wave is relatively small in a weighted norm.
NASA Astrophysics Data System (ADS)
Xu, K. M.; Cheng, A.
2015-12-01
This study examines the sensitivity of water and energy cycle simulated by a super-parameterized Community Atmosphere Model (SPCAM) with an intermediately-prognostic higher-order turbulence closure (IPHOC) to climate perturbations. Sensitivity experiments with doubling CO2 and uniform 2K increase of sea surface temperature (SST) are performed and compared with the control experiment with present-day SST and sea ice distributions. In most respects, the climate sensitivity of SPCAM-IPHOC lie within the typical range of conventionally general circulation models and are comparable to SPCAM without IPHOC except that SPCAM-IPHOC does not simulate strong reductions in the boundary layer cloud over land as SPCAM. The global hydrological cycle is weakened in the doubling CO2 experiment - there is a 2.2% reduction in tropical rainfall and a similar reduction in surface evaporation and liquid water path. On the other hand, the global hydrological cycle is strengthened in the 2K SST increase experiment - there is a 8% increase in tropical rainfall and a similar increase in surface evaporation and liquid water path. A detailed analysis of various aspects of the global water and energy cycle will be presented at meeting.
Kalet, Alan M; Sandison, George A; Phillips, Mark H; Parvathaneni, Upendra
2013-01-01
We evaluate a photon convolution-superposition algorithm used to model a fast neutron therapy beam in a commercial treatment planning system (TPS). The neutron beam modeled was the Clinical Neutron Therapy System (CNTS) fast neutron beam produced by 50 MeV protons on a Be target at our facility, and we implemented the Pinnacle3 dose calculation model for computing neutron doses. Measured neutron data were acquired by an IC30 ion chamber flowing 5 cc/min of tissue equivalent gas. Output factors and profile scans for open and wedged fields were measured according to the Pinnacle physics reference guide recommendations for photon beams in a Wellhofer water tank scanning system. Following the construction of a neutron beam model, computed doses were then generated using 100 monitor units (MUs) beams incident on a water-equivalent phantom for open and wedged square fields, as well as multileaf collimator (MLC)-shaped irregular fields. We compared Pinnacle dose profiles, central axis doses, and off-axis doses (in irregular fields) with 1) doses computed using the Prism treatment planning system, and 2) doses measured in a water phantom and having matching geometry to the computation setup. We found that the Pinnacle photon model may be used to model most of the important dosimetric features of the CNTS fast neutron beam. Pinnacle-calculated dose points among open and wedged square fields exhibit dose differences within 3.9 cGy of both Prism and measured doses along the central axis, and within 5 cGy difference of measurement in the penumbra region. Pinnacle dose point calculations using irregular treatment type fields showed a dose difference up to 9 cGy from measured dose points, although most points of comparison were below 5 cGy. Comparisons of dose points that were chosen from cases planned in both Pinnacle and Prism show an average dose difference less than 0.6%, except in certain fields which incorporate both wedges and heavy blocking of the central axis. All
NASA Technical Reports Server (NTRS)
Hyer, M. W.
1980-01-01
The determination of the stress distribution in the inner lap of double-lap, double-bolt joints using photoelastic models of the joint is discussed. The principal idea is to fabricate the inner lap of a photoelastic material and to use a photoelastically sensitive material for the two outer laps. With this setup, polarized light transmitted through the stressed model responds principally to the stressed inner lap. The model geometry, the procedures for making and testing the model, and test results are described.
Shell-model calculation of neutrinoless double-β decay of 76Ge
NASA Astrophysics Data System (ADS)
Sen'kov, R. A.; Horoi, M.
2016-04-01
In this article we present an extension of our recent Rapid Communication [Phys. Rev. C 90, 051301(R) (2014)], 10.1103/PhysRevC.90.051301 where we calculate the nuclear matrix elements for neutrinoless double-β decay of 76Ge. For the calculations we use a novel method that has perfect convergence properties and allows one to obtain the nonclosure nuclear matrix elements for 76Ge with a 1% accuracy. We present a new way to calculate the optimal closure energy; using this energy with the closure approximation provides the most accurate closure nuclear matrix elements. In addition, we present a new analysis of the heavy-neutrino-exchange nuclear matrix elements, and we compare occupation probabilities and Gamow-Teller strength with experimental data.
Seeking for Spin-Opposite-Scaled Double-Hybrid Models Free of Fitted Parameters.
Alipour, Mojtaba
2016-05-26
On the basis of theoretical arguments, a new exchange-correlation energy expression free of any fitted parameter has been proposed for spin-opposite-scaled double-hybrid density functionals (SOS0-DHs). Employing the recently presented DHs, the working expressions for SOS0-DH functionals are obtained and benchmarked numerically against several standard databases. Our test calculations show that for some cases such as interaction energies and barrier heights the SOS0-DHs without dispersion corrections perform better than their non-SOS counterparts. On the other hand, for other properties like atomization energies, the conventional DHs provide reliable results. We hope that the findings of this work can excite further developments of DH functionals in the framework of SOS scheme for a wide variety of applications resolving the failures at a reasonable computational cost. It seems that a bright future lies ahead in this arena. PMID:27163506
Kinematic modeling of a double octahedral Variable Geometry Truss (VGT) as an extensible gimbal
NASA Technical Reports Server (NTRS)
Williams, Robert L., II
1994-01-01
This paper presents the complete forward and inverse kinematics solutions for control of the three degree-of-freedom (DOF) double octahedral variable geometry truss (VGT) module as an extensible gimbal. A VGT is a truss structure partially comprised of linearly actuated members. A VGT can be used as joints in a large, lightweight, high load-bearing manipulator for earth- and space-based remote operations, plus industrial applications. The results have been used to control the NASA VGT hardware as an extensible gimbal, demonstrating the capability of this device to be a joint in a VGT-based manipulator. This work is an integral part of a VGT-based manipulator design, simulation, and control tool.
NASA Astrophysics Data System (ADS)
Bende, Attila; Bogár, Ferenc; Ladik, János
2013-04-01
Using the Hartree-Fock crystal orbital method band structures of poly(G˜-C˜) and poly(A˜-T˜) were calculated (G˜, etc. means a nucleotide) including water molecules and Na+ ions. Due to the close packing of DNA in the ribosomes the motion of the double helix and the water molecules around it are strongly restricted, therefore the band picture can be used. The mobilities were calculated from the highest filled bands. The hole mobilities increase with decreasing temperatures. They are of the same order of magnitude as those of poly(A˜) and poly(T˜). For poly(G˜) the result is ˜5 times larger than in the poly(G˜-C˜) case.
Sabtaji, Agung E-mail: agung.sabtaji@bmkg.go.id; Nugraha, Andri Dian
2015-04-24
West Papua region has fairly high of seismicity activities due to tectonic setting and many inland faults. In addition, the region has a unique and complex tectonic conditions and this situation lead to high potency of seismic hazard in the region. The precise earthquake hypocenter location is very important, which could provide high quality of earthquake parameter information and the subsurface structure in this region to the society. We conducted 1-D P-wave velocity using earthquake data catalog from BMKG for April, 2009 up to March, 2014 around West Papua region. The obtained 1-D seismic velocity then was used as input for improving hypocenter location using double-difference method. The relocated hypocenter location shows fairly clearly the pattern of intraslab earthquake beneath New Guinea Trench (NGT). The relocated hypocenters related to the inland fault are also observed more focus in location around the fault.
Bounds for Neutrinoless Double Beta Decay in SO(10) Inspired Seesaw Models
NASA Astrophysics Data System (ADS)
Buccella, F.; Falcone, D.
By requiring the lower limit for the lightest right-handed neutrino mass, obtained in the baryogenesis from leptogenesis scenario, and a Dirac neutrino mass matrix similar to the up-quark mass matrix, we predict small values for the νe mass and for the matrix element mee responsible of the neutrinoless double beta decay, mνe around 5×10-3 eV and mee smaller than 10-3 eV, respectively. The allowed range for the mass of the heaviest right-handed neutrino is centered around the value of the scale of B-L breaking in the SO(10) gauge theory with Pati-Salam intermediate symmetry.
A generalized recursive convolution method for time-domain propagation in porous media.
Dragna, Didier; Pineau, Pierre; Blanc-Benon, Philippe
2015-08-01
An efficient numerical method, referred to as the auxiliary differential equation (ADE) method, is proposed to compute convolutions between relaxation functions and acoustic variables arising in sound propagation equations in porous media. For this purpose, the relaxation functions are approximated in the frequency domain by rational functions. The time variation of the convolution is thus governed by first-order differential equations which can be straightforwardly solved. The accuracy of the method is first investigated and compared to that of recursive convolution methods. It is shown that, while recursive convolution methods are first or second-order accurate in time, the ADE method does not introduce any additional error. The ADE method is then applied for outdoor sound propagation using the equations proposed by Wilson et al. in the ground [(2007). Appl. Acoust. 68, 173-200]. A first one-dimensional case is performed showing that only five poles are necessary to accurately approximate the relaxation functions for typical applications. Finally, the ADE method is used to compute sound propagation in a three-dimensional geometry over an absorbing ground. Results obtained with Wilson's equations are compared to those obtained with Zwikker and Kosten's equations and with an impedance surface for different flow resistivities. PMID:26328719
Profile of CT scan output dose in axial and helical modes using convolution
NASA Astrophysics Data System (ADS)
Anam, C.; Haryanto, F.; Widita, R.; Arif, I.; Dougherty, G.
2016-03-01
The profile of the CT scan output dose is crucial for establishing the patient dose profile. The purpose of this study is to investigate the profile of the CT scan output dose in both axial and helical modes using convolution. A single scan output dose profile (SSDP) in the center of a head phantom was measured using a solid-state detector. The multiple scan output dose profile (MSDP) in the axial mode was calculated using convolution between SSDP and delta function, whereas for the helical mode MSDP was calculated using convolution between SSDP and the rectangular function. MSDPs were calculated for a number of scans (5, 10, 15, 20 and 25). The multiple scan average dose (MSAD) for differing numbers of scans was compared to the value of CT dose index (CTDI). Finally, the edge values of MSDP for every scan number were compared to the corresponding MSAD values. MSDPs were successfully generated by using convolution between a SSDP and the appropriate function. We found that CTDI only accurately estimates MSAD when the number of scans was more than 10. We also found that the edge values of the profiles were 42% to 93% lower than that the corresponding MSADs.