A Double Precision High Speed Convolution Processor
NASA Astrophysics Data System (ADS)
Larochelle, F.; Coté, J. F.; Malowany, A. S.
1989-11-01
There exist several convolution processors on the market that can process images at video rate. However, none of these processors operates in floating-point arithmetic. Unfortunately, many image processing algorithms presently under development are inoperable in integer arithmetic, forcing researchers to use regular computers. To solve this problem, we designed a specialized convolution processor that operates in double-precision floating-point arithmetic with a throughput several thousand times faster than that obtained on a regular computer. Its high performance is attributed to a VLSI double-precision convolution systolic cell designed in our laboratories. A 9×9 systolic array carries out, in a pipelined manner, every arithmetic operation. The processor is designed to interface directly with the VME bus. A DMA chip is responsible for bringing the original pixel intensities from the memory of the computer to the systolic array and for returning the convolved pixels back to memory. A special use of 8K RAMs allows an inexpensive and efficient way of delaying the pixel intensities in order to supply the right sequence to the systolic array. On-board circuitry converts pixel values into floating-point representation when the image is originally represented with integer values. An additional systolic cell, used as a pipeline adder at the output of the systolic array, offers the possibility of combining images together, which allows a variable convolution window size and color image processing.
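The core operation described above can be sketched in software; the following is a plain reference implementation of the 9×9 double-precision convolution that the systolic array computes in hardware, including the integer-to-float promotion the on-board circuitry performs. The kernel values are an arbitrary example, not from the paper.

```python
import numpy as np

def convolve9x9(image, kernel):
    """Valid-mode 9x9 convolution in double precision (float64)."""
    assert kernel.shape == (9, 9)
    img = image.astype(np.float64)        # promote integer pixels to float64
    h, w = img.shape
    out = np.zeros((h - 8, w - 8))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each output pixel is the weighted sum over a 9x9 window
            out[i, j] = np.sum(img[i:i + 9, j:j + 9] * kernel)
    return out

image = np.arange(144, dtype=np.int32).reshape(12, 12)   # toy integer image
kernel = np.full((9, 9), 1.0 / 81.0)                     # 9x9 mean filter example
result = convolve9x9(image, kernel)
```

A 12×12 input yields a 4×4 valid-mode output; a hardware pipeline produces the same sums, one per clock, once the array is filled.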
Modelling ocean carbon cycle with a nonlinear convolution model
NASA Astrophysics Data System (ADS)
Kheshgi, Haroon S.; White, Benjamin S.
1996-02-01
A nonlinear convolution integral is developed to model the response of the ocean carbon sink to changes in the atmospheric concentration of CO2. This model can accurately represent the atmospheric response of complex ocean carbon cycle models in which the nonlinear behavior stems from the nonlinear dependence of CO2 solubility in seawater on CO2 partial pressure, which is often represented by the buffer factor. The kernel of the nonlinear convolution model can be constructed from a response of such a complex model to an arbitrary change in CO2 emissions, along with the functional dependence of the buffer factor. Once the convolution kernel has been constructed, either analytically or from a model experiment, the convolution representation can be used to estimate responses of the ocean carbon sink to other changes in the atmospheric concentration of CO2. Thus the method can be used, e.g., to explore alternative emissions scenarios for assessments of climate change. A derivation for the nonlinear convolution integral model is given, and the model is used to reproduce the response of two carbon cycle models: a one-dimensional diffusive ocean model, and a three-dimensional ocean-general-circulation tracer model.
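The structure of such a nonlinear convolution integral can be sketched numerically. In this minimal discrete version, the kernel and the buffer-factor nonlinearity are invented placeholders, not the forms constructed in the paper:

```python
import numpy as np

def ocean_uptake(c_atm, kernel, buffer_fn):
    """Discrete nonlinear convolution: uptake(t) = sum_s K(t-s) * f(c_atm(s))."""
    n = len(c_atm)
    uptake = np.zeros(n)
    for t in range(n):
        for s in range(t + 1):
            uptake[t] += kernel[t - s] * buffer_fn(c_atm[s])
    return uptake

kernel = 0.1 * np.exp(-0.05 * np.arange(50))    # assumed impulse-response kernel
buffer_fn = lambda c: c / (1.0 + 0.01 * c)      # assumed buffer-factor nonlinearity
c_atm = np.full(50, 100.0)                      # constant 100 ppm CO2 perturbation
u = ocean_uptake(c_atm, kernel, buffer_fn)
```

Once a kernel has been calibrated against one run of a complex model, the same loop can be re-evaluated cheaply for other concentration scenarios, which is the point of the convolution representation.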
A digital model for streamflow routing by convolution methods
Doyle, W.H., Jr.; Shearman, H.O.; Stiltner, G.J.; Krug, W.O.
1984-01-01
A U.S. Geological Survey computer model, CONROUT, for routing streamflow by unit-response convolution flow-routing techniques from an upstream channel location to a downstream channel location has been developed and documented. Calibration and verification of the flow-routing model and its subsequent use for simulation are also documented. Three hypothetical examples and two field applications are presented to illustrate basic flow-routing concepts. Most of the discussion is limited to daily flow routing since, to date, all completed and current studies of this nature have involved daily flow routing. However, the model is programmed to accept hourly flow-routing data. (USGS)
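Unit-response convolution routing of this kind reduces to a discrete convolution of the upstream daily-flow series with a unit-response function. A minimal sketch (the URF values here are illustrative, not taken from the report):

```python
import numpy as np

def route_flow(upstream, urf):
    """Convolve upstream daily flows with a unit-response function (URF)."""
    routed = np.convolve(upstream, urf)
    return routed[:len(upstream)]      # keep a same-length daily series

urf = np.array([0.1, 0.5, 0.3, 0.1])  # assumed URF; sums to 1 for mass balance
upstream = np.array([0.0, 100.0, 0.0, 0.0, 0.0])
downstream = route_flow(upstream, urf)
# the 100-unit pulse is spread over four days in URF proportions
```

Because the URF sums to one, total flow volume is conserved through the reach; an hourly version only changes the time step, not the operation.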
Designing the optimal convolution kernel for modeling the motion blur
NASA Astrophysics Data System (ADS)
Jelinek, Jan
2011-06-01
Motion blur acts on an image like a two-dimensional low-pass filter, whose spatial frequency characteristic depends both on the trajectory of the relative motion between the scene and the camera and on the velocity vector variation along it. When motion during exposure is permitted, the conventional, static notions of both the image exposure and the scene-to-image mapping become unsuitable and must be revised to accommodate the image formation dynamics. This paper develops an exact image formation model for arbitrary object-camera relative motion with arbitrary velocity profiles. Moreover, for any motion the camera may operate in either continuous or flutter shutter exposure mode. The result is a convolution kernel that is optimally designed for both the given motion and the sensor array geometry, and hence permits the most accurate computational undoing of the blurring effects for the given camera, as required in forensic and high-security applications. The theory has been implemented and a few examples are shown in the paper.
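A common way to build a motion-blur convolution kernel from a known trajectory is to accumulate the exposure time the motion spends over each pixel. The sketch below assumes a uniform-velocity linear trajectory and invented sampling; it is not the paper's optimal design procedure, only the basic construction it refines:

```python
import numpy as np

def motion_blur_kernel(trajectory, size):
    """Accumulate dwell time of (x, y) positions, sampled uniformly in time,
    into a normalized convolution kernel."""
    kernel = np.zeros((size, size))
    for x, y in trajectory:
        kernel[int(round(y)), int(round(x))] += 1.0
    return kernel / kernel.sum()       # normalize so total exposure is preserved

# horizontal motion across 5 pixels at uniform velocity during exposure
traj = [(x, 2.0) for x in np.linspace(0.0, 4.0, 100)]
k = motion_blur_kernel(traj, 5)
```

A flutter-shutter mode would simply drop the trajectory samples taken while the shutter is closed, changing the kernel's frequency response.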
A new model of the distal convoluted tubule.
Ko, Benjamin; Mistry, Abinash C; Hanson, Lauren; Mallick, Rickta; Cooke, Leslie L; Hack, Bradley K; Cunningham, Patrick; Hoover, Robert S
2012-09-01
The Na(+)-Cl(-) cotransporter (NCC) in the distal convoluted tubule (DCT) of the kidney is a key determinant of Na(+) balance. Disturbances in NCC function are characterized by disordered volume and blood pressure regulation. However, many details concerning the mechanisms of NCC regulation remain controversial or undefined. This is partially due to the lack of a mammalian cell model of the DCT that is amenable to functional assessment of NCC activity. Previously reported investigations of NCC regulation in mammalian cells have either not attempted measurements of NCC function or have required perturbation of the critical with-no-lysine kinase (WNK)/STE20/SPS1-related proline/alanine-rich kinase (SPAK) regulatory pathway before functional assessment. Here, we present a new mammalian model of the DCT, the mouse DCT15 (mDCT15) cell line. These cells display native NCC function as measured by thiazide-sensitive, Cl(-)-dependent (22)Na(+) uptake and allow for the separate assessment of NCC surface expression and activity. Knockdown by short interfering RNA confirmed that this function was dependent on NCC protein. Similar to the mammalian DCT, these cells express many of the known regulators of NCC and display significant baseline activity and dimerization of NCC. As described in previous models, NCC activity is inhibited by appropriate concentrations of thiazides, and phorbol esters strongly suppress function. Importantly, small hairpin RNA knockdown of WNK4 releases its inhibition of NCC. We feel that this new model represents a critical tool for the study of NCC physiology. The work that can be accomplished in such a system represents a significant step forward toward unraveling the complex regulation of NCC.
A staggered-grid convolutional differentiator for elastic wave modelling
NASA Astrophysics Data System (ADS)
Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun
2015-11-01
The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first-derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of the window function influences the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for CDs of different orders by minimizing the spectral error of the derivative, and compare these windows with the normal Hanning window function for tapering the CD operators. The optimal Gaussian window turns out to be similar to the Hanning window for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method at different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th-order staggered-grid CD operator can achieve the same accuracy as a 16th-order staggered-grid FD algorithm, but with half the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagation.
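The construction described, ideal staggered-grid coefficients from the inverse Fourier transform of the band-limited spectrum ik, followed by a taper window, can be sketched as follows. The closed-form coefficient is standard; the Hanning-taper form and stencil length are assumptions, not the paper's optimized Gaussian windows:

```python
import numpy as np

def staggered_cd_coeffs(N):
    """Tapered coefficients of a 2N-point staggered-grid first-derivative CD."""
    n = np.arange(1, N + 1)
    # analytic inverse FT of the band-limited ik spectrum on half-grid offsets
    ideal = (-1.0) ** (n + 1) / (np.pi * (n - 0.5) ** 2)
    # Hanning-type taper toward the stencil edge (assumed window form)
    hann = 0.5 * (1.0 + np.cos(np.pi * (n - 0.5) / N))
    return ideal * hann

def staggered_derivative(f, h, c):
    """First derivative at half-grid points x_{i+1/2} from samples f(x_i)."""
    N = len(c)
    d = np.zeros(len(f) - 2 * N + 1)
    for i in range(len(d)):
        j = i + N - 1                      # f[j] lies just left of x_{j+1/2}
        for n in range(1, N + 1):
            d[i] += c[n - 1] * (f[j + n] - f[j - n + 1])
    return d / h

x = np.linspace(0.0, 2.0 * np.pi, 200)
h = x[1] - x[0]
c = staggered_cd_coeffs(4)                 # short 8-point operator
d = staggered_derivative(np.sin(x), h, c)
x_half = x[3:-4] + h / 2.0                 # interior evaluation points
err = np.max(np.abs(d - np.cos(x_half)))   # compare against exact derivative
```

Even this short tapered operator reproduces the derivative of a smooth field to a fraction of a percent on this grid, which is the efficiency argument made in the abstract.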
A convolution model for computing the far-field directivity of a parametric loudspeaker array.
Shi, Chuang; Kajikawa, Yoshinobu
2015-02-01
This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented by applying phased array techniques. The convolution of the product directivity with Westervelt's directivity is suggested, substituting for the past practice of using the product directivity alone. The directivity of a PLA computed using the proposed convolution model agrees significantly better with the measured directivity, at negligible computational cost.
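The proposed substitution, convolving the product directivity with a Westervelt-type directivity rather than using the product directivity alone, can be sketched with invented directivity functions (the actual functional forms are not those of the paper):

```python
import numpy as np

theta = np.linspace(-90.0, 90.0, 361)            # angle in degrees, 0.5 deg step
product = np.exp(-(theta / 5.0) ** 2)            # assumed array product directivity
westervelt = 1.0 / (1.0 + (theta / 10.0) ** 2)   # assumed Westervelt-type lobe

# angular convolution of the two directivities, normalized to 0 dB on axis
pla = np.convolve(product, westervelt, mode="same")
pla /= pla.max()
```

The convolved pattern keeps the on-axis peak but is broader than the product directivity alone, which is the qualitative correction the model introduces.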
Convolution modeling of two-domain, nonlinear water-level responses in karst aquifers (Invited)
NASA Astrophysics Data System (ADS)
Long, A. J.
2009-12-01
Convolution modeling is a useful method for simulating the hydraulic response of water levels to sinking streamflow or precipitation infiltration at the macro scale. This approach is particularly useful in karst aquifers, where the complex geometry of the conduit and pore network is not well characterized but can be represented approximately by a parametric impulse-response function (IRF) with very few parameters. For many applications, one-dimensional convolution models can be equally effective as complex two- or three-dimensional models for analyzing water-level responses to recharge. Moreover, convolution models are well suited for identifying and characterizing the distinct domains of quick flow and slow flow (e.g., conduit flow and diffuse flow). Two superposed lognormal functions were used in the IRF to approximate the impulses of the two flow domains. Nonlinear response characteristics of the flow domains were assessed by observing temporal changes in the IRFs. Precipitation infiltration was simulated by filtering the daily rainfall record with a backward-in-time exponential function that weights each day’s rainfall with the rainfall of previous days and thus accounts for the effects of soil moisture on aquifer infiltration. The model was applied to the Edwards aquifer in Texas and the Madison aquifer in South Dakota. Simulations of both aquifers showed similar characteristics, including a separation on the order of years between the quick-flow and slow-flow IRF peaks and temporal changes in the IRF shapes when water levels increased and empty pore spaces became saturated.
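The two-domain IRF and the antecedent-moisture rainfall filter described above can be sketched as follows. All parameter values are invented for illustration and are not fitted to the Edwards or Madison aquifers:

```python
import numpy as np

def lognormal_irf(t, mu, sigma, scale):
    """Lognormal impulse-response pulse; zero at t = 0."""
    irf = np.zeros_like(t)
    pos = t > 0
    irf[pos] = scale / (t[pos] * sigma * np.sqrt(2.0 * np.pi)) * np.exp(
        -(np.log(t[pos]) - mu) ** 2 / (2.0 * sigma ** 2))
    return irf

t = np.arange(0.0, 2000.0)                               # days
quick = lognormal_irf(t, mu=2.0, sigma=0.5, scale=1.0)   # quick-flow domain, early peak
slow = lognormal_irf(t, mu=6.5, sigma=0.4, scale=1.0)    # slow-flow domain, peak ~years later
irf = quick + slow                                       # superposed two-domain IRF

# backward-in-time exponential filter: weights today's rain with previous days'
rain = np.random.default_rng(0).random(2000)
w = np.exp(-0.1 * np.arange(60))
infil = np.convolve(rain, w)[:2000] / w.sum()            # soil-moisture-adjusted infiltration

level = np.convolve(infil, irf)[:2000]                   # simulated water-level response
```

Nonlinear behavior would be represented by letting the IRF parameters change between wet and dry periods, as the abstract describes.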
Vehicle detection based on visual saliency and deep sparse convolution hierarchical model
NASA Astrophysics Data System (ADS)
Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long
2016-07-01
Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted-feature-based classifier training for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, outperforming existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multi-scale features, which gives it broad application prospects in target recognition for intelligent vehicles.
A nonlinear convolution model for the evasion of CO2 injected into the deep ocean
NASA Astrophysics Data System (ADS)
Kheshgi, Haroon S.; Archer, David E.
2004-02-01
Deep ocean storage of CO2 captured from, for example, flue gases is being considered as a potential response option to global warming concerns. For storage to be effective, CO2 injected into the deep ocean must remain sequestered from the atmosphere for a long time. However, a fraction of CO2 injected into the deep ocean is expected to eventually evade into the atmosphere. This fraction is expected to depend on the time since injection, the location of injection, and the future atmospheric concentration of CO2. We approximate the evasion of injected CO2 at specific locations using a nonlinear convolution model that explicitly includes the nonlinear response of CO2 solubility to future CO2 concentration and alkalinity, along with Green's functions for the transport of CO2 from injection locations to the ocean surface as well as the alkalinity response to seafloor CaCO3 dissolution. The Green's functions are calculated from the results of a three-dimensional ocean carbon cycle model for impulses of CO2 either released to the atmosphere or injected at locations deep in the Pacific and Atlantic oceans. CO2 transport in the three-dimensional (3-D) model is governed by offline tracer transport in the ocean interior, exchange of CO2 with the atmosphere, and dissolution of ocean sediments. The convolution model is found to accurately approximate results of the 3-D model in test cases including both deep-ocean injection and sediment dissolution. The convolution model allows comparison of the CO2 evasion delay achieved by deep ocean injection with notional scenarios for CO2 stabilization and the time extent of the fossil fuel era.
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2016-06-01
In recent years, with the development of high-resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D building models, a key element of city structures for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of building roof types, such as flat, gable, hip, and pyramid-hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised, pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that allow the convolutional neural network to localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network based on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly by using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach for detecting and extracting building roof patterns automatically, exploiting the complementary nature of height and RGB information.
Strahl, Stefan B; Ramekers, Dyan; Nagelkerke, Marjolijn M B; Schwarz, Konrad E; Spitzer, Philipp; Klis, Sjaak F L; Grolman, Wilko; Versnel, Huib
2016-01-01
The electrically evoked compound action potential (eCAP) is a routinely performed measure of the auditory nerve in cochlear implant users. Using a convolution model of the eCAP, additional information about the neural firing properties can be obtained, which may provide relevant information about the health of the auditory nerve. In this study, guinea pigs with various degrees of nerve degeneration were used to directly relate firing properties to nerve histology. The same convolution model was applied on human eCAPs to examine similarities and ultimately to examine its clinical applicability. For most eCAPs, the estimated nerve firing probability was bimodal and could be parameterised by two Gaussian distributions with an average latency difference of 0.4 ms. The ratio of the scaling factors of the late and early component increased with neural degeneration in the guinea pig. This ratio decreased with stimulation intensity in humans. The latency of the early component decreased with neural degeneration in the guinea pig. Indirectly, this was observed in humans as well, assuming that the cochlear base exhibits more neural degeneration than the apex. Differences between guinea pigs and humans were observed, among other parameters, in the width of the early component: very robust in guinea pig, and dependent on stimulation intensity and cochlear region in humans. We conclude that the deconvolution of the eCAP is a valuable addition to existing analyses, in particular as it reveals two separate firing components in the auditory nerve. PMID:27080655
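The convolution model of the eCAP can be sketched as the convolution of a single-fiber unit response with a bimodal latency (firing-probability) distribution. The waveform shapes and parameters below are invented, with the two Gaussian components placed 0.4 ms apart as in the study's average:

```python
import numpy as np

fs = 100_000.0                               # 100 kHz sampling rate (assumed)
t = np.arange(0.0, 0.003, 1.0 / fs)          # 3 ms analysis window

def gauss(t, mu, sig):
    return np.exp(-(t - mu) ** 2 / (2.0 * sig ** 2))

# bimodal firing probability: early and late components, 0.4 ms apart
firing = 1.0 * gauss(t, 0.8e-3, 0.10e-3) + 0.5 * gauss(t, 1.2e-3, 0.15e-3)

# assumed biphasic single-fiber unit response (negative then positive lobe)
unit = -gauss(t, 0.2e-3, 0.05e-3) + 0.6 * gauss(t, 0.45e-3, 0.08e-3)

# forward model: eCAP = firing distribution convolved with unit response
ecap = np.convolve(firing, unit)[:len(t)] / fs
```

Deconvolution, as used in the study, runs this model in reverse: given a measured eCAP and an assumed unit response, it estimates the firing distribution and then parameterises it with the two Gaussians.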
Al Abed, Amr; Yin, Shijie; Suaning, Gregg J; Lovell, Nigel H; Dokos, Socrates
2012-01-01
Computational models are valuable tools that can be used to aid the design and test the efficacy of electrical stimulation strategies in prosthetic vision devices. In continuum models of retinal electrophysiology, the effective extracellular potential can be considered as an approximate measure of the electrotonic loading a neuron's dendritic tree exerts on the soma. A convolution based method is presented to calculate the local spatial average of the effective extracellular loading in retinal ganglion cells (RGCs) in a continuum model of the retina which includes an active RGC tissue layer. The method can be used to study the effect of the dendritic tree size on the activation of RGCs by electrical stimulation using a hexagonal arrangement of electrodes (hexpolar) placed in the suprachoroidal space.
A convolutional code-based sequence analysis model and its application.
Liu, Xiao; Geng, Xiaoli
2013-04-16
A new approach for encoding DNA sequences as input for DNA sequence analysis is proposed using the error correction coding theory of communication engineering. The encoder was designed as a convolutional code model whose generator matrix is designed based on the degeneracy of codons, with a codon treated in the model as an informational unit. The utility of the proposed model was demonstrated through the analysis of twelve prokaryote and nine eukaryote DNA sequences having different GC contents. Distinct differences in code distances were observed near the initiation and termination sites in the open reading frame, which provided a well-regulated characterization of the DNA sequences. Clearly distinguished period-3 features appeared in the coding regions, and the characteristic average code distances of the analyzed sequences were approximately proportional to their GC contents, particularly in the selected prokaryotic organisms, presenting the potential utility as an added taxonomic characteristic for use in studying the relationships of living organisms.
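The paper's codon-based generator matrix is not reproduced here, but the encoder machinery it builds on can be sketched with a textbook rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal) applied to a 2-bit-per-base DNA encoding. The base-to-bit mapping is an assumption for illustration:

```python
# assumed 2-bit encoding of nucleotides
BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

def conv_encode(bits):
    """Rate-1/2, constraint-length-3 convolutional encoder (7, 5 octal)."""
    s1 = s2 = 0                       # two-stage shift register, zero initial state
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)       # generator polynomial 111 (7 octal)
        out.append(b ^ s2)            # generator polynomial 101 (5 octal)
        s1, s2 = b, s1                # shift the register
    return out

dna = "ATGGCT"
bits = [b for base in dna for b in BASE_BITS[base]]
code = conv_encode(bits)              # two output bits per input bit
```

In the paper's scheme the informational unit is the codon rather than the bit, and code distances along the encoded stream are what reveal the initiation/termination sites and period-3 structure.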
Convolutional modeling of diffraction effects in pulse-echo ultrasound imaging
Mast, T. Douglas
2010-01-01
A model is presented for pulse-echo imaging of three-dimensional, linear, weakly scattering continuum media by ultrasound array transducers. The model accounts for the diffracted fields of focused array subapertures in both transmit and receive modes, multiple transmit and receive focal zones, frequency-dependent attenuation, and aberration caused by mismatched medium and beamformer sound speeds. For a given medium reflectivity function, computation of a B-scan requires evaluation of a depth-dependent transmit/receive beam product, followed by two one-dimensional convolutions and a one-dimensional summation. Numerical results obtained using analytic expressions for transmit and receive beams agree favorably with measured B-scan images and speckle statistics. PMID:20815433
Real-time dose computation: GPU-accelerated source modeling and superposition/convolution
Jacques, Robert; Wong, John; Taylor, Russell; McNutt, Todd
2011-01-15
Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material-centric approach, which increased performance by over 80%. Superposition performance was improved by ≈24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation.
SU-E-T-08: A Convolution Model for Head Scatter Fluence in the Intensity Modulated Field
Chen, M; Mo, X; Chen, Y; Parnell, D; Key, S; Olivera, G; Galmarini, W; Lu, W
2014-06-01
Purpose: To efficiently calculate the head scatter fluence for an arbitrary intensity-modulated field with any source distribution using the source occlusion model. Method: The source occlusion model with focal and extra-focal radiation (Jaffray et al, 1993) can be used to account for LINAC head scatter. In the model, the fluence map of any field shape at any point can be calculated via integration of the source distribution within the visible range, as confined by each segment, using the detector's eye view. A 2D integration would be required for each segment and each fluence plane point, which is time-consuming, as an intensity-modulated field typically contains tens to hundreds of segments. In this work, we prove that the superposition of the segmental integrations is equivalent to a simple convolution regardless of the source distribution. In fact, for each point, the detector's eye view of the field shape can be represented as a function with the origin defined at the point's pinhole reflection through the center of the collimator plane. We were thus able to reduce hundreds of source-plane integrations to one convolution. We calculated the fluence map for various 3D and IMRT beams and various extra-focal source distributions using both the segmental integration approach and the convolution approach, and compared the computation time and fluence map results of both approaches. Results: The fluence maps calculated using the convolution approach were the same as those calculated using the segmental approach, except for rounding errors (<0.1%). While it took considerably longer to calculate all segmental integrations, the fluence map calculation using the convolution approach took only ∼1/3 of the time for typical IMRT fields with ∼100 segments. Conclusions: The convolution approach for head scatter fluence calculation is fast and accurate and can be used to enhance the online process.
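The claimed equivalence, a per-point integration of the source over the visible aperture versus a single convolution, can be checked numerically in a grossly simplified 1D geometry with unit magnification (source distribution and aperture width are invented):

```python
import numpy as np

x = np.arange(-32.0, 33.0)               # 1D source-plane coordinate grid
S = np.exp(-(x / 4.0) ** 2)              # assumed extra-focal source distribution
A = (np.abs(x) <= 10).astype(float)      # aperture seen from a point (symmetric)

# segmental-style calculation: for each fluence point p, integrate the
# source over the aperture as seen from that point
fluence_int = np.array([np.sum(S * (np.abs(x - p) <= 10)) for p in x])

# one convolution of the source with the aperture shape gives the same map
fluence_conv = np.convolve(S, A, mode="same")
```

Here the per-point loop costs O(n) integrations, while the convolution is a single operation (and FFT-accelerable), mirroring the ~3x speedup reported for ~100-segment fields.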
Analysis of similarity/dissimilarity of DNA sequences based on convolutional code model.
Liu, Xiao; Tian, Feng Chun; Wang, Shi Yuan
2010-02-01
Based on the convolutional code model of error-correction coding theory, we propose an approach to characterize and compare DNA sequences with consideration of the effect of codon context. We construct an 8-component vector whose components are the normalized leading eigenvalues of the L/L and M/M matrices associated with the original DNA sequences and the transformed sequences. The utility of our approach is illustrated by examining the similarities/dissimilarities among the coding sequences of the first exon of the beta-globin gene of 11 species, demonstrating the efficiency of error-correction coding theory in the analysis of DNA sequence similarity/dissimilarity.
Dose convolution filter: Incorporating spatial dose information into tissue response modeling
Huang Yimei; Joiner, Michael; Zhao Bo; Liao Yixiang; Burmeister, Jay
2010-03-15
Purpose: A model is introduced to integrate biological factors such as cell migration and bystander effects into physical dose distributions, and to incorporate spatial dose information in plan analysis and optimization. Methods: The model consists of a dose convolution filter (DCF) with a single parameter σ. Tissue response is calculated by an existing NTCP model with the DCF-applied dose distribution as input. The authors determined σ of rat spinal cord from published data. The authors also simulated the GRID technique, in which an open field is collimated into many pencil beams. Results: After applying the DCF, the NTCP model successfully fits the rat spinal cord data with a predicted value of σ=2.6±0.5 mm, consistent with 2 mm migration distances of remyelinating cells. Moreover, it enables the appropriate prediction of a high relative seriality for spinal cord. The model also predicts the sparing of normal tissues by the GRID technique when the size of each pencil beam becomes comparable to σ. Conclusions: The DCF model incorporates spatial dose information and offers an improved way to estimate tissue response from complex radiotherapy dose distributions. It does not alter the prediction of tissue response in large homogeneous fields, but successfully predicts increased tissue tolerance in small or highly nonuniform fields.
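A minimal sketch of the DCF idea: convolve the physical dose profile with a unit-area Gaussian of width σ before passing it to a tissue-response model. The grid spacing, σ value, and GRID-like dose pattern below are illustrative:

```python
import numpy as np

def dcf(dose, sigma_mm, dx_mm):
    """Dose convolution filter: Gaussian smoothing with parameter sigma."""
    hw = int(4 * sigma_mm / dx_mm)            # kernel half-width in samples
    u = np.arange(-hw, hw + 1) * dx_mm
    g = np.exp(-u ** 2 / (2.0 * sigma_mm ** 2))
    g /= g.sum()                              # unit area: dose is redistributed, not lost
    return np.convolve(dose, g, mode="same")

# GRID-like 1D dose: narrow high-dose stripes separated by cold gaps (1 mm grid)
dose = np.tile(np.r_[np.full(2, 20.0), np.zeros(8)], 20)
filtered = dcf(dose, sigma_mm=2.6, dx_mm=1.0)
```

When the stripe width is comparable to σ, the filtered peaks drop well below the physical peak dose and the cold gaps fill in, which is the mechanism behind the predicted normal-tissue sparing.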
The Luminous Convolution Model-The light side of dark matter
NASA Astrophysics Data System (ADS)
Cisneros, Sophia; Oblath, Noah; Formaggio, Joe; Goedecke, George; Chester, David; Ott, Richard; Ashley, Aaron; Rodriguez, Adrianna
2014-03-01
We present a heuristic model for predicting the rotation curves of spiral galaxies. The Luminous Convolution Model (LCM) utilizes Lorentz-type transformations of very small changes in the photons' frequencies from curved space-times to construct a dynamic mass model of galaxies. These frequency changes are derived using the exact solution to the exterior Kerr wave equation, as opposed to a linearized treatment. The LCM Lorentz-type transformations map between the emitter and the receiver rotating galactic frames, and then to the associated flat frames in each galaxy where the photons are emitted and received. This treatment necessarily rests upon estimates of the luminous matter in both the emitter and the receiver galaxies. The LCM is tested on a sample of 22 randomly chosen galaxies, represented in 33 different data sets. LCM fits are compared to the Navarro, Frenk & White (NFW) dark matter model and to the Modified Newtonian Dynamics (MOND) model when possible. The high degree of sensitivity of the LCM to the initially assumed luminous mass-to-light ratio (M/L) of the given galaxy is demonstrated. We demonstrate that the LCM successfully predicts the observed rotation curves across a wide range of spiral galaxies. This work was conducted through the generous support of the MIT Dr. Martin Luther King Jr. Fellowship program.
Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network
Li, Na; Yang, Yongjia
2016-01-01
Humans can easily classify different kinds of objects, whereas it is quite difficult for computers. As a hot and difficult problem, object classification has been receiving extensive interest, with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed. The convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. However, most deep learning methods, including CNN, ignore the human visual information processing mechanism at work when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we propose a new classification method that combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use a CNN to simulate how humans select features and to extract the local features of the selected areas. Finally, our classification method depends not only on those local features but also on added human semantic features to classify objects. Our classification method has apparent advantages in biological plausibility. Experimental results demonstrated that our method significantly improved classification efficiency. PMID:27803711
NASA Astrophysics Data System (ADS)
Crowley, Meagan
2016-03-01
The Luminous Convolution Model maps velocities of galaxies given by data of visible matter with respect to the relative curvature of the emitter and receiver galaxy, using five different models of the Milky Way. This model holds that observations of the luminous profiles of galaxies do not take the relative curvatures of the emitter and receiver galaxies into account, and thus it maps the luminous profile onto the curvature using Lorentz transformations, and then back into the flat frame where local observations are made. The five models of the Milky Way used to compile galaxy data are those proposed by Klypin (2002) A and B, Xue (2008), Sofue (2013), and a mixture of the Xue and Sofue data. The Luminous Convolution Model has been able to accurately describe the rotation of spiral galaxies through this method without the need for dark matter. In each fitting of a given galaxy, the luminous profile graph exhibits a crossing with the graph of the curvature component, suggesting a correlation between the two. This correlation is currently under investigation as being related to phenomena apparent within each galaxy. To determine the correlation between the luminous profile and the curvature component, a functional analysis of the Luminous Convolution Model will be presented.
Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng
2016-01-01
This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and convolutional perfectly matched layer (CPML). In the process to strengthen CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; the conclusion was that this dispersion was positively related to the real stretch and was little affected by grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively amplified the dispersion laws through wave field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for CPML from the established criteria. Based on its physical nature, the CPML method of simply warping space-time was predicted to be a promising approach to achieve ideal absorption, although it was still difficult to entirely remove the dispersion. PMID:27585538
NASA Astrophysics Data System (ADS)
Starn, J. J.
2013-12-01
Particle tracking often is used to generate particle-age distributions that are used as impulse-response functions in convolution. A typical application is to produce groundwater solute breakthrough curves (BTC) at endpoint receptors such as pumping wells or streams. The commonly used semi-analytical particle-tracking algorithm based on the assumption of linear velocity gradients between opposing cell faces is computationally very fast when used in combination with finite-difference models. However, large gradients near pumping wells in regional-scale groundwater-flow models often are not well represented because of cell-size limitations. This leads to inaccurate velocity fields, especially at weak sinks. Accurate analytical solutions for velocity near a pumping well are available, and various boundary conditions can be imposed using image-well theory. Python can be used to embed these solutions into existing semi-analytical particle-tracking codes, thereby maintaining the integrity and quality-assurance of the existing code. Python (and associated scientific computational packages NumPy, SciPy, and Matplotlib) is an effective tool because of its wide ranging capability. Python text processing allows complex and database-like manipulation of model input and output files, including binary and HDF5 files. High-level functions in the language include ODE solvers to solve first-order particle-location ODEs, Gaussian kernel density estimation to compute smooth particle-age distributions, and convolution. The highly vectorized nature of NumPy arrays and functions minimizes the need for computationally expensive loops. A modular Python code base has been developed to compute BTCs using embedded analytical solutions at pumping wells based on an existing well-documented finite-difference groundwater-flow simulation code (MODFLOW) and a semi-analytical particle-tracking code (MODPATH). The Python code base is tested by comparing BTCs with highly discretized synthetic steady
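The particle-age-to-BTC workflow above can be sketched in a few lines of NumPy. The travel times, kernel bandwidth, and input history below are all made-up stand-ins: a handful of particle ages is smoothed into a discrete age distribution (in place of a full Gaussian kernel density estimate), which is then convolved with a step input of solute to produce a breakthrough curve.

```python
import numpy as np

# Smooth a few particle travel times into a discrete age distribution
# (a stand-in for Gaussian kernel density estimation over many particles).
t = np.arange(60)                                  # years
ages = np.array([5.0, 8.0, 12.0, 20.0, 35.0])      # particle travel times
bw = 3.0                                           # kernel bandwidth (years)
pmf = np.exp(-0.5 * ((t[:, None] - ages) / bw) ** 2).sum(axis=1)
pmf /= pmf.sum()                                   # discrete unit impulse response

# Convolve the age distribution with a step solute input starting at t = 10 yr
# to obtain the breakthrough curve (BTC) at the receptor.
c_in = np.where(t >= 10, 1.0, 0.0)
btc = np.convolve(c_in, pmf)[: t.size]
```

Because the input is a step and the age distribution is nonnegative, the BTC rises monotonically toward the input concentration as the oldest water arrives.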
Convolution-Based Forced Detection Monte Carlo Simulation Incorporating Septal Penetration Modeling
Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.
2010-01-01
In SPECT imaging, photon transport effects such as scatter, attenuation and septal penetration can negatively affect the quality of the reconstructed image and the accuracy of quantitation estimation. As such, it is useful to model these effects as carefully as possible during the image reconstruction process. Many of these effects can be included in Monte Carlo (MC) based image reconstruction using convolution-based forced detection (CFD). With CFD Monte Carlo (CFD-MC), often only the geometric response of the collimator is modeled, thereby making the assumption that the collimator materials are thick enough to completely absorb photons. However, in order to retain high collimator sensitivity and high spatial resolution, it is required that the septa be as thin as possible, thus resulting in a significant amount of septal penetration for high energy radionuclides. A method for modeling the effects of both collimator septal penetration and geometric response using ray tracing (RT) techniques has been developed and included in a CFD-MC program. Two look-up tables are pre-calculated based on the specific collimator parameters and radionuclides, and subsequently incorporated into the SIMIND MC program. One table consists of the cumulative septal thickness between any point on the collimator and the center location of the collimator. The other table presents the resultant collimator response for a point source at different distances from the collimator and for various energies. A series of RT simulations have been compared to experimental data for different radionuclides and collimators. Results of the RT technique match experimental data of collimator response very well, producing correlation coefficients higher than 0.995. Reasonable values of the parameters in the lookup table and computation speed are discussed in order to achieve high accuracy while using minimal storage space for the look-up tables. In order to achieve noise-free projection images from MC, it
Long, A.J.; Putnam, L.D.
2009-01-01
Convolution modeling is useful for investigating the temporal distribution of groundwater age based on environmental tracers. The framework of a quasi-transient convolution model that is applicable to two-domain flow in karst aquifers is presented. The model was designed to provide an acceptable level of statistical confidence in parameter estimates when only chlorofluorocarbon (CFC) and tritium (3H) data are available. We show how inverse modeling and uncertainty assessment can be used to constrain model parameterization to a level warranted by available data while allowing major aspects of the flow system to be examined. As an example, the model was applied to water from a pumped well open to the Madison aquifer in central USA with input functions of CFC-11, CFC-12, CFC-113, and 3H, and was calibrated to several samples collected during a 16-year period. A bimodal age distribution was modeled to represent quick and slow flow less than 50 years old. The effects of pumping and hydraulic head on the relative volumetric fractions of these domains were found to be influential factors for transient flow. Quick flow and slow flow were estimated to be distributed mainly within the age ranges of 0-2 and 26-41 years, respectively. The fraction of long-term flow (>50 years) was estimated but was not dateable. The different tracers had different degrees of influence on parameter estimation and uncertainty assessments, where 3H was the most critical, and CFC-113 was least influential.
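The two-domain idea above can be illustrated with a bimodal age distribution: the modeled well concentration in each sampling year is the tracer input history convolved with a mixture of a quick-flow and a slow-flow age distribution. The fractions, age ranges, and the linear tracer ramp below are illustrative stand-ins, not the calibrated Madison aquifer values.

```python
import numpy as np

# Two-domain (quick/slow) age mixture: uniform quick flow on 0-2 yr and
# uniform slow flow on 26-41 yr, mixed with an assumed quick-flow fraction.
tau = np.arange(60)                                  # age (years)
g_quick = ((tau >= 0) & (tau <= 2)).astype(float)
g_slow = ((tau >= 26) & (tau <= 41)).astype(float)
g_quick /= g_quick.sum()
g_slow /= g_slow.sum()
f_quick = 0.4                                        # illustrative fraction
g = f_quick * g_quick + (1 - f_quick) * g_slow

# Illustrative rising tracer input (a CFC-like ramp beginning in 1950).
years = np.arange(1940, 2000)
c_atm = np.clip(years - 1950, 0, None).astype(float)

# Modeled well concentration: input history convolved with the age mixture.
c_well = np.convolve(c_atm, g)[: years.size]
```

Calibration would adjust `f_quick` and the age ranges until `c_well` matches sampled tracer concentrations; the slow-flow domain drags the modeled concentration well below the contemporary atmospheric value.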
Plasma evolution and dynamics in high-power vacuum-transmission-line post-hole convolutes
NASA Astrophysics Data System (ADS)
Rose, D. V.; Welch, D. R.; Hughes, T. P.; Clark, R. E.; Stygar, W. A.
2008-06-01
Vacuum-post-hole convolutes are used in pulsed high-power generators to join several magnetically insulated transmission lines (MITL) in parallel. Such convolutes add the output currents of the MITLs, and deliver the combined current to a single MITL that, in turn, delivers the current to a load. Magnetic insulation of electron flow, established upstream of the convolute region, is lost at the convolute due to symmetry breaking and the formation of magnetic nulls, resulting in some current losses. At very high-power operating levels and long pulse durations, the expansion of electrode plasmas into the MITL of such devices is considered likely. This work examines the evolution and dynamics of cathode plasmas in the double-post-hole convolutes used on the Z accelerator [R. B. Spielman , Phys. Plasmas 5, 2105 (1998)PHPAEN1070-664X10.1063/1.872881]. Three-dimensional particle-in-cell (PIC) simulations that model the entire radial extent of the Z accelerator convolute—from the parallel-plate transmission-line power feeds to the z-pinch load region—are used to determine electron losses in the convolute. The results of the simulations demonstrate that significant current losses (1.5 MA out of a total system current of 18.5 MA), which are comparable to the losses observed experimentally, could be caused by the expansion of cathode plasmas in the convolute regions.
Ellison, David H.
2014-01-01
The distal convoluted tubule is the nephron segment that lies immediately downstream of the macula densa. Although short in length, the distal convoluted tubule plays a critical role in sodium, potassium, and divalent cation homeostasis. Recent genetic and physiologic studies have greatly expanded our understanding of how the distal convoluted tubule regulates these processes at the molecular level. This article provides an update on the distal convoluted tubule, highlighting concepts and pathophysiology relevant to clinical practice. PMID:24855283
Convolution-variation separation method for efficient modeling of optical lithography.
Liu, Shiyuan; Zhou, Xinjiang; Lv, Wen; Xu, Shuang; Wei, Haiqing
2013-07-01
We propose a general method called convolution-variation separation (CVS) to enable efficient optical imaging calculations without sacrificing accuracy when simulating images for a wide range of process variations. The CVS method is derived from first principles using a series expansion, which consists of a set of predetermined basis functions weighted by a set of predetermined expansion coefficients. The basis functions are independent of the process variations and thus may be computed and stored in advance, while the expansion coefficients depend only on the process variations. Optical image simulations for defocus and aberration variations with applications in robust inverse lithography technology and lens aberration metrology have demonstrated the main concept of the CVS method.
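The CVS separation can be sketched numerically: the image under a variation v is approximated as I(v) = Σ_k c_k(v) · (mask ⊛ B_k), where the convolutions with the fixed basis kernels B_k are computed once, and only the scalar coefficients c_k(v) depend on the variation. The basis kernels and the defocus coefficient formula below are toy assumptions, not the paper's expansion.

```python
import numpy as np

def conv2_same(a, k):
    """Minimal 'same'-size 2-D convolution via zero padding (no SciPy)."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    ap = np.pad(a, ((ph, ph), (pw, pw)))
    out = np.zeros_like(a, dtype=float)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = (ap[i:i + k.shape[0], j:j + k.shape[1]]
                         * k[::-1, ::-1]).sum()
    return out

mask = np.zeros((16, 16))
mask[6:10, 6:10] = 1.0                          # a square mask feature

B0 = np.ones((3, 3)) / 9.0                      # in-focus blur basis (toy)
B1 = np.zeros((3, 3))                           # Laplacian-like defocus basis
B1[1, 1] = -4.0
B1[0, 1] = B1[2, 1] = B1[1, 0] = B1[1, 2] = 1.0

# Variation-independent part: compute and store once.
precomp = [conv2_same(mask, B) for B in (B0, B1)]

def image_at(defocus):
    c = (1.0, 0.05 * defocus**2)                # hypothetical coefficients c_k(v)
    return c[0] * precomp[0] + c[1] * precomp[1]

img = image_at(defocus=0.5)
```

Sweeping `defocus` now costs only a weighted sum of the stored arrays, which is the source of the method's efficiency for process-variation studies.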
Barraclough, Brendan; Li, Jonathan G; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-21
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
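The reoptimization idea can be sketched in one dimension: rather than deconvolving the measurement, convolve the calculated profile with the same detector response and tune the penumbra parameter until the two agree. The error-function field edge, the 6 mm boxcar chamber response, and the grid search below are illustrative simplifications of the TPS beam-model optimization.

```python
import numpy as np
from math import erf

x = np.linspace(-30.0, 30.0, 601)                 # off-axis position (mm)
dx = x[1] - x[0]

def profile(sigma):
    """Ideal 20 mm field edge blurred by a penumbra parameter sigma (mm)."""
    return np.array([0.5 * (erf((10.0 - xi) / (sigma * 2**0.5))
                            + erf((10.0 + xi) / (sigma * 2**0.5)))
                     for xi in x])

def detector_response(length_mm):
    """Boxcar response of a chamber with the given active length (mm)."""
    n = max(int(round(length_mm / dx)) | 1, 1)    # odd number of samples
    return np.ones(n) / n

resp = detector_response(6.0)
# Pretend measurement: the "true" penumbra (sigma = 3 mm) seen through
# the chamber's volume averaging.
measured = np.convolve(profile(3.0), resp, mode="same")

# Reoptimize: find the penumbra parameter whose *convolved* profile
# best matches the measured profile.
sigmas = np.arange(1.0, 6.001, 0.25)
errors = [np.abs(np.convolve(profile(s), resp, mode="same") - measured).max()
          for s in sigmas]
best_sigma = sigmas[int(np.argmin(errors))]
```

Because the model and the measurement are subjected to the identical response, the optimum recovers the underlying penumbra without any deconvolution step.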
NASA Astrophysics Data System (ADS)
Xu, Zhigang
2015-12-01
In this study, a new method of storm surge modeling is proposed. This method is orders of magnitude faster than the traditional method within the linear dynamics framework. The tremendous enhancement of the computational efficiency results from the use of a pre-calculated all-source Green's function (ASGF), which connects a point of interest (POI) to the rest of the world ocean. Once the ASGF has been pre-calculated, it can be repeatedly used to quickly produce a time series of a storm surge at the POI. Using the ASGF, storm surge modeling can be simplified as its convolution with an atmospheric forcing field. If the ASGF is prepared with the global ocean as the model domain, the output of the convolution is free of the effects of artificial open-water boundary conditions. As the first part of this study, this paper presents mathematical derivations from the linearized and depth-averaged shallow-water equations to the ASGF convolution, establishes various auxiliary concepts that will be useful throughout the study, and interprets the meaning of the ASGF from different perspectives. This paves the way for the ASGF convolution to be further developed as a data-assimilative regression model in part II. Five appendices provide additional details about the algorithm and the MATLAB functions.
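The computational shortcut is easy to see in miniature: once a Green's function (the response at the POI to a unit forcing impulse) is available, any surge time series is a single convolution with the forcing history. The damped-oscillator Green's function and the 24-hour wind event below are synthetic stand-ins for a pre-calculated ASGF and an atmospheric forcing field.

```python
import numpy as np

dt = 1.0                                            # hours
t = np.arange(0.0, 120.0, dt)
asgf = np.exp(-t / 12.0) * np.sin(t / 6.0)          # hypothetical unit response
forcing = np.where((t >= 24) & (t < 48), 1.0, 0.0)  # a 24-hour wind event

# Surge at the point of interest: forcing history convolved with the ASGF.
surge = np.convolve(forcing, asgf)[: t.size] * dt
```

A new forcing scenario costs only another convolution; no re-run of the global ocean model is needed, which is the source of the claimed orders-of-magnitude speedup.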
Search for optimal distance spectrum convolutional codes
NASA Technical Reports Server (NTRS)
Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.
1993-01-01
In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of their deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes. Thus, to find good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.
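A search of the kind described evaluates candidate generator polynomials by their distance properties. The sketch below encodes with a rate-1/2 feed-forward encoder using the classic memory-2 generators (7, 5) in octal, then brute-forces the minimum nonzero terminated-codeword weight, which for this small code equals its free distance of 5; a real search over memory-14 codes uses far more efficient tree-search algorithms than this exhaustive loop.

```python
from itertools import product

def encode(bits, gens=(0b111, 0b101), memory=2):
    """Rate-1/2 feed-forward convolutional encoder with zero-tail termination."""
    state, out = 0, []
    for b in list(bits) + [0] * memory:             # flush with zero tail
        state = ((state << 1) | b) & ((1 << (memory + 1)) - 1)
        # Each generator taps a subset of the (current + past) input bits.
        out += [bin(state & g).count("1") % 2 for g in gens]
    return out

# Minimum weight over all nonzero 6-bit inputs: a crude distance metric
# a code search would use to rank generator pairs.
dmin = min(sum(encode(u)) for u in product([0, 1], repeat=6) if any(u))
```

Ranking many `(g0, g1)` pairs by this metric (and, more finely, by the full distance spectrum) is the essence of the exhaustive search the paper accelerates.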
NONSTATIONARY SPATIAL MODELING OF ENVIRONMENTAL DATA USING A PROCESS CONVOLUTION APPROACH
Traditional approaches to modeling spatial processes involve the specification of the covariance structure of the field. Although such methods are straightforward to understand and effective in some situations, there are often problems in incorporating non-stationarity and in ma...
A Convolutional Subunit Model for Neuronal Responses in Macaque V1
Vintch, Brett; Movshon, J. Anthony
2015-01-01
The response properties of neurons in the early stages of the visual system can be described using the rectified responses of a set of self-similar, spatially shifted linear filters. In macaque primary visual cortex (V1), simple cell responses can be captured with a single filter, whereas complex cells combine a set of filters, creating position invariance. These filters cannot be estimated using standard methods, such as spike-triggered averaging. Subspace methods like spike-triggered covariance can recover multiple filters but require substantial amounts of data, and recover an orthogonal basis for the subspace in which the filters reside, rather than the filters themselves. Here, we assume a linear-nonlinear-linear-nonlinear (LN-LN) cascade model in which the first LN stage consists of shifted (“convolutional”) copies of a single filter, followed by a common instantaneous nonlinearity. We refer to these initial LN elements as the “subunits” of the receptive field, and we allow two independent sets of subunits, each with its own filter and nonlinearity. The second linear stage computes a weighted sum of the subunit responses and passes the result through a final instantaneous nonlinearity. We develop a procedure to directly fit this model to electrophysiological data. When fit to data from macaque V1, the subunit model significantly outperforms three alternatives in terms of cross-validated accuracy and efficiency, and provides a robust, biologically plausible account of receptive field structure for all cell types encountered in V1. SIGNIFICANCE STATEMENT We present a new subunit model for neurons in primary visual cortex that significantly outperforms three alternative models in terms of cross-validated accuracy and efficiency, and provides a robust and biologically plausible account of the receptive field structure in these neurons across the full spectrum of response properties. PMID:26538653
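The LN-LN cascade can be sketched in one dimension: shifted copies of a single subunit filter (a convolution), a shared rectifying nonlinearity, a weighted sum over subunit outputs, and a final output nonlinearity. The filter taps, Gaussian pooling weights, and softplus output below are illustrative choices, not fitted V1 parameters, and only one subunit channel is shown rather than the paper's two.

```python
import numpy as np

rng = np.random.default_rng(0)
stimulus = rng.standard_normal(200)                # toy 1-D stimulus

# First LN stage: convolution with one shared subunit filter, then a
# shared instantaneous rectification.
subunit_filter = np.array([-0.5, 1.0, -0.5])
drive = np.convolve(stimulus, subunit_filter, mode="valid")
subunit_out = np.maximum(drive, 0.0)

# Second LN stage: weighted sum over subunit positions (a pooling map),
# then a final output nonlinearity (numerically stable softplus).
centers = np.linspace(-2.0, 2.0, subunit_out.size)
weights = np.exp(-0.5 * centers**2)
pooled = weights @ subunit_out
rate = np.log1p(np.exp(-abs(pooled))) + max(pooled, 0.0)
```

Fitting such a model means estimating one filter, one pooling map, and the nonlinearities jointly, which is far fewer parameters than estimating an unconstrained bank of filters and is why it needs less data than spike-triggered covariance.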
Hunter, Robert W; Ivy, Jessica R; Flatman, Peter W; Kenyon, Christopher J; Craigie, Eilidh; Mullins, Linda J; Bailey, Matthew A; Mullins, John J
2015-07-01
Na(+) transport in the renal distal convoluted tubule (DCT) by the thiazide-sensitive NaCl cotransporter (NCC) is a major determinant of total body Na(+) and BP. NCC-mediated transport is stimulated by aldosterone, the dominant regulator of chronic Na(+) homeostasis, but the mechanism is controversial. Transport may also be affected by epithelial remodeling, which occurs in the DCT in response to chronic perturbations in electrolyte homeostasis. Hsd11b2(-/-) mice, which lack the enzyme 11β-hydroxysteroid dehydrogenase type 2 (11βHSD2) and thus exhibit the syndrome of apparent mineralocorticoid excess, provided an ideal model in which to investigate the potential for DCT hypertrophy to contribute to Na(+) retention in a hypertensive condition. The DCTs of Hsd11b2(-/-) mice exhibited hypertrophy and hyperplasia and the kidneys expressed higher levels of total and phosphorylated NCC compared with those of wild-type mice. However, the striking structural and molecular phenotypes were not associated with an increase in the natriuretic effect of thiazide. In wild-type mice, Hsd11b2 mRNA was detected in some tubule segments expressing Slc12a3, but 11βHSD2 and NCC did not colocalize at the protein level. Thus, the phosphorylation status of NCC may not necessarily equate to its activity in vivo, and the structural remodeling of the DCT in the knockout mouse may not be a direct consequence of aberrant corticosteroid signaling in DCT cells. These observations suggest that the conventional concept of mineralocorticoid signaling in the DCT should be revised to recognize the complexity of NCC regulation by corticosteroids.
An energy fluence-convolution model for amorphous silicon EPID dose prediction
Greer, Peter B.; Cadman, Patrick; Lee, Christopher; Bzdusek, Karl
2009-02-15
In this work, an amorphous silicon electronic portal imaging device (a-Si EPID) dose prediction model based on the energy fluence model of the Pinnacle treatment planning system Version 7 (Philips Medical Systems, Madison, WI) is developed. An energy fluence matrix at very high resolution (<1 mm) is used to incorporate multileaf collimator (MLC) leaf effects in the predicted EPID images. The primary dose deposited in the EPID is calculated from the energy fluence using experimentally derived radially dependent EPID interaction coefficients. Separate coefficients are used for the open beam energy fluence component and the component of the energy fluence transmitted through closed MLC leaves to each EPID pixel. A spatially invariant EPID dose deposition kernel that describes both radiative dose deposition, central axis EPID backscatter, and optical glare is convolved with the primary dose. The kernel is further optimized to give accurate EPID penumbra prediction and EPID scatter factor with changing MLC field size. An EPID calibration method was developed to reduce the effect of nonuniform backscatter from the support arm (E-arm) in a calibrated EPID image. This method removes the backscatter component from the pixel sensitivity (flood field) correction matrix retaining only field-specific backscatter in the images. The model was compared to EPID images for jaw and MLC defined open fields and eight head and neck intensity modulated radiotherapy (IMRT) fields. For the head and neck IMRT fields with 2%, 2 mm criteria 97.6 ± 0.6% (mean ± 1 standard deviation) of points passed with a gamma index less than 1, and for 3%, 3 mm 99.4 ± 0.4% of points were within the criteria. For these fields, the 2%, 2 mm pass score reduced to 96.0 ± 1.5% when backscatter was present in the pixel sensitivity correction matrix. The model incorporates the effect of MLC leaf transmission, EPID response to open and MLC leakage dose components, and accurately predicts EPID images of IMRT
Hyper-chaos encryption using convolutional masking and model free unmasking
NASA Astrophysics Data System (ADS)
Qi, Guo-Yuan; Sandra Bazebo, Matondo
2014-05-01
In this paper, during the masking process the encrypted message is convolved and embedded into a Qi hyper-chaotic system characterizing a high disorder degree. The masking scheme was tested using both Qi hyper-chaos and Lorenz chaos and indicated that Qi hyper-chaos based masking can resist attacks of the filtering and power spectrum analysis, while the Lorenz based scheme fails for high amplitude data. To unmask the message at the receiving end, two methods are proposed. In the first method, a model-free synchronizer, i.e. a multivariable higher-order differential feedback controller between the transmitter and receiver is employed to de-convolve the message embedded in the receiving signal. In the second method, no synchronization is required since the message is de-convolved using the information of the estimated derivative.
Asymmetric quantum convolutional codes
NASA Astrophysics Data System (ADS)
La Guardia, Giuliano G.
2016-01-01
In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.
Predicting Flow-Induced Vibrations In A Convoluted Hose
NASA Technical Reports Server (NTRS)
Harvey, Stuart A.
1994-01-01
Composite model constructed from two less accurate models. Approximately predicts frequencies and modes of vibrations induced by flows of various fluids in convoluted hose. Based partly on spring-and-lumped-mass representation of dynamics involving springiness and mass of convolutions of hose and density of fluid in hose.
On models of double porosity poroelastic media
NASA Astrophysics Data System (ADS)
Boutin, Claude; Royer, Pascale
2015-12-01
This paper focuses on the modelling of fluid-filled poroelastic double porosity media under quasi-static and dynamic regimes. The double porosity model is derived from a two-scale homogenization procedure, by considering a medium locally characterized by blocks of poroelastic Biot microporous matrix and a surrounding system of fluid-filled macropores or fractures. The derived double porosity description is a two-pressure field poroelastic model with memory and viscoelastic effects. These effects result from the 'time-dependent' interaction between the pressure fields in the two pore networks. It is shown that this homogenized double porosity behaviour arises when the characteristic time of consolidation in the microporous domain is of the same order of magnitude as the macroscopic characteristic time of transient regime. Conversely, single porosity behaviours occur when both timescales are clearly distinct. Moreover, it is established that the phenomenological approaches that postulate the coexistence of two pressure fields in 'instantaneous' interaction only describe media with two pore networks separated by an interface flow barrier. Hence, they fail to predict and reproduce the behaviour of usual double porosity media. Finally, the results are illustrated for the case of stratified media.
Molecular graph convolutions: moving beyond fingerprints.
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
Understanding deep convolutional networks.
Mallat, Stéphane
2016-04-13
Deep convolutional networks provide state-of-the-art classification and regression results for many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries and sparse separations. Applications are discussed. PMID:26953183
Convolution-deconvolution in DIGES
Philippacopoulos, A.J.; Simos, N.
1995-05-01
Convolution and deconvolution operations are an important aspect of soil-structure interaction (SSI) analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations, involving a stack of layers overlying rock, is treated by DIGES in a fashion similar to that of available codes, e.g. CARES and SHAKE. For certain configurations, however, there is no need to perform such analyses, since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions, given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention, since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations: (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities.
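The frequency-domain treatment can be illustrated with a toy 1-D example: convolve a base motion with a hypothetical layer impulse response by multiplying spectra, then deconvolve by dividing them. A naive O(N²) DFT keeps the sketch dependency-free; this illustrates the principle only, not the DIGES transfer functions:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

h = [1.0, 0.5, 0.25, 0.0]    # hypothetical layer impulse response
base = [0.0, 1.0, 0.0, 0.0]  # base motion (a unit pulse)
H, B = dft(h), dft(base)
surface = idft([Hk * Bk for Hk, Bk in zip(H, B)])                # convolution
recovered = idft([Sk / Hk for Sk, Hk in zip(dft(surface), H)])   # deconvolution
print([round(v, 6) for v in recovered])  # recovers the base motion
```

Division by the transfer spectrum undoes the convolution exactly here because the toy response has no spectral zeros; real deconvolution requires care at frequencies where the transfer function is small.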
A double pendulum model of tennis strokes
NASA Astrophysics Data System (ADS)
Cross, Rod
2011-05-01
The physics of swinging a tennis racquet is examined by modeling the forearm and the racquet as a double pendulum. We consider differences between a forehand and a serve, and show how they differ from the swing of a bat and a golf club. It is also shown that the swing speed of a racquet, like that of a bat or a club, depends primarily on its moment of inertia rather than on its mass.
Double multiple streamtube model with recent improvements
Paraschivoiu, I.; Delclaux, F.
1983-05-01
The objective is to show the new capabilities of the double multiple streamtube (DMS) model for predicting the aerodynamic loads and performance of the Darrieus vertical-axis turbine. The original DMS model has been improved (DMSV model) by considering the variation in the upwind and downwind induced velocities as a function of the azimuthal angle for each streamtube. A comparison is made of the rotor performance for several blade geometries (parabola, catenary, troposkien, and Sandia shape). A new formulation is given for an approximate troposkien shape by considering the effect of the gravitational field. The effects of three NACA symmetrical profiles, 0012, 0015 and 0018, on the aerodynamic performance of the turbine are shown. Finally, a semiempirical dynamic-stall model has been incorporated and a better approximation obtained for modeling the local aerodynamic forces and performance for a Darrieus rotor.
Deep Learning with Hierarchical Convolutional Factor Analysis
Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence
2013-01-01
Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342
Double exchange model for magnetic hexaborides.
Pereira, Vitor M; Lopes dos Santos, J M B; Castro, Eduardo V; Neto, A H Castro
2004-10-01
A microscopic theory for rare-earth ferromagnetic hexaborides, such as Eu1-xCaxB6, is proposed on the basis of the double-exchange Hamiltonian. In these systems, the reduced carrier concentrations place the Fermi level near the mobility edge, introduced in the spectral density by the disordered spin background. We show that the transport properties such as the Hall effect, magnetoresistance, frequency dependent conductivity, and dc resistivity can be quantitatively described within the model. We also make specific predictions for the behavior of the Curie temperature T(C) as a function of the plasma frequency omega(p).
Dealiased convolutions for pseudospectral simulations
NASA Astrophysics Data System (ADS)
Roberts, Malcolm; Bowman, John C.
2011-12-01
Efficient algorithms have recently been developed for calculating dealiased linear convolution sums without the expense of conventional zero-padding or phase-shift techniques. For one-dimensional in-place convolutions, the memory requirements are identical with the zero-padding technique, with the important distinction that the additional work memory need not be contiguous with the input data. This decoupling of data and work arrays dramatically reduces the memory and computation time required to evaluate higher-dimensional in-place convolutions. The memory savings are achieved by computing the in-place Fourier transform of the data in blocks, rather than all at once. The technique also allows one to dealias the n-ary convolutions that arise on Fourier transforming cubic and higher powers. Implicitly dealiased convolutions can be built on top of state-of-the-art adaptive fast Fourier transform libraries like FFTW. Vectorized multidimensional implementations for the complex and centered Hermitian (pseudospectral) cases have already been implemented in the open-source software FFTW++. With the advent of this library, writing a high-performance dealiased pseudospectral code for solving nonlinear partial differential equations has now become a relatively straightforward exercise. New theoretical estimates of computational complexity and memory use are provided, including corrected timing results for 3D pruned convolutions and further consideration of higher-order convolutions.
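The aliasing that zero padding (and, more efficiently, implicit dealiasing) removes is easy to demonstrate: a length-N cyclic convolution wraps the tail of the linear convolution back onto the head, while padding to 2N recovers the exact linear result. A direct O(N²) sketch; FFT-based codes compute the same quantities in O(N log N):

```python
def cyclic_convolve(a, b):
    """Circular convolution, as produced by unpadded transform methods."""
    N = len(a)
    return [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]

def linear_convolve(a, b):
    """Direct linear convolution, the aliasing-free reference."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

a = [1.0, 2.0, 3.0, 4.0]
b = [1.0, 1.0, 0.0, 0.0]
aliased = cyclic_convolve(a, b)                          # wrap-around corrupts the head
pad = [0.0] * len(a)
dealiased = cyclic_convolve(a + pad, b + pad)[:len(a)]   # zero padding removes aliasing
print(aliased)    # [5.0, 3.0, 5.0, 7.0]
print(dealiased)  # [1.0, 3.0, 5.0, 7.0] -- matches the linear convolution
```

The first aliased entry (5.0 instead of 1.0) is the wrapped tail term; the padded transform leaves no room for wrap-around within the retained samples.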
Double Superhelix Model of High Density Lipoprotein*
Wu, Zhiping; Gogonea, Valentin; Lee, Xavier; Wagner, Matthew A.; Li, Xin-Min; Huang, Ying; Undurti, Arundhati; May, Roland P.; Haertlein, Michael; Moulin, Martine; Gutsche, Irina; Zaccai, Giuseppe; DiDonato, Joseph A.; Hazen, Stanley L.
2009-01-01
High density lipoprotein (HDL), the carrier of so-called “good” cholesterol, serves as the major athero-protective lipoprotein and has emerged as a key therapeutic target for cardiovascular disease. We applied small angle neutron scattering (SANS) with contrast variation and selective isotopic deuteration to the study of nascent HDL to obtain the low resolution structure in solution of the overall time-averaged conformation of apolipoprotein AI (apoA-I) versus the lipid (acyl chain) core of the particle. Remarkably, apoA-I is observed to possess an open helical shape that wraps around a central ellipsoidal lipid phase. Using the low resolution SANS shapes of the protein and lipid core as scaffolding, an all-atom computational model for the protein and lipid components of nascent HDL was developed by integrating complementary structural data from hydrogen/deuterium exchange mass spectrometry and previously published constraints from multiple biophysical techniques. Both SANS data and the new computational model, the double superhelix model, suggest an unexpected structural arrangement of protein and lipids of nascent HDL, an anti-parallel double superhelix wrapped around an ellipsoidal lipid phase. The protein and lipid organization in nascent HDL envisages a potential generalized mechanism for lipoprotein biogenesis and remodeling, biological processes critical to sterol and lipid transport, organismal energy metabolism, and innate immunity. PMID:19812036
New optimal quantum convolutional codes
NASA Astrophysics Data System (ADS)
Zhu, Shixin; Wang, Liqi; Kai, Xiaoshan
2015-04-01
One of the greatest challenges in proving the feasibility of quantum computers is protecting the quantum nature of information. Quantum convolutional codes are aimed at protecting a stream of quantum information in long-distance communication; they are the correct generalization to the quantum domain of their classical analogs. In this paper, we construct some classes of quantum convolutional codes by employing classical constacyclic codes. These codes are optimal in the sense that they attain the Singleton bound for pure convolutional stabilizer codes.
Entanglement-assisted quantum convolutional coding
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
Modeling interconnect corners under double patterning misalignment
NASA Astrophysics Data System (ADS)
Hyun, Daijoon; Shin, Youngsoo
2016-03-01
Publisher's Note: This paper, originally published on March 16th, was replaced with a corrected/revised version on March 28th. Interconnect corners should accurately reflect the effect of misalignment in the LELE double patterning process. Misalignment is usually considered separately from interconnect structure variations; this incurs too much pessimism and fails to reflect a large increase in total capacitance for asymmetric interconnect structures. We model interconnect corners by taking account of misalignment in conjunction with interconnect structure variations; we also characterize the misalignment effect more accurately by handling the metal pitch on both sides of a target metal independently, identifying the metal space on each side of the target metal.
McCormick, James A; Ellison, David H
2015-01-01
The distal convoluted tubule (DCT) is a short nephron segment, interposed between the macula densa and collecting duct. Even though it is short, it plays a key role in regulating extracellular fluid volume and electrolyte homeostasis. DCT cells are rich in mitochondria, and possess the highest density of Na+/K+-ATPase along the nephron, where it is expressed on the highly amplified basolateral membranes. DCT cells are largely water impermeable, and reabsorb sodium and chloride across the apical membrane via electroneutral pathways. Prominent among these is the thiazide-sensitive sodium chloride cotransporter, the target of widely used diuretic drugs. These cells also play a key role in magnesium reabsorption, which occurs predominantly via a transient receptor potential channel (TRPM6). Human genetic diseases in which DCT function is perturbed have provided critical insights into the physiological role of the DCT, and how transport is regulated. These include Familial Hyperkalemic Hypertension, the salt-wasting diseases Gitelman syndrome and EAST syndrome, and hereditary hypomagnesemias. The DCT is also established as an important target for the hormones angiotensin II and aldosterone; it also appears to respond to sympathetic-nerve stimulation and changes in plasma potassium. Here, we discuss what is currently known about DCT physiology. Early studies that determined transport rates of ions by the DCT are described, as are the channels and transporters expressed along the DCT, identified with the advent of molecular cloning. Regulation of expression and activity of these channels and transporters is also described; particular emphasis is placed on the contribution of genetic forms of DCT dysregulation to our understanding.
Convolutional Neural Network Based dem Super Resolution
NASA Astrophysics Data System (ADS)
Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang
2016-06-01
DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of learning examples. There, a nonlocal algorithm was introduced, and many experiments showed that the strategy is feasible. In that publication, the learning examples were defined as parts of the original DEM and their related high-resolution measurements, because this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this strategy, the learning examples should be diverse and easy to obtain; yet this may cause incompatibility and a lack of robustness. To overcome these problems, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low resolution DEM and the output is expected to be its high resolution counterpart. A three-layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. With this structure, learning DEMs are used to train the network; specifically, the network is optimized by minimizing the error between its output and the expected high resolution DEM. In practical applications, a testing DEM is input to the convolutional neural network and a super-resolution DEM is obtained. Many experiments show that the CNN based method obtains better reconstructions than many classic interpolation methods.
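The three-layer structure described above can be sketched as a plain forward pass. The kernel sizes, the random (untrained) weights, and the input grid below are all hypothetical; a real model learns the weights by minimizing the error against high-resolution DEMs, and pads or first interpolates the input so the output grid is larger, not smaller:

```python
import random

def conv2d(img, kernel):
    """'Valid' 2-D correlation of a single-channel grid with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    H, W = len(img), len(img[0])
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(W - kw + 1)]
            for i in range(H - kh + 1)]

def relu(img):
    return [[max(0.0, x) for x in row] for row in img]

def rand_kernel(k):
    return [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(k)]

random.seed(0)
low_res_dem = [[random.random() for _ in range(8)] for _ in range(8)]
features = relu(conv2d(low_res_dem, rand_kernel(3)))  # layer 1: feature detection
compressed = relu(conv2d(features, rand_kernel(1)))   # layer 2: feature compression
output_dem = conv2d(compressed, rand_kernel(3))       # layer 3: DEM reconstruction
print(len(output_dem), len(output_dem[0]))  # 4 4
```

The 3-1-3 layer pattern mirrors SRCNN-style super-resolution networks: spatial feature extraction, a per-pixel nonlinear mapping, and a spatial reconstruction.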
A simple pharmacokinetics subroutine for modeling double peak phenomenon.
Mirfazaelian, Ahmad; Mahmoudian, Massoud
2006-04-01
Double-peak absorption has been described for several orally administered drugs, and numerous mechanisms have been implicated in causing the double peak. DRUG-KNT, a pharmacokinetic software package developed previously for fitting one- and two-compartment kinetics using the iterative curve-stripping method, was modified, and a revised subroutine was incorporated to solve double-peak models. This subroutine treats the double peak as two hypothetical doses administered with a time gap. The fitting capability of the presented model was verified using four sets of data showing double-peak profiles extracted from the literature (piroxicam, ranitidine, phenazopyridine and talinolol). Visual inspection and statistical diagnostics showed that the present algorithm provided an adequate curve fit regardless of the mechanism involved in the emergence of the secondary peaks. Statistical diagnostic parameters (RSS, AIC and R²) generally showed good fitness of the plasma profile prediction by this model. It was concluded that the algorithm presented herein provides adequate predicted curves in cases of the double-peak phenomenon.
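The two-hypothetical-doses idea can be sketched with a one-compartment Bateman curve: the second "dose" starts absorbing after a lag equal to the time gap. The doses, gap, and rate constants below are invented for illustration, not fitted values from the paper:

```python
import math

def bateman(t, dose, ka, ke, lag=0.0):
    """One-compartment oral-absorption (Bateman) curve with an absorption lag."""
    if t <= lag:
        return 0.0
    t -= lag
    return dose * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))

def double_peak(t, dose1, dose2, gap, ka, ke):
    # the double peak modeled as two hypothetical doses separated by a time gap
    return bateman(t, dose1, ka, ke) + bateman(t, dose2, ka, ke, lag=gap)

times = [0.5 * i for i in range(25)]  # 0 .. 12 h
curve = [double_peak(t, 60.0, 40.0, 4.0, ka=1.5, ke=0.2) for t in times]
peaks = [times[i] for i in range(1, len(curve) - 1)
         if curve[i - 1] < curve[i] > curve[i + 1]]
print(len(peaks))  # 2: the profile shows the double-peak shape
```

Fitting such a model means estimating the two dose fractions, the gap, and the rate constants from observed concentrations; the curve shape is agnostic to which physiological mechanism produces the second peak.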
Nonbinary Quantum Convolutional Codes Derived from Negacyclic Codes
NASA Astrophysics Data System (ADS)
Chen, Jianzhang; Li, Jianping; Yang, Fan; Huang, Yuanyuan
2015-01-01
In this paper, some families of nonbinary quantum convolutional codes are constructed by using negacyclic codes. These nonbinary quantum convolutional codes differ from the quantum convolutional codes available in the literature. Moreover, we construct a family of optimal quantum convolutional codes.
Some easily analyzable convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R.; Dolinar, S.; Pollara, F.; Vantilborg, H.
1989-01-01
Convolutional codes have played and will play a key role in the downlink telemetry systems on many NASA deep-space probes, including Voyager, Magellan, and Galileo. One of the chief difficulties associated with the use of convolutional codes, however, is the notorious difficulty of analyzing them. Given a convolutional code as specified, say, by its generator polynomials, it is no easy matter to say how well that code will perform on a given noisy channel. The usual first step in such an analysis is to compute the code's free distance; this can be done with an algorithm whose complexity is exponential in the code's constraint length. The second step is often to calculate the transfer function in one, two, or three variables, or at least a few terms in its power series expansion. This step is quite hard, and for many codes of relatively short constraint lengths, it can be intractable. However, a large class of convolutional codes was discovered for which the free distance can be computed by inspection, and for which there is a closed-form expression for the three-variable transfer function. Although for large constraint lengths these codes have relatively low rates, they are nevertheless interesting and potentially useful. Furthermore, the ideas developed here to analyze these specialized codes may well extend to a much larger class.
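For concreteness, here is a minimal encoder for the classic rate-1/2, constraint-length-3 code with generator polynomials g1 = 1 + D + D² and g2 = 1 + D² (octal 7, 5). This standard textbook code is chosen purely for illustration and is not one of the specific codes analyzed in the article:

```python
def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 feedforward convolutional encoder, constraint length 3.

    gens are the tap patterns of g1 = 1 + D + D^2 and g2 = 1 + D^2
    applied to the window [current bit, previous bit, bit before that].
    """
    state = [0, 0]
    out = []
    for b in bits:
        window = [b] + state
        for g in gens:
            out.append(sum(x & y for x, y in zip(window, g)) % 2)
        state = [b, state[0]]  # shift the new bit into the register
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Each information bit emits two coded bits, so the generator polynomials completely determine the code; analyzing its distance properties from them is the hard part the abstract describes.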
Double Higgs boson production in the models with isotriplets
Godunov, S. I.; Vysotsky, M. I.; Zhemchugov, E. V.
2015-12-15
The enhancement of double Higgs boson production in extensions of the Standard Model with extra isotriplets is studied. It is found that in the see-saw type II model, decays of the new heavy Higgs boson can contribute to the double Higgs production cross section as much as the Standard Model channels. In the Georgi–Machacek model the cross section can be much larger, since the custodial symmetry is preserved and the strongest limitation on the triplet parameters is removed.
Convolutional virtual electric field for image segmentation using active contours.
Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden
2014-01-01
Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computation load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only desirable properties of these models, such as an enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and simultaneous noise suppression and weak-edge preservation. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images. PMID:25360586
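The convolution view of the external force can be sketched directly: every pixel accumulates a vector contribution from every edge pixel through a kernel k(x, y) ∝ (x, y)/r^(γ+1) that points toward edges. This is the VFC-style kernel (CONVEF modifies the distance inside it); the direct O(N²)-per-pixel evaluation below is for clarity, and the γ value and edge map are invented. Because the operation is a convolution, real implementations evaluate it with FFTs:

```python
import math

def vfc_force(edge_map, gamma=2.0):
    """External force field: convolve the edge map with the vector kernel
    k(x, y) = (x, y) / r**(gamma + 1), pointing toward edge pixels."""
    H, W = len(edge_map), len(edge_map[0])
    fx = [[0.0] * W for _ in range(H)]
    fy = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            for p in range(H):
                for q in range(W):
                    e = edge_map[p][q]
                    if e == 0.0 or (p == i and q == j):
                        continue
                    r = math.hypot(p - i, q - j)
                    fy[i][j] += e * (p - i) / r ** (gamma + 1)
                    fx[i][j] += e * (q - j) / r ** (gamma + 1)
    return fx, fy

edges = [[0.0] * 9 for _ in range(9)]
edges[4][6] = 1.0               # a single hypothetical edge pixel
fx, fy = vfc_force(edges)
print(fx[4][2] > 0)             # the force at (4, 2) points right, toward the edge
```

The long-range kernel gives the enlarged capture range; changing how r is measured is exactly the degree of freedom the CONVEF model exploits.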
Fast space-varying convolution and its application in stray light reduction
NASA Astrophysics Data System (ADS)
Wei, Jianing; Cao, Guangzhi; Bouman, Charles A.; Allebach, Jan P.
2009-02-01
Space-varying convolution often arises in the modeling or restoration of images captured by optical imaging systems. For example, in applications such as microscopy or photography the distortions introduced by lenses typically vary across the field of view, so accurate restoration also requires the use of space-varying convolution. While space-invariant convolution can be efficiently implemented with the Fast Fourier Transform (FFT), space-varying convolution requires direct implementation of the convolution operation, which can be very computationally expensive when the convolution kernel is large. In this paper, we develop a general approach to the efficient implementation of space-varying convolution through the use of matrix source coding techniques. This method can dramatically reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. This approach leads to a tradeoff between the accuracy and speed of the operation that is closely related to the distortion-rate tradeoff commonly made in lossy source coding. We apply our method to the problem of stray light reduction for digital photographs, where convolution with a spatially varying stray light point spread function is required. The experimental results show that our algorithm achieves a dramatic reduction in computation while maintaining high accuracy.
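The cost the paper attacks is visible in the direct implementation: when the kernel changes at every pixel, nothing factors through a single FFT, so each output sample pays for a full kernel evaluation. A 1-D sketch with a hypothetical Gaussian PSF whose width grows across the field (roughly how stray light broadens off-axis):

```python
import math

def space_varying_blur(signal, width_at):
    """Direct space-varying convolution: each output sample builds its own
    normalized Gaussian kernel, so the cost is O(n) per sample."""
    n = len(signal)
    out = []
    for i in range(n):
        sigma = width_at(i)
        w = [math.exp(-0.5 * ((j - i) / sigma) ** 2) for j in range(n)]
        total = sum(w)
        out.append(sum(wj * xj for wj, xj in zip(w, signal)) / total)
    return out

# two unit impulses; the hypothetical PSF widens toward the right edge
signal = [0.0] * 32
signal[8] = signal[24] = 1.0
blurred = space_varying_blur(signal, width_at=lambda i: 0.5 + 0.1 * i)
print(blurred[24] < blurred[8])  # the wider kernel spreads the second impulse more
```

The paper's matrix source coding approach approximates this dense position-dependent operator by a product of sparse transforms, trading a controlled amount of accuracy for speed.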
21. INTERIOR, DOUBLE STAIRWAY LEADING TO MODEL HALL, DETAIL OF ...
21. INTERIOR, DOUBLE STAIRWAY LEADING TO MODEL HALL, DETAIL OF ONE FLIGHT (5 x 7 negative; 8 x 10 print) - Patent Office Building, Bounded by Seventh, Ninth, F & G Streets, Northwest, Washington, District of Columbia, DC
Approximating large convolutions in digital images.
Mount, D M; Kanungo, T; Netanyahu, N S; Piatko, C; Silverman, R; Wu, A Y
2001-01-01
Computing discrete two-dimensional (2-D) convolutions is an important problem in image processing. In mathematical morphology, an important variant is that of computing binary convolutions, where the kernel of the convolution is a 0-1 valued function. This operation can be quite costly, especially when large kernels are involved. We present an algorithm for computing convolutions of this form, where the kernel of the binary convolution is derived from a convex polygon. Because the kernel is a geometric object, we allow the algorithm some flexibility in how it elects to digitize the convex kernel at each placement, as long as the digitization satisfies certain reasonable requirements. We say that such a convolution is valid. Given this flexibility we show that it is possible to compute binary convolutions more efficiently than would normally be possible for large kernels. Our main result is an algorithm which, given an m x n image and a k-sided convex polygonal kernel K, computes a valid convolution in O(kmn) time. Unlike standard algorithms for computing correlations and convolutions, the running time is independent of the area or perimeter of K, and our techniques do not rely on computing fast Fourier transforms. Our algorithm is based on a novel use of Bresenham's (1965) line-drawing algorithm and prefix-sums to update the convolution incrementally as the kernel is moved from one position to another across the image. PMID:18255522
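The flavor of the result can be seen in the rectangular special case: with a 2-D prefix sum, the count of 1-pixels under each kernel placement costs O(1), independent of the kernel area. The article's contribution is extending this area-independence to arbitrary convex polygonal kernels via Bresenham-style incremental updates; the rectangle below is a simplification, not the paper's algorithm:

```python
def binary_convolution_rect(img, kh, kw):
    """Count of 1-pixels under each placement of a kh x kw rectangular kernel,
    via a 2-D prefix sum: O(1) per placement, independent of kernel area."""
    H, W = len(img), len(img[0])
    P = [[0] * (W + 1) for _ in range(H + 1)]
    for i in range(H):
        for j in range(W):
            P[i + 1][j + 1] = img[i][j] + P[i][j + 1] + P[i + 1][j] - P[i][j]
    return [[P[i + kh][j + kw] - P[i][j + kw] - P[i + kh][j] + P[i][j]
             for j in range(W - kw + 1)]
            for i in range(H - kh + 1)]

img = [[1, 0, 1],
       [0, 1, 1],
       [1, 0, 0]]
print(binary_convolution_rect(img, 2, 2))  # [[2, 3], [2, 2]]
```

For a general convex polygon the window sum cannot be read off four prefix-sum corners, which is where the paper's per-row Bresenham boundary tracking comes in.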
A Unimodal Model for Double Observer Distance Sampling Surveys
Becker, Earl F.; Christ, Aaron M.
2015-01-01
Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, at which the two observers are independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied. PMID:26317984
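A two-piece normal detection function is simple to write down: a half-normal with one scale below the apex and another above it, continuous at the apex. The scales and apex below are arbitrary illustration values; in the paper, covariates act on the shape while the apex remains the single point where observer independence is assumed:

```python
import math

def two_piece_normal(x, apex, scale_left, scale_right):
    """Unimodal detection probability with its single maximum at the apex."""
    s = scale_left if x < apex else scale_right
    return math.exp(-0.5 * ((x - apex) / s) ** 2)

# detection rises to an apex at 100 m, then falls off with a different scale
d = [two_piece_normal(x, apex=100.0, scale_left=80.0, scale_right=150.0)
     for x in (0.0, 50.0, 100.0, 200.0, 400.0)]
print([round(v, 3) for v in d])  # [0.458, 0.823, 1.0, 0.801, 0.135]
```

Because the two pieces share the same apex location by construction, covariates can reshape either tail without moving the apex, unlike the gamma detection function described above.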
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
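The complexity measure discussed above (trellis edges per encoded bit) can be made concrete with a toy decoder. Below is a minimal sketch, assuming a standard rate-1/2, constraint-length-3 code with generators (7, 5) in octal; it walks the conventional 4-state trellis (8 edges per trellis section) with hard-decision Viterbi decoding. It is a hypothetical illustration, not the minimal-trellis construction developed in the article.

```python
# Rate-1/2, constraint-length-3 convolutional code, generators (7, 5) octal.
G = [0b111, 0b101]

def conv_encode(bits):
    """Encode a bit list, appending two flush bits to return to state 0."""
    state, out = 0, []
    for b in list(bits) + [0, 0]:
        reg = (b << 2) | state                       # current bit + 2 memory bits
        for g in G:
            out.append(bin(reg & g).count("1") % 2)  # parity of tapped bits
        state = reg >> 1                             # shift the register
    return out

def viterbi_decode(received, nbits):
    """Hard-decision Viterbi over the conventional 4-state trellis."""
    INF = float("inf")
    pm = [0.0, INF, INF, INF]            # path metric per state
    paths = [[], [], [], []]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        npm, npaths = [INF] * 4, [None] * 4
        for s in range(4):
            if pm[s] == INF:
                continue
            for b in (0, 1):             # two trellis edges leave each state
                reg = (b << 2) | s
                exp = [bin(reg & g).count("1") % 2 for g in G]
                m = pm[s] + sum(x != y for x, y in zip(exp, r))
                ns = reg >> 1
                if m < npm[ns]:
                    npm[ns], npaths[ns] = m, paths[s] + [b]
        pm, paths = npm, npaths
    best = min(range(4), key=lambda s: pm[s])
    return paths[best][:nbits]           # drop the flush bits
```

Each information bit here costs 8 trellis edges; the article's point is that for some codes (e.g. punctured ones) a smaller, minimal trellis does the same job.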
The general theory of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Stanley, R. P.
1993-01-01
This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator
Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.
2014-12-08
Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed
Convolutional code performance in planetary entry channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.
1974-01-01
The planetary entry channel is modeled for communication purposes, representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum-likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with that of the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat the time-correlated fading of the channel; only modest amounts of interleaving are required to approach the performance of the memoryless channel; additional propagation results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
Deep learning for steganalysis via convolutional neural networks
NASA Astrophysics Data System (ADS)
Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu
2015-03-01
Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis: learning features automatically via deep learning models. We propose a customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.
A review of molecular modelling of electric double layer capacitors.
Burt, Ryan; Birkett, Greg; Zhao, X S
2014-04-14
Electric double-layer capacitors are a family of electrochemical energy storage devices that offer a number of advantages, such as high power density and long cyclability. In recent years, research and development of electric double-layer capacitor technology has been growing rapidly, in response to the increasing demand for energy storage devices from emerging industries, such as hybrid and electric vehicles, renewable energy, and smart grid management. The past few years have witnessed a number of significant research breakthroughs in terms of novel electrodes, new electrolytes, and fabrication of devices, thanks to the discovery of innovative materials (e.g. graphene, carbide-derived carbon, and templated carbon) and the availability of advanced experimental and computational tools. However, some experimental observations could not be clearly understood and interpreted due to limitations of traditional theories, some of which were developed more than one hundred years ago. This has led to significant research efforts in computational simulation and modelling, aimed at developing new theories, or improving the existing ones to help interpret experimental results. This review article provides a summary of research progress in molecular modelling of the physical phenomena taking place in electric double-layer capacitors. An introduction to electric double-layer capacitors and their applications, alongside a brief description of electric double layer theories, is presented first. Second, molecular modelling of ion behaviours of various electrolytes interacting with electrodes under different conditions is reviewed. Finally, key conclusions and outlooks are given. Simulations comparing electric double-layer structure at planar and porous electrode surfaces under equilibrium conditions have revealed significant structural differences between the two electrode types, and porous electrodes have been shown to store charge more efficiently. Accurate electrolyte and
Real-time rendering of optical effects using spatial convolution
NASA Astrophysics Data System (ADS)
Rokita, Przemyslaw
1998-03-01
Simulation of special effects such as defocus, depth of field, and raindrops or a water film falling on the windshield may be very useful in visual simulators and in all computer graphics applications that need realistic images of outdoor scenery. Those effects are especially important in rendering poor visibility conditions in flight and driving simulators, but can also be applied, for example, in compositing computer graphics and video sequences, i.e. in Augmented Reality systems. This paper proposes a new approach to the rendering of those optical effects by iterative adaptive filtering using spatial convolution. The advantage of this solution is that the adaptive convolution can be done in real time by existing hardware. The optical effects mentioned above can be introduced into an image computed using a conventional camera model by applying to the intensity of each pixel a convolution filter with an appropriate point spread function. The algorithms described in this paper can be easily implemented in the visualization pipeline: the final effect may be obtained by iterative filtering using a single hardware convolution filter or with a pipeline composed of identical 3 × 3 filter stages. Another advantage of the proposed solution is that the extension based on the proposed algorithm can be added to existing rendering systems as a final stage of the visualization pipeline.
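The iterative-filtering idea can be sketched simply: repeated passes of a small kernel widen the effective point spread function, approximating a stronger defocus. The minimal Python version below uses a 3 × 3 box kernel with clamped edges as a stand-in for the hardware convolution stages; the kernel choice and function names are our simplification, not the paper's filters.

```python
def blur3x3(img):
    """One pass of a 3x3 box filter over a 2-D list (edges clamped)."""
    m, n = len(img), len(img[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = cnt = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < m and 0 <= jj < n:
                        acc += img[ii][jj]
                        cnt += 1
            out[i][j] = acc / cnt
    return out

def defocus(img, passes):
    """Iterate the small filter: each extra pass widens the effective
    point spread function, mimicking a pipeline of identical 3x3 stages."""
    for _ in range(passes):
        img = blur3x3(img)
    return img
```

A single pass spreads an impulse over its 3 × 3 neighbourhood; further passes approach a wider, roughly Gaussian point spread function.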
A Digital Synthesis Model of Double-Reed Wind Instruments
NASA Astrophysics Data System (ADS)
Guillemain, Ph.
2004-12-01
We present a real-time synthesis model for double-reed wind instruments based on a nonlinear physical model. One specificity of double-reed instruments, namely, the presence of a confined air jet in the embouchure, for which a physical model has been proposed recently, is included in the synthesis model. The synthesis procedure involves the use of the physical variables via a digital scheme giving the impedance relationship between pressure and flow in the time domain. Comparisons are made between the behavior of the model with and without the confined air jet in the case of a simple cylindrical bore and that of a more realistic bore, the geometry of which is an approximation of an oboe bore.
The Double Homunculus model of self-reflective systems.
Sawa, Koji; Igamberdiev, Abir U
2016-06-01
Vladimir Lefebvre introduced the principles of self-reflective systems and proposed a model of consciousness based on these principles (Lefebvre V.A., 1992, J. Math. Psychol. 36, 100-128). The main feature of the model is the assumption of "the image of the self in the image of the self", that is, a "Double Homunculus". In this study, we further formalize Lefebvre's formulation by using difference equations for the description of self-reflection. In addition, we also implement a dialogue model between the two homunculus agents. The dialogue models show the necessity of both the exchange of information and the observation of the object. We conclude that the Double Homunculus model represents the most adequate description of conscious systems and has a significant potential for describing interactions of reflective agents in the social environment and their ability to perceive the outside world. PMID:27000722
Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas
2011-01-01
Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC were within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in the bone region were up to 1.1, 6.4, and 1.6%, and in the lung region up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to have better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes of 6 MV and in the lung region for most field sizes of both energies. The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm2 fields (over 26% passed) and in the bone region for 5 × 5 and 10
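The gamma analysis used above combines a dose-difference tolerance with a distance-to-agreement tolerance. Below is a minimal 1-D sketch assuming global dose normalisation and the 2%/2 mm criterion mentioned in the text; the function and parameter names are ours, and a full clinical implementation works in 3-D with interpolated reference data.

```python
import math

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dose_tol=0.02, dist_tol=2.0):
    """Per-point 1-D gamma index (global normalisation).

    For each evaluated point, search all reference points for the minimum
    combined dose-difference / distance-to-agreement metric; a point
    passes the criterion when its gamma is <= 1.
    """
    dmax = max(ref_dose)  # global normalisation dose
    gammas = []
    for xe, de in zip(eval_pos, eval_dose):
        g = min(math.sqrt(((xr - xe) / dist_tol) ** 2 +
                          ((dr - de) / (dose_tol * dmax)) ** 2)
                for xr, dr in zip(ref_pos, ref_dose))
        gammas.append(g)
    return gammas
```

A pass rate is then just the fraction of points with gamma at most 1, which is the quantity quoted in the abstract's percentages.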
A double inclusion model for multiphase piezoelectric composites
NASA Astrophysics Data System (ADS)
Lin, Yirong; Sodano, Henry A.
2010-03-01
A novel active structural fiber (ASF; Lin and Sodano 2008 Compos. Sci. Technol. 68 1911-8) was developed that can be embedded in a composite material in order to perform sensing and actuation, in addition to providing load bearing functionality. In order to fully understand the electroelastic properties of the material, this paper will introduce a three-dimensional micromechanics model for estimating the effective electroelastic properties of the multifunctional composites with different design parameters. The three-dimensional model is formulated by extending the double inclusion model to multiphase composites with piezoelectric constituents. The double inclusion model has been chosen for the ASF studied here because it is designed to model composites reinforced by inclusions with multilayer coatings. The accuracy of our extended double inclusion model will be evaluated through a three-dimensional finite element analysis of a representative volume element of the ASF composite. The results will demonstrate that the micromechanics model developed here can very accurately predict the electroelastic properties of the multifunctional composites.
Multilabel Image Annotation Based on Double-Layer PLSA Model
Zhang, Jing; Li, Da; Hu, Weiwei; Chen, Zhihua; Yuan, Yubo
2014-01-01
Due to the semantic gap between visual features and semantic concepts, automatic image annotation has recently become a difficult issue in computer vision. We propose a new multilabel image annotation method based on double-layer probabilistic latent semantic analysis (PLSA) in this paper. The new double-layer PLSA model is constructed to bridge the low-level visual features and high-level semantic concepts of images for effective image understanding. The low-level features of images are represented as visual words by the Bag-of-Words model; latent semantic topics are obtained by the first-layer PLSA from the visual and texture features, respectively. Furthermore, we adopt the second-layer PLSA to fuse the visual and texture latent semantic topics and obtain a top-layer latent semantic topic. Through the double-layer PLSA, the relationships between visual features and semantic concepts of images are established, and we can predict the labels of new images from their low-level features. Experimental results demonstrate that our automatic image annotation model based on double-layer PLSA achieves promising performance for labeling and outperforms previous methods on the standard Corel dataset. PMID:24999490
A simple double-source model for interference of capillaries
NASA Astrophysics Data System (ADS)
Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua
2012-01-01
A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An inverse proportionality between the fringes spacing and the capillary radius is derived based on the simple double-source model. This can provide an efficient and precise method to measure a small capillary diameter of micrometre scale. This model could be useful because it presents a fresh perspective on the diffraction of light from a particular geometry (transparent cylinder), which is not straightforward for undergraduates. It also offers an alternative interferometer to perform a different type of measurement, especially for using virtual sources.
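The inverse proportionality between fringe spacing and source separation follows from the standard two-source, small-angle result Δy = λL/d. The sketch below assumes that formula; the mapping from capillary radius to the effective separation d of the two virtual sources is the paper's contribution and is not reproduced here.

```python
def fringe_spacing(wavelength, screen_distance, source_separation):
    """Small-angle two-source fringe spacing, dy = lambda * L / d.

    Illustrative sketch: in the capillary model of the text, the
    effective separation d of the two virtual sources grows with the
    capillary radius, so the fringes tighten as the radius increases,
    which is what enables the radius measurement.
    """
    return wavelength * screen_distance / source_separation
```

For a HeNe laser (633 nm) at 1 m, a 100 μm effective separation gives millimetre-scale fringes, comfortably measurable, which is why the method resolves micrometre-scale diameters.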
Double scaling in tensor models with a quartic interaction
NASA Astrophysics Data System (ADS)
Dartois, Stéphane; Gurau, Razvan; Rivasseau, Vincent
2013-09-01
In this paper we identify and analyze in detail the subleading contributions in the 1/N expansion of random tensors, in the simple case of a quartically interacting model. The leading order for this 1/N expansion is made of graphs, called melons, which are dual to particular triangulations of the D-dimensional sphere, closely related to the "stacked" triangulations. For D < 6 the subleading behavior is governed by a larger family of graphs, hereafter called cherry trees, which are also dual to the D-dimensional sphere. They can be resummed explicitly through a double scaling limit. In sharp contrast with random matrix models, this double scaling limit is stable. Apart from its unexpected upper critical dimension 6, it displays a singularity at fixed distance from the origin and is clearly the first step in a richer set of yet to be discovered multi-scaling limits.
Double porosity modeling in elastic wave propagation for reservoir characterization
Berryman, J. G., LLNL
1998-06-01
Phenomenological equations for the poroelastic behavior of a double porosity medium have been formulated and the coefficients in these linear equations identified. The generalization from a single porosity model increases the number of independent coefficients from three to six for an isotropic applied stress. In a quasistatic analysis, the physical interpretations are based upon considerations of extremes in both spatial and temporal scales. The limit of very short times is the one most relevant for wave propagation, and in this case both matrix porosity and fractures behave in an undrained fashion. For the very long times more relevant for reservoir drawdown, the double porosity medium behaves as an equivalent single porosity medium. At the macroscopic spatial level, the pertinent parameters (such as the total compressibility) may be determined by appropriate field tests. At the mesoscopic scale, pertinent parameters of the rock matrix can be determined directly through laboratory measurements on core, and the compressibility can be measured for a single fracture. We show explicitly how to generalize the quasistatic results to incorporate wave propagation effects and how effects that are usually attributed to squirt flow under partially saturated conditions can be explained alternatively in terms of the double-porosity model. The result is therefore a theory that generalizes, but is completely consistent with, Biot's theory of poroelasticity and is valid for analysis of elastic wave data from highly fractured reservoirs.
From entanglement renormalisation to the disentanglement of quantum double models
Aguado, Miguel
2011-09-15
We describe how the entanglement renormalisation approach to topological lattice systems leads to a general procedure for treating the whole spectrum of these models in which the Hamiltonian is gradually simplified along a parallel simplification of the connectivity of the lattice. We consider the case of Kitaev's quantum double models, both Abelian and non-Abelian, and we obtain a rederivation of the known map of the toric code to two Ising chains; we pay particular attention to the non-Abelian models and discuss their space of states on the torus. Ultimately, the construction is universal for such models and its essential feature, the lattice simplification, may point towards a renormalisation of the metric in continuum theories. Highlights: the toric code is explicitly mapped to two Ising chains and their diagonalisation; the procedure uses tensor network ideas, notably entanglement renormalisation; the construction applies to all of Kitaev's non-Abelian quantum double models; the algebraic structure of non-Abelian models is thoroughly discussed; the construction is universal and may work on the metric in the continuum limit.
UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.
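The steady-flow LIC step that UFLIC builds on can be sketched as a gather along streamlines: each output pixel averages the input noise texture along the local streamline. Below is a deliberately minimal Python version with a box kernel and unit Euler steps, all our simplifications; UFLIC itself replaces this gather with the time-accurate value-depositing (scatter) scheme described in the abstract.

```python
def lic(noise, vx, vy, L=5):
    """Minimal steady-flow Line Integral Convolution (box kernel).

    For each pixel, trace the streamline forward and backward with unit
    Euler steps, averaging the noise values encountered. Purely
    illustrative; real LIC uses sub-pixel integration and better kernels.
    """
    m, n = len(noise), len(noise[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc, cnt = 0.0, 0
            for sign in (1, -1):         # forward, then backward
                x, y = float(j), float(i)
                for _ in range(L):
                    ii, jj = int(round(y)), int(round(x))
                    if not (0 <= ii < m and 0 <= jj < n):
                        break
                    acc += noise[ii][jj]
                    cnt += 1
                    x += sign * vx[ii][jj]
                    y += sign * vy[ii][jj]
            out[i][j] = acc / cnt
    return out
```

Smearing white noise along streamlines this way is what makes the flow's global structure visible; UFLIC's contribution is keeping that smearing coherent across animation frames of an unsteady field.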
On the growth and form of cortical convolutions
NASA Astrophysics Data System (ADS)
Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.
2016-06-01
The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. All together, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.
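As a rough quantitative anchor for the "elementary mechanical instability" invoked above, the classical film-on-substrate wrinkling analysis gives a fold wavelength that scales with cortical thickness and the modulus ratio. This is an illustrative sketch only, assuming the small-strain, stiff-film limit; the cortex and core have comparable moduli, so the article's simulations go well beyond this linearised formula.

```python
import math

def wrinkle_wavelength(thickness, mu_film, mu_substrate):
    """Classical linear-instability wrinkling wavelength of a stiff film
    on a soft substrate: lambda = 2*pi*t*(mu_f / (3*mu_s))**(1/3).

    Hedged illustration: only an order-of-magnitude guide for
    gyrification, where the stiff-film assumption does not strictly hold.
    """
    return 2 * math.pi * thickness * (mu_film / (3 * mu_substrate)) ** (1 / 3)
```

The cube-root dependence means the fold spacing is set mostly by layer thickness, consistent with the observation that gyral wavelengths track cortical thickness across brains.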
Generalized double-gradient model of flapping oscillations: Oblique waves
NASA Astrophysics Data System (ADS)
Korovinskiy, D. B.; Kiehas, S. A.
2016-09-01
The double-gradient model of flapping oscillations is generalized for oblique plane waves propagating in the equatorial plane. It is found that longitudinal propagation (ky = 0) is prohibited, while transversal (kx = 0) or nearly transversal waves should possess a maximum frequency, diminishing with the reduction of the |ky/kx| ratio. It turns out that the sausage mode may propagate only in a narrow range of directions, |ky/kx| ≫ 1. A simple analytical expression for the dispersion relation of the kink mode, valid in most of the wave-number range, |ky/kx| < 9, is derived.
Investigating GPDs in the framework of the double distribution model
NASA Astrophysics Data System (ADS)
Nazari, F.; Mirjalili, A.
2016-06-01
In this paper, we construct the generalized parton distribution (GPD) in terms of the kinematical variables x, ξ, and t, using the double distribution model. By employing these functions, we extract quantities that make it possible to gain a three-dimensional insight into the nucleon structure function at the parton level. The main objective of GPDs is to combine and generalize the concepts of ordinary parton distributions and form factors. They also provide an exclusive framework to describe the nucleons in terms of quarks and gluons. Here, we first calculate, in the double distribution model, the GPD based on the usual parton distributions arising from the GRV and CTEQ phenomenological models. Obtaining the quark and gluon angular momenta from the GPD, we are able to calculate the scattering observables related to spin asymmetries of the produced quarkonium. These quantities are represented by A_N and A_LS. We also calculate the Pauli and Dirac form factors in deeply virtual Compton scattering. Finally, in order to compare our results with the existing experimental data, we use the difference of the polarized cross sections for an initial longitudinal leptonic beam and unpolarized target particles (Δσ_LU). In all cases, our obtained results are in good agreement with the available experimental data.
Three-Triplet Model with Double SU(3) Symmetry
DOE R&D Accomplishments Database
Han, M. Y.; Nambu, Y.
1965-01-01
With a view to avoiding some of the kinematical and dynamical difficulties involved in the single triplet quark model, a model for the low lying baryons and mesons based on three triplets with integral charges is proposed, somewhat similar to the two-triplet model introduced earlier by one of us (Y. N.). It is shown that in a U(3) scheme of triplets with integral charges, one is naturally led to three triplets located symmetrically about the origin of the I_3-Y diagram under the constraint that the Nishijima-Gell-Mann relation remains intact. A double SU(3) symmetry scheme is proposed in which the large mass splittings between different representations are ascribed to one of the SU(3), while the other SU(3) is the usual one for the mass splittings within a representation of the first SU(3).
RF compensation of double Langmuir probes: modelling and experiment
NASA Astrophysics Data System (ADS)
Caneses, Juan F.; Blackwell, Boyd
2015-06-01
An analytical model describing the physics of driven floating probes has been developed to model the RF compensation of the double Langmuir probe (DLP) technique. The model is based on the theory of the RF self-bias effect as described in Braithwaite's work [1], which we extend to include time-resolved behaviour. The main contribution of this work is to allow quantitative determination of the intrinsic RF compensation of a DLP in a given RF discharge. Using these ideas, we discuss the design of RF compensated DLPs. Experimental validation for these ideas is presented and the effects of RF rectification on DLP measurements are discussed. Experimental results using RF rectified DLPs indicate that (1) whenever sheath thickness effects are important, the overestimation of the ion density is proportional to the level of RF rectification, and suggest that (2) the electron temperature measurement is only weakly affected.
Astronomical Image Subtraction by Cross-Convolution
NASA Astrophysics Data System (ADS)
Yuan, Fang; Akerlof, Carl W.
2008-04-01
In recent years, there has been a proliferation of wide-field sky surveys to search for a variety of transient objects. Using relatively short focal lengths, the optics of these systems produce undersampled stellar images often marred by a variety of aberrations. As participants in such activities, we have developed a new algorithm for image subtraction that no longer requires high-quality reference images for comparison. The computational efficiency is comparable with similar procedures currently in use. The general technique is cross-convolution: two convolution kernels are generated to make a test image and a reference image separately transform to match as closely as possible. In analogy to the optimization technique for generating smoothing splines, the inclusion of an rms width penalty term constrains the diffusion of stellar images. In addition, by evaluating the convolution kernels on uniformly spaced subimages across the total area, these routines can accommodate point-spread functions that vary considerably across the focal plane.
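The cross-convolution trick rests on the commutativity and associativity of convolution: blurring the test image with the reference image's kernel, and the reference with the test image's kernel, yields the same effective point-spread function, so static sources cancel in the difference. A 1-D sketch with hypothetical PSFs (not the paper's fitted kernels):

```python
import numpy as np

def convolve(signal, kernel):
    """Full discrete convolution (a 1-D stand-in for an image row)."""
    return np.convolve(signal, kernel, mode="full")

# A point source ("star") observed through two different PSFs.
star = np.zeros(32)
star[16] = 100.0
psf_test = np.array([0.25, 0.5, 0.25])          # hypothetical test-image PSF
psf_ref  = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # hypothetical reference PSF

image_test = convolve(star, psf_test)
image_ref  = convolve(star, psf_ref)

# Cross-convolution: blur each image with the *other* image's kernel,
# so both end up with the combined PSF (star * psf_test * psf_ref).
matched_test = convolve(image_test, psf_ref)
matched_ref  = convolve(image_ref, psf_test)

# The matched images agree, so the residual isolates transient sources.
residual = matched_test - matched_ref
print(np.max(np.abs(residual)) < 1e-9)  # True
```

In the article's algorithm the two kernels are solved for jointly (with an rms width penalty) rather than assumed known, but the cancellation principle is the same.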
Colonoscopic polyp detection using convolutional neural networks
NASA Astrophysics Data System (ADS)
Park, Sun Young; Sargent, Dusty
2016-03-01
Computer aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician's interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domain-specific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general application and require a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors when compared to the eigenimage model. The CNN features are used as input to the CRF classifier as before. We report
Double-multiple streamtube model for Darrieus wind turbines
NASA Technical Reports Server (NTRS)
Paraschivoiu, I.
1981-01-01
An analytical model is proposed for calculating the rotor performance and aerodynamic blade forces for Darrieus wind turbines with curved blades. The method of analysis uses a multiple-streamtube model, divided into two parts: one modeling the upstream half-cycle of the rotor and the other, the downstream half-cycle. The upwind and downwind components of the induced velocities at each level of the rotor were obtained using the principle of two actuator disks in tandem. Variation of the induced velocities in the two parts of the rotor produces larger forces in the upstream zone and smaller forces in the downstream zone. Comparisons of the overall rotor performance with previous methods and field test data show the important improvement obtained with the present model. The calculations were made using the computer code CARDAA developed at IREQ. The double-multiple streamtube model presented has two major advantages: it requires a much shorter computer time than the three-dimensional vortex model, and it is more accurate than the multiple-streamtube model in predicting the aerodynamic blade loads.
A Double Scattering Analytical Model For Elastic Recoil Detection Analysis
Barradas, N. P.; Lorenz, K.; Alves, E.; Darakchieva, V.
2011-06-01
We present an analytical model for the calculation of double scattering in elastic recoil detection measurements. Only events involving the beam particle and the recoil are considered, i.e. 1) an ion scatters off a target element and then produces a recoil, and 2) an ion produces a recoil which then scatters off a target element. Events involving intermediate recoils are not considered, i.e. when the primary ion produces a recoil which then produces a second recoil. If the recoil element is also present in the stopping foil, recoil events in the stopping foil are also calculated. We included the model in NDF, a standard code for IBA data analysis, and applied it to the measurement of hydrogen in Si.
Pion double charge exchange in a composite-meson model
Kezerashvili, R. Ya.; Boyko, V. S.
2007-01-15
The pion double charge exchange amplitude is evaluated in a composite-meson model based on the four-quark interaction. The model assumes that the mesons are two-quark systems and can interact with each other only through quark loops. To evaluate the meson exchange current contribution, the form factors of the two-pion decay modes of the ρ, σ, and f_0 mesons have been used in the calculations. The contribution of the four-quark box diagram has been taken into account as well as a contact diagram. The contributions of the ρ, σ, and f_0 mesons increase the forward scattering cross section, which depends weakly on the energy.
A phase mixing model for the frequency-doubling illusion.
Wielaard, James; Smith, R Theodore
2013-10-01
We introduce a temporal phase mixing model for a description of the frequency-doubling illusion (FDI). The model is generic in the sense that it can be set to refer to retinal ganglion cells, lateral geniculate cells, as well as simple cells in the primary visual cortex (V1). Model parameters, however, strongly suggest that the FDI originates in the cortex. The model shows how noise in the response phases of cells in V1, or in further processing of these phases, easily produces observed behavior of FDI onset as a function of spatiotemporal frequencies. It also shows how this noise can accommodate physiologically plausible spatial delays in comparing neural signals over a distance. The model offers an explanation for the disappearance of the FDI at sufficiently high spatial frequencies via increasingly correlated coding of neighboring grating stripes. Further, when the FDI is equated to vanishing perceptual discrimination between asynchronous contrast-reversal gratings, the model proposes the possibility that the FDI shows a resonance behavior at sufficiently high spatial frequencies, by which it is alternately perceived and not perceived in sequential temporal frequency bands.
Convolutional Sparse Coding for Trajectory Reconstruction.
Zhu, Yingying; Lucey, Simon
2015-03-01
Trajectory basis Non-Rigid Structure from Motion (NRSfM) refers to the process of reconstructing the 3D trajectory of each point of a non-rigid object from just their 2D projected trajectories. Reconstruction relies on two factors: (i) the condition of the composed camera & trajectory basis matrix, and (ii) whether the trajectory basis has enough degrees of freedom to model the 3D point trajectory. These two factors are inherently conflicting. Employing a trajectory basis with small capacity has the positive characteristic of reducing the likelihood of an ill-conditioned system (when composed with the camera) during reconstruction. However, this has the negative characteristic of increasing the likelihood that the basis will not be able to fully model the object's "true" 3D point trajectories. In this paper we draw upon a well known result centering around the Restricted Isometry Property (RIP) condition for sparse signal reconstruction. RIP allows us to relax the requirement that the full trajectory basis composed with the camera matrix must be well conditioned. Further, we propose a strategy for learning an over-complete basis using convolutional sparse coding from naturally occurring point trajectory corpora to increase the likelihood that the RIP condition holds for a broad class of point trajectories and camera motions. Finally, we propose an l1 inspired objective for trajectory reconstruction that is able to "adaptively" select the smallest sub-matrix from an over-complete trajectory basis that balances (i) and (ii). We present more practical 3D reconstruction results compared to the current state of the art in trajectory basis NRSfM.
Modeling of Sulfate Double-salts in Nuclear Wastes
Toghiani, B.
2000-10-30
Due to limited tank space at Hanford and Savannah River, the liquid nuclear wastes or supernatants have been concentrated in evaporators to remove excess water prior to the hot solutions being transferred to underground storage tanks. As the waste solutions cooled, the salts in the waste exceeded the associated solubility limits and precipitated in the form of saltcakes. The initial step in the remediation of these saltcakes is a rehydration process called saltcake dissolution. At Hanford, dissolution experiments have been conducted on small saltcake samples from five tanks. Modeling of these experimental results, using the Environmental Simulation Program (ESP), is being performed at the Diagnostic Instrumentation and Analysis Laboratory (DIAL) at Mississippi State University. The River Protection Project (RPP) at Hanford will use these experimental and theoretical results to determine the amount of water that will be needed for its dissolution and retrieval operations. A comprehensive effort by the RPP and the Tank Focus Area continues to validate and improve the ESP and its databases for this application. The initial effort focused on the sodium, fluoride, and phosphate system due to its role in the formation of pipeline plugs. In FY 1999, an evaluation of the ESP predictions for sodium fluoride, trisodium phosphate dodecahydrate, and natrophosphate clearly indicated that improvements to the Public database of the ESP were needed. One of the improvements identified was double salts. The inability of any equilibrium thermodynamic model to properly account for double salts in the system can result in errors in the predicted solid-liquid equilibria (SLE) of species in the system. The ESP code is evaluated by comparison with experimental data where possible. However, the data do not cover the range of component concentrations and temperatures found in many tank wastes. Therefore, comparison of ESP with another code is desirable, and may illuminate problems with both
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to those codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum weight error sequence.
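For readers unfamiliar with the (n,1) codes discussed here, a minimal rate-1/2 convolutional encoder is easy to sketch. This uses the classic textbook generators 7 and 5 (octal) with constraint length 3; these are illustrative, not necessarily the codes analyzed in the article:

```python
def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 convolutional encoder, constraint length 3.
    gens are the generator taps 111 and 101 (7 and 5 octal):
    each output bit is the mod-2 inner product of the taps with
    the current input bit followed by the shift-register state."""
    state = [0, 0]  # two-stage shift register
    out = []
    for b in bits:
        window = [b] + state
        for g in gens:
            out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
        state = [b, state[0]]  # shift the register
    return out

# Each input bit produces two output bits (rate 1/2).
print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

A syndrome decoder would recompute parity relations on the received stream and search (e.g. with the stack algorithm the article uses) for the lowest-weight error sequence consistent with the syndromes.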
Convolutions and Their Applications in Information Science.
ERIC Educational Resources Information Center
Rousseau, Ronald
1998-01-01
Presents definitions of convolutions, mathematical operations between sequences or between functions, and gives examples of their use in information science. In particular they can be used to explain the decline in the use of older literature (obsolescence) or the influence of publication delays on the aging of scientific literature. (Author/LRW)
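As a concrete instance of the obsolescence example, citations received in year t can be modeled as the convolution of yearly publication counts with an aging function. All numbers below are hypothetical:

```python
def convolve_seq(pubs, aging):
    """Discrete convolution of a publication-count sequence with an
    aging function: element t of the result is the citation count
    expected in year t, summed over all earlier publication cohorts."""
    n = len(pubs) + len(aging) - 1
    return [sum(pubs[k] * aging[t - k]
                for k in range(len(pubs)) if 0 <= t - k < len(aging))
            for t in range(n)]

pubs  = [100, 120, 90]        # hypothetical papers published per year
aging = [0.5, 0.25, 0.125]    # hypothetical citation rate a years after publication
print(convolve_seq(pubs, aging))  # [50.0, 85.0, 87.5, 37.5, 11.25]
```

The declining tail of the result is the "obsolescence" pattern: older cohorts contribute less and less, even though nothing in the publication counts themselves declines.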
Number-Theoretic Functions via Convolution Rings.
ERIC Educational Resources Information Center
Berberian, S. K.
1992-01-01
Demonstrates the number theory property that the number of divisors of an integer n times the number of positive integers k, less than or equal to and relatively prime to n, equals the sum of the divisors of n using theory developed about multiplicative functions, the units of a convolution ring, and the Mobius Function. (MDH)
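Read in the convolution-ring setting the abstract refers to, the property is the Dirichlet-convolution identity τ ∗ φ = σ: summing τ(d)·φ(n/d) over the divisors d of n gives the sum of the divisors of n. A quick numeric check (a sketch, not the article's development via units and the Möbius function):

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def tau(n):    # number of divisors of n
    return len(divisors(n))

def phi(n):    # Euler's totient: count of k <= n with gcd(n, k) = 1
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

def sigma(n):  # sum of divisors of n
    return sum(divisors(n))

def dirichlet(f, g, n):
    """Dirichlet convolution: (f * g)(n) = sum over d | n of f(d) g(n/d)."""
    return sum(f(d) * g(n // d) for d in divisors(n))

# tau * phi = sigma holds for every n
print(all(dirichlet(tau, phi, n) == sigma(n) for n in range(1, 50)))  # True
```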
A hybrid double-observer sightability model for aerial surveys
Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine
2013-01-01
Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
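The double-observer correction rests on two standard formulas: the probability that at least one of two independent observer pairs detects a group, and a Horvitz-Thompson-style estimate that divides each detected group by its detection probability. A sketch with hypothetical values (not the paper's model MH, which adds covariates and telemetry-based detections):

```python
def combined_detection(p_front, p_back):
    """Probability that at least one of two independent observer
    pairs detects a group: 1 - P(both miss)."""
    return 1.0 - (1.0 - p_front) * (1.0 - p_back)

def horvitz_thompson(groups):
    """Abundance estimate: each detected group of size s with
    detection probability p contributes s / p, inflating the raw
    count to account for groups that were missed."""
    return sum(size / p for size, p in groups)

p = combined_detection(0.7, 0.6)          # 0.88 for this hypothetical pair
groups = [(12, p), (3, p), (25, p)]       # hypothetical detected elk groups
print(round(p, 2), round(horvitz_thompson(groups), 1))  # 0.88 45.5
```

With 40 elk actually counted, the corrected estimate of about 45.5 mirrors the paper's finding that raw counts were roughly 80-93% of model-based abundance.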
Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi
2016-07-01
Existing deep convolutional neural networks (CNNs) have shown their great success on image classification. CNNs mainly consist of convolutional and pooling layers, both of which are performed on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representation. In contrast, recurrent neural networks (RNNs) are well known for their ability of encoding contextual information among sequential data, and they only require a limited number of network parameters. General RNNs can hardly be directly applied on non-sequential data. Thus, we propose hierarchical RNNs (HRNNs), in which each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations, while the cross-scale RNN connections model scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) the hierarchical long short-term memory recurrent network (HLSTM), which performs better than HSRN at the price of higher computational cost. In this manuscript, we integrate CNNs with HRNNs, and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, and MIT Indoor, and competitive results on ILSVRC 2012.
Convolutional neural network architectures for predicting DNA–protein binding
Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.
2016-01-01
Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
``Quasi-complete'' mechanical model for a double torsion pendulum
NASA Astrophysics Data System (ADS)
De Marchi, Fabrizio; Pucacco, Giuseppe; Bassan, Massimo; De Rosa, Rosario; Di Fiore, Luciano; Garufi, Fabio; Grado, Aniello; Marconi, Lorenzo; Stanga, Ruggero; Stolzi, Francesco; Visco, Massimo
2013-06-01
We present a dynamical model for the double torsion pendulum nicknamed “PETER,” where one torsion pendulum hangs in cascade, but off axis, from the other. The dynamics of interest in these devices lies around the torsional resonance, that is at very low frequencies (mHz). However, we find that, in order to properly describe the forced motion of the pendulums, also other modes must be considered, namely swinging and bouncing oscillations of the two suspended masses, that resonate at higher frequencies (Hz). Although the system has obviously 6+6 degrees of freedom, we find that 8 are sufficient for an accurate description of the observed motion. This model produces reliable estimates of the response to generic external disturbances and actuating forces or torques. In particular, we compute the effect of seismic floor motion (“tilt” noise) on the low frequency part of the signal spectra and show that it properly accounts for most of the measured low frequency noise.
A convolutional neural network neutrino event classifier
Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.
2016-09-01
Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.
A Construction of MDS Quantum Convolutional Codes
NASA Astrophysics Data System (ADS)
Zhang, Guanghui; Chen, Bocong; Li, Liangchen
2015-09-01
In this paper, two new families of MDS quantum convolutional codes are constructed. The first one can be regarded as a generalization of [36, Theorem 6.5], in the sense that we do not assume that q ≡ 1 (mod 4). More specifically, we obtain two classes of MDS quantum convolutional codes with parameters: (i) [(q^2+1, q^2-4i+3, 1; 2, 2i+2)]_q, where q ≥ 5 is an odd prime power and 2 ≤ i ≤ (q-1)/2; (ii) , where q is an odd prime power of the form q = 10m+3 or 10m+7 (m ≥ 2), and 2 ≤ i ≤ 2m-1.
Deep Convolutional Neural Networks for large-scale speech tasks.
Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana
2015-04-01
Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks. Specifically, we focus on how many convolutional layers are needed, what an appropriate number of hidden units is, and what the best pooling strategy is. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results in these 3 tasks.
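Two of the ingredients named above, a convolutional layer with ReLU and inverted dropout, can be sketched as a minimal NumPy forward pass. The sizes here (40 filterbank features, 8 kernels of width 9) are illustrative only, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """One convolutional layer along the feature axis (e.g. log-mel
    bands) followed by ReLU: each kernel slides over the input vector."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)
    return np.maximum(windows @ kernels.T, 0.0)  # (positions, n_kernels)

def dropout(a, rate, training=True):
    """Inverted dropout: randomly zero activations during training and
    rescale the survivors so the expected activation is unchanged."""
    if not training:
        return a
    mask = rng.random(a.shape) >= rate
    return a * mask / (1.0 - rate)

x = rng.standard_normal(40)            # one frame of 40 filterbank features
kernels = rng.standard_normal((8, 9))  # 8 kernels spanning 9 frequency bands
h = dropout(conv1d_relu(x, kernels), rate=0.5)
print(h.shape)  # (32, 8)
```

Convolving along frequency is what lets the layer absorb spectral shifts; this is also why speaker-adapted features, which break locality in frequency, need the special handling the paper investigates.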
Shell model nuclear matrix elements for competing mechanisms contributing to double beta decay
Horoi, Mihai
2013-12-30
Recent progress in the shell model approach to the nuclear matrix elements for the double beta decay process is presented. This includes nuclear matrix elements for competing mechanisms to neutrinoless double beta decay, a comparison between the closure and non-closure approximations for ⁴⁸Ca, and an updated shell model analysis of nuclear matrix elements for the double beta decay of ¹³⁶Xe.
Quantum convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng
2014-12-01
In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.
Multiple deep convolutional neural networks averaging for face alignment
NASA Astrophysics Data System (ADS)
Zhang, Shaohua; Yang, Hua; Yin, Zhouping
2015-05-01
Face alignment is critical for face recognition, and the deep learning-based method shows promise for solving such issues, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shape. However, most existing deep learning-based approaches are complicated and quite time-consuming during training. We propose a compact face alignment method for fast training without decreasing its accuracy. The rectified linear unit is employed, which allows all networks approximately five times faster convergence than a tanh neuron. A deep convolutional neural network (DCNN) with eight learnable layers, based on local response normalization and a padding convolutional layer (PCL), is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while showing that only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built based on the DCNNs and the model combination scheme. Extensive experiments validate the effectiveness of our method and demonstrate comparable accuracy with state-of-the-art methods on the BioID, Labeled Face Parts in the Wild, and Helen datasets.
Applying the Post-Modern Double ABC-X Model to Family Food Insecurity
ERIC Educational Resources Information Center
Hutson, Samantha; Anderson, Melinda; Swafford, Melinda
2015-01-01
This paper develops the argument that using the Double ABC-X model in family and consumer sciences (FCS) curricula is a way to educate nutrition and dietetics students regarding a family's perceptions of food insecurity. The Double ABC-X model incorporates ecological theory as a basis to explain family stress and the resulting adjustment and…
Blind separation of convolutive sEMG mixtures based on independent vector analysis
NASA Astrophysics Data System (ADS)
Wang, Xiaomei; Guo, Yina; Tian, Wenyan
2015-12-01
An independent vector analysis (IVA) method based on a variable-step gradient algorithm is proposed in this paper. In accordance with the physiological properties of sEMG, the IVA model is applied to the frequency-domain separation of convolutive sEMG mixtures to extract motor unit action potential information from sEMG signals. The decomposition capability of the proposed method is compared to that of independent component analysis (ICA), and experimental results show that the variable-step gradient IVA method outperforms ICA in the blind separation of convolutive sEMG mixtures.
Convolutional Neural Network Based Fault Detection for Rotating Machinery
NASA Astrophysics Data System (ADS)
Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie
2016-09-01
Vibration analysis is a well-established technique for condition monitoring of rotating machines, as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually-engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis, and crest factor, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults, such as outer-raceway faults and lubrication degradation, are considered, as are healthy bearings and rotor imbalance. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.
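The manually-engineered features the article contrasts with (RMS, kurtosis, crest factor) are easy to state, and a small illustration shows why impulsive bearing faults raise the latter two. The signal and spike amplitude below are hypothetical:

```python
import math

def rms(x):
    """Root-mean-square amplitude of a signal."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def kurtosis(x):
    """Fourth standardized moment; ~1.5 for a pure sinusoid,
    higher when the signal contains impulsive spikes."""
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x) / len(x)
    return sum((v - m) ** 4 for v in x) / (len(x) * var ** 2)

def crest_factor(x):
    """Peak amplitude relative to RMS."""
    return max(abs(v) for v in x) / rms(x)

# A healthy-looking sinusoid vs. the same signal with periodic impulses,
# a crude stand-in for a bearing-defect signature.
healthy = [math.sin(2 * math.pi * k / 32) for k in range(256)]
faulty = [v + (4.0 if k % 64 == 0 else 0.0) for k, v in enumerate(healthy)]

print(kurtosis(healthy) < kurtosis(faulty))            # True
print(crest_factor(healthy) < crest_factor(faulty))    # True
```

The feature-learning approach in the article aims to discover discriminative statistics like these directly from raw vibration data instead of hand-picking them.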
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
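For context, the restricted (n,1) class mentioned above can be encoded in a few lines. This sketch uses the standard rate-1/2 encoder with generator polynomials (7, 5) in octal, a common textbook example rather than a code taken from the article:

```python
# Minimal rate-1/2 (n=2, k=1) convolutional encoder with constraint
# length K=3 and octal generators (7, 5) -- an illustrative choice.
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)  # shift register
        for g in gens:
            out.append(bin(state & g).count("1") % 2)  # parity of tapped bits
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit produces n = 2 output bits, hence "rate 1/n"; the general (n,k) codes the article treats replace the single shift register with k parallel ones, which is where the algebraic theory becomes harder.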
Applications of convolution voltammetry in electroanalytical chemistry.
Bentley, Cameron L; Bond, Alan M; Hollenkamp, Anthony F; Mahon, Peter J; Zhang, Jie
2014-02-18
The robustness of convolution voltammetry for determining accurate values of the diffusivity (D), bulk concentration (C_b), and stoichiometric number of electrons (n) has been demonstrated by applying the technique to a series of electrode reactions in molecular solvents and room temperature ionic liquids (RTILs). In acetonitrile, the relatively minor contribution of nonfaradaic current facilitates analysis with macrodisk electrodes; thus moderate scan rates can be used without the need to perform background subtraction to quantify the diffusivity of iodide [D = 1.75 (±0.02) × 10^-5 cm^2 s^-1] in this solvent. In the RTIL 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide, background subtraction is necessary at a macrodisk electrode but can be avoided at a microdisk electrode, thereby simplifying the analytical procedure and allowing the diffusivity of iodide [D = 2.70 (±0.03) × 10^-7 cm^2 s^-1] to be quantified. Use of a convolutive procedure which simultaneously allows D and nC_b values to be determined is also demonstrated. Three conditions under which a technique of this kind may be applied are explored and are related to electroactive species which display slow dissolution kinetics, undergo a single multielectron transfer step, or contain multiple noninteracting redox centers, using ferrocene in an RTIL, 1,4-dinitro-2,3,5,6-tetramethylbenzene, and an alkynylruthenium trimer, respectively, as examples. The results highlight the advantages of convolution voltammetry over steady-state techniques such as rotating disk electrode voltammetry and microdisk electrode voltammetry, as it is not restricted by the mode of diffusion (planar or radial), hence removing limitations on solvent viscosity, electrode geometry, and voltammetric scan rate.
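The convolution underlying this technique is the semi-integral of the current, i.e. its convolution with 1/sqrt(pi*t). A generic numerical sketch using the Gruenwald (G1) recursion commonly used for semi-integration, checked against the analytic semi-integral of a constant current; this is an illustration, not the authors' implementation:

```python
import numpy as np

def semiintegrate(current, dt):
    """Semi-integration (convolution with 1/sqrt(pi*t)) via the
    Gruenwald G1 algorithm: M_n = sqrt(dt) * sum_k w_k * I_{n-k},
    with w_0 = 1 and w_k = w_{k-1} * (k - 1/2) / k."""
    n = len(current)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 0.5) / k  # recursive Gruenwald weights
    return np.sqrt(dt) * np.array(
        [np.dot(w[: i + 1][::-1], current[: i + 1]) for i in range(n)]
    )

# Analytic check: the semi-integral of a constant current I0 is
# 2 * I0 * sqrt(t / pi).
dt, n = 1e-3, 2000
I = np.ones(n)
M = semiintegrate(I, dt)
t = dt * np.arange(1, n + 1)
print(abs(M[-1] - 2 * np.sqrt(t[-1] / np.pi)))  # small discretization error
```

For a diffusion-limited electrode reaction this convolved current plateaus at n*F*A*C_b*sqrt(D), which is how the technique extracts D and n*C_b from a voltammogram.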
Bacterial colony counting by Convolutional Neural Networks.
Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto
2015-01-01
Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, yet fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates, and which achieved an overall accuracy of 92.8% on a large challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, making it a promising approach for many related applications.
Convolution neural networks for ship type recognition
NASA Astrophysics Data System (ADS)
Rainey, Katie; Reeder, John D.; Corelli, Alexander G.
2016-05-01
Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.
QCDNUM: Fast QCD evolution and convolution
NASA Astrophysics Data System (ADS)
Botje, M.
2011-02-01
The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Unpolarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Inputs to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in unpolarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.
Program summary
Program title: QCDNUM version 17.00
Catalogue identifier: AEHV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence
No. of lines in distributed program, including test data, etc.: 45 736
No. of bytes in distributed program, including test data, etc.: 911 569
Distribution format: tar.gz
Programming language: Fortran-77
Computer: All
Operating system: All
RAM: Typically 3 Mbytes
Classification: 11.5
Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
Description of a quantum convolutional code.
Ollivier, Harold; Tillich, Jean-Pierre
2003-10-24
We describe a quantum error correction scheme aimed at protecting a flow of quantum information over long distance communication. It is largely inspired by the theory of classical convolutional codes which are used in similar circumstances in classical communication. The particular example shown here uses the stabilizer formalism. We provide an explicit encoding circuit and its associated error estimation algorithm. The latter gives the most likely error over any memoryless quantum channel, with a complexity growing only linearly with the number of encoded qubits.
Convolution formulations for non-negative intensity.
Williams, Earl G
2013-08-01
Previously unknown spatial convolution formulas for a variant of the active normal intensity in planar coordinates have been derived that use measured pressure or normal velocity near-field holograms to construct a positive-only (outward) intensity distribution in the plane, quantifying the areas of the vibrating structure that produce radiation to the far-field. This is an extension of the outgoing-only (unipolar) intensity technique recently developed for arbitrary geometries by Steffen Marburg. The method is applied independently to pressure and velocity data measured in a plane close to the surface of a point-driven, unbaffled rectangular plate in the laboratory. It is demonstrated that the sound producing regions of the structure are clearly revealed using the derived formulas and that the spatial resolution is limited to a half-wavelength. A second set of formulas called the hybrid-intensity formulas are also derived which yield a bipolar intensity using a different spatial convolution operator, again using either the measured pressure or velocity. It is demonstrated from the experiment results that the velocity formula yields the classical active intensity and the pressure formula an interesting hybrid intensity that may be useful for source localization. Computations are fast and carried out in real space without Fourier transforms into wavenumber space. PMID:23927105
Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Hunter, Craig A.
1999-01-01
An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle, for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios (NPRs) and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring depends on convolution location, Mach number, boattail angle, and NPR. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was only minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.
New quantum MDS-convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Li, Fengwei; Yue, Qin
2015-12-01
In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.
A ternary model for double-emulsion formation in a capillary microfluidic device.
Park, Jang Min; Anderson, Patrick D
2012-08-01
To predict double-emulsion formation in a capillary microfluidic device, a ternary diffuse-interface model is presented. The formation of double emulsions involves complex interfacial phenomena of a three-phase fluid system, where each component can have different physical properties. We use the Navier-Stokes/Cahn-Hilliard model for a general ternary system, where the hydrodynamics is coupled with the thermodynamics of the phase field variables. Our model predicts important features of the double-emulsion formation which was observed experimentally by Utada et al. [Utada et al., Science, 2005, 308, 537]. In particular, our model predicts both the dripping and jetting regimes as well as the transition between those two regimes by changing the flow rate conditions. We also demonstrate that a double emulsion having multiple inner drops can be formed when the outer interface is more stable than the inner interface. PMID:22592893
An improvement to computational efficiency of the drain current model for double-gate MOSFET
NASA Astrophysics Data System (ADS)
Zhou, Xing-Ye; Zhang, Jian; Zhou, Zhi-Ze; Zhang, Li-Ning; Ma, Chen-Yue; Wu, Wen; Zhao, Wei; Zhang, Xing
2011-09-01
As a connection between the process and the circuit design, a device model is greatly desired for emerging devices such as the double-gate MOSFET. Time efficiency is one of the most important requirements for device modeling. In this paper, an improvement to the computational efficiency of the drain current model for double-gate MOSFETs is presented, and different calculation methods are compared and discussed. The results show that the calculation speed of the improved model is substantially enhanced. A two-dimensional device simulation is performed to verify the improved model. Furthermore, the model is implemented into the HSPICE circuit simulator in Verilog-A for practical application.
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
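The spatially unconstrained Wiener filter that the small kernels are benchmarked against can be sketched in one dimension, along with the idea of truncating it to a small kernel applied by direct convolution. The band-limited scene, Gaussian PSF, and assumed noise-to-signal ratio below are illustrative choices, not the paper's imaging-system model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Band-limited correlated "scene" standing in for the scene statistics.
X = np.fft.fft(rng.standard_normal(n))
X[np.abs(np.fft.fftfreq(n)) > 0.15] = 0.0
x = np.real(np.fft.ifft(X))

# Gaussian point-spread function (zero-phase), blur, and additive noise.
psf = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 2.0)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))
y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + 0.05 * rng.standard_normal(n)

# Unconstrained Wiener filter with an assumed noise/signal power ratio.
W = np.conj(H) / (np.abs(H) ** 2 + 0.01)
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * W))

# Truncating the filter's impulse response to a few taps gives a small,
# spatially constrained kernel usable with fast direct convolution.
w_full = np.real(np.fft.fftshift(np.fft.ifft(W)))
w_small = w_full[n // 2 - 3 : n // 2 + 4]  # 7-tap kernel
x_small = np.convolve(y, w_small, mode="same")
print(np.mean((x_hat - x) ** 2), np.mean((y - x) ** 2))
```

The paper's contribution is stronger than this truncation: it solves for the mean-square-optimal values of the constrained taps directly, rather than clipping the unconstrained filter.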
A 3D Model of Double-Helical DNA Showing Variable Chemical Details
ERIC Educational Resources Information Center
Cady, Susan G.
2005-01-01
Since the first DNA model was created approximately 50 years ago using molecular models, students and teachers have been building simplified DNA models from various practical materials. A 3D double-helical DNA model, made by placing beads on a wire and stringing beads through holes in plastic canvas, is described. Suggestions are given to enhance…
Computational modeling of electrophotonics nanomaterials: Tunneling in double quantum dots
Vlahovic, Branislav; Filikhin, Igor
2014-10-06
Single electron localization and tunneling in double quantum dots (DQDs) and rings (DQRs), and in particular the localized-delocalized states and their spectral distributions, are considered in dependence on the geometry of the DQDs (DQRs). The effect of violation of the symmetry of the DQD geometry on the tunneling is studied in detail. The cases of regular and chaotic geometries are considered. It is shown that a small violation of symmetry drastically affects the localization of the electron, and that anti-crossing of the levels is the mechanism of tunneling between the localized and delocalized states in DQRs.
Double scaling limit for matrix models with nonanalytic potentials
NASA Astrophysics Data System (ADS)
Shcherbina, Mariya
2008-03-01
We study the double scaling limit for unitary invariant ensembles of random matrices with nonanalytic potentials and find the asymptotic expansion for the entries of the corresponding Jacobi matrix. Our approach is based on the perturbation expansion for the string equations. The first order perturbation terms of the Jacobi matrix coefficients are expressed through the Hastings-McLeod solution of the Painleve II equation. The limiting reproducing kernel is expressed in terms of solutions of the Dirac system of differential equations with a potential defined by the first order terms of the expansion.
Shell-model analysis of the 136Xe double beta decay nuclear matrix elements.
Horoi, M; Brown, B A
2013-05-31
Neutrinoless double beta decay, if observed, could distinguish whether the neutrino is a Dirac or a Majorana particle, and it could be used to determine the absolute scale of the neutrino masses. 136Xe is one of the most promising candidates for observing this rare event. However, until recently there were no positive results for the allowed and less rare two-neutrino double beta decay mode. The small nuclear matrix element associated with the long half-life represents a challenge for nuclear structure models used for its calculation. We report a new shell-model analysis of the two-neutrino double beta decay of 136Xe, which takes into account all relevant nuclear orbitals necessary to fully describe the associated Gamow-Teller strength. We further use the new model to analyze the main contributions to the neutrinoless double beta decay matrix element, and show that they are also diminished.
Image statistics decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Pitt, G. H., III; Swanson, L.; Yuen, J. H.
1987-01-01
It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.
Childhood Epilepsy and Asthma: A Test of an Extension of the Double ABCX Model.
ERIC Educational Resources Information Center
Austin, Joan Kessner
The Double ABCX Model of Family Adjustment and Adaptation, a model that predicts adaptation to chronic stressors on the family, was extended by dividing it into attitudes, coping, and adaptation of parents and child separately, and by including variables relevant to child adaptation to epilepsy or asthma. The extended model was tested on 246…
Modeling and simulation of a double auction artificial financial market
NASA Astrophysics Data System (ADS)
Raberto, Marco; Cincotti, Silvano
2005-09-01
We present a double-auction artificial financial market populated by heterogeneous agents who trade one risky asset in exchange for cash. Agents issue random orders subject to budget constraints. The limit prices of orders may depend on past market volatility. Limit orders are stored in the book, whereas market orders give rise to immediate transactions. We show that fat tails and volatility clustering are recovered by means of very simple assumptions. We also investigate two important stylized facts of the limit order book, i.e., the distribution of waiting times between two consecutive transactions and the instantaneous price impact function. We show both theoretically and through simulations that if the order waiting times are exponentially distributed, then trading waiting times are also exponentially distributed.
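The order-book mechanics described above (random limit orders, immediate execution of crossing orders) can be sketched in a few lines. The order-flow process and parameters here are illustrative, not the paper's calibration:

```python
import random

# Toy double-auction book: agents post random limit orders around the
# last price; an order that crosses the opposite best quote trades
# immediately (a market-order-like execution).
random.seed(1)

bids, asks, trades = [], [], []
price = 100.0
for _ in range(5000):
    side = random.choice(("buy", "sell"))
    limit = price * (1 + random.gauss(0, 0.01))  # limit price near last trade
    if side == "buy":
        if asks and limit >= min(asks):          # crosses best ask: trade
            p = min(asks); asks.remove(p)
            trades.append(p); price = p
        else:
            bids.append(limit)                   # rests in the book
    else:
        if bids and limit <= max(bids):          # crosses best bid: trade
            p = max(bids); bids.remove(p)
            trades.append(p); price = p
        else:
            asks.append(limit)

returns = [trades[i + 1] / trades[i] - 1 for i in range(len(trades) - 1)]
print(len(trades), "trades, last price", round(price, 2))
```

Even this stripped-down mechanism produces an endogenous trade-price series; the paper's richer version (budget constraints, volatility-dependent limit prices, order waiting times) is what recovers fat tails and volatility clustering.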
Chu, Yizhuo; Wang, Dongxing; Zhu, Wenqi; Crozier, Kenneth B
2011-08-01
The strong coupling between localized surface plasmons and surface plasmon polaritons in a double resonance surface enhanced Raman scattering (SERS) substrate is described by a classical coupled oscillator model. The effects of the particle density, the particle size and the SiO2 spacer thickness on the coupling strength are experimentally investigated. We demonstrate that by tuning the geometrical parameters of the double resonance substrate, we can readily control the resonance frequencies and tailor the SERS enhancement spectrum. PMID:21934853
Semileptonic decays of double heavy baryons in a relativistic constituent three-quark model
Faessler, Amand; Gutsche, Thomas; Lyubovitskij, Valery E.; Ivanov, Mikhail A.; Koerner, Juergen G.
2009-08-01
We study the semileptonic decays of double-heavy baryons using a manifestly Lorentz covariant constituent three-quark model. We present complete results on transition form factors between double-heavy baryons for finite values of the heavy quark/baryon masses and in the heavy quark symmetry limit, which is valid at and close to zero recoil. Decay rates are calculated and compared to each other in the full theory, keeping masses finite, and also in the heavy quark limit.
NASA Astrophysics Data System (ADS)
Patel, Ajay M.; Joshi, Anand Y.
2016-10-01
This paper deals with the nonlinear vibration analysis of a double-walled carbon nanotube based mass sensor with a curvature factor, or waviness, which is doubly clamped at a source and a drain. The nonlinear vibrational behaviour of a double-walled carbon nanotube excited harmonically near its primary resonance is considered. The double-walled carbon nanotube is excited by the addition of a harmonic excitation force. The modelling accounts for stretching of the mid-plane and for damping. The equation of motion involves four nonlinear terms for the inner and outer tubes of the DWCNT due to the curved geometry and the stretching of the central plane arising from the boundary conditions. The vibrational behaviour of the double-walled carbon nanotube with different surface deviations along its axis is analyzed in terms of time responses, Poincaré maps and fast Fourier transform diagrams. The appearance of instability and chaos in the dynamic response is observed as the curvature factor of the double-walled carbon nanotube is changed. Period doubling and intermittency are observed as the pathways to chaos. The regions of periodic, sub-harmonic and chaotic behaviour are clearly seen to depend on the added mass and the curvature factor of the double-walled carbon nanotube. Poincaré maps and frequency spectra are used to demonstrate the diversity of the system behaviour. As the curvature factor increases, the system excitation increases, resulting in an increase of the vibration amplitude with a reduction in excitation frequency.
Boundary conditions and the generalized metric formulation of the double sigma model
NASA Astrophysics Data System (ADS)
Ma, Chen-Te
2015-09-01
The double sigma model with strong constraints is equivalent to the ordinary sigma model upon imposing a self-duality relation. The gauge symmetries are the diffeomorphism and the one-form gauge transformation with the strong constraints. We consider boundary conditions in the double sigma model in three ways. The first way is to modify the Dirichlet and Neumann boundary conditions with a fully O(D, D) description from double gauge fields. We compute the one-loop β function for constant background fields to find the low-energy effective theory without using the strong constraints. The low-energy theory can also have O(D, D) invariance, as does the double sigma model. The second way is to construct different boundary conditions from the projectors. The third way is to combine the antisymmetric background field with the field strength to redefine an O(D, D) generalized metric. We use this generalized metric to reconstruct a consistent double sigma model with classical and quantum equivalence.
Dynamic Modelling of a Double-Pendulum Gantry Crane System Incorporating Payload
NASA Astrophysics Data System (ADS)
Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.
2011-06-01
The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.
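The double-pendulum effect central to this model can be illustrated by integrating the standard point-mass double-pendulum equations of motion. Treating the hook and payload as point masses on a fixed pivot is a simplifying assumption for the sketch, not the paper's full crane model (which includes the moving trolley):

```python
import numpy as np

# Hook-plus-payload as a classical double pendulum (illustrative
# masses and cable lengths, not values from the paper).
g = 9.81
m1, m2, l1, l2 = 5.0, 2.0, 1.0, 0.5  # hook mass, payload mass, cable lengths

def deriv(s):
    """Standard point-mass double-pendulum accelerations."""
    t1, w1, t2, w2 = s
    d = t1 - t2
    den = 2 * m1 + m2 - m2 * np.cos(2 * d)
    a1 = (-g * (2 * m1 + m2) * np.sin(t1) - m2 * g * np.sin(t1 - 2 * t2)
          - 2 * np.sin(d) * m2 * (w2**2 * l2 + w1**2 * l1 * np.cos(d))) / (l1 * den)
    a2 = (2 * np.sin(d) * (w1**2 * l1 * (m1 + m2) + g * (m1 + m2) * np.cos(t1)
          + w2**2 * l2 * m2 * np.cos(d))) / (l2 * den)
    return np.array([w1, a1, w2, a2])

def rk4(s, dt):
    k1 = deriv(s); k2 = deriv(s + dt / 2 * k1)
    k3 = deriv(s + dt / 2 * k2); k4 = deriv(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([0.1, 0.0, 0.05, 0.0])  # small initial sway angles (rad)
dt = 1e-3
hist = [s]
for _ in range(5000):                # 5 s of undamped motion
    s = rk4(s, dt)
    hist.append(s)
print("max hook sway:", max(abs(h[0]) for h in hist))
```

For small initial angles the sway stays bounded but the two angles exchange energy, the coupling that makes anti-sway control of a double-pendulum crane harder than that of a single pendulum.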
ERIC Educational Resources Information Center
Jacobs, Paul I.; White, Margaret N.
The present study was undertaken to assess whether training that was known to produce transfer within the Cognition of Figural Relations (CFR) domain of Guilford's Structure-of-Intellect model would also produce transfer to other operations in Guilford's model. Fifty subjects, matched for pretest score on a double classification task, were…
A test of the double-shearing model of flow for granular materials
Savage, J.C.; Lockner, D.A.
1997-01-01
The double-shearing model of flow attributes plastic deformation in granular materials to cooperative slip on conjugate Coulomb shears (surfaces upon which the Coulomb yield condition is satisfied). The strict formulation of the double-shearing model then requires that the slip lines in the material coincide with the Coulomb shears. Three different experiments that approximate simple shear deformation in granular media appear to be inconsistent with this strict formulation. For example, the orientation of the principal stress axes in a layer of sand driven in steady, simple shear was measured subject to the assumption that the Coulomb failure criterion was satisfied on some surfaces (orientation unspecified) within the sand layer. The orientation of the inferred principal compressive axis was then compared with the orientations predicted by the double-shearing model. The strict formulation of the model [Spencer, 1982] predicts that the principal stress axes should rotate in a sense opposite to that inferred from the experiments. A less restrictive formulation of the double-shearing model by de Josselin de Jong [1971] does not completely specify the solution but does prescribe limits on the possible orientations of the principal stress axes. The orientations of the principal compression axis inferred from the experiments are probably within those limits. An elastoplastic formulation of the double-shearing model [de Josselin de Jong, 1988] is reasonably consistent with the experiments, although quantitative agreement was not attained. Thus we conclude that the double-shearing model may be a viable law to describe deformation of granular materials, but the macroscopic slip surfaces will not in general coincide with the Coulomb shears.
Metaheuristic Algorithms for Convolution Neural Network.
Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni
2016-01-01
A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738
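Of the three metaheuristics, simulated annealing is the simplest to sketch. Here it minimizes a toy quadratic in place of CNN weights, purely to show the accept/cool loop; the temperature schedule and step size are illustrative:

```python
import math, random

random.seed(0)

def loss(x, y):
    return (x - 3) ** 2 + (y + 1) ** 2  # toy objective, minimum at (3, -1)

x, y = 0.0, 0.0
T = 1.0                                  # initial temperature
best = loss(x, y)
for step in range(20000):
    nx, ny = x + random.gauss(0, 0.1), y + random.gauss(0, 0.1)
    dE = loss(nx, ny) - loss(x, y)
    # Always accept improvements; accept uphill moves with
    # probability exp(-dE/T), which shrinks as T cools.
    if dE < 0 or random.random() < math.exp(-dE / T):
        x, y = nx, ny
    T *= 0.9995                          # geometric cooling schedule
    best = min(best, loss(x, y))
print("best loss:", best)
```

In the paper's setting the state is the CNN parameter vector and the "energy" is the training loss, but the accept/cool structure is exactly this.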
Double and single pion photoproduction within a dynamical coupled-channels model
Kamano, H.; Julia-Diaz, B.; Lee, T.-S. H.; Matsuyama, A.; Sato, T.
2009-12-15
Within a dynamical coupled-channels model that has already been fixed by analyzing the data of the πN → πN and γN → πN reactions, we present the predicted double pion photoproduction cross sections up to the second resonance region, W < 1.7 GeV. The roles played by the different mechanisms within our model in determining both the single and double pion photoproduction reactions are analyzed, focusing on the effects attributable to the direct γN → ππN mechanism, the interplay between the resonant and nonresonant amplitudes, and the coupled-channels effects. The model parameters that can be determined most effectively in the combined studies of both the single and double pion photoproduction data are identified for future studies.
Double and single pion photoproduction within a dynamical coupled-channels model
Hiroyuki Kamano; Julia-Diaz, Bruno; Lee, T. -S. H.; Matsuyama, Akihiko; Sato, Toru
2009-12-16
Within a dynamical coupled-channels model which has already been fixed from analyzing the data of the πN → πN and γN → πN reactions, we present the predicted double pion photoproduction cross sections up to the second resonance region, W < 1.7 GeV. The roles played by the different mechanisms within our model in determining both the single and double pion photoproduction reactions are analyzed, focusing on the effects due to the direct γN → ππN mechanism, the interplay between the resonant and non-resonant amplitudes, and the coupled-channels effects. As a result, the model parameters which can be determined most effectively in the combined studies of both the single and double pion photoproduction data are identified for future studies.
Fuzzy Logic Module of Convolutional Neural Network for Handwritten Digits Recognition
NASA Astrophysics Data System (ADS)
Popko, E. A.; Weinstein, I. A.
2016-08-01
Optical character recognition is one of the important issues in the field of pattern recognition. This paper presents a method for recognizing handwritten digits based on a convolutional neural network model. An integrated fuzzy logic module based on a structural approach was developed; the proposed system architecture adjusts the output of the neural network to improve the quality of symbol identification. It was shown that the proposed algorithm is flexible, and a high recognition rate of 99.23% was achieved.
Convolution-based estimation of organ dose in tube current modulated CT
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Segars, W. P.; Dixon, R. L.; Samei, Ehsan
2015-03-01
Among the various metrics that quantify radiation dose in computed tomography (CT), organ dose is one of the most representative quantities reflecting patient-specific radiation burden. Accurate estimation of organ dose requires one to effectively model the patient anatomy and the irradiation field. As illustrated in previous studies, the patient anatomy factor can be modeled using a library of computational phantoms with representative body habitus. However, modeling the irradiation field can be practically challenging, especially for CT exams performed with tube current modulation (TCM). The central challenge is to effectively quantify the scatter irradiation field created by the dynamic change of tube current. In this study, we present a convolution-based technique to effectively quantify the primary and scatter irradiation field for TCM examinations. The organ dose for a given clinical patient can then be rapidly determined using the convolution-based method, a patient-matching technique, and a library of computational phantoms. A total of 58 adult patients were included in this study (age range: 18-70 y.o., weight range: 60-180 kg). One computational phantom was created based on the clinical images of each patient. Each patient was optimally matched against one of the remaining 57 computational phantoms using a leave-one-out strategy. For each computational phantom, the organ dose coefficients (CTDIvol-normalized organ dose) under fixed tube current were simulated using a validated Monte Carlo simulation program. These organ dose coefficients were multiplied by a scaling factor, (CTDIvol)organ,convolution, that quantifies the regional irradiation field. The convolution-based organ dose was compared with the organ dose simulated from the Monte Carlo program with TCM profiles explicitly modeled on the original phantom created from the patient images. The estimation error was within 10% across all organs and modulation profiles for abdominopelvic examination. This strategy
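As a concrete illustration of the convolution idea in the abstract above, the sketch below blurs a per-slice CTDIvol profile with a scatter kernel, averages the resulting field over an organ's z-extent, and scales by a CTDIvol-normalized organ dose coefficient. All numbers, the kernel shape, and the function names are invented for illustration; the paper's actual kernels and coefficients come from Monte Carlo simulation.

```python
def convolve1d(signal, kernel):
    """Full discrete convolution of two lists (no external dependencies)."""
    n, m = len(signal), len(kernel)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def organ_dose_estimate(ctdi_profile, scatter_kernel, organ_mask, h_organ):
    """Hypothetical sketch: blur the per-slice CTDIvol profile with a scatter
    kernel, average it over the organ's z-extent, and scale by the organ
    dose coefficient h_organ."""
    field = convolve1d(ctdi_profile, scatter_kernel)
    # Trim the 'full' convolution back to the original slice grid
    # (kernel assumed centred and of odd length).
    pad = (len(scatter_kernel) - 1) // 2
    field = field[pad:pad + len(ctdi_profile)]
    regional = sum(f for f, m in zip(field, organ_mask) if m) / sum(organ_mask)
    return regional * h_organ

# Toy TCM profile over 9 slices, a normalized triangular scatter kernel,
# and an organ occupying slices 3-5 (all values are made up).
profile = [4, 5, 6, 8, 10, 8, 6, 5, 4]
kernel = [0.25, 0.5, 0.25]
mask = [0, 0, 0, 1, 1, 1, 0, 0, 0]
dose = organ_dose_estimate(profile, kernel, mask, h_organ=1.2)
```

The blurred field over the organ slices is (8 + 9 + 8)/3, so the estimate here evaluates to 10.0.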
Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.
Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian
2016-10-01
In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images, such as stereo depth perception, depth estimation from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNNs) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNNs and continuous CRFs. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of a continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or extra information injected. In our case, the integral of the partition function can be calculated in closed form, so we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting the depths of a test image is highly efficient, as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches. PMID:26660697
A SPICE model of double-sided Si microstrip detectors
Candelori, A.; Paccagnella, A.; Bonin, F.
1996-12-31
We have developed a SPICE model for the ohmic side of AC-coupled Si microstrip detectors with interstrip isolation via field plates. The interstrip isolation has been measured under various conditions by varying the field plate voltage. Simulations have been compared with experimental data in order to determine the values of the model parameters for different voltages applied to the field plates. The model correctly predicts the frequency dependence of the coupling between adjacent strips. Furthermore, we have used this model to study signal propagation along the detector when a current signal is injected into a strip. Only electrical coupling is considered here, without any contribution from charge sharing due to carrier diffusion. For this purpose, the AC pads of the strips have been connected to read-out electronics and the current signal has been injected into a DC pad. Good agreement between measurements and simulations has been reached for the central strip and the first neighbors. Experimental tests and computer simulations have been performed for four different strip and field plate layouts, in order to investigate how the detector geometry affects the parameters of the SPICE model and the signal propagation.
Synthesising Primary Reflections by Marchenko Redatuming and Convolutional Interferometry
NASA Astrophysics Data System (ADS)
Curtis, A.
2015-12-01
Standard active-source seismic processing and imaging steps, such as velocity analysis and reverse time migration, usually provide best results when all reflected waves in the input data are primaries (waves that reflect only once). Multiples (recorded waves that reflect multiple times) represent a source of coherent noise in data that must be suppressed to avoid imaging artefacts. Consequently, multiple-removal methods have been a principal direction of active-source seismic research for decades. We describe a new method to estimate primaries directly, which obviates the need for multiple removal. Primaries are constructed within convolutional interferometry by combining first-arriving events of up-going and direct-wave down-going Green's functions to virtual receivers in the subsurface. The required up-going wavefields to virtual receivers along discrete subsurface boundaries can be constructed using Marchenko redatuming. Crucially, this is possible without detailed models of the Earth's subsurface velocity structure: similarly to most migration techniques, the method only requires surface reflection data and estimates of direct (non-reflected) arrivals between subsurface sources and the acquisition surface. The method is demonstrated on a stratified synclinal model. It is shown both to improve reverse time migration compared to standard methods, and to be particularly robust against errors in the reference velocity model used.
Accelerating Very Deep Convolutional Networks for Classification and Detection.
Zhang, Xiangyu; Zou, Jianhua; He, Kaiming; Sun, Jian
2016-10-01
This paper aims to accelerate the test-time computation of convolutional neural networks (CNNs), especially very deep CNNs [1] that have substantially impacted the computer vision community. Unlike previous methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We develop an effective solution to the resulting nonlinear optimization problem without the need of stochastic gradient descent (SGD). More importantly, while previous methods mainly focus on optimizing one or two layers, our nonlinear method enables an asymmetric reconstruction that reduces the rapidly accumulated error when multiple (e.g., ≥10) layers are approximated. For the widely used very deep VGG-16 model [1], our method achieves a whole-model speedup of 4× with merely a 0.3 percent increase of top-5 error in ImageNet classification. Our 4×-accelerated VGG-16 model also shows a graceful accuracy degradation for object detection when plugged into the Fast R-CNN detector [2]. PMID:26599615
Double-semion topological order from exactly solvable quantum dimer models
NASA Astrophysics Data System (ADS)
Qi, Yang; Gu, Zheng-Cheng; Yao, Hong
2015-10-01
We construct a generalized quantum dimer model on two-dimensional nonbipartite lattices, including the triangular lattice, the star lattice, and the kagome lattice. At the Rokhsar-Kivelson (RK) point, we obtain its exact ground states that are shown to be a fully gapped quantum spin liquid with the double-semion topological order. The ground-state wave function of such a model at the RK point is a superposition of dimer configurations with a nonlocal sign structure determined by counting the number of loops in the transition graph. We explicitly demonstrate the double-semion topological order in the ground states by showing the semionic statistics of monomer excitations. We also discuss possible implications of such double-semion resonating valence bond states to candidate quantum spin-liquid systems discovered experimentally and numerically in the past few years.
NASA Astrophysics Data System (ADS)
Bhartia, Mini; Chatterjee, Arun Kumar
2015-04-01
A 2D model for the potential distribution in silicon film is derived for a symmetrical double gate MOSFET in weak inversion. This 2D potential distribution model is used to analytically derive an expression for the subthreshold slope and threshold voltage. A drain current model for lightly doped symmetrical DG MOSFETs is then presented by considering weak and strong inversion regions including short channel effects, series source to drain resistance and channel length modulation parameters. These derived models are compared with the simulation results of the SILVACO (Atlas) tool for different channel lengths and silicon film thicknesses. Lastly, the effect of the fixed oxide charge on the drain current model has been studied through simulation. It is observed that the obtained analytical models of symmetrical double gate MOSFETs are in good agreement with the simulated results for a channel length to silicon film thickness ratio greater than or equal to 2.
A double epidemic model for the SARS propagation
Ng, Tuen Wai; Turinici, Gabriel; Danchin, Antoine
2003-01-01
Background An epidemic of a Severe Acute Respiratory Syndrome (SARS) caused by a new coronavirus has spread from the Guangdong province to the rest of China and to the world, with a puzzling contagion behavior. It is important both for predicting the future of the present outbreak and for implementing effective prophylactic measures, to identify the causes of this behavior. Results In this report, we show first that the standard Susceptible-Infected-Removed (SIR) model cannot account for the patterns observed in various regions where the disease spread. We develop a model involving two superimposed epidemics to study the recent spread of the SARS in Hong Kong and in the region. We explore the situation where these epidemics may be caused either by a virus and one or several mutants that changed its tropism, or by two unrelated viruses. This has important consequences for the future: the innocuous epidemic might still be there and generate, from time to time, variants that would have properties similar to those of SARS. Conclusion We find that, in order to reconcile the existing data and the spread of the disease, it is convenient to suggest that a first milder outbreak protected against the SARS. Regions that had not seen the first epidemic, or that were affected simultaneously with the SARS suffered much more, with a very high percentage of persons affected. We also find regions where the data appear to be inconsistent, suggesting that they are incomplete or do not reflect an appropriate identification of SARS patients. Finally, we could, within the framework of the model, fix limits to the future development of the epidemic, allowing us to identify landmarks that may be useful to set up a monitoring system to follow the evolution of the epidemic. The model also suggests that there might exist a SARS precursor in a large reservoir, prompting for implementation of precautionary measures when the weather cools down. PMID:12964944
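The two-superimposed-epidemics idea can be sketched numerically. The toy model below is a hedged illustration, not the authors' fitted model: it integrates two fully independent SIR systems by forward Euler and adds their infected fractions, ignoring the cross-protection mechanism the paper proposes, and all rates are invented.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the standard SIR equations (fractions)."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

def double_epidemic(beta1, gamma1, beta2, gamma2, i0=1e-4, days=200, dt=0.1):
    """Superimpose two independent SIR outbreaks and return the combined
    infected-fraction curve over time."""
    s1, i1, r1 = 1.0 - i0, i0, 0.0
    s2, i2, r2 = 1.0 - i0, i0, 0.0
    combined = []
    for _ in range(int(days / dt)):
        s1, i1, r1 = sir_step(s1, i1, r1, beta1, gamma1, dt)
        s2, i2, r2 = sir_step(s2, i2, r2, beta2, gamma2, dt)
        combined.append(i1 + i2)
    return combined

# One faster epidemic (R0 = 2) superimposed on a milder one (R0 = 1.4).
curve = double_epidemic(beta1=0.5, gamma1=0.25, beta2=0.35, gamma2=0.25)
```

Because the two outbreaks have different growth rates, the combined curve rises, peaks, and decays with a shape no single SIR epidemic reproduces, which is the qualitative point of the two-epidemic analysis.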
NASA Astrophysics Data System (ADS)
Yan-hui, Xin; Sheng, Yuan; Ming-tang, Liu; Hong-xia, Liu; He-cai, Yuan
2016-03-01
The two-dimensional models for symmetrical double-material double-gate (DM-DG) strained Si (s-Si) metal-oxide semiconductor field effect transistors (MOSFETs) are presented. The surface potential and the surface electric field expressions have been obtained by solving Poisson’s equation. The models of threshold voltage and subthreshold current are obtained based on the surface potential expression. The surface potential and the surface electric field are compared with those of single-material double-gate (SM-DG) MOSFETs. The effects of different device parameters on the threshold voltage and the subthreshold current are demonstrated. The analytical models give deep insight into the device parameters design. The analytical results obtained from the proposed models show good matching with the simulation results using DESSIS. Project supported by the National Natural Science Foundation of China (Grant Nos. 61376099, 11235008, and 61205003).
Noise-enhanced convolutional neural networks.
Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart
2016-06-01
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives.
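The noise-injection mechanism described above can be illustrated on a much smaller model. The sketch below is a loose illustration, not the Noisy CNN algorithm itself: the single logistic unit, the toy data, and the placement of the noise are all invented for demonstration. It simply adds zero-mean Gaussian noise to the output error signal during gradient descent.

```python
import math
import random

def train_logistic_with_output_noise(data, labels, noise_sd=0.05, lr=0.5,
                                     epochs=200, seed=1):
    """Gradient descent on a single logistic unit with zero-mean Gaussian
    noise injected into the output error signal at every update."""
    rng = random.Random(seed)
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))            # sigmoid output
            err = (p - y) + rng.gauss(0.0, noise_sd)  # noisy error signal
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Toy separable task (logical OR) standing in for an image dataset.
data = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
labels = [0, 1, 1, 1]
w, b = train_logistic_with_output_noise(data, labels)
```

This shows only where output-layer noise enters the update; whether a given noise sample helps or hurts is what the paper's hyperplane condition characterizes.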
Convolution-based estimation of organ dose in tube current modulated CT
Tian, Xiaoyu; Segars, W Paul; Dixon, Robert L; Samei, Ehsan
2016-01-01
Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460–7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18–70 years, weight range: 60–180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (hOrgan) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)organ,convolution with the organ dose coefficients (hOrgan). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using a Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and the dose simulated using the TCM Monte Carlo program was quantified. We further compared the
The frictional flow of a dense granular material based on the dilatant double shearing model
Zhu, H.; Mehrabadi, M.M.; Massoudi, M.C.
2007-01-01
Slow flow of granular materials, which typically occurs during the emptying of industrial storage hoppers and bins, has great industrial relevance. In the present study, we have employed our newly developed dilatant double shearing model [H. Zhu, M.M. Mehrabadi, M. Massoudi, Incorporating the effects of fabric in the dilatant double shearing model for granular materials, Int. J. Plast. 22 (2006) 628-653] to study the slow flow of a frictional, dense granular material. Although most models pertain only to the fully developed granular flow, the application of the dilatant double shearing model is shown to be valid from the onset of granular flow to the fully developed granular flow. In this paper, we use the finite element program ABAQUS/Explicit to numerically simulate the granular Couette flow and the frictional granular flow in a silo. For the granular Couette flow, the relative density variation and the velocity profile obtained by using the dilatant double shearing model are in good quantitative agreement with those obtained from a DEM simulation. For the frictional flow in a silo, the major principal stress directions are obtained at various time steps after the onset of silo discharge. We find that, in the hopper zone, the arching of the granular material between the sloping hopper walls is clearly demonstrated by the change in direction of the major principal stress. We also compare the pressure distribution along the wall before and after the onset of silo discharge. The numerical results show that the dilatant double shearing model is capable of capturing the essential features of the frictional granular flow.
Creating a Double-Spring Model to Teach Chromosome Movement during Mitosis & Meiosis
ERIC Educational Resources Information Center
Luo, Peigao
2012-01-01
The comprehension of chromosome movement during mitosis and meiosis is essential for understanding genetic transmission, but students often find this process difficult to grasp in a classroom setting. I propose a "double-spring model" that incorporates a physical demonstration and can be used as a teaching tool to help students understand this…
Toward Understanding Stress in Ministers' Families: An Application of the Double ABCX Model.
ERIC Educational Resources Information Center
Ostrander, Diane L.; Henry, Carolyn S.
Recent literature indicates that ministers' families face not only the normative developmental stressors of other families, but an additional set of stressors created by the interface between the family and the church systems. Based upon the Double ABCX model of family stress, particular ministers' families will vary in their ability to adapt to…
Family Stress and Adaptation to Crises: A Double ABCX Model of Family Behavior.
ERIC Educational Resources Information Center
McCubbin, Hamilton I.; Patterson, Joan M.
Recent developments in family stress and coping research and a review of data and observations of families in a war-induced crisis situation led to an investigation of the relationship between a stressor and family outcomes. The study, based on the Double ABCX Model in which A (the stressor event) interacts with B (the family's crisis-meeting…
Double Higgs production in the Two Higgs Doublet Model at the linear collider
Arhrib, Abdesslam; Benbrik, Rachid; Chiang, C.-W.
2008-04-21
We study double Higgs-strahlung production at the future Linear Collider in the framework of the Two Higgs Doublet Models through the following channels: e⁺e⁻ → φᵢφⱼZ, with φᵢ = h⁰, H⁰, A⁰. All these processes are sensitive to triple Higgs couplings. Hence observations of them provide information on the triple Higgs couplings that helps in reconstructing the scalar potential. We also discuss the double Higgs-strahlung e⁺e⁻ → h⁰h⁰Z in the decoupling limit, where h⁰ mimics the SM Higgs boson.
Ergodic transition in a simple model of the continuous double auction.
Radivojević, Tijana; Anselmi, Jonatha; Scalas, Enrico
2014-01-01
We study a phenomenological model for the continuous double auction, whose aggregate order process is equivalent to two independent M/M/1 queues. The continuous double auction defines a continuous-time random walk for trade prices. The conditions for ergodicity of the auction are derived and, as a consequence, three possible regimes in the behavior of prices and logarithmic returns are observed. In the ergodic regime, prices are unstable and one can observe a heteroskedastic behavior in the logarithmic returns. On the contrary, non-ergodicity triggers stability of prices, even if two different regimes can be seen.
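The queueing picture behind the auction model can be sketched with the embedded chain of an M/M/1 queue, which makes the two regimes easy to reproduce: ergodic when arrivals are slower than services, non-ergodic (the order book drifts without bound) otherwise. A minimal sketch, with all rates invented for illustration:

```python
import random

def mm1_queue_lengths(lam, mu, steps=100000, seed=42):
    """Embedded-chain simulation of an M/M/1 queue: at each event an
    arrival occurs with probability lam/(lam+mu); otherwise a departure
    occurs if the queue is nonempty. Returns the sampled queue lengths."""
    rng = random.Random(seed)
    n, lengths = 0, []
    p_arrival = lam / (lam + mu)
    for _ in range(steps):
        if rng.random() < p_arrival:
            n += 1
        elif n > 0:
            n -= 1
        lengths.append(n)
    return lengths

# Ergodic regime: arrival rate below service rate (rho = 0.5 < 1).
buy_side = mm1_queue_lengths(lam=1.0, mu=2.0)
# Non-ergodic regime: arrivals outpace services; the queue drifts upward.
sell_side = mm1_queue_lengths(lam=2.0, mu=1.0)
```

In the ergodic case the queue repeatedly empties and its mean stays bounded; in the non-ergodic case the queue length grows roughly linearly, mirroring the stable-price regime discussed in the abstract.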
SCAN-based hybrid and double-hybrid density functionals from models without fitted parameters.
Hui, Kerwin; Chai, Jeng-Da
2016-01-28
By incorporating the nonempirical strongly constrained and appropriately normed (SCAN) semilocal density functional [J. Sun, A. Ruzsinszky, and J. P. Perdew, Phys. Rev. Lett. 115, 036402 (2015)] in the underlying expression of four existing hybrid and double-hybrid models, we propose one hybrid (SCAN0) and three double-hybrid (SCAN0-DH, SCAN-QIDH, and SCAN0-2) density functionals, which are free from any fitted parameters. The SCAN-based double-hybrid functionals consistently outperform their parent SCAN semilocal functional for self-interaction problems and noncovalent interactions. In particular, SCAN0-2, which includes about 79% of Hartree-Fock exchange and 50% of second-order Møller-Plesset correlation, is shown to be reliably accurate for a very diverse range of applications, such as thermochemistry, kinetics, noncovalent interactions, and self-interaction problems. PMID:26827209
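From the fractions quoted in the abstract (about 79% Hartree-Fock exchange and 50% second-order Møller-Plesset correlation), SCAN0-2 presumably follows the standard double-hybrid decomposition; the expression below is a sketch inferred from those numbers, not the authors' exact definition:

```latex
E_{xc}^{\text{SCAN0-2}}
  = a_x E_x^{\text{HF}} + (1 - a_x)\, E_x^{\text{SCAN}}
  + (1 - a_c)\, E_c^{\text{SCAN}} + a_c E_c^{\text{MP2}},
\qquad a_x \approx 0.79,\quad a_c = 0.50.
```

In this form the "free from fitted parameters" claim means that $a_x$ and $a_c$ are fixed by the underlying model construction rather than fit to benchmark data.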
Modelling and control of double-cone dielectric elastomer actuator
NASA Astrophysics Data System (ADS)
Branz, F.; Francesconi, A.
2016-09-01
Among various dielectric elastomer devices, cone actuators are of large interest for their multi-degree-of-freedom design. These objects combine the common advantages of dielectric elastomers (i.e. solid-state actuation, self-sensing capability, high conversion efficiency, light weight and low cost) with the possibility to actuate more than one degree of freedom in a single device. This feature has substantial potential applications in robotics, making cone actuators very attractive. This work focuses on rotational degrees of freedom, complementing the existing literature and improving the understanding of this aspect. Simple tools are presented for the performance prediction of the device: finite element method simulations and interpolating relations have been used to assess the actuator steady-state behaviour in terms of torque and rotation as a function of geometric parameters. Results are interpolated by fit relations accounting for all the relevant parameters. The obtained data are validated through comparison with experimental results: steady-state torque and rotation are determined at a given actuation voltage. In addition, the transient response to step input has been measured and, as a result, the voltage-to-torque and the voltage-to-rotation transfer functions are obtained. Experimental data are collected and used to validate the prediction capability of the transfer functions in terms of time response to step input and frequency response. The developed static and dynamic models have been employed to implement a feedback compensator that controls the device motion; the simulated behaviour is compared to experimental data, resulting in a maximum prediction error of 7.5%.
A diabatic state model for double proton transfer in hydrogen bonded complexes
McKenzie, Ross H.
2014-09-14
Four diabatic states are used to construct a simple model for double proton transfer in hydrogen bonded complexes. Key parameters in the model are the proton donor-acceptor separation R and the ratio, D₁/D₂, between the proton affinity of a donor with one and two protons. Depending on the values of these two parameters the model describes four qualitatively different ground state potential energy surfaces, having zero, one, two, or four saddle points. Only for the latter are there four stable tautomers. In the limit D₂ = D₁ the model reduces to two decoupled hydrogen bonds. As R decreases a transition can occur from a synchronous concerted to an asynchronous concerted to a sequential mechanism for double proton transfer.
Neutrinoless double beta decay in type I+II seesaw models
NASA Astrophysics Data System (ADS)
Borah, Debasish; Dasgupta, Arnab
2015-11-01
We study neutrinoless double beta decay in left-right symmetric extension of the standard model with type I and type II seesaw origin of neutrino masses. Due to the enhanced gauge symmetry as well as extended scalar sector, there are several new physics sources of neutrinoless double beta decay in this model. Ignoring the left-right gauge boson mixing and heavy-light neutrino mixing, we first compute the contributions to neutrinoless double beta decay for type I and type II dominant seesaw separately and compare with the standard light neutrino contributions. We then repeat the exercise by considering the presence of both type I and type II seesaw, having non-negligible contributions to light neutrino masses and show the difference in results from individual seesaw cases. Assuming the new gauge bosons and scalars to be around a TeV, we constrain different parameters of the model including both heavy and light neutrino masses from the requirement of keeping the new physics contribution to neutrinoless double beta decay amplitude below the upper limit set by the GERDA experiment and also satisfying bounds from lepton flavor violation, cosmology and colliders.
Innervation of the renal proximal convoluted tubule of the rat
Barajas, L.; Powers, K.
1989-12-01
Experimental data suggest the proximal tubule as a major site of neurogenic influence on tubular function. The functional and anatomical axial heterogeneity of the proximal tubule prompted this study of the distribution of innervation sites along the early, mid, and late proximal convoluted tubule (PCT) of the rat. Serial section autoradiograms, with tritiated norepinephrine serving as a marker for monoaminergic nerves, were used in this study. Freehand clay models and graphic reconstructions of proximal tubules permitted a rough estimation of the location of the innervation sites along the PCT. In the subcapsular nephrons, the early PCT (first third) was devoid of innervation sites with most of the innervation occurring in the mid (middle third) and in the late (last third) PCT. Innervation sites were found in the early PCT in nephrons located deeper in the cortex. In juxtamedullary nephrons, innervation sites could be observed on the PCT as it left the glomerulus. This gradient of PCT innervation can be explained by the different tubulovascular relationships of nephrons at different levels of the cortex. The absence of innervation sites in the early PCT of subcapsular nephrons suggests that any influence of the renal nerves on the early PCT might be due to an effect of neurotransmitter released from renal nerves reaching the early PCT via the interstitium and/or capillaries.
Fast convolution-superposition dose calculation on graphics hardware.
Hissoiny, Sami; Ozell, Benoît; Després, Philippe
2009-06-01
The numerical calculation of dose is central to treatment planning in radiation therapy and is at the core of optimization strategies for modern delivery techniques. In a clinical environment, dose calculation algorithms are required to be accurate and fast. The accuracy is typically achieved through the integration of patient-specific data and extensive beam modeling, which generally results in slower algorithms. In order to alleviate execution speed problems, the authors have implemented a modern dose calculation algorithm on a massively parallel hardware architecture. More specifically, they have implemented a convolution-superposition photon beam dose calculation algorithm on a commodity graphics processing unit (GPU). They have investigated a simple porting scenario as well as slightly more complex GPU optimization strategies. They have achieved speed improvement factors ranging from 10 to 20 times with GPU implementations compared to central processing unit (CPU) implementations, with higher values corresponding to larger kernel and calculation grid sizes. In all cases, they preserved the numerical accuracy of the GPU calculations with respect to the CPU calculations. These results show that streaming architectures such as GPUs can significantly accelerate dose calculation algorithms and suggest benefits for numerically intensive processes such as optimization strategies, in particular for complex delivery techniques such as IMRT and arc therapy.
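The operation being ported can be illustrated in one dimension: dose is the energy released in the medium (TERMA) convolved with an energy-deposition kernel. This sketch only shows the convolution-superposition idea on a toy grid; the function name, the 1D setting, and the symmetric kernel are illustrative assumptions, not the clinical algorithm.

```python
def convolve_superposition_1d(terma, kernel):
    """Toy 1D convolution-superposition: dose[i] accumulates energy
    released at neighboring points, spread by a deposition kernel.
    (Written as a correlation; identical to convolution for the
    symmetric kernels used here.)"""
    n, k = len(terma), len(kernel)
    half = k // 2
    dose = [0.0] * n
    for i in range(n):
        for j in range(k):
            src = i + j - half          # neighboring release point
            if 0 <= src < n:            # ignore contributions off-grid
                dose[i] += terma[src] * kernel[j]
    return dose
```

A point release with a normalized kernel simply spreads the deposited energy over the neighborhood, which is the behavior the GPU implementation must reproduce at full 3D scale.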
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-01
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681
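The Q3 score quoted in this abstract is simply per-residue three-state (helix/strand/coil) accuracy; a minimal sketch of the metric, not the authors' evaluation code:

```python
def q3_accuracy(pred, true):
    """Fraction of residues whose 3-state secondary-structure label
    (e.g. H/E/C) is predicted correctly -- the Q3 score."""
    if len(pred) != len(true) or not true:
        raise ValueError("sequences must be non-empty and equal length")
    return sum(p == t for p, t in zip(pred, true)) / len(true)
```

Q8 accuracy is the same computation over the eight-state label alphabet.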
Predicting Semantic Descriptions from Medical Images with Convolutional Neural Networks.
Schlegl, Thomas; Waldstein, Sebastian M; Vogl, Wolf-Dieter; Schmidt-Erfurth, Ursula; Langs, Georg
2015-01-01
Learning representative computational models from medical imaging data requires large training data sets. Often, voxel-level annotation is unfeasible for sufficient amounts of data. An alternative to manual annotation is to use the enormous amount of knowledge encoded in imaging data and corresponding reports generated during clinical routine. Weakly supervised learning approaches can link volume-level labels to image content but suffer from the typical label distributions in medical imaging data where only a small part consists of clinically relevant abnormal structures. In this paper we propose to use a semantic representation of clinical reports as a learning target that is predicted from imaging data by a convolutional neural network. We demonstrate how we can learn accurate voxel-level classifiers based on weak volume-level semantic descriptions on a set of 157 optical coherence tomography (OCT) volumes. We specifically show how semantic information increases classification accuracy for intraretinal cystoid fluid (IRC), subretinal fluid (SRF) and normal retinal tissue, and how the learning algorithm links semantic concepts to image content and geometry.
A quantum algorithm for Viterbi decoding of classical convolutional codes
NASA Astrophysics Data System (ADS)
Grice, Jon R.; Meyer, David A.
2015-07-01
We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance codes with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm with a large state space (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the total number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
Double Higgs production at LHC, see-saw type-II and Georgi-Machacek model
Godunov, S. I.; Vysotsky, M. I.; Zhemchugov, E. V.
2015-03-15
The double Higgs production in models with isospin-triplet scalars is studied. It is shown that in the see-saw type-II model, the mode with an intermediate heavy scalar, pp → H + X → 2h + X, may have a cross section comparable with that in the Standard Model. In the Georgi-Machacek model, this cross section could be much larger than in the Standard Model because the vacuum expectation value of the triplet can be large.
Evaluation of convolutional neural networks for visual recognition.
Nebauer, C
1998-01-01
Convolutional neural networks provide an efficient method to constrain the complexity of feedforward neural networks by weight sharing and restriction to local connections. This network topology has been applied in particular to image classification when sophisticated preprocessing is to be avoided and raw images are to be classified directly. In this paper two variations of convolutional networks, the neocognitron and a modification of the neocognitron, are compared with classifiers based on fully connected feedforward layers (i.e., multilayer perceptron, nearest neighbor classifier, auto-encoding network) with respect to their visual recognition performance. Besides the original neocognitron, a modification is proposed which combines perceptron-type neurons with the localized network structure of the neocognitron. Instead of training convolutional networks by time-consuming error backpropagation, in this work a modular procedure is applied whereby layers are trained sequentially from the input to the output layer in order to recognize features of increasing complexity. For a quantitative experimental comparison with standard classifiers, two very different recognition tasks have been chosen: handwritten digit recognition and face recognition. In the first example, on handwritten digit recognition, the generalization of convolutional networks is compared to that of fully connected networks. In several experiments the influence of variations of position, size, and orientation of digits is determined and the relation between training sample size and validation error is observed. In the second example, recognition of human faces is investigated under constrained and variable conditions with respect to face orientation and illumination, and the limitations of convolutional networks are discussed.
Explicit drain current model of junctionless double-gate field-effect transistors
NASA Astrophysics Data System (ADS)
Yesayan, Ashkhen; Prégaldiny, Fabien; Sallese, Jean-Michel
2013-11-01
This paper presents an explicit drain current model for the junctionless double-gate metal-oxide-semiconductor field-effect transistor. Analytical relationships for the channel charge densities and for the drain current are derived as explicit functions of applied terminal voltages and structural parameters. The model is validated with 2D numerical simulations for a large range of channel thicknesses and is found to be very accurate for doping densities exceeding 1018 cm-3, which are actually used for such devices.
Parallel double-plate capacitive proximity sensor modelling based on effective theory
Li, Nan; Zhu, Haiye; Wang, Wenyu; Gong, Yu
2014-02-15
A semi-analytical model for a double-plate capacitive proximity sensor is presented according to the effective theory. Three physical models are established to derive the final equation of the sensor. Measured data are used to determine the coefficients. The final equation is verified by using measured data. The average relative error of the calculated and the measured sensor capacitance is less than 7.5%. The equation can be used to provide guidance to engineering design of the proximity sensors.
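The starting point for such a semi-analytical model is the ideal parallel-plate formula plus fitted correction terms for non-ideal effects. This sketch is purely illustrative: `edge_coeff` stands in for a coefficient determined from measured data, as the abstract describes, and the square-plate perimeter assumption is ours.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0, edge_coeff=0.0):
    """Ideal parallel-plate capacitance plus a single fitted
    edge-effect term proportional to the plate perimeter."""
    c_ideal = EPS0 * eps_r * area_m2 / gap_m
    perimeter = 4.0 * math.sqrt(area_m2)   # assume square plates
    c_fringe = edge_coeff * EPS0 * eps_r * perimeter
    return c_ideal + c_fringe
```

With the correction coefficient fit to measurements, such a closed-form expression can track a measured capacitance-versus-distance curve to within a few percent, consistent with the <7.5% average error reported.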
Tonkin, J.W.; Balistrieri, L.S.; Murray, J.W.
2004-01-01
Manganese oxides are important scavengers of trace metals and other contaminants in the environment. The inclusion of Mn oxides in predictive models, however, has been difficult due to the lack of a comprehensive set of sorption reactions consistent with a given surface complexation model (SCM), and the discrepancies between published sorption data and predictions using the available models. The authors have compiled a set of surface complexation reactions for synthetic hydrous Mn oxide (HMO) using a two surface site model and the diffuse double layer SCM which complements databases developed for hydrous Fe(III) oxide, goethite and crystalline Al oxide. This compilation encompasses a range of data observed in the literature for the complex HMO surface and provides an error envelope for predictions not well defined by fitting parameters for single or limited data sets. Data describing surface characteristics and cation sorption were compiled from the literature for the synthetic HMO phases birnessite, vernadite and δ-MnO2. A specific surface area of 746 m2 g-1 and a surface site density of 2.1 mmol g-1 were determined from crystallographic data and considered fixed parameters in the model. Potentiometric titration data sets were adjusted to a pHIEP value of 2.2. Two site types (≡XOH and ≡YOH) were used. The fraction of total sites attributed to ≡XOH and the pKa2 values were optimized for each of 7 published potentiometric titration data sets using the computer program FITEQL 3.2. pKa2 values of 2.35 ± 0.077 (≡XOH) and 6.06 ± 0.040 (≡YOH) were determined at the 95% confidence level. The calculated average fraction of ≡XOH sites was 0.64, with high and low values of 1.0 and 0.24, respectively. The pKa2 values, site fractions and published cation sorption data were used subsequently to determine equilibrium surface complexation constants for Ba2+, Ca2+, Cd2+, Co2+, Cu2+, Mg2+, Mn2+, Ni2+, Pb2+, Sr2+ and Zn2+. In addition, average model parameters were used to predict additional
NASA Astrophysics Data System (ADS)
Bakry, A.; Abdulrhmann, S.; Ahmed, M.
2016-06-01
We theoretically model the dynamics of semiconductor lasers subject to double-reflector feedback. The proposed model is a new modification of the time-delay rate equations of semiconductor lasers under optical feedback that accounts for this type of double-reflector feedback. We examine the influence of adding the second reflector on the dynamical states induced by the single-reflector feedback: periodic oscillations, period doubling, and chaos. Regimes of both short and long external cavities are considered. The present analyses are done using the bifurcation diagram, temporal trajectory, phase portrait, and fast Fourier transform of the laser intensity. We show that adding the second reflector attracts the periodic and period-doubling oscillations and the chaos induced by the first reflector to a route to continuous-wave operation. During this operation, the periodic-oscillation frequency increases with strengthening the optical feedback. We show that the chaos induced by the double-reflector feedback is more irregular than that induced by the single-reflector feedback. The power spectrum of this chaos state does not reflect information on the geometry of the optical system, which then has potential for use in chaotic (secure) optical data encryption.
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
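For readers unfamiliar with the baseline the abstract compares against, a minimal hard-decision Viterbi decoder for a standard rate 1/2, constraint-length-3 code (generator polynomials 7 and 5 in octal) can be sketched as follows. This is the generic textbook algorithm on a 4-state trellis, not the Wyner-Ash code or the syndrome decoder of the paper.

```python
def viterbi_decode(bit_pairs, g=(0b111, 0b101)):
    """Hard-decision Viterbi decoding of a rate 1/2, K=3 convolutional
    code. `bit_pairs` is a list of received (bit, bit) tuples; returns
    the most likely information-bit sequence under Hamming distance."""
    n_states = 4  # 2^(K-1) states

    def step(state, bit):
        # 3-bit register: newest bit in the high position, then state
        reg = (bit << 2) | state
        out = tuple(bin(reg & gi).count("1") & 1 for gi in g)  # parity taps
        return reg >> 1, out

    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)   # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for r in bit_pairs:
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                ns, out = step(s, b)
                m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if m < new_metric[ns]:        # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]
```

This 4-state search is the kind of full trellis that, per the abstract, syndrome decoding can replace with a much smaller error trellis for high-rate codes.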
Error-trellis syndrome decoding techniques for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1985-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Glaucoma detection based on deep convolutional neural network.
Xiangyu Chen; Yanwu Xu; Damon Wing Kee Wong; Tien Yin Wong; Jiang Liu
2015-08-01
Glaucoma is a chronic and irreversible eye disease, which leads to deterioration in vision and quality of life. In this paper, we develop a deep learning (DL) architecture with convolutional neural network for automated glaucoma diagnosis. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images to discriminate between glaucoma and non-glaucoma patterns for diagnostic decisions. The proposed DL architecture contains six learned layers: four convolutional layers and two fully-connected layers. Dropout and data augmentation strategies are adopted to further boost the performance of glaucoma diagnosis. Extensive experiments are performed on the ORIGA and SCES datasets. The results show area under curve (AUC) of the receiver operating characteristic curve in glaucoma detection at 0.831 and 0.887 in the two databases, much better than state-of-the-art algorithms. The method could be used for glaucoma detection. PMID:26736362
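The AUC figures quoted here can be computed from raw classifier scores via the rank (Mann-Whitney) identity: the AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case. A generic sketch of the metric, not the authors' evaluation code:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    fraction of (positive, negative) pairs ranked correctly,
    counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one example of each class")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

The quadratic pair loop is fine for illustration; production code would use a sort-based O(n log n) formulation.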
Two dimensional convolute integers for machine vision and image recognition
NASA Technical Reports Server (NTRS)
Edwards, Thomas R.
1988-01-01
Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression generated, integer valued, zero phase shifting, convoluting, frequency sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators exhibit frequency-sensitive, scale-invariant feature-selection properties. Tasks such as boundary/edge enhancement and removal of noise or small pixel disturbances can readily be accomplished. For feature selection, tight band pass operators are essential. Results from test cases are given.
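A concrete example of a regression-generated, integer-valued operator is the 3x3 low-pass filter obtained by least-squares fitting a quadratic surface to each 3x3 neighborhood and replacing the center pixel with the fitted value. The integer weights below (times 1/9) follow from that standard Savitzky-Golay-style derivation; this is an illustration of the operator family, not the paper's own operator set.

```python
# Replacement-point smoothing operator: least-squares quadratic surface
# fit over a 3x3 neighborhood reduces to these integer weights / 9.
KERNEL = [[-1, 2, -1],
          [2, 5, 2],
          [-1, 2, -1]]
SCALE = 9

def convolve_interior(image):
    """Apply the operator non-recursively at interior pixels;
    border pixels are left unchanged in this sketch."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            acc = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    acc += KERNEL[di + 1][dj + 1] * image[i + di][j + dj]
            out[i][j] = acc / SCALE
    return out
```

Because the weights come from a surface fit, the operator is zero-phase and passes constants and linear ramps unchanged while attenuating high-frequency noise.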
Geomechanical Analysis with Rigorous Error Estimates for a Double-Porosity Reservoir Model
Berryman, J G
2005-04-11
A model of random polycrystals of porous laminates is introduced to provide a means for studying geomechanical properties of double-porosity reservoirs. Calculations on the resulting earth reservoir model can proceed semi-analytically for studies of either the poroelastic or transport coefficients. Rigorous bounds of the Hashin-Shtrikman type provide estimates of overall bulk and shear moduli, and thereby also provide rigorous error estimates for geomechanical constants obtained from up-scaling based on a self-consistent effective medium method. The influence of hidden (or presumed unknown) microstructure on the final results can then be evaluated quantitatively. Detailed descriptions of the use of the model and some numerical examples showing typical results for the double-porosity poroelastic coefficients of a heterogeneous reservoir are presented.
Predictive double-layer modeling of metal sorption in mine-drainage systems
Smith, K.S.; Plumlee, G.S.; Ranville, J.F.; Macalady, D.L.
1996-10-01
Previous comparison of predictive double-layer modeling and empirically derived metal-partitioning data has validated the use of the double-layer model to predict metal sorption reactions in iron-rich mine-drainage systems. The double-layer model subsequently has been used to model data collected from several mine-drainage sites in Colorado with diverse geochemistry and geology. This work demonstrates that metal partitioning between dissolved and sediment phases can be predictively modeled simply by knowing the water chemistry and the amount of suspended iron-rich particulates present in the system. Sorption on such iron-rich suspended sediments appears to control metal and arsenic partitioning between dissolved and sediment phases, with sorption on bed sediment playing a limited role. At pH > 5, Pb and As are largely sorbed by iron-rich suspended sediments and Cu is partially sorbed; Zn, Cd, and Ni usually remain dissolved throughout the pH range of 3 to 8.
Parity retransmission hybrid ARQ using rate 1/2 convolutional codes on a nonstationary channel
NASA Technical Reports Server (NTRS)
Lugand, Laurent R.; Costello, Daniel J., Jr.; Deng, Robert H.
1989-01-01
A parity retransmission hybrid automatic repeat request (ARQ) scheme is proposed which uses rate 1/2 convolutional codes and Viterbi decoding. A protocol is described which is capable of achieving higher throughputs than previously proposed parity retransmission schemes. The performance analysis is based on a two-state Markov model of a nonstationary channel. This model constitutes a first approximation to a nonstationary channel. The two-state channel model is used to analyze the throughput and undetected error probability of the protocol presented when the receiver has both an infinite and a finite buffer size. It is shown that the throughput improves as the channel becomes more bursty.
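The two-state Markov (good/bad) channel underlying the analysis can be sketched as a small Monte Carlo simulation. The frame-acceptance rate computed here is only a crude stand-in for the protocol's throughput, and every parameter name is our own illustrative choice.

```python
import random

def simulate_two_state_channel(p_gb, p_bg, e_good, e_bad,
                               n_frames, frame_len, seed=1):
    """Gilbert-Elliott-style two-state Markov channel: per bit, the
    state may switch (good->bad w.p. p_gb, bad->good w.p. p_bg), then
    a bit error occurs w.p. e_good or e_bad. Returns the fraction of
    frames received error-free."""
    rng = random.Random(seed)
    state_bad = False
    accepted = 0
    for _ in range(n_frames):
        frame_ok = True
        for _ in range(frame_len):
            # state transition first, then the bit-error draw
            if state_bad:
                if rng.random() < p_bg:
                    state_bad = False
            elif rng.random() < p_gb:
                state_bad = True
            if rng.random() < (e_bad if state_bad else e_good):
                frame_ok = False
        accepted += frame_ok
    return accepted / n_frames
```

Small p_gb and p_bg with e_bad >> e_good produce the bursty error clusters for which the abstract reports improved throughput.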
Zhu, H.; Mehrabadi, M.; Massoudi, M.
2007-04-25
In this paper, we consider the mechanical response of granular materials and compare the predictions of a hypoplastic model with those of a recently developed dilatant double shearing model which includes the effects of fabric. We implement the constitutive relations of the dilatant double shearing model and the hypoplastic model in the finite element program ABAQUS/Explicit and compare their predictions in triaxial compression and cyclic shear loading tests. Although the origins and the constitutive relations of the double shearing model and the hypoplastic model are quite different, we find that both models are capable of capturing typical behaviours of granular materials. This is significant because while hypoplasticity is phenomenological in nature, the double shearing model is based on a kinematic hypothesis and microstructural considerations, and can easily be calibrated through standard tests.
Geodesic acoustic mode in anisotropic plasmas using double adiabatic model and gyro-kinetic equation
Ren, Haijun; Cao, Jintao
2014-12-15
Geodesic acoustic mode in anisotropic tokamak plasmas is theoretically analyzed by using the double adiabatic model and the gyro-kinetic equation. The bi-Maxwellian distribution function for guiding-center ions is assumed to obtain a self-consistent form, yielding pressures satisfying the magnetohydrodynamic (MHD) anisotropic equilibrium condition. The double adiabatic model gives the dispersion relation of the geodesic acoustic mode (GAM), which agrees well with the one derived from the gyro-kinetic equation. The GAM frequency increases with the ratio of pressures, p⊥/p∥, and the Landau damping rate is dramatically decreased by p⊥/p∥. The MHD result shows a low-frequency zonal flow existing for all p⊥/p∥, while according to the kinetic dispersion relation, no low-frequency branch exists for p⊥/p∥ ≳ 2.
Relation of the double-ITCZ bias to the atmospheric energy budget in climate models
NASA Astrophysics Data System (ADS)
Adam, Ori; Schneider, Tapio; Brient, Florent; Bischoff, Tobias
2016-07-01
We examine how tropical zonal mean precipitation biases in current climate models relate to the atmospheric energy budget. Both hemispherically symmetric and antisymmetric tropical precipitation biases contribute to the well-known double-Intertropical Convergence Zone (ITCZ) bias; however, they have distinct signatures in the energy budget. Hemispherically symmetric biases in tropical precipitation are proportional to biases in the equatorial net energy input; hemispherically antisymmetric biases are proportional to the atmospheric energy transport across the equator. Both relations can be understood within the framework of recently developed theories. Atmospheric net energy input biases in the deep tropics shape both the symmetric and antisymmetric components of the double-ITCZ bias. Potential causes of these energetic biases and their variation across climate models are discussed.
Modelling of unsaturated water flow in double porosity media. An integrated approach.
NASA Astrophysics Data System (ADS)
Lewandowska, J.
2009-04-01
"Multi-scale, multi-components, multi-phases" are the key words that characterize the double porosity media, like fissured rocks or aggregated soils, subject to geo-environmental conditions. In relation to this context we present an integrated upscaling approach to the modelling of unsaturated water flow in double porosity media. This approach combines three issues: theoretical, numerical and experimental. In the theoretical part, the macroscopic model is derived by using the asymptotic homogenization method. It is assumed that the microstructure of the medium is composed of two porous domains of contrasted hydraulic parameters (macro- and micro-porosity), so that the water capillary pressure reaches equilibrium much faster in the highly than in the weakly conducting domain. Consequently, large local-scale pressure gradients arise, which significantly influence the macroscopic behaviour of the medium (local non-equilibrium). In this case, the macroscopic model consists of two coupled non-linear equations that have to be solved simultaneously. The homogenization model offers a complete description of the problem, including the definition of the effective parameters (in a general case anisotropic) and the domain of validity of the model. By the latter term we understand the set of underlying assumptions on the microstructure of the medium, the considered spatial and time scales, and the relations between the local hydraulic parameters and the forces driving the flow. All these assumptions are explicitly introduced via the estimation of the dimensionless parameters and the formulation of the appropriate boundary and interface conditions at the microscopic scale. For practical applications the model was generalized to take into account all possible situations (and the appropriate models) that can occurs during a flow process (local equilibrium/local non-equilibrium). The numerical implementation of double-porosity model requires a particular strategy, allowing for the
Double Folding Potential of Different Interaction Models for 16O + 12C Elastic Scattering
NASA Astrophysics Data System (ADS)
Hamada, Sh.; Bondok, I.; Abdelmoatmed, M.
2016-08-01
The elastic scattering angular distributions for the 16O + 12C nuclear system have been analyzed using double folding potentials from different interaction models: CDM3Y1, CDM3Y6, DDM3Y1 and BDM3Y1. We have extracted the renormalization factor N_r for each of the interaction models considered. The potential created by the BDM3Y1 interaction model has the shallowest depth, which reflects the need for a higher renormalization factor. The experimental angular distributions for the 16O + 12C nuclear system in the energy range 115.9-230 MeV exhibited unmistakable refractive features and the rainbow phenomenon.
NASA Astrophysics Data System (ADS)
Yuxiong, Cao; Zhi, Jin; Ji, Ge; Yongbo, Su; Xinyu, Liu
2009-12-01
A self-built, accurate and flexible large-signal model based on an analysis of the characteristics of InP double heterojunction bipolar transistors (DHBTs) is implemented as a seven-port symbolically defined device (SDD) in Agilent ADS. The model accounts for most physical phenomena, including the self-heating effect, Kirk effect, soft knee effect, base-collector capacitance and collector transit time. The validity and accuracy of the large-signal model are assessed by comparing simulations with measurements of DC characteristics and multi-bias small-signal S-parameters for InP DHBTs.
Single-trial EEG RSVP classification using convolutional neural networks
NASA Astrophysics Data System (ADS)
Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William
2016-05-01
Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion where physiological data from healthy subjects is combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision have been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.
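The first stage of a CNN for raw EEG, temporal filtering of each channel, can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' architecture; the filter widths, channel counts, and random inputs are hypothetical:

```python
import numpy as np

def temporal_conv_layer(eeg, kernels):
    """Minimal analogue of a CNN's first temporal layer for EEG.

    eeg     : (channels, samples) array of raw signals
    kernels : (n_filters, width) array of temporal filters
    Returns (n_filters, channels, samples - width + 1) feature maps
    after a ReLU nonlinearity.
    """
    n_f, width = kernels.shape
    ch, n = eeg.shape
    out = np.empty((n_f, ch, n - width + 1))
    for f in range(n_f):
        for c in range(ch):
            # 'valid' correlation of one channel with one filter
            out[f, c] = np.correlate(eeg[c], kernels[f], mode="valid")
    return np.maximum(out, 0.0)  # ReLU

# Toy input: 4 channels, 100 samples; 2 filters of width 5
rng = np.random.default_rng(0)
feats = temporal_conv_layer(rng.standard_normal((4, 100)),
                            rng.standard_normal((2, 5)))
```

In a real network the filters would be learned, and further spatial and pooling layers would follow before classification.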
Role of Double-Porosity Dual-Permeability Models for Multi-Resonance Geomechanical Systems
Berryman, J G
2005-05-18
It is known that Biot's equations of poroelasticity (Biot 1956; 1962) follow from a scale-up of the microscale equations of elasticity coupled to the Navier-Stokes equations for fluid flow (Burridge and Keller, 1981). Laboratory measurements by Plona (1980) have shown that Biot's equations indeed hold for simple systems (Berryman, 1980), but heterogeneous systems can have quite different behavior (Berryman, 1988). So the question arises: is there one level--or perhaps many levels--of scale-up needed to arrive at equations valid at the reservoir scale? And if so, do these equations take the form of Biot's equations or some other form? We will discuss these issues and show that the double-porosity dual-permeability equations (Berryman and Wang, 1995; Berryman and Pride, 2002; Pride and Berryman, 2003a,b; Pride et al., 2004) play a special role in the scale-up to equations describing multi-resonance reservoir behavior, for fluid pumping and geomechanics, as well as seismic wave propagation. The reason for the special significance of double-porosity models is that a multi-resonance system can never be adequately modeled using a single-resonance model, but can often be modeled with reasonable accuracy using a two-resonance model. Although ideally one would prefer to model multi-resonance systems using the correct numbers, locations, widths, and amplitudes of the resonances, data are often inadequate to resolve all these pertinent model parameters in this complex inversion task. When this is so, the double-porosity model is most useful, as it permits us to capture the highest and lowest detectable resonances of the system and then to interpolate through the middle range of frequencies.
Experiments and Modeling of Boric Acid Permeation through Double-Skinned Forward Osmosis Membranes.
Luo, Lin; Zhou, Zhengzhong; Chung, Tai-Shung; Weber, Martin; Staudt, Claudia; Maletzko, Christian
2016-07-19
Boron removal is one of the great challenges in modern wastewater treatment, owing to the unique small size and fast diffusion rate of neutral boric acid molecules. As forward osmosis (FO) membranes with a single selective layer are insufficient to reject boron, double-skinned FO membranes with boron rejection up to 83.9% were specially designed for boron permeation studies. The superior boron rejection properties of double-skinned FO membranes were demonstrated by theoretical calculations, and verified by experiments. The double-skinned FO membrane was fabricated using a sulfonated polyphenylenesulfone (sPPSU) polymer as the hydrophilic substrate and polyamide as the selective layer material via interfacial polymerization on top and bottom surfaces. A strong agreement between experimental data and modeling results validates the membrane design and confirms the success of model prediction. The effects of key parameters on boron rejection, such as boron permeability of both selective layers and structure parameter, were also investigated in-depth with the mathematical modeling. This study may provide insights not only for boron removal from wastewater, but also open up the design of next generation FO membranes to eliminate low-rejection molecules in wider applications.
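The benefit of a second selective layer can be illustrated with a simple series-resistance sketch under solution-diffusion assumptions. The permeability values are hypothetical, and the rejection formula R = Jw / (Jw + B) is a textbook simplification rather than the paper's full transport model:

```python
def series_permeability(b_top, b_bottom):
    """Effective solute permeability of two selective layers in series
    (steady state, equal solute flux through both layers):
        1 / B_eff = 1 / B_top + 1 / B_bottom
    """
    return 1.0 / (1.0 / b_top + 1.0 / b_bottom)

def rejection(water_flux, b_solute):
    """Simple solution-diffusion rejection estimate, ignoring
    concentration polarization: R = Jw / (Jw + B)."""
    return water_flux / (water_flux + b_solute)

# Hypothetical boron permeabilities of the two polyamide skins
b_eff = series_permeability(2.0, 2.0)   # halves the single-skin value
r_single = rejection(10.0, 2.0)
r_double = rejection(10.0, b_eff)       # higher boron rejection
```

Halving the effective solute permeability raises the predicted rejection, which is the qualitative mechanism behind the double-skinned design.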
Anomalous transport in discrete arcs and simulation of double layers in a model auroral circuit
NASA Technical Reports Server (NTRS)
Smith, Robert A.
1987-01-01
The evolution and long-time stability of a double layer (DL) in a discrete auroral arc requires that the parallel current in the arc, which may be considered uniform at the source, be diverted within the arc to charge the flanks of the U-shaped double layer potential structure. A simple model is presented in which this current redistribution is effected by anomalous transport based on electrostatic lower hybrid waves driven by the flank structure itself. This process provides the limiting constraint on the double layer potential. The flank charging may be represented as that of a nonlinear transmission line. A simplified model circuit, in which the transmission line is represented by a nonlinear impedance in parallel with a variable resistor, is incorporated in a one-dimensional simulation model to give the current density at the DL boundaries. Results are presented for the scaling of the DL potential as a function of the width of the arc and the saturation efficiency of the lower hybrid instability mechanism.
Flow visualization around a double wedge airfoil model with focusing schlieren system
NASA Astrophysics Data System (ADS)
Kashitani, Masashi; Yamaguchi, Yutaka
2006-03-01
In the present study, the aerodynamic characteristics of a double wedge airfoil model were investigated in transonic flow, using a shock tube as an intermittent wind tunnel. The driver and driven gases of the shock tube are dry air. The double wedge airfoil model has a span of 58 mm, a chord length c = 75 mm and a maximum thickness of 7.5 mm. The apex of the double wedge airfoil model is located at 35% of the chord length from the leading edge. The hot-gas Mach numbers range from 0.80 to 0.88, and the Reynolds numbers based on chord length are 3.11 × 10^5 to 3.49 × 10^5. The flow visualizations were performed by the sharp focusing schlieren method, which can visualize three-dimensional flow fields. The results show that the present system visualizes the transonic flow field more clearly than the previous system, and the shock-wave profiles at the center of the span in the test section are visualized.
Double-blind comparison of survival analysis models using a bespoke web system.
Taktak, A F G; Setzkorn, C; Damato, B E
2006-01-01
The aim of this study was to carry out a comparison of different linear and non-linear models from different centres on a common dataset in a double-blind manner to eliminate bias. The dataset was shared over the Internet using a secure bespoke environment called geoconda. The models evaluated included: (1) the Cox model, (2) the Log Normal model, (3) the Partial Logistic Spline, (4) the Partial Logistic Artificial Neural Network and (5) Radial Basis Function Networks. Graphical analysis of the various models against the Kaplan-Meier values was carried out in 3 survival groups in the test set, classified according to the TNM staging system. The discrimination value for each model was determined using the area under the ROC curve. Results showed that the Cox model tended towards optimism, whereas the partial logistic neural networks showed slight pessimism.
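The discrimination measure used here, the area under the ROC curve, can be computed directly from model scores via the Mann-Whitney statistic. A minimal illustration, not the study's own code:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case (ties count 1/2).
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation gives AUC = 1.0; chance-level scoring gives 0.5
perfect = auc([0.9, 0.8], [0.1, 0.2])
chance = auc([0.5], [0.5])
```

For survival models the scores would be the predicted risks within each group, evaluated against the observed outcomes.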
The role of convective model choice in calculating the climate impact of doubling CO2
NASA Technical Reports Server (NTRS)
Lindzen, R. S.; Hou, A. Y.; Farrell, B. F.
1982-01-01
The role of the parameterization of vertical convection in calculating the climate impact of doubling CO2 is assessed using both one-dimensional radiative-convective vertical models and the latitude-dependent Hadley-baroclinic model of Lindzen and Farrell (1980). Both the conventional 6.5 K/km and the moist-adiabat adjustments are compared with a physically based, cumulus-type parameterization. The model with parameterized cumulus convection has much less sensitivity than the 6.5 K/km adjustment model at low latitudes, a result that can to some extent be imitated by the moist-adiabat adjustment model. However, when averaged over the globe, the use of the cumulus-type parameterization in a climate model reduces sensitivity by only approximately 34% relative to models using 6.5 K/km convective adjustment. Interestingly, the use of the cumulus-type parameterization appears to eliminate the possibility of a runaway greenhouse.
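A hard convective adjustment of the kind compared here can be sketched as an iterative pass that caps the lapse rate at a critical value. This is a toy, non-energy-conserving illustration with hypothetical temperatures, not the models used in the paper:

```python
def convective_adjustment(temps, dz_km, lapse=6.5):
    """Repeatedly sweep a temperature profile (surface first, in K) and,
    wherever the layer lapse rate exceeds `lapse` (K/km), reset the
    upper level so the layer is exactly critical. A real scheme would
    also conserve the column's enthalpy; this toy version does not.
    """
    t = list(temps)
    max_drop = lapse * dz_km
    changed = True
    while changed:
        changed = False
        for i in range(len(t) - 1):
            if t[i] - t[i + 1] > max_drop + 1e-12:
                t[i + 1] = t[i] - max_drop
                changed = True
    return t

# A super-critical toy profile relaxed to the 6.5 K/km lapse rate
adjusted = convective_adjustment([300.0, 280.0, 260.0], dz_km=1.0)
```

The moist-adiabat variant would simply replace the fixed 6.5 K/km by a temperature-dependent critical lapse rate.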
Schulze-Halberg, Axel E-mail: xbataxel@gmail.com; Wang, Jie
2015-07-15
We obtain series solutions, the discrete spectrum, and supersymmetric partners for a quantum double-oscillator system. Its potential features a superposition of the one-parameter Mathews-Lakshmanan interaction and a one-parameter harmonic or inverse harmonic oscillator contribution. Furthermore, our results are transferred to a generalized Pöschl-Teller model that is isospectral to the double-oscillator system.
NASA Astrophysics Data System (ADS)
Singh, A.; Karsten, A.
2011-06-01
The accuracy of the calibration model for the single and double integrating sphere systems is compared for a white light system. A calibration model is created from a matrix of samples with known absorption and reduced scattering coefficients. In this instance the samples are made using different concentrations of intralipid and black ink. The total and diffuse transmittance and reflectance are measured on both setups, and the accuracy of each model is compared by evaluating the prediction errors of the calibration model for the different systems. Current results indicate that the single integrating sphere setup is more accurate than the double sphere method. This is based on the low prediction errors of the model for the single sphere system for a He-Ne laser as well as a white light source. The model still needs to be refined for more absorption factors. The prediction accuracies were then determined by extracting the optical properties of solid resin-based phantoms on each system. When these properties of the phantoms were used as input to the modelling software, excellent agreement between measured and simulated data was found for the single sphere system.
Die and telescoping punch form convolutions in thin diaphragm
NASA Technical Reports Server (NTRS)
1965-01-01
Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.
Face recognition: a convolutional neural-network approach.
Lawrence, S; Giles, C L; Tsoi, A C; Back, A D
1997-01-01
We present a hybrid neural-network approach to human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
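The SOM's role as a quantizer can be illustrated by its inference step: mapping each sampled vector to its best-matching code vector. A minimal NumPy sketch with hypothetical 2-D codebook values (training the map itself is omitted):

```python
import numpy as np

def som_quantize(samples, codebook):
    """Map each image-sample vector to the index of its best-matching
    codebook unit (nearest code vector in Euclidean distance) -- the
    quantization role the SOM plays in the hybrid system."""
    # squared Euclidean distances, shape (n_samples, n_codes)
    d = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Two hypothetical code vectors; each sample snaps to the nearer one
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
idx = som_quantize(np.array([[0.1, 0.0], [0.9, 1.2]]), codebook)
```

In the full system the resulting indices (or map coordinates) feed the convolutional network, giving it a compact, locally smooth input representation.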
Reply to 'Comment on 'Quantum convolutional error-correcting codes''
Chau, H.F.
2005-08-15
In their Comment, de Almeida and Palazzo [Phys. Rev. A 72, 026301 (2005)] discovered an error in my earlier paper concerning the construction of quantum convolutional codes [Phys. Rev. A 58, 905 (1998)]. This error can be repaired by modifying the method of code construction.
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. These estimates can then be used as preliminary inputs to various other numerical techniques that refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson, who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
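The Delogne-Kåsa estimator mentioned above reduces to a linear least-squares problem, which makes it easy to sketch. A minimal NumPy illustration on noise-free data (the paper's convolution-based MLE is a separate, more involved construction):

```python
import numpy as np

def kasa_fit(x, y):
    """Delogne-Kasa circle fit: linear least squares on
        x^2 + y^2 = 2*a*x + 2*b*y + c,
    giving centre (a, b) and radius sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a * a + b * b)

# Noise-free points on a circle of centre (1, 2) and radius 3
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
cx, cy, r = kasa_fit(1 + 3 * np.cos(t), 2 + 3 * np.sin(t))
```

On noisy circumferential points the DKE is known to be biased, which is exactly why it serves as a fast preliminary estimate to be refined by the MLE.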
2016-01-01
DNA double-strand breaks are lesions that form during metabolism, DNA replication and exposure to mutagens. When a double-strand break occurs one of a number of repair mechanisms is recruited, all of which have differing propensities for mutational events. Despite DNA repair being of crucial importance, the relative contribution of these mechanisms and their regulatory interactions remain to be fully elucidated. Understanding these mutational processes will have a profound impact on our knowledge of genomic instability, with implications across health, disease and evolution. Here we present a new method to model the combined activation of non-homologous end joining, single strand annealing and alternative end joining, following exposure to ionising radiation. We use Bayesian statistics to integrate eight biological data sets of double-strand break repair curves under varying genetic knockouts and confirm that our model is predictive by re-simulating and comparing to additional data. Analysis of the model suggests that there are at least three disjoint modes of repair, which we assign as fast, slow and intermediate. Our results show that when multiple data sets are combined, the rate for intermediate repair is variable amongst genetic knockouts. Further analysis suggests that the ratio between slow and intermediate repair depends on the presence or absence of DNA-PKcs and Ku70, which implies that non-homologous end joining and alternative end joining are not independent. Finally, we consider the proportion of double-strand breaks within each mechanism as a time series and predict activity as a function of repair rate. We outline how our insights can be directly tested using imaging and sequencing techniques and conclude that there is evidence of variable dynamics in alternative repair pathways. Our approach is an important step towards providing a unifying theoretical framework for the dynamics of DNA repair processes. PMID:27741226
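The disjoint fast, intermediate and slow modes identified here correspond to a mixture-of-exponentials repair curve. A minimal sketch, with hypothetical fractions and rates (the paper's Bayesian fitting machinery is not reproduced):

```python
import math

def remaining_dsbs(t, fractions, rates):
    """Fraction of double-strand breaks still unrepaired at time t,
    modelled as a mixture of first-order repair modes
    (e.g. fast / intermediate / slow):
        N(t) / N(0) = sum_i f_i * exp(-k_i * t),  with sum_i f_i = 1.
    """
    return sum(f * math.exp(-k * t) for f, k in zip(fractions, rates))

# Hypothetical split: 60% fast, 30% intermediate, 10% slow (rates /h)
frac_at_2h = remaining_dsbs(2.0, [0.6, 0.3, 0.1], [2.0, 0.3, 0.02])
```

Comparing fitted rates of such mixtures across genetic knockouts is what allows the relative activity of the repair pathways to be inferred from repair-curve data.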
NASA Astrophysics Data System (ADS)
Campolina, Bruno L.
The prediction of aircraft interior noise involves the vibroacoustic modelling of the fuselage with noise control treatments. This structure is composed of a stiffened metallic or composite panel, lined with a thermal and acoustic insulation layer (glass wool), and structurally connected via vibration isolators to a commercial lining panel (trim). This work aims at tailoring the noise control treatments, taking design constraints such as weight and space optimization into account. For this purpose, a representative aircraft double wall is modelled using the Statistical Energy Analysis (SEA) method. Laboratory excitations such as a diffuse acoustic field and a point force are addressed, and trends are derived for applications under in-flight conditions, considering turbulent boundary layer excitation. The effect of the porous layer compression is addressed first. In aeronautical applications, compression can result from the installation of equipment and cables. It is studied analytically and experimentally, using a single panel and a fibrous layer uniformly compressed over 100% of its surface. When compression increases, a degradation of the transmission loss of up to 5 dB for a 50% compression of the porous thickness is observed, mainly in the mid-frequency range (around 800 Hz). However, for realistic cases, the effect should be smaller since the compression rate is lower and compression occurs locally. Then the transmission through structural connections between panels is addressed using a four-pole approach that links the force-velocity pair at each side of the connection. The modelling integrates the experimental dynamic stiffness of the isolators, derived using an adapted test rig. The structural transmission is then experimentally validated and included in the double-wall SEA model as an equivalent coupling loss factor (CLF) between panels. The tested structures being flat, only axial transmission is addressed. Finally, the dominant sound transmission paths are
NASA Astrophysics Data System (ADS)
Tran Ngoc, T.; Lewandowska, J.; Vauclin, M.; Bertin, H.; Gentier, S.
2009-12-01
The complex processes of water flow and solute transport occurring in the subsurface environment have to be well modelled in order to protect water aquifers against contamination, to ensure the security of nuclear waste repositories or CO2 sequestration, and for the extraction of geothermal energy. Since natural geological formations are often heterogeneous at different scales, this heterogeneity leads to the preferential flow and transport observed in breakthrough curves, which is difficult to model. In such a case the concept of the “double-porosity medium”, originally introduced by Barenblatt et al. (1960), can be used. In this paper it was applied to a class of heterogeneous media (aggregated soils, fractured porous rocks) in which a strong contrast in the local pore size characteristics is manifested. It was assumed that the interactions/exchanges between the macro- and micro-porosity are responsible for solute spreading under local non-equilibrium conditions and contribute to the non-Fickian behaviour. This study presents a macroscopic dispersion model associated with unsaturated water flow, which was developed using the asymptotic homogenization method. This model consists of two equations describing the processes of solute transfer in the macro- and micro-porosity domains. A coupling between the two concentration fields can be seen in the model, which gives an early breakthrough and a long-tail effect. In order to enable the two-scale computations, the model was implemented in the commercial code COMSOL Multiphysics®. A particular strategy was proposed to take into account the micro-macro coupling. Finally, a series of tracer dispersion experiments in a double-porosity physical model was performed under unsaturated steady-state flow conditions. The double-porosity medium, presenting a periodic microstructure, was composed of a regular assemblage of sintered clayey spheres and a fine sand. The model validation was carried out in two different stages. In
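The macro/micro exchange that produces the early breakthrough and long tail can be caricatured by a first-order two-domain exchange. A toy explicit-Euler sketch with hypothetical parameter values, not the homogenized model itself:

```python
def exchange_step(c_macro, c_micro, alpha, w_macro, dt):
    """One explicit Euler step of first-order mass exchange between the
    macro-porosity (mobile) and micro-porosity (immobile) domains:
        d c_macro / dt = -alpha * (c_macro - c_micro) / w_macro
        d c_micro / dt = +alpha * (c_macro - c_micro) / (1 - w_macro)
    where w_macro is the macro-porosity volume fraction. This local
    non-equilibrium exchange is what delays part of the solute and
    produces the long tail in breakthrough curves."""
    q = alpha * (c_macro - c_micro) * dt
    return c_macro - q / w_macro, c_micro + q / (1.0 - w_macro)

# Tracer initially only in the mobile domain; relax towards equilibrium
cm, ci = 1.0, 0.0
for _ in range(2000):
    cm, ci = exchange_step(cm, ci, alpha=0.5, w_macro=0.4, dt=0.01)
```

Total mass w_macro*c_macro + (1 - w_macro)*c_micro is conserved at each step, and both concentrations converge to the mass-weighted mean.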
Double-stranded DNA organization in bacteriophage heads: An alternative toroid-based model
Hud, N.V.
1995-10-01
Studies of the organization of double-stranded DNA within bacteriophage heads during the past four decades have produced a wealth of data. However, despite the presentation of numerous models, the true organization of DNA within phage heads remains unresolved. The observations of toroidal DNA structures in electron micrographs of phage lysates have long been cited as support for the organization of DNA in a spool-like fashion. This particular model, like all other models, has not been found to be consistent with all available data. Recently, the authors proposed that DNA within toroidal condensates produced in vitro is organized in a manner significantly different from that suggested by the spool model. This new toroid model has allowed the development of an alternative model for DNA organization within bacteriophage heads that is consistent with a wide range of biophysical data. Here the authors propose that bacteriophage DNA is packaged in a toroid that is folded into a highly compact structure.
Text-Attentional Convolutional Neural Network for Scene Text Detection.
He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian
2016-06-01
Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results. PMID:27093723
A convolutional neural network approach for objective video quality assessment.
Le Callet, Patrick; Viard-Gaudin, Christian; Barba, Dominique
2006-09-01
This paper describes an application of neural networks to objective measurement methods designed to automatically assess the perceived quality of digital videos. This challenging issue aims to emulate human judgment and to replace very complex and time-consuming subjective quality assessment. Several metrics have been proposed in the literature to tackle this issue. They are based on a general framework that combines different stages, each of them addressing complex problems. The ambition of this paper is not to present a globally perfect quality metric, but rather to focus on an original way to use neural networks in such a framework in the context of a reduced-reference (RR) quality metric. In particular, we point out the interest of such a tool for combining features and pooling them in order to compute quality scores. The proposed approach solves some problems inherent to objective metrics that should predict the subjective quality score obtained using the single stimulus continuous quality evaluation (SSCQE) method. The latter has been adopted by the Video Quality Experts Group (VQEG) in its recently finalized reduced-reference and no-reference (RRNR-TV) test plan. The originality of this approach, compared to previous attempts to use neural networks for quality assessment, lies in the use of a convolutional neural network (CNN) that allows a continuous-time scoring of the video. Objective features are extracted on a frame-by-frame basis on both the reference and the distorted sequences; they are derived from a perceptual-based representation and integrated along the temporal axis using a time-delay neural network (TDNN). Experiments conducted on different MPEG-2 videos, with bit rates ranging from 2 to 6 Mb/s, show the effectiveness of the proposed approach in obtaining a plausible model of temporal pooling from the human vision system (HVS) point of view. More specifically, a linear correlation criterion, between objective and subjective scoring, of up to 0.92 has been obtained on
Stochastic Time-lapse Seismic Inversion with a Hybrid Starting Model and Double-difference Data
NASA Astrophysics Data System (ADS)
Tao, Y.; Sen, M. K.; Zhang, R.; Spikes, K.
2012-12-01
We propose a robust stochastic time-lapse seismic inversion strategy with an application to monitoring a CO2 injection site. The workflow involves a baseline inversion using a hybrid starting model that combines a fractal prior with a low-frequency prior from well log data. This starting model extracts fractal statistics from the well data to provide an estimate of the null space. The second step of the workflow uses a double-difference inversion scheme to focus on the local areas where time-lapse changes have occurred as a result of injecting CO2 into the reservoir. For this step, simulated data using the inverted prior from the baseline model and the difference between the baseline and repeat data are summed to produce the virtual repeat data. We use an error function that incorporates model norms to regularize the inversion process. The seismic data are pre-processed using a local-correlation-based warping method to register the different time-lapse datasets. The stochastic optimization method used here is very fast simulated annealing, in which the updated model parameters are drawn from a temperature-dependent Cauchy-like perturbation of the current model parameters. Synthetic tests show that double-difference inversion yields better results than a conventional two-pass approach. Inverted field data from the Cranfield site show time-lapse impedance changes that are consistent with CO2 injection effects.
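The temperature-dependent Cauchy-like perturbation used in very fast simulated annealing can be sketched as follows (an Ingber-style update; the bounds, seed, and model size are illustrative, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def vfsa_perturb(m, lo, hi, T):
    """Temperature-dependent Cauchy-like model update used in very fast
    simulated annealing: at high T steps can span the whole range, while
    at low T most draws concentrate near the current model."""
    u = rng.random(m.shape)
    y = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2.0 * u - 1.0) - 1.0)
    return np.clip(m + y * (hi - lo), lo, hi)

m = np.full(1000, 0.5)                  # current model, mid-range
lo, hi = np.zeros(1000), np.ones(1000)  # parameter bounds
hot = vfsa_perturb(m, lo, hi, T=1.0)    # early annealing: wide exploration
cold = vfsa_perturb(m, lo, hi, T=1e-6)  # late annealing: mostly local moves
```

In the full algorithm, a proposed model is accepted or rejected with a Metropolis criterion while T is lowered on a cooling schedule; only the draw step is shown here.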
NASA Technical Reports Server (NTRS)
Desai, S. D.; Yuan, D. -N.
2006-01-01
A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
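The convolution idea, total tide height as a weighted sum of past, present, and future values of the tide-generating potential, can be sketched in a few lines. The weights and input signal below are illustrative stand-ins, and the circular wrap-around from np.roll is a simplification of a proper finite-record treatment:

```python
import numpy as np

def convolved_tide(potential, weights, lags):
    """Tide height as a weighted sum of past, present, and future values
    of the tide-generating potential (Munk-Cartwright-style response)."""
    out = np.zeros_like(potential, dtype=float)
    for w, lag in zip(weights, lags):
        out += w * np.roll(potential, lag)   # lag > 0 uses past values
    return out

# Identity response: a unit weight at zero lag reproduces the input.
V = np.sin(2 * np.pi * np.arange(100) / 12.42)   # semidiurnal-like signal
h = convolved_tide(V, weights=[1.0], lags=[0])
```

A realistic kernel has several non-zero lags fitted so that the implied admittance varies smoothly across the tidal band.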
Classical mapping for Hubbard operators: application to the double-Anderson model.
Li, Bin; Miller, William H; Levy, Tal J; Rabani, Eran
2014-05-28
A classical Cartesian mapping for Hubbard operators is developed to describe the nonequilibrium transport of an open quantum system with many electrons. The mapping of the Hubbard operators representing the many-body Hamiltonian is derived using analogies from classical mappings of boson creation and annihilation operators vis-à-vis a coherent state representation. The approach provides qualitative results for a double quantum dot array (double Anderson impurity model) coupled to fermionic leads over a range of bias voltages, Coulomb couplings, and hopping terms. While the width and height of the conduction peaks show deviations from the master equation approach, which is considered accurate in the limit of weak system-lead couplings and high temperatures, the Hubbard mapping captures all transport channels involving transitions between many-electron states, some of which are not captured by approximate nonequilibrium Green function closures.
A double layer model for solar X-ray and microwave pulsations
NASA Technical Reports Server (NTRS)
Tapping, K. F.
1986-01-01
The wide range of wavelengths over which quasi-periodic pulsations have been observed suggests that the mechanism causing them acts on the supply of high-energy electrons driving the emission processes. A model is described that is based on the radial shrinkage of a magnetic flux tube. The concentration of the current, along with the reduction in the number of available charge carriers, can lead to a condition in which the current demand exceeds the capacity of the thermal electrons. Driven by the large inductance of the external current circuit, an instability takes place in the tube throat, resulting in the formation of a potential double layer, which then accelerates electrons and ions to MeV energies. The double layer can be unstable, collapsing and reforming repeatedly. The resulting pulsed particle beams give rise to the pulsating emissions observed at radio and X-ray wavelengths.
Double-differential spectra of the secondary particles in the frame of pre-equilibrium model
NASA Astrophysics Data System (ADS)
Fotina, O. V.; Kravchuk, V. L.; Barlini, S.; Gramegna, F.; Eremenko, D. O.; Parfenova, Yu. L.; Platonov, S. Yu.; Yuminov, O. A.; Bruno, M.; D'Agostino, M.; Casini, G.; Wieland, O.; Bracco, A.; Camera, F.
2010-08-01
An approach was developed to describe the double-differential spectra of secondary particles formed in heavy-ion reactions. The Griffin model of nonequilibrium processes was used to account for the nonequilibrium stage of compound-system formation. Simulation of the de-excitation of the compound system was carried out using the Monte Carlo method. Analysis of the probability of neutron, proton, and α-particle emission was performed both in the equilibrium and in the pre-equilibrium stages of the process. Fission and γ-ray emission were also considered after equilibration. An analysis of the experimental data on the double-differential cross sections of p and α particles for the 16O + 116Sn reaction at oxygen energies E = 130 and 250 MeV was performed.
Simulation of double layers in a model auroral circuit with nonlinear impedance
NASA Technical Reports Server (NTRS)
Smith, R. A.
1986-01-01
A reduced circuit description of the U-shaped potential structure of a discrete auroral arc, consisting of the flank transmission line plus the parallel-electric-field region, is used to provide the boundary condition for one-dimensional simulations of the double-layer evolution. The model yields asymptotic scalings of the double-layer potential as a function of an anomalous transport coefficient alpha and of the perpendicular length scale l(a) of the arc. The arc potential phi(DL) scales approximately linearly with alpha and, for fixed alpha, approximately as l(a) to the z power. Using parameters appropriate to the auroral zone acceleration region, potentials of phi(DL) of about 10 kV scale to projected ionospheric dimensions of about 1 km, with power flows of the order of magnitude of substorm dissipation rates.
Refined modeling of superconducting double helical coils using finite element analyses
NASA Astrophysics Data System (ADS)
Farinon, S.; Fabbricatore, P.
2012-06-01
Double helical coils are becoming more and more attractive for accelerator magnets and other applications. Conceptually, a sinusoidal modulation of the longitudinal position of the turns allows virtually any multipolar field to be produced and maximizes the effectiveness of the supplied ampere turns. Being intrinsically three-dimensional, the modeling of such structures is very complicated, and several approaches, with different degrees of complexity, can be used. In this paper we present various possibilities for solving the magnetostatic problem of a double helical coil, through both finite element analyses and direct integration of the Biot-Savart law, showing the limits and advantages of each solution and the corresponding information which can be derived.
Kinetic model for an auroral double layer that spans many gravitational scale heights
Robertson, Scott
2014-12-15
The electrostatic potential profile and the particle densities of a simplified auroral double layer are found using a relaxation method to solve Poisson's equation in one dimension. The electron and ion distribution functions for the ionosphere and magnetosphere are specified at the boundaries, and the particle densities are found from a collisionless kinetic model. The ion distribution function includes the gravitational potential energy; hence, the unperturbed ionospheric plasma has a density gradient. The plasma potential at the upper boundary is given a large negative value to accelerate electrons downward. The solutions for a wide range of dimensionless parameters show that the double layer forms just above a critical altitude that occurs approximately where the ionospheric density has fallen to the magnetospheric density. Below this altitude, the ionospheric ions are gravitationally confined and have the expected scale height for quasineutral plasma in gravity.
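A minimal sketch of the relaxation approach for Poisson's equation in one dimension, here Jacobi iteration on a generic charge density rather than the paper's kinetic particle densities (grid, boundary values, and source term are illustrative):

```python
import numpy as np

def solve_poisson_1d(rho, h, phi0, phiN, iters=20000):
    """Jacobi relaxation for phi'' = -rho with Dirichlet boundaries: the
    same kind of iterative scheme used to find a double-layer potential
    profile, shown here for a simple prescribed charge density."""
    phi = np.linspace(phi0, phiN, len(rho))
    for _ in range(iters):
        # Each interior point is replaced by the average of its neighbours
        # plus the local source contribution (second-order central scheme).
        phi[1:-1] = 0.5 * (phi[:-2] + phi[2:] + h**2 * rho[1:-1])
    return phi

# Uniform rho = 1 on [0, 1] with grounded ends: exact solution x(1-x)/2.
x = np.linspace(0.0, 1.0, 51)
phi = solve_poisson_1d(np.ones_like(x), x[1] - x[0], 0.0, 0.0)
exact = 0.5 * x * (1.0 - x)
```

In the kinetic model the right-hand side itself depends on phi through the particle distribution functions, so the density would be re-evaluated between relaxation sweeps.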
Generalized model of double random phase encoding based on linear algebra
NASA Astrophysics Data System (ADS)
Nakano, Kazuya; Takeda, Masafumi; Suzuki, Hiroyuki; Yamaguchi, Masahiro
2013-01-01
We propose a generalized model for double random phase encoding (DRPE) based on linear algebra. We define the DRPE procedure in six steps: the first three steps form the encryption procedure, while the latter three make up the decryption procedure. We note that the first (mapping) and second (transform) steps can be generalized. As an example of this generalization, we use 3D mapping and a transform matrix that combines a discrete cosine transform with two permutation matrices. Finally, we investigate the sensitivity of the proposed model to errors in the decryption key.
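The classic two-mask DRPE scheme that the generalized model builds on (the special case with an identity mapping and a Fourier transform) can be sketched with FFTs; the image size and random keys below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def drpe_encrypt(img, p1, p2):
    """Classic double random phase encoding: a random phase mask in the
    input plane, a Fourier transform, and a second mask in the Fourier
    plane, followed by an inverse transform."""
    return np.fft.ifft2(np.fft.fft2(img * p1) * p2)

def drpe_decrypt(cipher, p1, p2):
    """Decryption undoes both unit-modulus phase masks (the keys) in
    reverse order."""
    return np.fft.ifft2(np.fft.fft2(cipher) * np.conj(p2)) * np.conj(p1)

img = rng.random((32, 32))
p1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane key
p2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane key
recovered = drpe_decrypt(drpe_encrypt(img, p1, p2), p1, p2).real
```

Decrypting with a perturbed key instead of p1/p2 is the kind of sensitivity experiment the abstract refers to.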
New non-equilibrium matrix imbibition equation for double porosity model
NASA Astrophysics Data System (ADS)
Konyukhov, Andrey; Pankratov, Leonid
2016-07-01
The paper deals with the global Kondaurov double porosity model describing non-equilibrium two-phase immiscible flow in fractured-porous reservoirs when non-equilibrium phenomena occur only in the matrix blocks. In a mathematically rigorous way, we show that the homogenized model can be represented by the usual equations of two-phase incompressible immiscible flow, except for the addition of two source terms calculated from the solution of a local problem: a boundary value problem for a non-equilibrium imbibition equation given in terms of the real saturation and a non-equilibrium parameter.
Single and double shock initiation modelling for high explosive materials in last three decades
NASA Astrophysics Data System (ADS)
Hussain, T.; Yan, Liu
2016-08-01
Explosive materials are normally in an energetically metastable state: they undergo rapid chemical decomposition only if sufficient energy is first added to start the process. Such energy can be provided by shocks. Mathematical models are used to predict the response of these materials to shocks of different strengths and durations under various conditions. Over the last three decades, a great deal of research has been carried out and several shock initiation models have been presented. These models can be divided into continuum-based and physics-based models. In this study, the single and double shock initiation models presented in the last three decades are reviewed and their ranges of application are discussed.
Ambient modal testing of a double-arch dam: the experimental campaign and model updating
NASA Astrophysics Data System (ADS)
García-Palacios, Jaime H.; Soria, José M.; Díaz, Iván M.; Tirado-Andrés, Francisco
2016-09-01
A finite element model updating of a double-curvature arch dam (La Tajera, Spain) is carried out using the modal parameters obtained from an operational modal analysis. That is, the system modal dampings, natural frequencies, and mode shapes have been identified using output-only identification techniques under environmental loads (wind, vehicles). A finite element model of the dam-reservoir-foundation system was first created. A testing campaign was then carried out at the most significant test points using wirelessly synchronized high-sensitivity accelerometers. Afterwards, the initial model was updated using a Monte Carlo based approach to match the recorded dynamic behaviour. The updated model may be used within a structural health monitoring system for damage detection or, for instance, for the analysis of the seismic response of the coupled arch dam-reservoir-foundation system.
Fabrication of double-walled section models of the ITER vacuum vessel
Koizumi, K.; Kanamori, N.; Nakahira, M.; Itoh, Y.; Horie, M.; Tada, E.; Shimamoto, S.
1995-12-31
Trial fabrication of double-walled section models has been performed at the Japan Atomic Energy Research Institute (JAERI) for the construction of the ITER vacuum vessel. By employing TIG (tungsten-arc inert gas) and EB (electron beam) welding, two full-scale section models of a 7.5° toroidal sector of the curved section at the bottom of the vacuum vessel have been successfully fabricated, each with a final dimensional error within ±5 mm of the nominal values. A sufficient technical database on the candidate fabrication procedures, welding distortion, and dimensional stability of the full-scale models has been obtained through these fabrications. This paper describes the design and fabrication procedures of both full-scale section models and the major results obtained through the fabrication.
NASA Astrophysics Data System (ADS)
Lin, Zer-Ming; Lin, Horng-Chih; Liu, Keng-Ming; Huang, Tiao-Yuan
2012-02-01
In this study, we derive an analytical model of the electric potential of a double-gated (DG) fully depleted (FD) junctionless (J-less) transistor by solving the two-dimensional Poisson equation. On the basis of this two-dimensional electric potential model, the subthreshold current and swing can be calculated. Threshold voltage roll-off can also be estimated with analytical forms derived from the model. The calculated results for electric potential, subthreshold current, and threshold voltage roll-off are all in good agreement with the results of technology computer-aided design (TCAD) simulation. The model proposed in this paper may help in the development of a compact model for simulation program with integrated circuit emphasis (SPICE) simulation and provide deeper insight into the characteristics of short-channel J-less transistors.
A canine hybrid double-bundle model for study of arthroscopic ACL reconstruction.
Cook, James L; Smith, Patrick A; Stannard, James P; Pfeiffer, Ferris M; Kuroki, Keiichi; Bozynski, Chantelle C; Cook, Cristi R
2015-08-01
Development and validation of a large animal model for pre-clinical studies of intra-articular anterior cruciate ligament (ACL) reconstruction that addresses current limitations is highly desirable. The objective of the present study was to investigate a translational canine model for ACL reconstruction. With institutional approval, adult research hounds underwent arthroscopic debridement of the anteromedial bundle (AMB) of the ACL, and then either received a tendon autograft for "hybrid double-bundle" ACL reconstruction (n = 12) or no graft to remain ACL/AMB-deficient (n = 6). Contralateral knees were used as non-operated controls (n = 18) and matched canine cadaveric knees were used as biomechanical controls (n = 6). Dogs were assessed using functional, diagnostic imaging, gross, biomechanical, and histologic outcome measures required for pre-clinical animal models. The data suggest that this canine model was able to overcome the major limitations of large animal models used for translational research in ACL reconstruction and closely follow clinical aspects of human ACL reconstruction. The "hybrid double-bundle" ACL reconstruction allowed for sustained knee function without the development of osteoarthritis and for significantly improved functional, diagnostic imaging, gross, biomechanical, and histologic outcomes in grafted knees compared to ACL/AMB-deficient knees.
Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding
Johnson, Rie; Zhang, Tong
2016-01-01
This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks. PMID:27087766
Performance of DPSK with convolutional encoding on time-varying fading channels
NASA Technical Reports Server (NTRS)
Mui, S. Y.; Modestino, J. W.
1977-01-01
The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.
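For orientation, the textbook DPSK error expressions, exact on the AWGN channel and averaged over Rayleigh fading, illustrate how strongly fading degrades uncoded DPSK. These are standard formulas, not the Rician/lognormal bounds derived in the paper:

```python
import math

def dpsk_ber_awgn(ebn0):
    """Bit error probability of binary DPSK on the AWGN channel."""
    return 0.5 * math.exp(-ebn0)

def dpsk_ber_rayleigh(mean_ebn0):
    """Average DPSK bit error probability over Rayleigh fading, obtained
    by averaging the AWGN expression over the exponential SNR density."""
    return 0.5 / (1.0 + mean_ebn0)

# At 10 dB (average) SNR, fading costs several orders of magnitude in BER.
ebn0 = 10.0 ** (10.0 / 10.0)   # 10 dB -> linear
awgn, fading = dpsk_ber_awgn(ebn0), dpsk_ber_rayleigh(ebn0)
```

Closing part of that gap is precisely what interleaving plus convolutional coding with Viterbi decoding buys on fading channels.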
NASA Astrophysics Data System (ADS)
Ma, Shutian; Eaton, David W.
2011-05-01
Precise and accurate earthquake hypocentres are critical for various fields, such as the study of tectonic processes and seismic-hazard assessment. Double-difference relocation methods are widely used and can dramatically improve the precision of relative event locations. In areas of sparse seismic network coverage, however, a significant trade-off exists between focal depth, epicentral location, and origin time. Regional depth-phase modelling (RDPM) is suitable for sparse networks and can provide focal-depth information that is relatively insensitive to uncertainties in epicentral location and independent of errors in the origin time. Here, we propose a hybrid method in which focal depth is determined using RDPM and then treated as a fixed parameter in subsequent double-difference calculations, thus reducing the size of the system of equations and increasing the precision of the hypocentral solutions. Based on examples using small earthquakes from eastern Canada and the southwestern USA, we show that this technique yields solutions that appear to be more robust and accurate than those obtained by the standard double-difference relocation method alone.
Analytical model of LDMOS with a double step buried oxide layer
NASA Astrophysics Data System (ADS)
Yuan, Song; Duan, Baoxing; Cao, Zhen; Guo, Haijun; Yang, Yintang
2016-09-01
In this paper, a two-dimensional analytical model is established for the Buried Oxide Double Step Silicon On Insulator (BODS) structure proposed by the authors. Based on the two-dimensional Poisson equation, analytic expressions for the surface electric field and potential distributions of the device are obtained. In the BODS structure, the buried oxide layer thickness changes stepwise along the drift region, and positive charge in the drift region accumulates at the corners of the steps. This accumulated charge acts as space charge in the depleted drift region. At the same time, the electric field in the oxide layer also varies with the drift region thickness. These variations, especially the accumulated charge, modulate the surface electric field distribution through electric field modulation effects, making the surface electric field distribution more uniform. As a result, the breakdown voltage of the device is improved by 30% compared with the conventional SOI structure. To verify the accuracy of the analytical model, the device simulation software ISE TCAD was used; the analytical values are in good agreement with the simulation results. This confirms that the established two-dimensional analytical model for the BODS structure is valid, and it clearly illustrates the breakdown voltage enhancement produced by the electric field modulation effect. The established analytical models provide a physical and mathematical basis for further analysis of new power devices with patterned buried oxide layers.
NASA Astrophysics Data System (ADS)
Verma, Jay Hind Kumar; Haldar, Subhasis; Gupta, R. S.; Gupta, Mridula
2015-12-01
In this paper, a physics-based model is presented for the Cylindrical Surrounding Double Gate (CSDG) nanowire MOSFET. The analytical model is based on the solution of the 2-D Poisson equation in a cylindrical coordinate system using the superposition technique. The CSDG MOSFET is a cylindrical version of the double gate MOSFET that offers maximum gate controllability over the channel. It consists of an inner gate and an outer gate; these gates provide effective charge control inside the channel and excellent immunity to short channel effects. The surface potential and electric field for the inner and outer gates are derived. The impact of channel length on the electrical characteristics of the CSDG MOSFET is analysed and verified using the ATLAS device simulator. The model is also extended to threshold voltage modelling using the extrapolation method in the strong inversion region. Drain current and transconductance are compared with those of the conventional Cylindrical Surrounding Gate (CSG) MOSFET. This excellent electrical performance makes the CSDG MOSFET a promising candidate for extending the CMOS scaling roadmap beyond the CSG MOSFET.
Guo, Hui; He, Youwei; Li, Lei; Du, Song; Cheng, Shiqing
2014-01-01
This work presents a numerical well testing interpretation model and analysis techniques to evaluate formations using pressure transient data acquired with logging tools in crossflow double-layer reservoirs under polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effects, and skin factors. Type curves were then developed based on this model, and parameter sensitivity was analyzed. Our research shows that the type curves have five segments with different flow statuses: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from the low permeability layer to the high permeability layer), and (V) systematic radial flow section. Polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs under polymer flooding. Moreover, formation damage caused by polymer flooding can be evaluated by comparing the interpreted permeability with the initial layered permeability before polymer flooding. Comparison of the numerical solution based on flow mechanisms with observed polymer flooding field test data highlights the potential of this interpretation method for formation evaluation and enhanced oil recovery (EOR). PMID:25302335
Development of a model for flaming combustion of double-wall corrugated cardboard
NASA Astrophysics Data System (ADS)
McKinnon, Mark B.
Corrugated cardboard is used extensively for storage in warehouses and frequently acts as the primary fuel for accidental fires that begin in storage facilities. A one-dimensional numerical pyrolysis model for double-wall corrugated cardboard was developed using the ThermaKin modeling environment to describe the burning rate of corrugated cardboard. The model parameters corresponding to the thermal properties of the corrugated cardboard layers were determined through analysis of data collected in cone calorimeter tests conducted at incident heat fluxes in the range 20-80 kW/m². An apparent pyrolysis reaction mechanism and thermodynamic properties for the material were obtained using thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The fully parameterized bench-scale model predicted burning rate profiles that agreed with the experimental data over the entire range of incident heat fluxes, with more consistent predictions at higher heat fluxes.
A two-dimensional analytical model for short channel junctionless double-gate MOSFETs
NASA Astrophysics Data System (ADS)
Jiang, Chunsheng; Liang, Renrong; Wang, Jing; Xu, Jun
2015-05-01
A physics-based analytical model of the electrostatic potential for short-channel junctionless double-gate MOSFETs (JLDGMTs) operated in the subthreshold regime is proposed, in which the full two-dimensional (2-D) Poisson equation is solved in the channel region by a series expansion method similar to a Green's function approach. The expression for the proposed electrostatic potential is completely rigorous and explicit. Based on this expression, analytical models of the threshold voltage, subthreshold swing, and subthreshold drain current for JLDGMTs were derived. Subthreshold behavior was studied in detail by varying different device parameters and bias conditions, including doping concentration, channel thickness, gate length, gate oxide thickness, drain voltage, and gate voltage. Results predicted by all the analytical models agree well with numerical solutions from the 2-D simulator. These analytical models can be used to investigate the operating mechanisms of nanoscale JLDGMTs and to optimize their device performance.
Numerical modeling of Subthreshold region of junctionless double surrounding gate MOSFET (JLDSG)
NASA Astrophysics Data System (ADS)
Rewari, Sonam; Haldar, Subhasis; Nath, Vandana; Deswal, S. S.; Gupta, R. S.
2016-02-01
In this paper, a numerical model for the electric potential, subthreshold current, and subthreshold swing of the Junctionless Double Surrounding Gate (JLDSG) MOSFET has been developed using the superposition method. The results have also been evaluated for different silicon film thicknesses, oxide film thicknesses, and channel lengths. The numerical results so obtained are in good agreement with the simulated data. The results for the JLDSG MOSFET have also been compared with those of the conventional Junctionless Surrounding Gate (JLSG) MOSFET, and it is observed that the JLDSG MOSFET has improved drain current, transconductance, output conductance, transconductance generation factor (TGF), and subthreshold slope.
Asymmetric Quantum Transport in a Double-Stranded Kronig-Penney Model
NASA Astrophysics Data System (ADS)
Cheon, Taksu; Poghosyan, Sergey S.
2015-06-01
We introduce a double-stranded Kronig-Penney model and analyze its transport properties. Asymmetric fluxes between two strands with suddenly alternating localization patterns are found as the energy is varied. The zero-size limit of the internal lines connecting two strands is examined using quantum graph vertices with four edges. We also consider a two-dimensional Kronig-Penney lattice with two types of alternating layer with δ and δ' connections, and show the existence of energy bands in which the quantum flux can flow only in selected directions.
Experimental investigation of shock wave diffraction over a single- or double-sphere model
NASA Astrophysics Data System (ADS)
Zhang, L. T.; Wang, T. H.; Hao, L. N.; Huang, B. Q.; Chen, W. J.; Shi, H. H.
2016-03-01
In this study, the unsteady drag produced by the interaction of a shock wave with a single- or double-sphere model is measured using embedded accelerometers. The shock wave is generated in a horizontal circular shock tube with an inner diameter of 200 mm. The effects of the shock Mach number and the dimensionless distance between the spheres are investigated. The time history of the drag coefficient is obtained based on Fast Fourier Transform (FFT) band-block filtering and polynomial fitting of the measured acceleration. The measured peak values of the drag coefficient, with the associated uncertainty, are reported.
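The data-reduction chain, FFT band-block filtering followed by polynomial fitting of the record, can be sketched as follows (the sampling rate, blocked band, and test signal are illustrative, not the experiment's values):

```python
import numpy as np

def band_block(signal, dt, f_lo, f_hi):
    """Zero out spectral components between f_lo and f_hi (band-block
    filter), e.g. to remove structural ringing from an accelerometer
    record before fitting the underlying drag history."""
    spec = np.fft.rfft(signal)
    freq = np.fft.rfftfreq(len(signal), dt)
    spec[(freq >= f_lo) & (freq <= f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# A slow trend plus an in-band 50 Hz tone: the filter keeps the trend.
t = np.arange(0.0, 1.0, 1e-3)
trend = 2.0 + 0.5 * t
noisy = trend + 0.3 * np.sin(2 * np.pi * 50 * t)
clean = band_block(noisy, 1e-3, 40.0, 60.0)
coef = np.polyfit(t, clean, 1)   # polynomial fit of the filtered record
```

The fitted polynomial then plays the role of the smooth drag-coefficient time history extracted from the raw acceleration.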
Double pendulum model for a tennis stroke including a collision process
NASA Astrophysics Data System (ADS)
Youn, Sun-Hyun
2015-10-01
By adding a collision process between the ball and the racket to the double pendulum model, we analyzed the tennis stroke. The ball-racket system may be accelerated during the collision time; thus, the speed of the rebound ball does not simply depend on the angular velocity of the racket. A higher angular velocity sometimes gives a lower rebound ball speed. We numerically showed that a properly time-lagged racket rotation increased the speed of the rebound ball by 20%. We also showed that the elbow should move in the proper direction in order to add to the angular velocity of the racket.
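The collision step grafted onto the double-pendulum model can be illustrated with a 1-D restitution collision between the ball and an effective racket mass (all masses, speeds, and the restitution coefficient below are hypothetical, not the paper's values):

```python
def rebound_speed(v_ball, v_racket, m_ball, m_racket, e):
    """1-D two-body collision with restitution coefficient e: rebound
    speed of the ball after impact with the (effective) racket mass.
    Derived from momentum conservation plus the restitution condition
    v_sep = e * v_approach."""
    return ((m_racket * v_racket + m_ball * v_ball
             + e * m_racket * (v_racket - v_ball)) / (m_ball + m_racket))

# Incoming ball at -10 m/s meets a racket head moving at +20 m/s.
v_out = rebound_speed(-10.0, 20.0, m_ball=0.057, m_racket=0.3, e=0.75)
```

In the full model the racket-head speed at impact comes from the double-pendulum kinematics (angular velocity times the impact radius), which is why the timing of the rotation matters.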
NASA Astrophysics Data System (ADS)
Bulla, M.; Sim, S. A.; Kromer, M.; Seitenzahl, I. R.; Fink, M.; Ciaraldi-Schoolmann, F.; Röpke, F. K.; Hillebrandt, W.; Pakmor, R.; Ruiter, A. J.; Taubenberger, S.
2016-10-01
Calculations of synthetic spectropolarimetry are one means to test multidimensional explosion models for Type Ia supernovae. In a recent paper, we demonstrated that the violent merger of a 1.1 and 0.9 M⊙ white dwarf binary system is too asymmetric to explain the low polarization levels commonly observed in normal Type Ia supernovae. Here, we present polarization simulations for two alternative scenarios: the sub-Chandrasekhar mass double-detonation and the Chandrasekhar mass delayed-detonation model. Specifically, we study a 2D double-detonation model and a 3D delayed-detonation model, and calculate polarization spectra for multiple observer orientations in both cases. We find modest polarization levels (<1 per cent) for both explosion models. Polarization in the continuum peaks at ˜0.1-0.3 per cent and decreases after maximum light, in excellent agreement with spectropolarimetric data of normal Type Ia supernovae. Higher degrees of polarization are found across individual spectral lines. In particular, the synthetic Si II λ6355 profiles are polarized at levels that match remarkably well the values observed in normal Type Ia supernovae, while the low degrees of polarization predicted across the O I λ7774 region are consistent with the non-detection of this feature in current data. We conclude that our models can reproduce many of the characteristics of both flux and polarization spectra for well-studied Type Ia supernovae, such as SN 2001el and SN 2012fr. However, the two models considered here cannot account for the unusually high level of polarization observed in extreme cases such as SN 2004dt.
Modeling and simulation study of novel Double Gate Ferroelectric Junctionless (DGFJL) transistor
NASA Astrophysics Data System (ADS)
Mehta, Hema; Kaur, Harsupreet
2016-09-01
In this work, we propose an analytical model for the Double Gate Ferroelectric Junctionless (DGFJL) transistor, a novel device that combines the advantages of the junctionless (JL) transistor with the negative capacitance phenomenon. A complete drain current model has been developed using the Landau-Khalatnikov equation and the parabolic potential approximation to analyze device behavior in different operating regions. It is demonstrated that the DGFJL transistor acts as a step-up voltage transformer and exhibits subthreshold slope values below 60 mV/dec. To assess the advantages offered by the proposed device, an extensive comparative study has been carried out against an equivalent Double Gate Junctionless (DGJL) transistor whose gate insulator thickness equals the ferroelectric gate stack thickness of the DGFJL transistor. It is shown that incorporating a ferroelectric layer can overcome the variability issues observed in JL transistors. The device has been studied over a wide range of parameters and bias conditions to comprehensively investigate device design guidelines and to obtain better insight into the application of the DGFJL as a potential candidate for future technology nodes. The analytical results derived from the model have been verified against simulation results obtained with the ATLAS TCAD simulator, and good agreement has been found.
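The negative-capacitance ingredient of such models, a ferroelectric layer described by a static Landau(-Khalatnikov) polynomial, can be sketched as follows; the Landau coefficients here are placeholders chosen only so that alpha < 0, not fitted material parameters:

```python
def ferro_voltage(P, alpha, beta, t_fe):
    """Static Landau voltage across a ferroelectric layer of thickness
    t_fe as a function of polarization P: V = t_fe*(2*alpha*P + 4*beta*P^3)."""
    return t_fe * (2.0 * alpha * P + 4.0 * beta * P**3)

def inverse_capacitance(P, alpha, beta):
    """dV/dP per unit thickness: negative near P = 0 when alpha < 0,
    which is the negative-capacitance (voltage step-up) regime."""
    return 2.0 * alpha + 12.0 * beta * P**2

# Illustrative (hypothetical) Landau coefficients with alpha < 0.
alpha, beta = -1.0e9, 1.0e11
sign_origin = inverse_capacitance(0.0, alpha, beta)
```

A negative dV/dP around the origin means the ferroelectric drops a negative voltage for small positive polarization, amplifying the internal gate voltage, which is how the sub-60 mV/dec subthreshold slope arises.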
Nuclear mean field and double-folding model of the nucleus-nucleus optical potential
NASA Astrophysics Data System (ADS)
Khoa, Dao T.; Phuc, Nguyen Hoang; Loan, Doan Thi; Loc, Bui Minh
2016-09-01
Realistic density dependent CDM3Yn versions of the M3Y interaction have been used in an extended Hartree-Fock (HF) calculation of nuclear matter (NM), with the nucleon single-particle potential determined from the total NM energy based on the Hugenholtz-van Hove theorem that gives rise naturally to a rearrangement term (RT). Using the RT of the single-nucleon potential obtained exactly at different NM densities, the density and energy dependence of the CDM3Yn interactions was modified to account properly for both the RT and observed energy dependence of the nucleon optical potential. Based on a local density approximation, the double-folding model of the nucleus-nucleus optical potential has been extended to take into account consistently the rearrangement effect and energy dependence of the nuclear mean-field potential, using the modified CDM3Yn interactions. The extended double-folding model was applied to study the elastic 12C+12C and 16O+12C scattering at the refractive energies, where the Airy structure of the nuclear rainbow has been well established. The RT was found to affect significantly the real nucleus-nucleus optical potential at small internuclear distances, giving a potential strength close to that implied by the realistic optical model description of the Airy oscillation.
Doubled CO2 Effects on NO(y) in a Coupled 2D Model
NASA Technical Reports Server (NTRS)
Rosenfield, J. E.; Douglass, A. R.
1998-01-01
Changes in temperature and ozone have been the main focus of studies of the stratospheric impact of doubled CO2. Increased CO2 is expected to cool the stratosphere, which will result in increases in stratospheric ozone through temperature dependent loss rates. Less attention has been paid to changes in minor constituents which affect the O3 balance and which may provide additional feedbacks. Stratospheric NO(y) fields calculated using the GSFC 2D interactive chemistry-radiation-dynamics model show significant sensitivity to the model CO2. Modeled upper stratospheric NO(y) decreases by about 15% in response to CO2 doubling, mainly due to the temperature decrease calculated to result from increased cooling. The abundance of atomic nitrogen, N, increases because the rate of the strongly temperature dependent reaction N + O2 yields NO + O decreases at lower temperatures. Increased N leads to an increase in the loss of NO(y) which is controlled by the reaction N + NO yields N2 + O. The NO(y) reduction is shown to be sensitive to the NO photolysis rate. The decrease in the O3 loss rate due to the NO(y) changes is significant when compared to the decrease in the O3 loss rate due to the temperature changes.
Simulation of the conformation and dynamics of a double-helical model for DNA.
Huertas, M L; Navarro, S; Lopez Martinez, M C; García de la Torre, J
1997-01-01
We propose a partially flexible, double-helical model for describing the conformational and dynamic properties of DNA. In this model, each nucleotide is represented by one element (bead), and the known geometrical features of the double helix are incorporated in the equilibrium conformation. Each bead is connected to a few neighbor beads in both strands by means of stiff springs that maintain the connectivity but still allow for some extent of flexibility and internal motion. We have used Brownian dynamics simulation to sample the conformational space and monitor the overall and internal dynamics of short DNA pieces with up to 20 basepairs. From Brownian trajectories, we calculate the dimensions of the helix and estimate its persistence length. We obtain the translational diffusion coefficient and various rotational relaxation times, including both overall rotation and internal motion. Although we have not carried out a detailed parameterization of the model, the calculated properties agree rather well with experimental data available for those oligomers. PMID:9414226
Spectral density of generalized Wishart matrices and free multiplicative convolution
NASA Astrophysics Data System (ADS)
Młotkowski, Wojciech; Nowak, Maciej A.; Penson, Karol A.; Życzkowski, Karol
2015-07-01
We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W = XX†, where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP⊠s, which for an integer s yield Fuss-Catalan distributions corresponding to a product of s independent square random matrices, X = X1⋯Xs. New formulas for the level densities are derived for s = 3 and s = 1/3. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of arcsine and MP distributions, is obtained. We also explain the reason of such a curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.
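The moment structure behind these distributions can be checked numerically. The sketch below (our illustration, not the authors' code) samples a Wishart-like matrix W = XX† for a square Gaussian X and compares its first two spectral moments with the Marchenko-Pastur limits, which are the Catalan numbers 1 and 2.

```python
import numpy as np

# Monte Carlo illustration: for a square real Gaussian X (entries ~ N(0, 1/N)),
# the eigenvalues of W = X X^T follow the Marchenko-Pastur law as N grows,
# whose k-th moments are the Catalan numbers (1, 2, 5, ...).
rng = np.random.default_rng(0)
N = 400
X = rng.standard_normal((N, N)) / np.sqrt(N)
W = X @ X.T
eigs = np.linalg.eigvalsh(W)

print(eigs.mean())          # close to the first MP moment, 1
print((eigs ** 2).mean())   # close to the second MP moment, 2
```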
a Convolutional Network for Semantic Facade Segmentation and Interpretation
NASA Astrophysics Data System (ADS)
Schmitz, Matthias; Mayer, Helmut
2016-06-01
In this paper we present an approach for semantic interpretation of facade images based on a Convolutional Network. Our network processes the input images in a fully convolutional way and generates pixel-wise predictions. We show that there is no need for large datasets to train the network when transfer learning is employed, i.e., a part of an already existing network is used and fine-tuned, and when the available data is augmented by using deformed patches of the images for training. The network is trained end-to-end with patches of the images and each patch is augmented independently. To undo the downsampling for the classification, we add deconvolutional layers to the network. Outputs of different layers of the network are combined to achieve more precise pixel-wise predictions. We demonstrate the potential of our network based on results for the eTRIMS (Korč and Förstner, 2009) dataset reduced to facades.
Image Super-Resolution Using Deep Convolutional Networks.
Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou
2016-02-01
We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. PMID:26761735
A new computational decoding complexity measure of convolutional codes
NASA Astrophysics Data System (ADS)
Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.
2014-12-01
This paper presents a computational complexity measure of convolutional codes well suitable for software implementations of the Viterbi algorithm (VA) operating with hard decision. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementation of each arithmetic operation is determined in terms of machine cycles taken by its execution using a typical digital signal processor widely used for low-power telecommunications applications.
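As a concrete point of reference (a minimal sketch, not the authors' implementation), a hard-decision Viterbi decoder for the standard rate-1/2 code with octal generators (7, 5) makes the counted operations visible: each trellis step performs exactly the branch-metric additions and compare-select operations that such complexity measures tally.

```python
# Minimal hard-decision Viterbi decoder for the rate-1/2 convolutional code
# with octal generators (7, 5), constraint length 3. Illustrative sketch only.
G = [0b111, 0b101]   # generator polynomials
M = 2                # encoder memory
NSTATES = 1 << M

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << M) | state          # newest bit in the high position
        out.extend(parity(reg & g) for g in G)
        state = reg >> 1
    return out

def viterbi(received):
    INF = float("inf")
    metric = [0] + [INF] * (NSTATES - 1)   # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):               # add-compare-select per branch
                reg = (b << M) | s
                ns = reg >> 1
                branch = sum(parity(reg & g) != x for g, x in zip(G, r))
                if metric[s] + branch < new_metric[ns]:
                    new_metric[ns] = metric[s] + branch
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0, 1]
coded = encode(msg + [0, 0])   # two tail bits terminate the trellis
coded[4] ^= 1                  # inject a single hard-decision channel error
print(viterbi(coded)[:len(msg)] == msg)
```

Since the free distance of this code is 5, the single injected error is always corrected by maximum-likelihood decoding on the terminated trellis.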
NASA Astrophysics Data System (ADS)
Alam, Md Mushfiqul; Nguyen, Tuan D.; Hagan, Martin T.; Chandler, Damon M.
2015-09-01
Fast prediction models of local distortion visibility and local quality can potentially make modern spatiotemporally adaptive coding schemes feasible for real-time applications. In this paper, a fast convolutional-neural-network based quantization strategy for HEVC is proposed. Local artifact visibility is predicted via a network trained on data derived from our improved contrast gain control model. The contrast gain control model was trained on our recent database of local distortion visibility in natural scenes [Alam et al. JOV 2014]. Furthermore, a structural facilitation model was proposed to capture effects of recognizable structures on distortion visibility via the contrast gain control model. Our results provide on average an 11% improvement in compression efficiency for the spatial luma channel of HEVC while requiring almost one-hundredth of the computational time of an equivalent gain control model. Our work opens the door for similar techniques which may work for different forthcoming compression standards.
Double-layer parallelization for hydrological model calibration on HPC systems
NASA Astrophysics Data System (ADS)
Zhang, Ang; Li, Tiejian; Si, Yuan; Liu, Ronghua; Shi, Haiyun; Li, Xiang; Li, Jiaye; Wu, Xia
2016-04-01
Large-scale problems that demand high precision have remarkably increased the computational time of numerical simulation models. Therefore, the parallelization of models has been widely implemented in recent years. However, computing time remains a major challenge when a large model is calibrated using optimization techniques. To overcome this difficulty, we proposed a double-layer parallel system for hydrological model calibration using high-performance computing (HPC) systems. The lower-layer parallelism is achieved using a hydrological model, the Digital Yellow River Integrated Model, which was parallelized by decomposing river basins. The upper-layer parallelism is achieved by simultaneous hydrological simulations with different parameter combinations in the same generation of the genetic algorithm and is implemented using the job scheduling functions of an HPC system. The proposed system was applied to the upstream of the Qingjian River basin, a sub-basin of the middle Yellow River, to calibrate the model effectively by making full use of the computing resources in the HPC system and to investigate the model's behavior under various parameter combinations. This approach is applicable to most of the existing hydrology models for many applications.
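The upper-layer idea, scoring all parameter combinations of a GA generation concurrently because each simulation is independent, can be sketched in miniature. A toy objective stands in for the hydrological model; none of the names below come from the paper.

```python
# Toy sketch of the upper-layer parallelism: every parameter combination in
# a generation is an independent "simulation", so the whole generation can
# be scored concurrently (threads here; the paper uses HPC job scheduling).
from concurrent.futures import ThreadPoolExecutor
import random

def simulate(params):
    a, b = params
    return -((a - 3.0) ** 2 + (b - 1.5) ** 2)   # stand-in calibration score

def evolve(pop, generations=30, seed=1):
    rng = random.Random(seed)
    with ThreadPoolExecutor(max_workers=4) as pool:
        for _ in range(generations):
            scores = list(pool.map(simulate, pop))        # parallel layer
            ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
            elite = ranked[: len(pop) // 2]               # selection
            pop = elite + [(a + rng.gauss(0, 0.2), b + rng.gauss(0, 0.2))
                           for a, b in elite]             # mutation
    return max(pop, key=simulate)

init = random.Random(0)
pop0 = [(init.uniform(0, 6), init.uniform(0, 3)) for _ in range(8)]
best = evolve(pop0)   # converges toward the optimum (3.0, 1.5)
```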
Preliminary results from a four-working space, double-acting piston, Stirling engine controls model
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1980-01-01
A four working space, double acting piston, Stirling engine simulation is being developed for controls studies. The development method is to construct two simulations, one for detailed fluid behavior and a second model with simple fluid behavior but containing the four working space aspects and engine inertias, to validate these models separately, and then to upgrade the four working space model by incorporating the detailed fluid behavior model for all four working spaces. The single working space (SWS) model contains the detailed fluid dynamics. It has seven control volumes in which continuity, energy, and pressure loss effects are simulated. Comparison of the SWS model with experimental data shows reasonable agreement in net power versus speed characteristics for various mean pressure levels in the working space. The four working space (FWS) model was built to observe the behavior of the whole engine. The drive dynamics and vehicle inertia effects are simulated. To reduce calculation time, only three volumes are used in each working space and the gas temperatures are fixed (no energy equation). Comparison of the FWS model predicted power with experimental data shows reasonable agreement. Since all four working spaces are simulated, the unique capabilities of the model are exercised to look at working fluid supply transients, short circuit transients, and piston ring leakage effects.
Impact of stray charge on interconnect wire via probability model of double-dot system
NASA Astrophysics Data System (ADS)
Xiangye, Chen; Li, Cai; Qiang, Zeng; Xinqiao, Wang
2016-02-01
The behavior of quantum cellular automata (QCA) under the influence of a stray charge is quantified. A new time-independent switching paradigm, a probability model of the double-dot system, is developed. The probability model requires far less computation than previous stray-charge analyses utilizing ICHA or full-basis calculations. Simulation results illustrate that there is a 186-nm-wide region surrounding a QCA wire where a stray charge will cause the target cell to switch unsuccessfully. The failure is exhibited by two new states dominating the target cell. Therefore, a bistable saturation model is no longer applicable for stray charge analysis. Project supported by the National Natural Science Foundation of China (No. 61172043) and the Key Program of Shaanxi Provincial Natural Science for Basic Research (No. 2011JZ015).
Shell-Model Calculations of Two-Nucleon Transfer Related to Double Beta Decay
NASA Astrophysics Data System (ADS)
Brown, Alex
2013-10-01
I will discuss theoretical results for two-nucleon transfer cross sections for nuclei in the regions of 48Ca, 76Ge and 136Xe of interest for testing the wavefunctions used for the nuclear matrix elements in double-beta decay. Various reaction models are used. A simple cluster transfer model gives relative cross sections. Thompson's code Fresco with direct and sequential transfer is used for absolute cross sections. Wavefunctions are obtained in large-basis proton-neutron coupled model spaces with the code NuShellX with realistic effective Hamiltonians such as those used for the recent results for 136Xe [M. Horoi and B. A. Brown, Phys. Rev. Lett. 110, 222502 (2013)]. I acknowledge support from NSF grant PHY-1068217.
A double hit model for the distribution of time to AIDS onset
NASA Astrophysics Data System (ADS)
Chillale, Nagaraja Rao
2013-09-01
Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection this is a random variable and is probably the longest one. The probability distribution of incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in estimation of AIDS onset time and stresses the need for its precise estimation, ii) highlights some of the modelling features of onset distribution including the immune failure mechanism, and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.
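The independent case of a double-hit mechanism can be made concrete with a small simulation (our illustration with hypothetical rates, not the article's model): if onset requires both of two independent exponential "hits", the onset time is the maximum of the two waiting times, and its mean has a simple closed form.

```python
# Monte Carlo sketch of a double-hit onset time under independence:
# T = max(T1, T2) with T1 ~ Exp(l1), T2 ~ Exp(l2), so that
# E[T] = 1/l1 + 1/l2 - 1/(l1 + l2).
import random

random.seed(42)
l1, l2 = 0.25, 0.10          # hypothetical hit rates (per year)
n = 200_000
onset = [max(random.expovariate(l1), random.expovariate(l2)) for _ in range(n)]

mc_mean = sum(onset) / n
exact = 1 / l1 + 1 / l2 - 1 / (l1 + l2)
print(mc_mean, exact)        # the two means agree closely
```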
Simplified model of mean double step (MDS) in human body movement
NASA Astrophysics Data System (ADS)
Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; Mugarra González, C. Fernando
In this paper we present a simplified and useful model of the human body movement based on the full gait cycle description, called the Mean Double Step (MDS). It enables the parameterization and simplification of the human movement. Furthermore it allows a description of the gait cycle by providing standardized estimators to transform the gait cycle into a periodical movement process. Moreover, the method of simplifying and compressing the MDS model is demonstrated. The simplification is achieved by reducing the number of bars of the spectrum and/or by reducing the number of samples describing the MDS, in terms of both computational burden and data storage requirements. Our MDS model, which is applicable to the gait cycle method for examining patients, is non-invasive and provides the additional advantage of featuring a functional characterization of the relative or absolute movement of any part of the body.
Convolution using guided acoustooptical interaction in thin-film waveguides
NASA Technical Reports Server (NTRS)
Chang, W. S. C.; Becker, R. A.; Tsai, C. S.; Yao, I. W.
1977-01-01
Interaction of two antiparallel acoustic surface waves (ASW) with an optical guided wave has been investigated theoretically as well as experimentally to obtain the convolution of two ASW signals. The maximum time-bandwidth product that can be achieved by such a convolver is shown to be of the order of 1000 or more. The maximum dynamic range can be as large as 83 dB.
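As a digital counterpart of what the acousto-optic device computes (an illustration only, unrelated to the hardware), the convolution of two finite sequences is the familiar sliding sum:

```python
# Discrete convolution c[k] = sum_i a[i] * b[k - i]; the acousto-optic
# convolver forms the same product-and-integrate operation optically.
def convolve(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

print(convolve([1, 2, 3], [1, 1]))  # [1, 3, 5, 3]
```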
New syndrome decoder for (n, 1) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
Face Detection Using GPU-Based Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Nasse, Fabian; Thurau, Christian; Fink, Gernot A.
In this paper, we consider the problem of face detection under pose variations. Unlike other contributions, a focus of this work resides within efficient implementation utilizing the computational powers of modern graphics cards. The proposed system consists of a parallelized implementation of convolutional neural networks (CNNs) with a special emphasis on also parallelizing the detection process. Experimental validation in a smart conference room with 4 active ceiling-mounted cameras shows a dramatic speed gain under real-life conditions.
Fine-grained representation learning in convolutional autoencoders
NASA Astrophysics Data System (ADS)
Luo, Chang; Wang, Jie
2016-03-01
Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in the preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law could guide CAEs to extract better fine-grained features and perform better in multiclass classification tasks. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representation in other convolutional neural networks.
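The granularity trade-off discussed above can be illustrated with plain max pooling (a generic sketch, not the paper's CAE): smaller pooling regions preserve more local detail, at the cost of sensitivity to spatial rearrangements.

```python
# Generic max pooling over non-overlapping r x r regions of a square
# feature map; r = 1 keeps everything (finest granularity), larger r
# discards spatial detail in exchange for shift invariance.
def max_pool(fmap, r):
    n = len(fmap)
    return [[max(fmap[i + di][j + dj] for di in range(r) for dj in range(r))
             for j in range(0, n, r)]
            for i in range(0, n, r)]

fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(max_pool(fmap, 2))          # coarse: [[6, 8], [14, 16]]
print(max_pool(fmap, 1) == fmap)  # finest granularity is the identity
```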
Automatic localization of vertebrae based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie
2015-03-01
Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as the landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNN). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. Then the output feature vector from the max-pooling layer is fed into a multilayer perceptron (MLP) classifier which has one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and the ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.
Shen, Dongkai; Zhang, Qian
2016-01-01
In recent studies on the dynamic characteristics of ventilation systems, the human was considered to have only one lung, so the coupling effect of the two lungs on the air flow, which is regarded as vital to the life support of patients, could not be illustrated. In this article, to illustrate the coupling effect of double lungs on the flow dynamics of a mechanical ventilation system, a mathematical model of a mechanical ventilation system, which consists of double lungs and a bi-level positive airway pressure (BIPAP) controlled ventilator, was proposed. To verify the mathematical model, a prototype of the BIPAP system with a double-lung simulator and a BIPAP ventilator was set up for experimental study. Lastly, a study on the influences of key parameters of the BIPAP system on its dynamic characteristics was carried out. The study can serve as a reference for the development of research on BIPAP ventilation treatment and real respiratory diagnostics.
Accelerating protein docking in ZDOCK using an advanced 3D convolution library.
Pierce, Brian G; Hourai, Yuichiro; Weng, Zhiping
2011-01-01
Computational prediction of the 3D structures of molecular interactions is a challenging area, often requiring significant computational resources to produce structural predictions with atomic-level accuracy. This can be particularly burdensome when modeling large sets of interactions, macromolecular assemblies, or interactions between flexible proteins. We previously developed a protein docking program, ZDOCK, which uses a fast Fourier transform to perform a 3D search of the spatial degrees of freedom between two molecules. By utilizing a pairwise statistical potential in the ZDOCK scoring function, there were notable gains in docking accuracy over previous versions, but this improvement in accuracy came at a substantial computational cost. In this study, we incorporated a recently developed 3D convolution library into ZDOCK, and additionally modified ZDOCK to dynamically orient the input proteins for more efficient convolution. These modifications resulted in an average of over 8.5-fold improvement in running time when tested on 176 cases in a newly released protein docking benchmark, as well as substantially less memory usage, with no loss in docking accuracy. We also applied these improvements to a previous version of ZDOCK that uses a simpler non-pairwise atomic potential, yielding an average speed improvement of over 5-fold on the docking benchmark, while maintaining predictive success. This permits the utilization of ZDOCK for more intensive tasks such as docking flexible molecules and modeling of interactomes, and can be run more readily by those with limited computational resources. PMID:21949741
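The core FFT trick that makes the translational search fast can be shown in a few lines (an illustration of the general technique, not ZDOCK's code or scoring function): scoring all relative translations of two 3D grids is a circular correlation, which FFTs evaluate for every shift at once in O(N log N) instead of O(N^2).

```python
# FFT-based evaluation of a correlation score over all relative translations
# of two 3D grids, the operation that FFT docking programs accelerate.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 8, 8))
B = rng.random((8, 8, 8))

# One inverse FFT yields the score for every circular shift at once:
# scores[t] = sum_x A[x] * B[x - t]
scores_fft = np.real(np.fft.ifftn(np.fft.fftn(A) * np.conj(np.fft.fftn(B))))

# Direct check for a single translation, shift by (1, 2, 3)
direct = np.sum(A * np.roll(B, shift=(1, 2, 3), axis=(0, 1, 2)))
print(np.isclose(scores_fft[1, 2, 3], direct))
```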
A three-dimensional statistical mechanical model of folding double-stranded chain molecules
NASA Astrophysics Data System (ADS)
Zhang, Wenbing; Chen, Shi-Jie
2001-05-01
Based on a graphical representation of intrachain contacts, we have developed a new three-dimensional model for the statistical mechanics of double-stranded chain molecules. The theory has been tested and validated for the cubic lattice chain conformations. The statistical mechanical model can be applied to the equilibrium folding thermodynamics of a large class of chain molecules, including protein β-hairpin conformations and RNA secondary structures. The application of a previously developed two-dimensional model to RNA secondary structure folding thermodynamics generally overestimates the breadth of the melting curves [S-J. Chen and K. A. Dill, Proc. Natl. Acad. Sci. U.S.A. 97, 646 (2000)], suggesting an underestimation for the sharpness of the conformational transitions. In this work, we show that the new three-dimensional model gives much sharper melting curves than the two-dimensional model. We believe that the new three-dimensional model may give much improved predictions for the thermodynamic properties of RNA conformational changes than the previous two-dimensional model.
Verilog-A implementation of a double-gate junctionless compact model for DC circuit simulations
NASA Astrophysics Data System (ADS)
Alvarado, J.; Flores, P.; Romero, S.; Ávila-Herrera, F.; González, V.; Soto-Cruz, B. S.; Cerdeira, A.
2016-07-01
A physically based model of the double-gate junctionless transistor which is capable of describing the accumulation and depletion regions is implemented in Verilog-A in order to perform DC circuit simulations. An analytical description of the difference of potentials between the center and the surface of the silicon layer allows the determination of the mobile charges. Furthermore, mobility degradation, series resistance, as well as threshold voltage roll-off, drain saturation voltage, channel shortening and velocity saturation are also considered. In order to make this model available to the whole community, it is implemented in Ngspice, a free circuit simulator with an ADMS interface for integrating Verilog-A models. Validation of the model implementation is done through 2D numerical simulations of transistors with 1 μm and 40 nm silicon channel length and 1 × 10^19 or 5 × 10^18 cm^-3 doping concentration of the silicon layer, with 10 and 15 nm silicon thickness. Good agreement between the numerically simulated behavior and the model implementation is obtained, where only eight model parameters are used.
Development of kineto-dynamic quarter-car model for synthesis of a double wishbone suspension
NASA Astrophysics Data System (ADS)
Balike, K. P.; Rakheja, S.; Stiharu, I.
2011-02-01
Linear or nonlinear 2-degree-of-freedom (DOF) quarter-car models have been widely used to study the conflicting dynamic performances of a vehicle suspension such as ride quality, road holding and rattle space requirements. Such models, however, cannot account for contributions due to suspension kinematics. Considering the proven simplicity and effectiveness of a quarter-car model for such analyses, this article presents the formulation of a comprehensive kineto-dynamic quarter-car model to study the kinematic and dynamic properties of a linkage suspension, and the influences of linkage geometry on selected performance measures. An in-plane 2-DOF model was formulated incorporating the kinematics of a double wishbone suspension comprising an upper control arm, a lower control arm and a strut mounted on the lower control arm. The equivalent suspension and damping rates of the suspension model are analytically derived so that they could be employed in a conventional quarter-car model. The dynamic responses of the proposed model were evaluated under harmonic and bump/pothole excitations, idealised by positive/negative rounded pulse displacement, and compared with those of the linear quarter-car model to illustrate the contributions due to suspension kinematics. The kineto-dynamic model revealed considerable variations in the wheel and damping rates, camber and wheel-track. Owing to the asymmetric kinematic behaviour of the suspension system, the dynamic responses of the kineto-dynamic model were observed to be considerably asymmetric about the equilibrium. The proposed kineto-dynamic model was subsequently applied to study the influence of linkage geometry in an attempt to reduce the suspension's lateral packaging space without compromising its kinematic and dynamic performance.
A double-layer based model of ion confinement in electron cyclotron resonance ion source
Mascali, D.; Neri, L.; Celona, L.; Castro, G.; Gammino, S.; Ciavola, G.; Torrisi, G.; Sorbello, G.
2014-02-15
The paper proposes a new model of ion confinement in ECRIS, which can be easily generalized to any magnetic configuration characterized by closed magnetic surfaces. Traditionally, ion confinement in B-min configurations is ascribed to a negative potential dip due to superhot electrons, adiabatically confined by the magnetostatic field. However, kinetic simulations including RF heating shaped by cavity mode structures indicate that high-energy electrons populate just a thin slab overlapping the ECR layer, while their density drops by more than one order of magnitude outside it. Ions, instead, diffuse across the electron layer because of their high collisionality. This is precisely the physical condition needed to establish a double-layer (DL) configuration, which self-consistently generates a potential barrier; this "barrier" confines the ions inside the plasma core surrounded by the ECR surface. The paper describes a simplified ion confinement model based on plasma density non-homogeneity and DL formation.
2007-07-09
Version 02 PRECO-2006 is a two-component exciton model code for the calculation of double differential cross sections of light-particle nuclear reactions. PRECO calculates the emission of light particles (A = 1 to 4) from nuclear reactions induced by light particles on a wide variety of target nuclei. Their distribution in both energy and angle is calculated. Since it currently considers the emission of at most two particles in any given reaction, it is most useful for incident energies of 14 to 30 MeV when used as a stand-alone code. However, the preequilibrium calculations are valid up to at least around 100 MeV, and these can be used as input for more complete evaporation calculations, such as are performed in a Hauser-Feshbach model code. Finally, the production cross sections for specific product nuclides can be obtained.
Tectonic and petrologic evolution of the Western Mediterranean: the double polarity subduction model
NASA Astrophysics Data System (ADS)
Melchiorre, Massimiliano; Vergés, Jaume; Fernàndez, Manel; Torné, Montserrat; Casciello, Emilio
2016-04-01
The geochemical composition of the mantle beneath the Mediterranean area is extremely heterogeneous. As a result, some volcanic products have geochemical signatures that do not correspond to the geodynamic environment in which they are sampled and observed at the present day. The subduction-related models developed over the last decades to explain the evolution of the Western Mediterranean are mainly based on geologic and seismologic evidence, as well as on the petrography and exhumation ages of the metamorphic units that compose the inner parts of the different arcs. Except for a few cases, most of these models are poorly constrained from a petrologic point of view. Usually the volcanic activity that has affected the Mediterranean area since the Oligocene has been used only as a corollary, not as a key constraint. This choice reflects the great geochemical variability of the volcanic products erupted in the Western Mediterranean, due to long-term recycling events affecting the mantle beneath the Mediterranean since the Variscan Orogeny, together with depletion episodes due to partial melting. We consider an evolutionary scenario for the Western Mediterranean based on a double polarity subduction model, according to which two opposite slabs, separated by a transform fault of the original Jurassic rift, operated beneath the Western and Central Mediterranean. Our aim has been to reconstruct the evolution of the Western Mediterranean since the Oligocene by considering the volcanic activity that has affected this area over the past ~30 Ma, and by supporting the double polarity subduction model with the petrology of the erupted rocks.
The dynamics of double slab subduction from numerical and semi-analytic models
NASA Astrophysics Data System (ADS)
Holt, A.; Royden, L.; Becker, T. W.
2015-12-01
Regional interactions between multiple subducting slabs have been proposed to explain enigmatic slab kinematics in a number of subduction zones, a pertinent example being the rapid pre-collisional plate convergence of India and Eurasia. However, dynamically consistent 3-D numerical models of double subduction have yet to be explored, and so the physics of such double slab systems remains poorly understood. Here we build on the comparison of a fully numerical finite element model (CitcomCU) and a time-dependent semi-analytic subduction model (FAST), presented for single subduction systems (Royden et al., 2015 AGU Fall Abstract), to explore how subducting slab kinematics, particularly trench and plate motions, can be affected by the presence of an additional slab, for all possible slab dip direction permutations. A second subducting slab gives rise to more complex dynamic pressure and mantle flow fields, and to an additional slab pull force that is transmitted across the subduction zone interface. While the general relationships among plate velocity, trench velocity, asthenospheric pressure drop, and plate coupling modes are similar to those observed for the single slab case, we find that multiple subducting slabs can interact with each other and indeed induce slab kinematics that deviate significantly from those observed in the equivalent single slab models. References: Jagoutz, O., Royden, L. H., Holt, A. F. & Becker, T. W., 2015, Nature Geo., 8, 10.1038/NGEO2418. Moresi, L. N. & Gurnis, M., 1996, Earth Planet. Sci. Lett., 138, 15-28. Royden, L. H. & Husson, L., 2006, Geophys. J. Int., 167, 881-905. Zhong, S., 2006, J. Geophys. Res., 111, doi:10.1029/2005JB003972.
Tsehaye, Iyob; Jones, Michael L.; Irwin, Brian J.; Fielder, David G.; Breck, James E.; Luukkonen, David R.
2015-01-01
The proliferation of double-crested cormorants (DCCOs; Phalacrocorax auritus) in North America has raised concerns over their potential negative impacts on game, cultured and forage fishes, island and terrestrial resources, and other colonial water birds, leading to increased public demands to reduce their abundance. By combining fish surplus production and bird functional feeding response models, we developed a deterministic predictive model representing bird–fish interactions to inform an adaptive management process for the control of DCCOs in multiple colonies in Michigan. Comparisons of model predictions with observations of changes in DCCO numbers under management measures implemented from 2004 to 2012 suggested that our relatively simple model was able to accurately reconstruct past DCCO population dynamics. These comparisons helped discriminate among alternative parameterizations of demographic processes that were poorly known, especially site fidelity. Using sensitivity analysis, we also identified remaining critical uncertainties (mainly in the spatial distributions of fish vs. DCCO feeding areas) that can be used to prioritize future research and monitoring needs. Model forecasts suggested that continuation of existing control efforts would be sufficient to achieve long-term DCCO control targets in Michigan and that DCCO control may be necessary to achieve management goals for some DCCO-impacted fisheries in the state. Finally, our model can be extended by accounting for parametric or ecological uncertainty and including more complex assumptions on DCCO–fish interactions as part of the adaptive management process.
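The coupling of a fish surplus production model with a bird functional feeding response can be sketched as follows; all parameter values are hypothetical illustrations, not the calibrated Michigan model:

```python
# Minimal sketch of the bird-fish coupling described above: logistic
# surplus production for the fish stock, Holling type-II functional
# response for cormorant consumption. Parameter values are hypothetical.
r, K = 0.5, 1.0e6              # fish growth rate [1/yr], carrying capacity [kg]
c_max, B_half = 200.0, 2.0e5   # per-bird max intake [kg/yr], half-saturation [kg]
P = 500.0                      # number of birds (held fixed here)

def dBdt(B):
    """Surplus production minus total type-II predation."""
    return r * B * (1.0 - B / K) - P * c_max * B / (B_half + B)

B, dt = 5.0e5, 0.01            # start below equilibrium; Euler step in years
for _ in range(10000):         # 100 years
    B += dt * dBdt(B)
print(round(B))                # approaches the stable equilibrium near 8e5 kg
```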
Double point source W-phase inversion: Real-time implementation and automated model selection
NASA Astrophysics Data System (ADS)
Nealy, Jennifer L.; Hayes, Gavin P.
2015-12-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
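The AIC-based choice between a single-source and a double-source fit can be illustrated with synthetic data; the waveforms and parameter counts below are stand-ins, not the W-phase implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
# Synthetic "observation": two spectral components plus noise.
data = (np.sin(2*np.pi*3*t) + 0.5*np.sin(2*np.pi*7*t)
        + 0.1*rng.standard_normal(t.size))

def aic(residuals, k):
    """Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k."""
    n = residuals.size
    return n * np.log(np.sum(residuals**2) / n) + 2 * k

def fit(freqs):
    """Least-squares fit of sin/cos pairs at the given frequencies."""
    G = np.column_stack([f(2*np.pi*q*t) for q in freqs for f in (np.sin, np.cos)])
    coef, *_ = np.linalg.lstsq(G, data, rcond=None)
    return data - G @ coef, G.shape[1]

aic_single = aic(*fit([3]))      # "one source" model
aic_double = aic(*fit([3, 7]))   # "two sources": more parameters
best = "double" if aic_double < aic_single else "single"
print(best)  # the extra parameters are justified by the improved fit
```

The penalty term 2k is what stops the richer double-source model from always winning: it is preferred only when the reduction in residual misfit outweighs the added parameters.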
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-18
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained through heuristic processes. Current research suggests that deep convolutional neural networks are suited to automating feature extraction from raw sensor inputs. However, human activities are made up of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights into their optimisation.
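A shape-level sketch of the convolutional-plus-recurrent pipeline, with random weights and a plain tanh recurrence standing in for the LSTM cells, might look like this (all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T, C = 128, 9                   # time steps and sensor channels in one window
F, K, H, n_cls = 16, 5, 32, 6   # conv filters, kernel length, hidden units, classes

x = rng.standard_normal((T, C))             # one window of multimodal sensor data
Wc = 0.1 * rng.standard_normal((F, K, C))   # temporal conv filters over all channels

# 'valid' 1-D convolution along time, giving (T - K + 1, F) feature maps
feats = np.stack([np.tensordot(x[t:t + K], Wc, axes=([0, 1], [1, 2]))
                  for t in range(T - K + 1)])
feats = np.maximum(feats, 0.0)              # ReLU

Wh = 0.1 * rng.standard_normal((H, H))      # recurrent weights (tanh RNN cell)
Wf = 0.1 * rng.standard_normal((H, F))      # input weights
h = np.zeros(H)
for f in feats:                             # recurrence over the conv features
    h = np.tanh(Wh @ h + Wf @ f)

logits = rng.standard_normal((n_cls, H)) @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax over activity classes
print(feats.shape, probs.shape)
```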
ARKCoS: artifact-suppressed accelerated radial kernel convolution on the sphere
NASA Astrophysics Data System (ADS)
Elsner, F.; Wandelt, B. D.
2011-08-01
We describe a hybrid Fourier/direct space convolution algorithm for compact radial (azimuthally symmetric) kernels on the sphere. For high resolution maps covering a large fraction of the sky, our implementation takes advantage of the inexpensive massive parallelism afforded by consumer graphics processing units (GPUs). Its applications include modeling of instrumental beam shapes in terms of compact kernels, computation of fine-scale wavelet transformations, and optimal filtering for the detection of point sources. Our algorithm works for any pixelization where pixels are grouped into isolatitude rings. Even for kernels that are not bandwidth-limited, ringing features are completely absent on an ECP grid. We demonstrate that they can be highly suppressed on the popular HEALPix pixelization, for which we develop a freely available implementation of the algorithm. As an example application, we show that running on a high-end consumer graphics card our method speeds up beam convolution for simulations of a characteristic Planck high frequency instrument channel by two orders of magnitude compared to the commonly used HEALPix implementation on one CPU core, while typically maintaining a fractional RMS accuracy of about 1 part in 10^5.
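The Fourier half of such a hybrid scheme rests on the fact that, along an isolatitude ring, pixels are equally spaced in longitude, so convolution with an azimuthally symmetric kernel is circular and reduces to a product of FFTs. A minimal one-ring sketch with synthetic values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
ring = rng.standard_normal(n)                  # pixel values on one isolatitude ring
phi = 2 * np.pi * np.arange(n) / n
dist = np.minimum(phi, 2 * np.pi - phi)        # angular distance to pixel 0
kernel = np.exp(-0.5 * (dist / 0.1) ** 2)      # compact azimuthally symmetric profile
kernel /= kernel.sum()

# Circular convolution along the ring: product of FFTs...
fft_conv = np.fft.ifft(np.fft.fft(ring) * np.fft.fft(kernel)).real
# ...equals the direct sum ring[j] * kernel[(i - j) mod n].
direct = np.array([sum(ring[j] * kernel[(i - j) % n] for j in range(n))
                   for i in range(n)])
print(np.allclose(fft_conv, direct))  # → True
```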
Region-Based Convolutional Networks for Accurate Object Detection and Segmentation.
Girshick, Ross; Donahue, Jeff; Darrell, Trevor; Malik, Jitendra
2016-01-01
Object detection performance, as measured on the canonical PASCAL VOC Challenge datasets, plateaued in the final years of the competition. The best-performing methods were complex ensemble systems that typically combined multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 50 percent relative to the previous best result on VOC 2012, achieving a mAP of 62.4 percent. Our approach combines two ideas: (1) one can apply high-capacity convolutional networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data are scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, boosts performance significantly. Since we combine region proposals with CNNs, we call the resulting model an R-CNN or Region-based Convolutional Network. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
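Region-based detectors of this kind typically post-process their scored proposals with non-maximum suppression; a minimal sketch (illustrative boxes, not the paper's code):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep highest-scoring boxes, dropping overlapping near-duplicates."""
    keep = []
    for i in np.argsort(scores)[::-1]:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(int(i))
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]: the near-duplicate of box 0 is suppressed
```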
A 2-D semi-analytical model of double-gate tunnel field-effect transistor
NASA Astrophysics Data System (ADS)
Huifang, Xu; Yuehua, Dai; Ning, Li; Jianbin, Xu
2015-05-01
A 2-D semi-analytical model of the double gate (DG) tunneling field-effect transistor (TFET) is proposed. By introducing two rectangular sources located in the gate dielectric layer and the channel, the 2-D Poisson equation is solved using a semi-analytical method combined with an eigenfunction expansion method. An expression for the surface potential is obtained in the form of an infinite series. The influence of the mobile charges on the potential profile is taken into account in the proposed model. On the basis of the potential profile, the shortest tunneling length and the average electric field can be derived, and the drain current is then constructed using Kane's model. In particular, the dependence of the tunneling parameters Ak and Bk on the drain-source voltage is also incorporated in the model. The proposed model shows good agreement with TCAD simulation results under different drain-source voltages, silicon film thicknesses, gate dielectric layer thicknesses, and gate dielectric constants. It is therefore useful for optimizing the DG TFET and provides physical insight for circuit-level design. Project supported by the National Natural Science Foundation of China (No. 61376106) and the Graduate Innovation Fund of Anhui University.
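Kane's band-to-band tunneling model, used above to construct the drain current, is often quoted in a simplified two-parameter form G = A_k * E^2 * exp(-B_k / E). The sketch below evaluates that form with placeholder A_k, B_k values; in the paper these parameters additionally vary with the drain-source voltage:

```python
import numpy as np

# Kane's band-to-band tunneling generation rate in its commonly used
# simplified two-parameter form. A_k and B_k below are placeholders for
# illustration only, not fitted device values.
A_k = 4.0e14   # prefactor, illustrative
B_k = 1.9e7    # critical field [V/cm], illustrative

def kane_rate(E):
    """Generation rate for average electric field E [V/cm]."""
    E = np.asarray(E, dtype=float)
    return A_k * E**2 * np.exp(-B_k / E)

E = np.array([5e5, 1e6, 2e6])
rates = kane_rate(E)
print(rates)  # rises super-exponentially with the average field
```

The exponential factor dominates: doubling the average field raises the rate by many orders of magnitude, which is why the shortest tunneling length and average field are the key quantities extracted from the potential profile.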
The convoluted evolution of snail chirality
NASA Astrophysics Data System (ADS)
Schilthuizen, M.; Davison, A.
2005-11-01
The direction that a snail (Mollusca: Gastropoda) coils, whether dextral (right-handed) or sinistral (left-handed), originates in early development but is most easily observed in the shell form of the adult. Here, we review recent progress in understanding snail chirality from genetic, developmental and ecological perspectives. In the few species that have been characterized, chirality is determined by a single genetic locus with delayed inheritance, meaning that an offspring's phenotype is determined by its mother's genotype. Although research lags behind the studies of asymmetry in the mouse and nematode, attempts to isolate the loci involved in snail chirality have begun, with the final aim of understanding how the axis of left-right asymmetry is established. In nature, most snail taxa (>90%) are dextral, but sinistrality is known from mutant individuals, populations within dextral species, entirely sinistral species, genera and even families. Ordinarily, it is expected that strong frequency-dependent selection should act against the establishment of new chiral types because the chiral minority have difficulty finding a suitable mating partner (their genitalia are on the 'wrong' side). Mixed populations should therefore not persist. Intriguingly, however, a very few land snail species, notably the subgenus Amphidromus sensu stricto, not only appear to mate randomly between different chiral types, but also have a stable, within-population chiral dimorphism, which suggests the involvement of a balancing factor. At the other end of the spectrum, in many species, different chiral types are unable to mate and so could be reproductively isolated from one another. However, while empirical data, models and simulations have indicated that chiral reversal must sometimes occur, it is rarely likely to lead to so-called 'single-gene' speciation. Nevertheless, chiral reversal could still be a contributing factor to speciation (or to divergence after speciation) when
A GENERAL CIRCULATION MODEL FOR GASEOUS EXOPLANETS WITH DOUBLE-GRAY RADIATIVE TRANSFER
Rauscher, Emily; Menou, Kristen
2012-05-10
We present a new version of our code for modeling the atmospheric circulation on gaseous exoplanets, now employing a 'double-gray' radiative transfer scheme, which self-consistently solves for fluxes and heating throughout the atmosphere, including the emerging (observable) infrared flux. We separate the radiation into infrared and optical components, each with its own absorption coefficient, and solve standard two-stream radiative transfer equations. We use a constant optical absorption coefficient, while the infrared coefficient can scale as a power law with pressure; however, for simplicity, the results shown in this paper use a constant infrared coefficient. Here we describe our new code in detail and demonstrate its utility by presenting a generic hot Jupiter model. We discuss issues related to modeling the deepest pressures of the atmosphere and describe our use of the diffusion approximation for radiative fluxes at high optical depths. In addition, we present new models using a simple form for magnetic drag on the atmosphere. We calculate emitted thermal phase curves and find that our drag-free model has the brightest region of the atmosphere offset by ~12° from the substellar point and a minimum flux that is 17% of the maximum, while the model with the strongest magnetic drag has an offset of only ~2° and a ratio of 13%. Finally, we calculate rates of numerical loss of kinetic energy at ~15% for every model except for our strong-drag model, where there is no measurable loss; we speculate that this is due to the much decreased wind speeds in that model.
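In the limit where the optical channel is switched off and the atmosphere sits in pure radiative equilibrium, a gray two-stream treatment reduces to the classic Eddington profile T^4 = (3/4) T_eff^4 (tau + 2/3). The sketch below evaluates that textbook relation; it is illustrative, not the paper's solver:

```python
import numpy as np

# Classic Eddington gray-atmosphere temperature profile as a function of
# infrared optical depth tau; T_eff is an illustrative hot-Jupiter-like value.
T_eff = 1500.0                  # K
tau = np.logspace(-3, 2, 6)     # optical depth levels from skin to deep layers
T = (0.75 * T_eff**4 * (tau + 2.0 / 3.0)) ** 0.25
print(T.round(1))               # monotonic rise from the skin temperature
```

At tau → 0 this gives the skin temperature T_eff / 2^(1/4), and at depth T grows as tau^(1/4), which is why deep layers are handled with a diffusion approximation in practice.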
Bioprinting of 3D Convoluted Renal Proximal Tubules on Perfusable Chips
Homan, Kimberly A.; Kolesky, David B.; Skylar-Scott, Mark A.; Herrmann, Jessica; Obuobi, Humphrey; Moisan, Annie; Lewis, Jennifer A.
2016-01-01
Three-dimensional models of kidney tissue that recapitulate human responses are needed for drug screening, disease modeling, and, ultimately, kidney organ engineering. Here, we report a bioprinting method for creating 3D human renal proximal tubules in vitro that are fully embedded within an extracellular matrix and housed in perfusable tissue chips, allowing them to be maintained for greater than two months. Their convoluted tubular architecture is circumscribed by proximal tubule epithelial cells and actively perfused through the open lumen. These engineered 3D proximal tubules on chip exhibit significantly enhanced epithelial morphology and functional properties relative to the same cells grown on 2D controls with or without perfusion. Upon introducing the nephrotoxin, Cyclosporine A, the epithelial barrier is disrupted in a dose-dependent manner. Our bioprinting method provides a new route for programmably fabricating advanced human kidney tissue models on demand. PMID:27725720
Sagers, Jason D; Leishman, Timothy W; Blotter, Jonathan D
2009-06-01
Low-frequency sound transmission has long plagued the sound isolation performance of lightweight partitions. Over the past 2 decades, researchers have investigated actively controlled structures to prevent sound transmission from a source space into a receiving space. An approach using active segmented partitions (ASPs) seeks to improve low-frequency sound isolation capabilities. An ASP is a partition which has been mechanically and acoustically segmented into a number of small individually controlled modules. This paper provides a theoretical and numerical development of a single ASP module configuration, wherein each panel of the double-panel structure is independently actuated and controlled by an analog feedback controller. A numerical model is developed to estimate frequency response functions for the purpose of controller design, to understand the effects of acoustic coupling between the panels, to predict the transmission loss of the module in both passive and active states, and to demonstrate that the proposed ASP module will produce bidirectional sound isolation.
Full coupled cluster singles, doubles and triples model for the description of electron correlation
Hoffmann, M.R.
1984-10-01
Equations for the determination of the cluster coefficients in a full coupled cluster theory involving single, double and triple cluster operators with respect to an independent particle reference, expressible as a single determinant of spin-orbitals, are derived. The resulting wave operator is full, or untruncated, consistent with the choice of cluster operator truncation and the requirements of the connected cluster theorem. A time-independent diagrammatic approach, based on second quantization and the Wick theorem, is employed. Final equations are presented that avoid the construction of rank three intermediary tensors. The model is seen to be a computationally viable, size-extensive, high-level description of electron correlation in small polyatomic molecules.
Wu, Xintian; Izmailyan, Nickolay
2015-01-01
The critical two-dimensional Ising model is studied with four types of boundary conditions: free, fixed ferromagnetic, fixed antiferromagnetic, and fixed double antiferromagnetic. Using bond propagation algorithms with surface fields, we obtain the free energy, internal energy, and specific heat numerically on square lattices with a square shape and various combinations of the four types of boundary conditions. The calculations are carried out on the square lattices with size N×N and 30
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.
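The raw material for such a detector, per-block DCT coefficient statistics, can be computed directly with an orthonormal 8×8 DCT-II matrix. The sketch below gathers one AC mode's histogram from a synthetic image; the paper's adaptive mode selection and classification are not reproduced:

```python
import numpy as np

# Orthonormal 8x8 DCT-II basis, built directly so no JPEG library is needed.
N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
D = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
D[0] /= np.sqrt(2.0)

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (64, 64)).astype(float) - 128.0  # level-shifted pixels

# Split into an 8x8 grid of 8x8 blocks and take the 2-D DCT of every block.
blocks = img.reshape(8, 8, 8, 8).swapaxes(1, 2)
coeffs = D @ blocks @ D.T

# Histogram of one AC mode across all blocks: the kind of per-mode
# distribution an SDJPEG detector models.
hist, _ = np.histogram(coeffs[..., 0, 1], bins=10)
print(np.allclose(D @ D.T, np.eye(8)), hist.sum())  # → True 64
```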
S-model calculations for high-energy-electron-impact double ionization of helium
NASA Astrophysics Data System (ADS)
Gasaneo, G.; Mitnik, D. M.; Randazzo, J. M.; Ancarani, L. U.; Colavecchia, F. D.
2013-04-01
In this paper the double ionization of helium by high-energy electron impact is studied. The corresponding four-body Schrödinger equation is transformed into a set of driven equations containing successive orders in the projectile-target interaction. The transition amplitude obtained from the asymptotic limit of the first-order solution is shown to be equivalent to the familiar first Born approximation. The first-order driven equation is solved within a generalized Sturmian approach for an S-wave (e,3e) model process with high incident energy and small momentum transfer corresponding to published measurements. Two independent numerical implementations, one using spherical and the other hyperspherical coordinates, yield mutual agreement. From our ab initio solution, the transition amplitude is extracted, and single differential cross sections are calculated and could be taken as benchmark values to test other numerical methods in a previously unexplored energy domain.
A novel double loop control model design for chemical unstable processes.
Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He
2014-03-01
In this manuscript, based on the Smith predictor control scheme for unstable processes in industry, an improved double loop control model is proposed for chemical unstable processes. The inner loop stabilizes the integrating or unstable process and transforms the original process into a stable first-order plus pure dead-time process. The outer loop enhances the performance of the set-point response, and a disturbance controller is designed to enhance the performance of the disturbance response. The improved control system is simple and has a clear physical meaning, and its characteristic equation is easily stabilized. The three controllers in the improved scheme are designed separately; each controller is easy to design and gives good control performance for its respective closed-loop transfer function. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods. PMID:24309506
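The role of the inner loop can be sketched on a toy unstable first-order plant; the gains and plant below are illustrative, not the paper's design:

```python
# Sketch of the inner stabilizing loop: an unstable first-order process
# dy/dt = a*y + u is wrapped with proportional feedback u = k*(v - y).
# For k > a the closed loop is a stable first-order process from the new
# input v, with steady-state gain k/(k - a). All values are illustrative.
a, k, dt = 1.0, 3.0, 0.001
y, v = 0.0, 1.0                 # unit step on the transformed input v
for _ in range(20000):          # simulate 20 s with explicit Euler
    u = k * (v - y)             # inner proportional loop
    y += dt * (a * y + u)
print(round(y, 3))              # → 1.5, i.e. k/(k - a)
```

With the inner loop in place, the outer set-point controller and the disturbance controller each see a stable plant, which is what makes the three designs separable.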
Suzuki, Yasuyuki; Nomura, Taishin; Morasso, Pietro
2011-01-01
Recent debate about the neural mechanisms for stabilizing human upright quiet stance focuses on whether the active, time-delayed neural feedback control generating muscle torque is continuous or intermittent. A single inverted pendulum controlled by the active torque actuating the ankle joint has often been used in this debate, on the presumption of the well-known ankle strategy hypothesis, which claims that upright quiet stance can be stabilized mostly by the ankle torque. However, detailed measurements show that the hip joint angle exhibits fluctuations comparable to those of the ankle joint angle during natural postural sway. Here we analyze a double inverted pendulum model during human quiet stance to demonstrate that the conventional proportional and derivative delay feedback control, i.e., continuous delayed PD control with gains in the physiologically plausible range, is far from adequate as the neural mechanism for stabilizing human upright quiet stance. PMID:22256061
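The flavor of delayed PD feedback can be sketched on a linearized single inverted pendulum in normalized units. The gains and delay below are illustrative choices that happen to stabilize this toy model; the paper's point concerns the physiologically plausible range for the double pendulum:

```python
# Linearized single-pendulum sketch of delayed PD feedback:
# theta'' = theta - Kp*theta(t - d) - Kd*theta'(t - d), normalized units.
# Kp, Kd and the delay d are illustrative, not physiological estimates.
dt, d = 0.001, 0.1
lag = int(d / dt)
Kp, Kd = 3.0, 1.0
theta, omega = 0.1, 0.0
hist = [(theta, omega)] * lag            # buffer holding the delayed state
for _ in range(30000):                   # 30 s, semi-implicit Euler
    th_d, om_d = hist.pop(0)             # state from d seconds ago
    torque = -Kp * th_d - Kd * om_d      # delayed PD control torque
    omega += dt * (theta + torque)       # destabilizing gravity term + control
    theta += dt * omega
    hist.append((theta, omega))
print(abs(theta) < 1e-3)                 # stabilized for this small delay
```

Increasing the delay or lowering the gains in this sketch eventually destabilizes the loop, which is the qualitative tension the paper examines for the double inverted pendulum.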
Synthetic double-stranded RNA enhances airway inflammation and remodelling in a rat model of asthma.
Takayama, Satoshi; Tamaoka, Meiyo; Takayama, Koji; Okayasu, Kaori; Tsuchiya, Kimitake; Miyazaki, Yasunari; Sumi, Yuki; Martin, James G; Inase, Naohiko
2011-10-01
Respiratory viral infections are frequently associated with exacerbations of asthma. Double-stranded RNA (dsRNA) produced during viral infections may be one of the stimuli for exacerbation. We aimed to assess the potential effect of dsRNA on certain aspects of chronic asthma through the administration of polyinosinic-polycytidylic acid (poly I:C), a synthetic dsRNA, to a rat model of asthma. Brown Norway rats were sensitized to ovalbumin and challenged three times to evoke airway remodelling. The effect of poly I:C on the ovalbumin-induced airway inflammation and structural changes was assessed from bronchoalveolar lavage fluid and histological findings. The expression of cytokines and chemokines was evaluated by real-time quantitative reverse transcription PCR and ELISA. Ovalbumin-challenged animals showed an increased number of total cells and eosinophils in bronchoalveolar lavage fluid compared with PBS-challenged controls. Ovalbumin-challenged animals treated with poly I:C showed an increased number of total cells and neutrophils in bronchoalveolar lavage fluid compared with those without poly I:C treatment. Ovalbumin-challenged animals showed goblet cell hyperplasia, increased airway smooth muscle mass, and proliferation of both airway epithelial cells and airway smooth muscle cells. Treatment with poly I:C enhanced these structural changes. Among the cytokines and chemokines examined, the expression of interleukins 12 and 17 and of transforming growth factor-β1 in ovalbumin-challenged animals treated with poly I:C was significantly increased compared with that of the other groups. Double-stranded RNA enhanced airway inflammation and remodelling in a rat model of bronchial asthma. These observations suggest that viral infections may promote airway remodelling.
Double Higgs boson production and decay in Randall-Sundrum model at hadron colliders
NASA Astrophysics Data System (ADS)
Zhang, Wen-Juan; Ma, Wen-Gan; Zhang, Ren-You; Li, Xiao-Zhou; Guo, Lei; Chen, Chong
2015-12-01
We investigate double Higgs production and decay at the 14 TeV LHC and 33 TeV HE-LHC in both the standard model (SM) and the Randall-Sundrum (RS) model. In our calculation we reasonably consider only the contribution of the lightest two Kaluza-Klein (KK) gravitons. We present the integrated cross sections and some kinematic distributions in both models. Our results show that the RS effect in the vicinity of M_HH ~ M_1, M_2 (the masses of the lightest two KK gravitons) or in the central Higgs rapidity region is quite significant, and can be extracted from the heavy SM background by imposing proper kinematic cuts on the final particles. We also study the dependence of the cross section on the RS model parameters, the first KK graviton mass M_1 and the effective coupling c_0, and find that the RS effect is noticeably reduced as M_1 increases or c_0 decreases.
Faster GPU-based convolutional gridding via thread coarsening
NASA Astrophysics Data System (ADS)
Merry, B.
2016-07-01
Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.
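In outline, convolutional gridding accumulates each visibility sample onto the uv-grid through a small convolution kernel; the thread-coarsened GPU kernels in the paper parallelize exactly this accumulation. A scalar, single-polarization CPU sketch (function and argument names are ours, and samples are assumed to lie away from the grid edge):

```python
import numpy as np

def grid_visibilities(u, v, vis, kernel, grid_size):
    """Toy convolutional gridding: each complex visibility sample is
    multiplied by a small gridding kernel and accumulated onto the
    uv-grid around its (u, v) coordinate. Illustrative only; real
    gridders oversample the kernel and handle w-terms, weighting,
    and polarization."""
    half = kernel.shape[0] // 2
    grid = np.zeros((grid_size, grid_size), dtype=complex)
    for uu, vv, s in zip(u, v, vis):
        iu, iv = int(round(uu)), int(round(vv))
        grid[iv - half:iv + half + 1,
             iu - half:iu + half + 1] += s * kernel
    return grid
```

With a kernel normalized to unit sum, the gridded energy of a sample equals the visibility itself, which is a convenient sanity check.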
Simplified Syndrome Decoding of (n, 1) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that differs from, and is simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2, 1) CC.
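The syndrome computation underlying such decoders can be made concrete for a rate-1/2 code. The generator polynomials below are the textbook example G(D) = [1+D², 1+D+D²], not necessarily the code used in the paper:

```python
import numpy as np

def gf2_conv(a, b):
    """Polynomial multiplication over GF(2): coefficients mod 2."""
    return np.convolve(a, b) % 2

# Generator polynomials of a rate-1/2 convolutional code
# (textbook example, assumed here for illustration).
g1 = np.array([1, 0, 1])   # 1 + D^2
g2 = np.array([1, 1, 1])   # 1 + D + D^2

def encode(msg):
    """Two output streams c1(D) = m(D)g1(D), c2(D) = m(D)g2(D)."""
    return gf2_conv(msg, g1), gf2_conv(msg, g2)

def syndrome(r1, r2):
    """S(D) = r1(D)g2(D) + r2(D)g1(D) over GF(2).  For a codeword,
    c1*g2 + c2*g1 = m*g1*g2 + m*g2*g1 = 0, so S(D) = 0 exactly when
    the received pair lies in the code; a nonzero S(D) pins down the
    coset of error polynomials E(D) that the decoder searches."""
    return (np.convolve(r1, g2) + np.convolve(r2, g1)) % 2
```

Any single bit flip in a received stream yields a nonzero syndrome, and the decoder's task, as in the abstract, is to find the minimum-weight error vector in the coset the syndrome identifies.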
New syndrome decoding techniques for the (n, k) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964
New Syndrome Decoding Techniques for the (n, K) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC.
Convolution seal for transition duct in turbine system
Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray
2015-05-26
A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.
Convolution seal for transition duct in turbine system
Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray
2015-03-10
A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.
Convolutional neural networks for mammography mass lesion classification.
Arevalo, John; Gonzalez, Fabio A; Ramos-Pollan, Raul; Oliveira, Jose L; Guevara Lopez, Miguel Angel
2015-08-01
Feature extraction is a fundamental step when mammography image analysis is addressed using learning-based approaches. Traditionally, problem-dependent handcrafted features are used to represent the content of images. An alternative approach successfully applied in other domains is the use of neural networks to automatically discover good features. This work presents an evaluation of convolutional neural networks to learn features for mammography mass lesions before feeding them to a classification stage. Experimental results showed that this approach is a suitable strategy, raising the area under the ROC curve from 79.9% to 86% relative to the state-of-the-art representation. PMID:26736382
Convolutional neural networks for synthetic aperture radar classification
NASA Astrophysics Data System (ADS)
Profeta, Andrew; Rodriguez, Andres; Clouse, H. Scott
2016-05-01
For electro-optical object recognition, convolutional neural networks (CNNs) are the state-of-the-art. For large datasets, CNNs are able to learn meaningful features used for classification. However, their application to synthetic aperture radar (SAR) has been limited. In this work we experimented with various CNN architectures on the MSTAR SAR dataset. As the input to the CNN we used the magnitude and phase (2 channels) of the SAR imagery. We used the deep learning toolboxes Caffe and Torch7. Our results show that we can achieve 93% accuracy on the MSTAR dataset using CNNs.
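The basic operation such a network applies to the two-channel (magnitude, phase) input can be sketched directly. This is a bare valid-mode multi-channel convolution plus ReLU; real CNNs add padding, strides, pooling, and learned filters:

```python
import numpy as np

def conv2d(x, w):
    """Valid 2-D cross-correlation of a multi-channel image x with
    shape (C, H, W) against filters w with shape (F, C, kh, kw),
    followed by a ReLU. This is the elementary building block that
    CNN toolboxes implement; the naive loops here are for clarity,
    not speed."""
    F, C, kh, kw = w.shape
    _, H, W = x.shape
    out = np.zeros((F, H - kh + 1, W - kw + 1))
    for f in range(F):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[f])
    return np.maximum(out, 0.0)   # ReLU nonlinearity
```

For a SAR patch, channel 0 would hold the magnitude and channel 1 the phase, as in the abstract.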
A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution
Walker, D.W.
1992-03-01
This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k-processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM-5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.
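A banded convolution keeps only the terms of the convolution sum near its "diagonal", since in such spectral solvers only nearby modes couple appreciably. The banding criterion below is a cartoon of our own choosing; the actual criterion in the KITE code is problem-specific:

```python
import numpy as np

def banded_convolution(a, b, band):
    """Convolution c_k = sum_l a[l] * b[k-l], retaining only the
    terms with |2l - k| <= 2*band. With a sufficiently large band
    this reduces to the full convolution; with a small band, most
    products are skipped, which is the source of the savings."""
    n = len(a) + len(b) - 1
    c = np.zeros(n)
    for k in range(n):
        lo, hi = max(0, k - len(b) + 1), min(k, len(a) - 1)
        for l in range(lo, hi + 1):
            if abs(2 * l - k) <= 2 * band:
                c[k] += a[l] * b[k - l]
    return c
```

A wide band recovers np.convolve exactly, which makes the truncation easy to verify.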
Nursing research on a first aid model of double personnel for major burn patients.
Wu, Weiwei; Shi, Kai; Jin, Zhenghua; Liu, Shuang; Cai, Duo; Zhao, Jingchun; Chi, Cheng; Yu, Jiaao
2015-03-01
This study explored the effect of a first aid model employing two nurses on the efficient rescue operation time and the efficient resuscitation time for major burn patients. A two-nurse model of first aid was designed for major burn patients. The model includes a division of labor between the first aid nurses and the re-organization of emergency carts. The clinical effectiveness of the process was examined in a retrospective chart review of 156 cases of major burn patients, experiencing shock and low blood volume, who were admitted to the intensive care unit of the department of burn surgery between November 2009 and June 2013. Of the 156 major burn cases, the 87 patients who received first aid using the double personnel model were assigned to the test group and the 69 patients who received first aid using the standard first aid model were assigned to the control group. The efficient rescue operation time and the efficient resuscitation time were compared between the two groups. Student's t tests were used to compare the mean differences between the groups. Statistically significant differences between the two groups were found on both measures (both P < 0.05), with the test group having lower times than the control group. The efficient rescue operation time was 14.90 ± 3.31 min in the test group and 30.42 ± 5.65 min in the control group. The efficient resuscitation time was 7.4 ± 3.2 h in the test group and 9.5 ± 2.7 h in the control group. A two-nurse first aid model based on scientifically validated procedures and a reasonable division of labor can shorten the efficient rescue operation time and the efficient resuscitation time for major burn patients. Given these findings, the model appears to be worthy of clinical application.
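The group comparison used above can be reproduced in miniature with a pooled-variance Student's t statistic. The sample values in the test are invented for illustration and are not the study's data:

```python
import math
import statistics

def student_t(a, b):
    """Two-sample Student's t statistic with pooled variance, the
    kind of comparison the study reports for rescue and
    resuscitation times. Significance would additionally require
    comparing |t| against the t distribution with
    len(a) + len(b) - 2 degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    # pooled variance estimate
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
```

Two groups with well-separated means, like the rescue times reported in the abstract, produce a t statistic of large magnitude.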
Jones, Martin K.; Zhang, Lei; Catte, Andrea; Li, Ling; Oda, Michael N.; Ren, Gang; Segrest, Jere P.
2010-01-01
For several decades, the standard model for high density lipoprotein (HDL) particles reconstituted from apolipoprotein A-I (apoA-I) and phospholipid (apoA-I/HDL) has been a discoidal particle ∼100 Å in diameter and the thickness of a phospholipid bilayer. Recently, Wu et al. (Wu, Z., Gogonea, V., Lee, X., Wagner, M. A., Li, X. M., Huang, Y., Undurti, A., May, R. P., Haertlein, M., Moulin, M., Gutsche, I., Zaccai, G., Didonato, J. A., and Hazen, S. L. (2009) J. Biol. Chem. 284, 36605–36619) used small angle neutron scattering to develop a new model they termed double superhelix (DSH) apoA-I that is dramatically different from the standard model. Their model possesses an open helical shape that wraps around a prolate ellipsoidal type I hexagonal lyotropic liquid crystalline phase. Here, we used three independent approaches, molecular dynamics, EM tomography, and fluorescence resonance energy transfer spectroscopy (FRET) to assess the validity of the DSH model. (i) By using molecular dynamics, two different approaches, all-atom simulated annealing and coarse-grained simulation, show that initial ellipsoidal DSH particles rapidly collapse to discoidal bilayer structures. These results suggest that, compatible with current knowledge of lipid phase diagrams, apoA-I cannot stabilize hexagonal I phase particles of phospholipid. (ii) By using EM, two different approaches, negative stain and cryo-EM tomography, show that reconstituted apoA-I/HDL particles are discoidal in shape. (iii) By using FRET, reconstituted apoA-I/HDL particles show a 28–34-Å intermolecular separation between terminal domain residues 40 and 240, a distance that is incompatible with the dimensions of the DSH model. Therefore, we suggest that, although novel, the DSH model is energetically unfavorable and not likely to be correct. Rather, we conclude that all evidence supports the likelihood that reconstituted apoA-I/HDL particles, in general, are discoidal in shape. PMID:20974855
Mitra, S.; Rocha, G.; Gorski, K. M.; Lawrence, C. R.; Huffenberger, K. M.; Eriksen, H. K.; Ashdown, M. A. J. E-mail: graca@caltech.edu E-mail: Charles.R.Lawrence@jpl.nasa.gov E-mail: h.k.k.eriksen@astro.uio.no
2011-03-15
Precise measurement of the angular power spectrum of the cosmic microwave background (CMB) temperature and polarization anisotropy can tightly constrain many cosmological models and parameters. However, accurate measurements can only be realized in practice provided all major systematic effects have been taken into account. Beam asymmetry, coupled with the scan strategy, is a major source of systematic error in scanning CMB experiments such as Planck, the focus of our current interest. We envision Monte Carlo methods to rigorously study and account for the systematic effect of beams in CMB analysis. Toward that goal, we have developed a fast pixel space convolution method that can simulate sky maps observed by a scanning instrument, taking into account real beam shapes and scan strategy. The essence is to pre-compute the 'effective beams' using a computer code, 'Fast Effective Beam Convolution in Pixel space' (FEBeCoP), that we have developed for the Planck mission. The code computes effective beams given the focal plane beam characteristics of the Planck instrument and the full history of actual satellite pointing, and performs very fast convolution of sky signals using the effective beams. In this paper, we describe the algorithm and the computational scheme that has been implemented. We also outline a few applications of the effective beams in the precision analysis of Planck data, for characterizing the CMB anisotropy and for detecting and measuring properties of point sources.
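The core idea, an effective beam obtained by averaging the (asymmetric) instrument beam over the scan orientations hitting each pixel, can be caricatured in a few lines. To keep the sketch exact we restrict to 90-degree rotations; FEBeCoP itself handles arbitrary orientations from the real pointing history:

```python
import numpy as np

def effective_beam(beam, orientations):
    """Average the beam map over the scan orientations that observed
    a pixel (orientations given as multiples of 90 degrees here, so
    np.rot90 is exact). The pixel's observed sky is then the sky
    convolved with this effective beam rather than the raw beam."""
    acc = np.zeros_like(beam, dtype=float)
    for a in orientations:
        acc += np.rot90(beam, a)
    return acc / len(orientations)
```

Averaging an asymmetric beam over a uniform set of orientations symmetrizes it while conserving its integral, mirroring how scan coverage shapes the per-pixel effective beam.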
Yang, Zhixin; Wang, Shaowei; Zhao, Moli; Li, Shucai; Zhang, Qiangyong
2013-01-01
The onset of double diffusive convection in a viscoelastic fluid-saturated porous layer is studied when the fluid and solid phases are not in local thermal equilibrium. The modified Darcy model is used for the momentum equation, and a two-field model is used for the energy equation, representing the fluid and solid phases separately. The effect of thermal non-equilibrium on the onset of double diffusive convection is discussed. The critical Rayleigh number and the corresponding wave number for the exchange of stability and over-stability are obtained, and the onset criterion for stationary and oscillatory convection is derived analytically and discussed numerically.
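For orientation, the classical benchmark that such results generalize (Newtonian fluid, local thermal equilibrium, a single diffusing component, Darcy flow: the Horton-Rogers-Lapwood problem) has stationary onset at

\[
\mathrm{Ra}_c = 4\pi^2 \approx 39.48, \qquad a_c = \pi,
\]

and the viscoelastic, double-diffusive, and thermal non-equilibrium effects analyzed in the paper shift both critical values away from this baseline.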
NASA Astrophysics Data System (ADS)
Medina, Tait Runnfeldt
The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that, if not adequately recognized and properly addressed, undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three measurement models commonly used in cross-national analysis: the summative scale, the pooled factor model, and the multiple-group factor model with measurement invariance. Using the orienting lens of the double tension, I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models, the multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014), are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and
NASA Astrophysics Data System (ADS)
Chobanyan, E.; Ilić, M. M.; Notaroš, B. M.
2015-05-01
A novel double-higher-order entire-domain volume integral equation (VIE) technique for efficient analysis of electromagnetic structures with continuously inhomogeneous dielectric materials is presented. The technique takes advantage of large curved hexahedral discretization elements—enabled by double-higher-order modeling (higher-order modeling of both the geometry and the current)—in applications involving highly inhomogeneous dielectric bodies. Lagrange-type modeling of an arbitrary continuous variation of the equivalent complex permittivity of the dielectric throughout each VIE geometrical element is implemented, in place of piecewise homogeneous approximate models of the inhomogeneous structures. The technique combines the features of the previous double-higher-order piecewise homogeneous VIE method and continuously inhomogeneous finite element method (FEM). This appears to be the first implementation and demonstration of a VIE method with double-higher-order discretization elements and conformal modeling of inhomogeneous dielectric materials embedded within elements that are also higher (arbitrary) order (with arbitrary material-representation orders within each curved and large VIE element). The new technique is validated and evaluated by comparisons with a continuously inhomogeneous double-higher-order FEM technique, a piecewise homogeneous version of the double-higher-order VIE technique, and a commercial piecewise homogeneous FEM code. The examples include two real-world applications involving continuously inhomogeneous permittivity profiles: scattering from an egg-shaped melting hailstone and near-field analysis of a Luneburg lens, illuminated by a corrugated horn antenna. The results show that the new technique is more efficient and ensures considerable reductions in the number of unknowns and computational time when compared to the three alternative approaches.
A double-blind study of SB-220453 (Tonerbasat) in the glyceryltrinitrate (GTN) model of migraine.
Tvedskov, J F; Iversen, H K; Olesen, J
2004-10-01
The need for experimental migraine models increases as therapeutic options widen. In the present study, we investigated SB-220453 for efficacy in the glyceryltrinitrate (GTN) human experimental migraine model. SB-220453 is a novel benzopyran compound, which in animal models inhibits neurogenic inflammation, blocks propagation of spreading depression and inhibits trigeminal nerve ganglion stimulation-induced carotid vasodilatation. We included 15 patients with migraine without aura in a randomized double-blind crossover study. SB-220453 40 mg or placebo was followed by a 20-min GTN infusion. Headache, scored 0-10, was registered for 12 h, and fulfillment of International Headache Society (IHS) criteria was recorded until 24 h. Four subjects had a hypotensive episode after SB-220453 plus GTN but none after GTN alone. The reaction was unexpected, since animal models and previous human studies had shown no vascular or sympatholytic activity with SB-220453. The study was terminated prematurely due to this interaction. GTN was consistent in producing headache and migraine that resembled the patients' usual spontaneous migraine. Nine patients had GTN on both study days. Peak headache score showed a trend towards reduction after SB-220453 compared with placebo (median 4 vs. 7, P = 0.15). However, no reduction was seen in the number of subjects experiencing delayed headache (8 vs. 8), the number of subjects reporting migraine (6 vs. 8), migraine attacks fulfilling IHS criteria 1.1 or 1.7 (6 vs. 7) or IHS 1.1 alone (4 vs. 5). SB-220453 had no significant pre-emptive anti-migraine activity compared with placebo in this human model of migraine. An interaction between SB-220453 and GTN was discovered. This is important for the future development of the compound and underlines the usefulness of experimental migraine models.
Double cluster heads model for secure and accurate data fusion in wireless sensor networks.
Fu, Jun-Song; Liu, Yun
2015-01-19
Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called the Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Unlike traditional clustering models in WSNs, two cluster heads are selected for each cluster after clustering, based on the reputation and trust system, and they perform data fusion independently of each other. The results are then sent to the base station, where a dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds a threshold preset by the users, the cluster heads are added to the blacklist and must be reelected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which helps to identify and remove compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performs very well in terms of data fusion security and accuracy.
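The base-station consistency check can be sketched as follows. The dissimilarity measure (a normalized absolute difference) and all names here are illustrative choices of ours, not the paper's definitions:

```python
def dchm_check(fusion_a, fusion_b, threshold):
    """Base-station step of the Double Cluster Heads Model: the two
    cluster heads fuse independently; if their results disagree by
    more than the user-preset threshold, both are blacklisted and
    the cluster re-elects. Otherwise the fused value is accepted
    (here, as the average of the two results)."""
    d = abs(fusion_a - fusion_b) / max(abs(fusion_a), abs(fusion_b), 1e-12)
    if d <= threshold:
        return ("accept", (fusion_a + fusion_b) / 2)
    return ("blacklist_and_reelect", None)
```

Two honest heads fusing the same readings should agree closely and pass; a compromised head that skews its result trips the threshold.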
A unified model of coupled arc plasma and weld pool for double electrodes TIG welding
NASA Astrophysics Data System (ADS)
Wang, Xinxin; Fan, Ding; Huang, Jiankang; Huang, Yong
2014-07-01
A three-dimensional model containing the tungsten electrodes, arc plasma and weld pool is presented for double-electrode tungsten inert gas welding. The model is validated against available experimental data. The distributions of temperature, velocity and pressure of the coupled arc plasma are investigated. The current density, heat flux and shear stress over the weld pool are highlighted. Weld pool dynamics are described by taking into account buoyancy, the Lorentz force, surface tension and the plasma drag force. The turbulent effect in the weld pool is also considered. It is found that the temperature and velocity distributions of the coupled arc are not rotationally symmetric. A similar property is shown by the arc pressure, current density and heat flux at the anode surface. The surface tension gradient is much larger than the plasma drag force and dominates the convective pattern in the weld pool, thus determining the weld penetration. The anodic heat flux and plasma drag force, as well as the surface tension gradient over the weld pool, determine the weld shape and size. In addition, if the welding current through one electrode increases and that through the other decreases, keeping the total current unchanged, the coupled arc behaviour and weld pool dynamics change significantly, while the weld shape and size show little change. The results demonstrate the necessity of a unified model in the study of the arc plasma and weld pool.
Hua, Lei; Quan, Chanqin
2016-01-01
The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performances strongly depend on the handcraft features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) only takes the sdp and word embedding as input and (2) could avoid bias from feature selection by using CNN. We performed experiments on standard Aimed and BioInfer datasets, and the experimental results demonstrated that our approach outperformed state-of-the-art kernel based methods. In particular, by tracking the sdpCNN model, we find that sdpCNN could extract key features automatically and it is verified that pretrained word embedding is crucial in PPI task. PMID:27493967
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy. PMID:24710398
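A crude stand-in for the sparsification idea can be shown with simple thresholding: zeroing all but the largest entries of a dense space-varying convolution matrix leaves an approximate operator whose matrix-vector product needs far fewer multiplies. The paper's matrix source coding uses lossy transform coding of the matrix, not this bare thresholding:

```python
import numpy as np

def sparsify(A, keep=0.1):
    """Keep only the largest-magnitude fraction `keep` of the
    entries of a dense matrix A, zeroing the rest. For a smooth,
    slowly varying convolution operator, almost all of the action is
    concentrated in a small band of entries, so the approximate
    matvec A_sparse @ x stays close to A @ x."""
    thresh = np.quantile(np.abs(A), 1.0 - keep)
    return np.where(np.abs(A) >= thresh, A, 0.0)
```

The test below builds a space-varying Gaussian blur whose width grows slowly across the image, the kind of slow spatial variation the abstract describes, and checks that keeping 20% of the entries barely perturbs the matvec.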
Single-Cell Phenotype Classification Using Deep Convolutional Neural Networks.
Dürr, Oliver; Sick, Beate
2016-10-01
Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods for high-content screening-based phenotype classification. We trained a deep learning classifier in the form of convolutional neural networks with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.
Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks.
Dosovitskiy, Alexey; Fischer, Philipp; Springenberg, Jost Tobias; Riedmiller, Martin; Brox, Thomas
2016-09-01
Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor. PMID:26540673
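The surrogate-class construction can be sketched minimally. Only two simple transformations (90-degree rotations and contrast scaling) are used here, and the function name is ours; the paper applies a much richer family of transformations to each seed patch:

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_class(seed_patch, n_samples):
    """Build one surrogate class in the Exemplar-CNN style: many
    random transformations of a single randomly sampled seed patch.
    The network is then trained to map all of these back to the same
    class label, forcing invariance to the applied transformations."""
    samples = []
    for _ in range(n_samples):
        p = np.rot90(seed_patch, rng.integers(0, 4))  # random rotation
        p = p * rng.uniform(0.5, 2.0)                 # random contrast
        samples.append(p)
    return samples
```

Each seed patch defines its own class, so the number of classes equals the number of sampled patches, which is why the learned features are transformation-robust rather than class-specific.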
A Mathematical Motivation for Complex-Valued Convolutional Networks.
Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur
2016-05-01
A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
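The three-operation composition, and its reading as a windowed absolute spectrum when the filters are windowed complex exponentials, can be sketched directly. The Hann window, filter count, and pooling width below are illustrative choices, not taken from the paper; in a trained convnet the filters would be learned rather than fixed.

```python
import numpy as np

def complex_convnet_layer(x, n_filters=8, width=16, pool=4):
    """One stage of a complex-valued convnet: (1) convolution with
    complex-valued filters, (2) entrywise absolute value, (3) local
    averaging.  With windowed complex exponentials as filters, the
    output approximates a windowed absolute spectrum of the input."""
    window = np.hanning(width)
    n = np.arange(width)
    features = []
    for k in range(n_filters):
        filt = window * np.exp(2j * np.pi * k * n / width)
        conv = np.convolve(x, filt, mode="valid")       # (1) complex convolution
        mag = np.abs(conv)                              # (2) modulus nonlinearity
        mag = mag[: len(mag) - len(mag) % pool]
        pooled = mag.reshape(-1, pool).mean(axis=1)     # (3) local averaging
        features.append(pooled)
    return np.stack(features)
```

For a pure tone, the filter channel whose exponential matches the tone's frequency responds most strongly, which is exactly the windowed-spectrum correspondence the abstract describes.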
Enhancing Neutron Beam Production with a Convoluted Moderator
Iverson, Erik B; Baxter, David V; Muhrer, Guenter; Ansell, Stuart; Gallmeier, Franz X; Dalgliesh, Robert; Lu, Wei; Kaiser, Helmut
2014-10-01
We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.
The polarization model for hydration/double layer interactions: the role of the electrolyte ions.
Manciu, Marian; Ruckenstein, Eli
2004-12-31
The interactions between hydrophilic surfaces in water cannot always be explained on the basis of the traditional Derjaguin-Landau-Verwey-Overbeek (DLVO) theory, and an additional repulsion, the "hydration force", is required to accommodate the experimental data. While this force is in general associated with the organization of water in the vicinity of the surface, different models of hydration were typically required to explain different experiments. In this article, it is shown that the polarization model for the double layer/hydration proposed by the authors can explain both (i) the repulsion between neutral lipid bilayers, with a short decay length (approximately 2 Å), which is almost independent of the electrolyte concentration, and, at the same time, (ii) the repulsion between weakly charged mica surfaces, with a longer decay length (approximately 10 Å), exhibiting not only a dependence on the ionic strength, but also strong ion-specific effects. The model, which was previously employed to explain the restabilization of protein-covered latex particles at high ionic strengths and the existence of a long-range repulsion between the apoferritin molecules at moderate ionic strengths, is extended to account for the additional interactions between ions and surfaces, not included in the mean field electrical potential. The effect of the disorder in the water structure on the dipole correlation length is examined, and the conditions under which the results of the polarization model are qualitatively similar to those obtained by the traditional theory via parameter fitting are emphasized. However, there are conditions under which the polarization model predicts results that cannot be recovered by the traditional theory via parameter fitting.
Slow rise and partial eruption of a double-decker filament. II. A double flux rope model
Kliem, Bernhard; Török, Tibor; Titov, Viacheslav S.; Lionello, Roberto; Linker, Jon A.; Liu, Rui; Liu, Chang; Wang, Haimin
2014-09-10
Force-free equilibria containing two vertically arranged magnetic flux ropes of like chirality and current direction are considered as a model for split filaments/prominences and filament-sigmoid systems. Such equilibria are constructed analytically through an extension of the methods developed in Titov and Démoulin and numerically through an evolutionary sequence including shear flows, flux emergence, and flux cancellation in the photospheric boundary. It is demonstrated that the analytical equilibria are stable if an external toroidal (shear) field component exceeding a threshold value is included. If this component decreases sufficiently, then both flux ropes turn unstable for conditions typical of solar active regions, with the lower rope typically becoming unstable first. Either both flux ropes erupt upward, or only the upper rope erupts while the lower rope reconnects with the ambient flux low in the corona and is destroyed. However, for shear field strengths staying somewhat above the threshold value, the configuration also admits evolutions which lead to partial eruptions with only the upper flux rope becoming unstable and the lower one remaining in place. This can be triggered by a transfer of flux and current from the lower to the upper rope, as suggested by the observations of a split filament in Paper I. It can also result from tether-cutting reconnection with the ambient flux at the X-type structure between the flux ropes, which similarly influences their stability properties in opposite ways. This is demonstrated for the numerically constructed equilibrium.
Symmetry-adapted digital modeling II. The double-helix B-DNA.
Janner, A
2016-05-01
The positions of phosphorus in B-DNA have the remarkable property of occurring (in axial projection) at well defined points in the three-dimensional space of a projected five-dimensional decagonal lattice, subdividing according to the golden mean ratio τ:1:τ [with τ = (1 + √5)/2] the edges of an enclosing decagon. The corresponding planar integral indices n1, n2, n3, n4 (which are lattice point coordinates) are extended to include the axial index n5 as well, defined for each P position of the double helix with respect to the single decagonal lattice ΛP(aP, cP) with aP = 2.222 Å and cP = 0.676 Å. A finer decagonal lattice Λ(a, c), with a = aP/6 and c = cP, together with a selection of lattice points for each nucleotide with a given indexed P position (so as to define a discrete set in three dimensions) permits the indexing of the atomic positions of the B-DNA d(AGTCAGTCAG) derived by M. J. P. van Dongen. This is done for both DNA strands and the single lattice Λ. Considered first is the sugar-phosphate subsystem, and then each nucleobase guanine, adenine, cytosine and thymine. One gets in this way a digital modeling of d(AGTCAGTCAG) in a one-to-one correspondence between atomic and indexed positions and a maximal deviation of about 0.6 Å (for the value of the lattice parameters given above). It is shown how to get a digital modeling of the B-DNA double helix for any given code. Finally, a short discussion indicates how this procedure can be extended to derive coarse-grained B-DNA models. An example is given with a reduction factor of about 2 in the number of atomic positions. A few remarks about the wider interest of this investigation and possible future developments conclude the paper. PMID:27126108
Osmotic pressure of ionic liquids in an electric double layer: Prediction based on a continuum model
NASA Astrophysics Data System (ADS)
Moon, Gi Jong; Ahn, Myung Mo; Kang, In Seok
2015-12-01
An analysis has been performed for the osmotic pressure of ionic liquids in the electric double layer (EDL). By using the electromechanical approach, we first derive a differential equation that is valid for computing the osmotic pressure in the continuum limit of any incompressible fluid in EDL. Then a specific model for ionic liquids proposed by Bazant et al. [M. Z. Bazant, B. D. Storey, and A. A. Kornyshev, Phys. Rev. Lett. 106, 046102 (2011), 10.1103/PhysRevLett.106.046102] is adopted for more detailed computation of the osmotic pressure. Ionic liquids are characterized by the correlation and the steric effects of ions and their effects are analyzed. In the low voltage cases, the correlation effect is dominant and the problem becomes linear. For this low voltage limit, a closed form formula is derived for predicting the osmotic pressure in EDL with no overlapping. It is found that the osmotic pressure decreases as the correlation effect increases. The osmotic pressures at the nanoslit surface and nanoslit centerline are also obtained for the low voltage limit. For the cases of moderately high voltage with high correlation factor, approximate formulas are derived for estimating osmotic pressure values based on the concept of a condensed layer near the electrode. In order to corroborate the results predicted by analytical studies, the full nonlinear model has been solved numerically.
Probing flavor models with ⁷⁶Ge-based experiments on neutrinoless double-β decay
NASA Astrophysics Data System (ADS)
Agostini, Matteo; Merle, Alexander; Zuber, Kai
2016-04-01
The physics impact of a staged approach for double-β decay experiments based on ⁷⁶Ge is studied. The scenario considered relies on realistic time schedules envisioned by the Gerda and the Majorana collaborations, which are jointly working towards the realization of a future larger scale ⁷⁶Ge experiment. Intermediate stages of the experiments are conceived to perform quasi background-free measurements, and different data sets can be reliably combined to maximize the physics outcome. The sensitivity for such a global analysis is presented, with focus on how neutrino flavor models can be probed already with preliminary phases of the experiments. The synergy between theory and experiment yields strong benefits for both sides: the model predictions can be used to sensibly plan the experimental stages, and results from intermediate stages can be used to constrain whole groups of theoretical scenarios. This strategy clearly generates added value to the experimental efforts, while at the same time it allows valuable physics results to be achieved as early as possible.
Indo-Pacific ENSO modes in a double-basin Zebiak-Cane model
NASA Astrophysics Data System (ADS)
Wieners, Claudia; de Ruijter, Will; Dijkstra, Henk
2016-04-01
We study Indo-Pacific interactions on ENSO timescales in a double-basin version of the Zebiak-Cane ENSO model, employing both time integrations and bifurcation analysis (continuation methods). The model contains two oceans (the Indian and Pacific Ocean) separated by a meridional wall. Interaction between the basins is possible via the atmosphere overlaying both basins. We focus on the effect of the Indian Ocean (both its mean state and its variability) on ENSO stability. In addition, inspired by analysis of observational data (Wieners et al, Coherent tropical Indo-Pacific interannual climate variability, in review), we investigate the effect of state-dependent atmospheric noise. Preliminary results include the following: 1) The background state of the Indian Ocean stabilises the Pacific ENSO (i.e. the Hopf bifurcation is shifted to higher values of the SST-atmosphere coupling), 2) the West Pacific cooling (warming) co-occurring with El Niño (La Niña) is essential to simulate the phase relations between Pacific and Indian SST anomalies, 3) a non-linear atmosphere is needed to simulate the effect of the Indian Ocean variability onto the Pacific ENSO that is suggested by observations.
Haber, James E; Ira, Gregorz; Malkova, Anna; Sugawara, Neal
2004-01-01
Since the pioneering model for homologous recombination proposed by Robin Holliday in 1964, there has been great progress in understanding how recombination occurs at a molecular level. In the budding yeast Saccharomyces cerevisiae, one can follow recombination by physically monitoring DNA after the synchronous induction of a double-strand break (DSB) in both wild-type and mutant cells. A particularly well-studied system has been the switching of yeast mating-type (MAT) genes, where a DSB can be induced synchronously by expression of the site-specific HO endonuclease. Similar studies can be performed in meiotic cells, where DSBs are created by the Spo11 nuclease. There appear to be at least two competing mechanisms of homologous recombination: a synthesis-dependent strand annealing pathway leading to noncrossovers and a two-end strand invasion mechanism leading to formation and resolution of Holliday junctions (HJs), leading to crossovers. The establishment of a modified replication fork during DSB repair links gene conversion to another important repair process, break-induced replication. Despite recent revelations, almost 40 years after Holliday's model was published, the essential ideas he proposed of strand invasion and heteroduplex DNA formation, the formation and resolution of HJs, and mismatch repair, remain the basis of our thinking. PMID:15065659
Olivieri, Giuseppe; Russo, Maria Elena; Marzocchella, Antonio; Salatino, Piero
2011-01-01
A mathematical model of an aerobic biofilm reactor is presented to investigate the bifurcational patterns and the dynamical behavior of the reactor as a function of different key operating parameters. Suspended cells and biofilm are assumed to grow according to double limiting kinetics with phenol inhibition (carbon source) and oxygen limitation. The model presented by Russo et al. is extended to embody key features of the phenomenology of the granular-supported biofilm: biofilm growth and detachment, gas-liquid oxygen transport, phenol and oxygen uptake by both suspended and immobilized cells, and substrate diffusion into the biofilm. Steady-state conditions and stability, and local dynamic behavior have been characterized. The multiplicity of steady states and their stability depend on key operating parameter values (dilution rate, gas-liquid mass transfer coefficient, biofilm detachment rate, and inlet substrate concentration). Small changes in the operating conditions may be coupled with a drastic change of the steady-state scenario with transcritical and saddle-node bifurcations. The relevance of concentration profiles establishing within the biofilm is also addressed. When the oxygen level in the liquid phase is <10% of the saturation level, the biofilm undergoes oxygen starvation and the active biofilm fraction becomes independent of the dilution rate. © 2011 American Institute of Chemical Engineers Biotechnol. Prog., 2011.
Glas, Julia; Dümcke, Sebastian; Zacher, Benedikt; Poron, Don; Gagneur, Julien; Tresch, Achim
2016-03-18
Hidden Markov models (HMMs) have been extensively used to dissect the genome into functionally distinct regions using data such as RNA expression or DNA binding measurements. It is a challenge to disentangle processes occurring on complementary strands of the same genomic region. We present the double-stranded HMM (dsHMM), a model for the strand-specific analysis of genomic processes. We applied dsHMM to yeast using strand specific transcription data, nucleosome data, and protein binding data for a set of 11 factors associated with the regulation of transcription. The resulting annotation recovers the mRNA transcription cycle (initiation, elongation, termination) while correctly predicting strand-specificity and directionality of the transcription process. We find that pre-initiation complex formation is an essentially undirected process, giving rise to a large number of bidirectional promoters and to pervasive antisense transcription. Notably, 12% of all transcriptionally active positions showed simultaneous activity on both strands. Furthermore, dsHMM reveals that antisense transcription is specifically suppressed by Nrd1, a yeast termination factor. PMID:26578558
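The dsHMM itself couples two strand-specific chains in a way this abstract does not fully specify. As a minimal stand-in, the sketch below is a generic HMM forward recursion in which each hidden state carries a strand label, checked against brute-force path enumeration; all state names and probabilities are invented for illustration.

```python
import itertools
import numpy as np

# Toy strand-aware HMM: states are (annotation, strand) pairs.
states = [("txn", "+"), ("txn", "-"), ("background", None)]
start = np.array([0.3, 0.3, 0.4])
trans = np.array([[0.8, 0.0, 0.2],
                  [0.0, 0.8, 0.2],
                  [0.1, 0.1, 0.8]])
# Observation symbols: 0 = plus-strand signal, 1 = minus-strand signal, 2 = none.
emit = np.array([[0.7, 0.1, 0.2],
                 [0.1, 0.7, 0.2],
                 [0.1, 0.1, 0.8]])

def forward_likelihood(obs):
    """Likelihood of the observations, summed over all hidden paths."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()

def brute_force_likelihood(obs):
    """Direct enumeration of every hidden path (validates the recursion)."""
    total = 0.0
    for path in itertools.product(range(len(states)), repeat=len(obs)):
        p = start[path[0]] * emit[path[0], obs[0]]
        for t in range(1, len(obs)):
            p *= trans[path[t - 1], path[t]] * emit[path[t], obs[t]]
        total += p
    return total
```

The zero transition probabilities between the two "txn" states encode that a transcribed region does not switch strand mid-unit, a simplified version of the strand-specificity the model is designed to capture.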
Modeling and measuring double-frequency jitter in one-way master-slave networks
NASA Astrophysics Data System (ADS)
Ferreira, André Alves; Bueno, Átila Madureira; Piqueira, José R. C.
2009-05-01
The double-frequency jitter is one of the main problems in clock distribution networks. In previous works, some analytical and numerical aspects of this phenomenon were studied and results were obtained for one-way master-slave (OWMS) architectures. Here, an experimental apparatus is implemented, allowing the power of the double-frequency signal to be measured and the theoretical conjectures to be confirmed.
Shin, Hoo-Chang; Roth, Holger R; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel; Summers, Ronald M
2016-05-01
Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks. PMID:26886976
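Of the strategies this abstract lists, the "off-the-shelf pre-trained CNN features" route is the easiest to sketch without a deep-learning framework. In the toy sketch below, fixed random filters stand in for pre-trained convolutional features, and only a logistic-regression head is trained on top; the data, dimensions, and filter choice are all synthetic placeholders, not anything from the paper.

```python
import numpy as np

def conv_features(images, filters):
    """Fixed feature extractor: valid 2-D convolution with frozen filters,
    ReLU nonlinearity, then global average pooling per filter."""
    feats = []
    for img in images:
        per_filter = []
        for f in filters:
            fh, fw = f.shape
            windows = np.lib.stride_tricks.sliding_window_view(img, (fh, fw))
            resp = np.einsum("ijkl,kl->ij", windows, f)   # convolution responses
            per_filter.append(np.maximum(resp, 0).mean())  # ReLU + global pool
        feats.append(per_filter)
    return np.array(feats)

def train_logistic(X, y, lr=0.5, steps=500):
    """Lightweight classifier head trained by plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b
```

Only `train_logistic` updates any parameters; the extractor stays frozen, which is the defining property of the off-the-shelf-features approach (fine-tuning would instead also update the filters).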
Generating double knockout mice to model genetic intervention for diabetic cardiomyopathy in humans.
Chavali, Vishalakshi; Nandi, Shyam Sundar; Singh, Shree Ram; Mishra, Paras Kumar
2014-01-01
Diabetes is a rapidly increasing disease that enhances the chances of heart failure twofold to fourfold (as compared to age and sex matched nondiabetics) and becomes a leading cause of morbidity and mortality. There are two broad classifications of diabetes: type 1 diabetes (T1D) and type 2 diabetes (T2D). Several mouse models mimic both T1D and T2D in humans. However, the genetic intervention to ameliorate diabetic cardiomyopathy in these mice often requires creating a double knockout (DKO). In order to assess the therapeutic potential of a gene, that specific gene is either overexpressed (transgenic expression) or abrogated (knockout) in the diabetic mice. If the genetic mouse model for diabetes is used, it is necessary to create a DKO with transgenic/knockout of the target gene to investigate the specific role of that gene in pathological cardiac remodeling in diabetics. One of the important genes involved in extracellular matrix (ECM) remodeling in diabetes is matrix metalloproteinase-9 (Mmp9). Mmp9 is a collagenase that remains latent in healthy hearts but is induced in diabetic hearts. Activated Mmp9 degrades extracellular matrix (ECM) and increases matrix turnover causing cardiac fibrosis that leads to heart failure. Insulin2 mutant (Ins2+/-) Akita is a genetic model for T1D that becomes diabetic spontaneously at the age of 3-4 weeks and shows robust hyperglycemia at the age of 10-12 weeks. It is a chronic model of T1D. In Ins2+/- Akita, Mmp9 is induced. To investigate the specific role of Mmp9 in diabetic hearts, it is necessary to create diabetic mice where the Mmp9 gene is deleted. Here, we describe the method to generate Ins2+/-/Mmp9-/- (DKO) mice to determine whether the abrogation of Mmp9 ameliorates diabetic cardiomyopathy. PMID:25064116
ERIC Educational Resources Information Center
Pakenham, Kenneth I.; Samios, Christina; Sofronoff, Kate
2005-01-01
The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between…
Branz, Tanja; Faessler, Amand; Gutsche, Thomas; Lyubovitskij, Valery E.; Oexl, Bettina; Ivanov, Mikhail A.; Koerner, Juergen G.
2010-06-01
We study flavor-conserving radiative decays of double-heavy baryons using a manifestly Lorentz covariant constituent three-quark model. Decay rates are calculated and compared to each other in the full theory, keeping masses finite, and also in the heavy quark limit. We discuss in some detail hyperfine mixing effects.
There is no MacWilliams identity for convolutional codes. [transmission gain comparison
NASA Technical Reports Server (NTRS)
Shearer, J. B.; Mceliece, R. J.
1977-01-01
An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
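For contrast with the negative result above, the MacWilliams identity that does hold for linear block codes, W_dual(x, y) = |C|⁻¹ · W_C(x + y, x − y), can be checked numerically on a small example. The sketch below uses the [3,1] repetition code and its dual, the [3,2] even-weight code; these codes are chosen purely for illustration and are not taken from the report.

```python
from itertools import product

def codewords(generator_rows, n):
    """All codewords of the binary linear code spanned by the given rows."""
    words = set()
    for coeffs in product([0, 1], repeat=len(generator_rows)):
        w = [0] * n
        for c, row in zip(coeffs, generator_rows):
            if c:
                w = [a ^ b for a, b in zip(w, row)]
        words.add(tuple(w))
    return words

def weight_enumerator(words, x, y):
    """W_C(x, y) = sum over codewords of x^(n - wt) * y^wt."""
    n = len(next(iter(words)))
    return sum(x ** (n - sum(w)) * y ** sum(w) for w in words)

# [3,1] repetition code and its dual, the [3,2] even-weight (parity) code.
C = codewords([[1, 1, 1]], 3)
C_dual = codewords([[1, 1, 0], [0, 1, 1]], 3)

def macwilliams_holds(x, y):
    """Check W_dual(x, y) == W_C(x + y, x - y) / |C| at a sample point."""
    lhs = weight_enumerator(C_dual, x, y)
    rhs = weight_enumerator(C, x + y, x - y) / len(C)
    return abs(lhs - rhs) < 1e-9
```

The report's point is precisely that no analogous transform exists relating the transmission gains of a convolutional code and its dual.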
Using convolutional decoding to improve time delay and phase estimation in digital communications
Ormesher, Richard C.; Mason, John J.
2010-01-26
The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.
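The abstract does not spell out the decoder, but the convolutional decoding operation such estimation builds on is classically Viterbi decoding. Below is a minimal hard-decision sketch for an illustrative rate-1/2, constraint-length-3 code with octal generators (7, 5); the code parameters are assumptions for the example, not taken from the source.

```python
# Generator taps for a rate-1/2, constraint-length-3 convolutional code.
G = (0b111, 0b101)

def encode(bits):
    """Convolutional encoder; the 2-bit state holds the two previous bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received, n_bits):
    """Hard-decision Viterbi decoding with Hamming-distance branch metrics."""
    INF = float("inf")
    metric = [0.0, INF, INF, INF]   # start in the all-zero state
    paths = [[], [], [], []]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                out = [bin(reg & g).count("1") & 1 for g in G]
                ns = reg >> 1
                m = metric[s] + sum(o != x for o, x in zip(out, r))
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=metric.__getitem__)]
```

Because the decoder tolerates channel errors, a receiver can re-encode its decoded decisions and compare them against the received samples at candidate alignments, which is the kind of decoding-based delay/phase estimate the abstract describes.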
NASA Astrophysics Data System (ADS)
Sammons, Daniel; Winfree, William P.; Burke, Eric; Ji, Shuiwang
2016-02-01
Nondestructive evaluation (NDE) utilizes a variety of techniques to inspect various materials for defects without causing changes to the material. X-ray computed tomography (CT) produces large volumes of three dimensional image data. Using the task of identifying delaminations in carbon fiber reinforced polymer (CFRP) composite CT, this work shows that it is possible to automate the analysis of these large volumes of CT data using a machine learning model known as a convolutional neural network (CNN). Further, tests on simulated data sets show that with a robust set of experimental data, it may be possible to go beyond just identification and instead accurately characterize the size and shape of the delaminations with CNNs.
Kinetic Energy of Hydrocarbons as a Function of Electron Density and Convolutional Neural Networks.
Yao, Kun; Parkhill, John
2016-03-01
We demonstrate a convolutional neural network trained to reproduce the Kohn-Sham kinetic energy of hydrocarbons from an input electron density. The output of the network is used as a nonlocal correction to conventional local and semilocal kinetic functionals. We show that this approximation qualitatively reproduces Kohn-Sham potential energy surfaces when used with conventional exchange correlation functionals. The density which minimizes the total energy given by the functional is examined in detail. We identify several avenues to improve on this exploratory work, by reducing numerical noise and changing the structure of our functional. Finally we examine the features in the density learned by the neural network to anticipate the prospects of generalizing these models.
Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji
2003-01-01
Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
The effect of whitening transformation on pooling operations in convolutional autoencoders
NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua
2015-12-01
Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the pre-processing step, whitening transformation has widely been adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation to reduce the resolution of feature maps and achieve spatial invariance in convolutional neural networks. Conventionally, pooling methods are mainly determined empirically in most previous work. Therefore, our main purpose is to study the relationship between whitening processing and pooling operations in convolutional autoencoders for image classification. We propose an adaptive pooling approach based on the concepts of information entropy to test the effect of whitening on pooling in different conditions. Experimental results on benchmark datasets indicate that the performance of pooling strategies is associated with the distribution of feature activations, which can be affected by whitening processing. This provides guidance for the selection of pooling methods in convolutional autoencoders and other convolutional neural networks.
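The whitening transformation discussed here is commonly implemented as ZCA whitening. A minimal NumPy sketch (not the authors' code) showing how it removes the correlation between features while staying close to the original pixel space:

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """ZCA whitening: decorrelate the columns of X (n_samples x n_features),
    then rotate back so the whitened data stays close to the input space."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # symmetric whitening matrix
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))   # correlated features
Xw = zca_whiten(X)
print(np.allclose(np.cov(Xw, rowvar=False), np.eye(8), atol=1e-2))  # True
```

After whitening, the empirical covariance is (numerically) the identity, i.e. adjacent features are uncorrelated, which is the pre-processing condition the paper varies.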
Scattering theory for the radial Ḣ^{1/2}-critical wave equation with a cubic convolution
NASA Astrophysics Data System (ADS)
Miao, Changxing; Zhang, Junyong; Zheng, Jiqiang
2015-12-01
In this paper, we study the global well-posedness and scattering for the wave equation with a cubic convolution ∂_t^2 u − Δu = ±(|x|^{−3} ∗ |u|^2)u in dimensions d ≥ 4. We prove that if the radial solution u with life-span I obeys (u, u_t) ∈ L_t^∞(I; Ḣ_x^{1/2}(R^d) × Ḣ_x^{−1/2}(R^d)), then u is global and scatters. By the strategy derived from concentration compactness, we show that the proof of the global well-posedness and scattering is reduced to disproving the existence of two scenarios: the soliton-like solution and the high-to-low frequency cascade. Making use of the no-waste Duhamel formula and the double Duhamel trick, we deduce that these two scenarios enjoy additional regularity by the bootstrap argument of [7]. This together with virial analysis implies that the energy of such scenarios is zero, and so we obtain a contradiction.
JACKSON VL
2011-08-31
The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.
Three-dimensional inspiratory flow in a double bifurcation airway model
NASA Astrophysics Data System (ADS)
Jalal, Sahar; Nemes, Andras; Van de Moortele, Tristan; Schmitter, Sebastian; Coletti, Filippo
2016-09-01
The flow in an idealized airway model is investigated for the steady inhalation case. The geometry consists of a symmetric planar double bifurcation that reflects the anatomical proportions of the human bronchial tree, and a wide range of physiologically relevant Reynolds numbers (Re = 100-5000) is considered. Using magnetic resonance velocimetry, we analyze the three-dimensional fields of velocity and vorticity, along with flow descriptors that characterize the longitudinal and lateral dispersion. In agreement with previous studies, the symmetry of the flow partitioning is broken even at the lower Reynolds numbers, and at the second bifurcation, the fluid favors the medial branches over the lateral ones. This trend reaches a plateau around Re = 2000, above which the turbulent inflow results in smoothed mean velocity gradients. This also reduces the streamwise momentum flux, which is a measure of the longitudinal dispersion by the mean flow. The classic Dean-type counter-rotating vortices are observed in the first-generation daughter branches as a result of the local curvature. In the granddaughter branches, however, the secondary flows are determined by the local curvature only for the lower flow regimes (Re ≤ 250), in which case the classic Dean mechanism prevails. At higher flow regimes, the field is instead dominated by streamwise vortices extending from the daughter into the medial granddaughter branches, where they rotate in the opposite direction with respect to Dean vortices. Circulation and secondary flow intensity show a similar trend as the momentum flux, increasing with Reynolds number up to Re = 2000 and then dropping due to turbulent dissipation of vorticity. The streamwise vortices interact both with each other and with the airway walls, and for Re > 500 they can become stronger in the medial granddaughter than in the upstream daughter branches. With respect to realistic airway models, the idealized geometry produces weaker secondary flows
Enhanced Climatic Warming in the Tibetan Plateau Due to Double CO2: A Model Study
NASA Technical Reports Server (NTRS)
Chen, Baode; Chao, Winston C.; Liu, Xiao-Dong; Lau, William K. M. (Technical Monitor)
2001-01-01
The NCAR (National Center for Atmospheric Research) regional climate model (RegCM2), with time-dependent lateral meteorological fields provided by a 130-year transient increasing-CO2 simulation of the NCAR Climate System Model (CSM), has been used to investigate the mechanism of enhanced ground temperature warming over the TP (Tibetan Plateau). From our model results, a remarkable tendency of warming increasing with elevation is found for the winter season, whereas elevation dependency of warming is not clearly recognized in the summer season. This simulated feature of elevation dependency of ground temperature is consistent with observations. Based on an analysis of the surface energy budget, the shortwave solar radiation absorbed at the surface plus the downward longwave flux reaching the surface shows a strong elevation dependency and is mostly responsible for enhanced surface warming over the TP. At lower elevations, the precipitation forced by topography is enhanced due to an increase in water vapor supply resulting from a warming of the atmosphere induced by doubling CO2. This precipitation enhancement must be associated with an increase in clouds, which results in a decline in solar flux reaching the surface. At higher elevations, large snow depletion is detected in the 2xCO2 run. It leads to a decrease in albedo, and therefore more solar flux is absorbed at the surface. On the other hand, a much more uniform increase in the downward longwave flux reaching the surface is found. The combination of these effects (i.e., decrease in solar flux at lower elevations, increase in solar flux at higher elevations, and a more uniform increase in downward longwave flux) results in the elevation dependency of enhanced ground temperature warming over the TP.
Del Valle, Pedro L; Trifillis, Anna; Ruegg, Charles E; Kane, Andrew S
2002-04-01
Rabbit kidney proximal convoluted tubule (RPCT) and proximal straight tubule (RPST) cells were independently isolated and cultured. The kinetics of sodium-dependent glucose transport was characterized by determining the uptake of the glucose analog alpha-methylglucopyranoside. Cell culture and assay conditions used in these experiments were based on previous experiments conducted on the renal cell line derived from the whole kidney of the Yorkshire pig (LLC-PK1). Results indicated the presence of two distinct sodium-dependent glucose transporters in rabbit renal cells: a relatively high-capacity, low-affinity transporter (Vmax = 2.28 +/- 0.099 nmol/mg protein/min, Km = 4.1 +/- 0.27 mM) in RPCT cells and a low-capacity, high-affinity transporter (Vmax = 0.45 +/- 0.076 nmol/mg protein/min, Km = 1.7 +/- 0.43 mM) in RPST cells. A relatively high-capacity, low-affinity transporter (Vmax = 1.68 +/- 0.215 nmol/mg protein/min, Km = 4.9 +/- 0.23 mM) was characterized in LLC-PK1 cells. Phlorizin inhibited the uptake of alpha-methylglucopyranoside in proximal convoluted, proximal straight, and LLC-PK1 cells by 90, 50, and 90%, respectively. Sodium-dependent glucose transport in all three cell types was specific for hexoses. These data are consistent with the kinetic heterogeneity of sodium-dependent glucose transport in the S1-S2 and S3 segments of the mammalian renal proximal tubule. The RPCT-RPST cultured cell model is novel, and this is the first report of sodium-dependent glucose transport characterization in primary cultures of proximal straight tubule cells. Our results support the use of cultured monolayers of RPCT and RPST cells as a model system to evaluate segment-specific differences in these renal cell types.
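The reported kinetic constants follow the standard Michaelis-Menten saturation model, v = Vmax·S/(Km + S). A short sketch using the paper's fitted values (units as reported; the helper name is illustrative):

```python
def mm_uptake(S, Vmax, Km):
    """Michaelis-Menten uptake rate at substrate concentration S (mM)."""
    return Vmax * S / (Km + S)

# Fitted constants from the abstract (nmol/mg protein/min; mM)
rpct = dict(Vmax=2.28, Km=4.1)   # high-capacity, low-affinity (RPCT)
rpst = dict(Vmax=0.45, Km=1.7)   # low-capacity, high-affinity (RPST)

# At S = Km the rate is half-maximal by construction:
print(mm_uptake(4.1, **rpct))    # 1.14 = Vmax / 2
```

Plotting both curves over S = 0-20 mM reproduces the contrast the paper draws: the RPCT transporter saturates slowly at a high ceiling, the RPST transporter quickly at a low one.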
NASA Astrophysics Data System (ADS)
Mankovich, Christopher; Fortney, Jonathan J.; Nettelmann, Nadine; Moore, Kevin
2016-10-01
Hydrogen and helium unmix when sufficiently cool, and this bears on the thermal evolution of all cool giant planets at or below one Jupiter mass. Over the past few years, ab initio simulations have put us in the era of quantitative predictions for this H-He immiscibility at megabar pressures. We present models for the thermal evolution of Jupiter, including its evolving helium distribution following one such ab initio H-He phase diagram. After 4 Gyr of homogeneous evolution, differentiation establishes a helium gradient between 1 and 2 Mbar that dynamically stabilizes the fluid to overturning convection. The result is a region undergoing overstable double-diffusive convection (ODDC), whose relatively weak vertical heat transport maintains a superadiabatic temperature gradient. With a general parameterization for the ODDC efficiency, the models can reconcile Jupiter's intrinsic flux, atmospheric helium content, and mean radius at the age of the solar system if the H-He phase diagram is translated to cooler temperatures. We cast our nonadiabatic thermal evolution models in a Markov chain Monte Carlo parameter estimation framework, retrieving the total heavy element mass, the superadiabaticity of the deep temperature gradient, and the phase diagram temperature offset. Models using the interpolated Saumon, Chabrier and van Horn (1995) equation of state (SCvH-I) favor very inefficient ODDC such that the deep temperature gradient is strongly superadiabatic, forming a thermal boundary layer that allows the molecular envelope to cool quickly while the deeper interior (most of the planet's mass) actually heats up over time. If we modulate the overall cooling time with an additional free parameter, mimicking the effect of a colder or warmer EOS, the models favor those that are colder than SCvH-I; this class of EOS is also favored by shock experiments. The models in this scenario have more modest deep superadiabaticities such that the envelope cools more gradually and the deep
NASA Astrophysics Data System (ADS)
Bartlett, J.; Hardy, G.; Hepburn, I. D.; Brockley-Blatt, C.; Coker, P.; Crofts, E.; Winter, B.; Milward, S.; Stafford-Allen, R.; Brownhill, M.; Reed, J.; Linder, M.; Rando, N.
2010-09-01
This paper describes the design, development and performance of the engineering model double adiabatic demagnetization refrigerator (dADR) built and tested under contract to the European Space Agency for its former mission XEUS (now IXO). The dADR operates from a 4 K bath and has a measured recycle and hold time (with a parasitic load of 2.34 μW) at 50 mK of 15 h and 10 h, respectively. It is shown that the performance can be significantly improved by operating from a lower bath temperature and replacing the current heat switches with tungsten magnetoresistive (MR) heat switches, which significantly reduce the parasitic heat load. Performing the latter gives an anticipated recycle and hold time of 2 and 29 h (with a 1 μW applied heat load in addition to the parasitic load), respectively. Such improved performance allows for a reduction in mass of the dADR from 32 kg to 10 kg by operating from a 2.5 K bath (which could be reduced further by optimising the magnet design). Ultimately, continuous operation could be achieved by linking two dADRs to a common detector stage and operating them alternately. Based on this design the mass of the continuous ADR is estimated to be about 4.5 kg.
Mathematical model of a double-coil inductive transducer for measuring electrical conductivity
Kusmierz, Jozef
2007-08-15
A technique for the contactless measurement of the electrical conductivity of conducting materials using a double-coil inductive transducer is presented. A mathematical model of the transducer has been created and it consists of two cylindrical coils and a tested sample in the form of a cylinder coaxial with the coils. A processing function of the transducer is defined as the ratio of voltages between terminals of the measurement coil with and without the test sample. This processing function depends on the conductivity of the test sample, the dimensions of the sample and of both coils of the transducer (the measurement coil and the excitation coil), and the frequency of the current supplied to the excitation coil. An analytical formula for the processing function is derived; analysis of graphs of this function in different formats enables us to evaluate the influence of all the essential parameters of the transducer. This is a necessary step for both transducer optimization and carrying out of the conductivity measurement of the investigated materials. In order to verify the theoretical predictions, experimental investigations have been performed using a computerized data acquisition system. First, an experimental validation of the obtained analytical formula has been completed using an aluminum sample of known conductivity. Then, the conductivity measurements of a sample made of brass have been carried out. The obtained experimental results confirm the high accuracy of the theoretical analysis.
Chmely, S. C.; McKinney, K. A.; Lawrence, K. R.; Sturgeon, M.; Katahira, R.; Beckham, G. T.
2013-01-01
Lignin is an underutilized value stream in current biomass conversion technologies because there exist no economic and technically feasible routes for lignin depolymerization and upgrading. Base-catalyzed deconstruction (BCD) has been applied for lignin depolymerization (e.g., the Kraft process) in the pulp and paper industry for more than a century using aqueous-phase media. However, these efforts require treatment to neutralize the resulting streams, which adds significantly to the cost of lignin deconstruction. To circumvent the need for downstream treatment, here we report recent advances in the synthesis of layered double hydroxide and metal oxide catalysts to be applied to the BCD of lignin. These catalysts may prove more cost-effective than liquid-phase, non-recyclable base, and their use obviates downstream processing steps such as neutralization. Synthetic procedures for various transition-metal containing catalysts, detailed kinetics measurements using lignin model compounds, and results of the application of these catalysts to biomass-derived lignin will be presented.
Double Roles of Macrophages in Human Neuroimmune Diseases and Their Animal Models
Fan, Xueli; Zhang, Hongliang; Cheng, Yun; Jiang, Xinmei; Zhu, Jie
2016-01-01
Macrophages are important immune cells of the innate immune system that are involved in organ-specific homeostasis and contribute to both pathology and resolution of diseases including infections, cancer, obesity, atherosclerosis, and autoimmune disorders. Multiple lines of evidence point to macrophages as a remarkably heterogeneous cell type. Different phenotypes of macrophages exert either proinflammatory or anti-inflammatory roles depending on the cytokines and other mediators that they are exposed to in the local microenvironment. Proinflammatory macrophages secrete detrimental molecules to induce disease development, while anti-inflammatory macrophages produce beneficial mediators to promote disease recovery. The conversion of macrophage phenotypes can regulate the initiation, development, and recovery of autoimmune diseases. Human neuroimmune diseases mainly include multiple sclerosis (MS), neuromyelitis optica (NMO), myasthenia gravis (MG), and Guillain-Barré syndrome (GBS), and macrophages contribute to the pathogenesis of these neuroimmune diseases. In this review, we summarize the double roles of macrophages in neuroimmune diseases and their animal models to further explore the mechanisms by which macrophages are involved in the pathogenesis of these disorders, which may provide a potential therapeutic approach for these disorders in the future. PMID:27034594
MULTI-DIMENSIONAL MODELS FOR DOUBLE DETONATION IN SUB-CHANDRASEKHAR MASS WHITE DWARFS
Moll, R.; Woosley, S. E.
2013-09-10
Using two-dimensional and three-dimensional simulations, we study the "robustness" of the double detonation scenario for Type Ia supernovae, in which a detonation in the helium shell of a carbon-oxygen white dwarf induces a secondary detonation in the underlying core. We find that a helium detonation cannot easily descend into the core unless it commences (artificially) well above the hottest layer calculated for the helium shell in current presupernova models. Compressional waves induced by the sliding helium detonation, however, robustly generate hot spots which trigger a detonation in the core. Our simulations show that this is true even for non-axisymmetric initial conditions. If the helium is ignited at multiple points, then the internal waves can pass through one another or be reflected, but this added complexity does not defeat the generation of the hot spot. The ignition of very low-mass helium shells depends on whether a thermonuclear runaway can simultaneously commence in a sufficiently large region.
Bailly, Lucie; Henrich, Nathalie; Pelorson, Xavier
2010-05-01
Occurrences of period-doubling are found in human phonation, in particular for pathological and some singing phonations such as Sardinian A Tenore Bassu vocal performance. The combined vibration of the vocal folds and the ventricular folds has been observed during the production of such low pitch bass-type sound. The present study aims to characterize the physiological correlates of this acoustical production and to provide a better understanding of the physical interaction between ventricular fold vibration and vocal fold self-sustained oscillation. The vibratory properties of the vocal folds and the ventricular folds during phonation produced by a professional singer are analyzed by means of acoustical and electroglottographic signals and by synchronized glottal images obtained by high-speed cinematography. The periodic variation in glottal cycle duration and the effect of ventricular fold closing on glottal closing time are demonstrated. Using the detected glottal and ventricular areas, the aerodynamic behavior of the laryngeal system is simulated using a simplified physical modeling previously validated in vitro using a larynx replica. An estimate of the ventricular aperture extracted from the in vivo data allows a theoretical prediction of the glottal aperture. The in vivo measurements of the glottal aperture are then compared to the simulated estimations. PMID:21117769
Tomography by iterative convolution - Empirical study and application to interferometry
NASA Technical Reports Server (NTRS)
Vest, C. M.; Prikryl, I.
1984-01-01
An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
Convolution properties for certain classes of multivalent functions
NASA Astrophysics Data System (ADS)
Sokól, Janusz; Trojnar-Spelina, Lucyna
2008-01-01
Recently N.E. Cho, O.S. Kwon and H.M. Srivastava [Nak Eun Cho, Oh Sang Kwon, H.M. Srivastava, Inclusion relationships and argument properties for certain subclasses of multivalent functions associated with a family of linear operators, J. Math. Anal. Appl. 292 (2004) 470-483] have introduced the class of multivalent analytic functions and have given a number of results. This class has been defined by means of a special linear operator associated with the Gaussian hypergeometric function. In this paper we have extended some of the previous results and have given other properties of this class. We have made use of differential subordinations and properties of convolution in geometric function theory.
Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks
NASA Astrophysics Data System (ADS)
Zhang, Kaipeng; Zhang, Zhanpeng; Li, Zhifeng; Qiao, Yu
2016-10-01
Face detection and alignment in unconstrained environments are challenging due to various poses, illuminations, and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this paper, we propose a deep cascaded multi-task framework which exploits the inherent correlation between them to boost their performance. In particular, our framework adopts a cascaded structure with three stages of carefully designed deep convolutional networks that predict face and landmark locations in a coarse-to-fine manner. In addition, in the learning process, we propose a new online hard sample mining strategy that can improve the performance automatically without manual sample selection. Our method achieves superior accuracy over the state-of-the-art techniques on the challenging FDDB and WIDER FACE benchmarks for face detection, and the AFLW benchmark for face alignment, while keeping real-time performance.
Deep convolutional neural networks for ATR from SAR imagery
NASA Astrophysics Data System (ADS)
Morgan, David A. E.
2015-05-01
Deep architectures for classification and representation learning have recently attracted significant attention within academia and industry, with many impressive results across a diverse collection of problem sets. In this work we consider the specific application of Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) data from the MSTAR public release data set. The classification performance achieved using a Deep Convolutional Neural Network (CNN) on this data set was found to be competitive with existing methods considered to be state-of-the-art. Unlike most existing algorithms, this approach can learn discriminative feature sets directly from training data instead of requiring pre-specification or pre-selection by a human designer. We show how this property can be exploited to efficiently adapt an existing classifier to recognise a previously unseen target and discuss potential practical applications.
Drug-Drug Interaction Extraction via Convolutional Neural Networks.
Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong
2016-01-01
Drug-drug interaction (DDI) extraction as a typical relation extraction task in natural language processing (NLP) has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method which almost does not need manually defined features, has exhibited great potential for many NLP tasks. It is worth employing CNN for DDI extraction, which has never been investigated. We proposed a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best performing method by 2.75%. PMID:26941831
Enhanced Line Integral Convolution with Flow Feature Detection
NASA Technical Reports Server (NTRS)
Lane, David; Okada, Arthur
1996-01-01
The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.
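The core of LIC can be sketched in a few lines: average a white-noise texture along short streamlines of the vector field, so the texture blurs only in the flow direction. A deliberately naive NumPy version (Euler integration, box kernel; the published method uses more careful streamline integration and filtering):

```python
import numpy as np

def lic(vx, vy, noise, L=10):
    """Minimal Line Integral Convolution: average the noise texture along
    streamlines of (vx, vy), L Euler steps in each direction (box kernel)."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            acc, n = 0.0, 0
            for sign in (+1, -1):            # integrate up- and downstream
                y, x = float(i), float(j)
                for _ in range(L):
                    u, v = vx[int(y), int(x)], vy[int(y), int(x)]
                    norm = np.hypot(u, v) or 1.0
                    x = np.clip(x + sign * u / norm, 0, w - 1)
                    y = np.clip(y + sign * v / norm, 0, h - 1)
                    acc += noise[int(y), int(x)]
                    n += 1
            out[i, j] = acc / n
    return out

rng = np.random.default_rng(1)
tex = rng.random((32, 32))
# Uniform horizontal flow: the texture is smeared along rows only
img = lic(np.ones((32, 32)), np.zeros((32, 32)), tex)
```

For a uniform horizontal field the result shows the characteristic streaks along the flow, and the averaging visibly reduces the texture contrast, which is exactly the blurriness the abstract sets out to mitigate.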
Plane-wave decomposition by spherical-convolution microphone array
NASA Astrophysics Data System (ADS)
Rafaely, Boaz; Park, Munhum
2001-05-01
Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.
Fast convolution with free-space Green's functions
NASA Astrophysics Data System (ADS)
Vico, Felipe; Greengard, Leslie; Ferrando, Miguel
2016-10-01
We introduce a fast algorithm for computing volume potentials - that is, the convolution of a translation invariant, free-space Green's function with a compactly supported source distribution defined on a uniform grid. The algorithm relies on regularizing the Fourier transform of the Green's function by cutting off the interaction in physical space beyond the domain of interest. This permits the straightforward application of trapezoidal quadrature and the standard FFT, with superalgebraic convergence for smooth data. Moreover, the method can be interpreted as employing a Nystrom discretization of the corresponding integral operator, with matrix entries which can be obtained explicitly and rapidly. This is of use in the design of preconditioners or fast direct solvers for a variety of volume integral equations. The method proposed permits the computation of any derivative of the potential, at the cost of an additional FFT.
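The building block underlying such volume-potential solvers is aperiodic convolution computed with zero-padded FFTs. A one-dimensional sketch (the paper's actual contribution, the regularized truncation of the Green's function, is not reproduced here):

```python
import numpy as np

def fft_convolve(f, g):
    """Aperiodic (free-space) convolution via zero-padded FFTs.
    Padding to len(f) + len(g) - 1 removes the circular wrap-around
    that a plain same-length FFT product would introduce."""
    n = len(f) + len(g) - 1
    return np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

rng = np.random.default_rng(0)
f, g = rng.normal(size=64), rng.normal(size=64)
print(np.allclose(fft_convolve(f, g), np.convolve(f, g)))  # True
```

The trapezoidal-rule quadrature and superalgebraic convergence mentioned in the abstract come from evaluating such products on a uniform grid with smooth, compactly supported data.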
Invariant Descriptor Learning Using a Siamese Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Chen, L.; Rottensteiner, F.; Heipke, C.
2016-06-01
In this paper we describe learning of a descriptor based on the Siamese Convolutional Neural Network (CNN) architecture and evaluate our results on a standard patch comparison dataset. The descriptor learning architecture is composed of an input module, a Siamese CNN descriptor module and a cost computation module that is based on the L2 norm. The cost function we use pulls the descriptors of matching patches close to each other in feature space while pushing the descriptors for non-matching pairs away from each other. Compared to related work, we optimize the training parameters by combining a moving average strategy for gradients and Nesterov's Accelerated Gradient. Experiments show that our learned descriptor reaches a good performance and achieves state-of-the-art results in terms of the false positive rate at a 95% recall rate on standard benchmark datasets.
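The pull/push cost described here is the familiar contrastive loss on descriptor pairs. A NumPy sketch under the assumption of an L2 distance and a quadratic hinge with unit margin (the authors' exact cost may differ in these details):

```python
import numpy as np

def contrastive_loss(d1, d2, match, margin=1.0):
    """Contrastive loss on descriptor pairs: pull matching descriptors
    together, push non-matching ones at least `margin` apart."""
    dist = np.linalg.norm(d1 - d2, axis=1)             # L2 distance per pair
    pos = match * dist**2                              # matching pairs
    neg = (1 - match) * np.maximum(0.0, margin - dist)**2
    return np.mean(pos + neg)

a = np.array([[0.0, 0.0], [0.0, 0.0]])
b = np.array([[0.1, 0.0], [2.0, 0.0]])
match = np.array([1, 0])   # first pair matches, second does not
print(contrastive_loss(a, b, match))  # 0.005: both pairs near their optimum
```

Gradient descent on this loss drives matching descriptors toward zero distance and non-matching ones beyond the margin, which is the geometry the abstract describes.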
Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.
He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian
2015-09-01
Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.
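The fixed-length property can be sketched in a few lines of NumPy: max-pooling a feature map over a pyramid of bins yields the same output length for any input size. The 1/2/4 pyramid levels below are assumptions; the paper's own configuration may differ.

```python
import numpy as np

def spp(fmap, levels=(1, 2, 4)):
    # Spatial pyramid pooling: max-pool a (C, H, W) feature map over an
    # n x n grid of bins for each pyramid level, then concatenate.
    C, H, W = fmap.shape
    pooled = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                hs, he = (i * H) // n, ((i + 1) * H) // n
                ws, we = (j * W) // n, ((j + 1) * W) // n
                pooled.append(fmap[:, hs:he, ws:we].max(axis=(1, 2)))
    return np.concatenate(pooled)             # length C * (1 + 4 + 16) for any H, W
```

Feature maps of different spatial sizes now produce vectors of identical length, which is what lets a fixed-size classifier sit on top of arbitrary-size inputs.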
Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-01-01
The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy. PMID:26734567
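The double-Gaussian lateral profile itself is compact enough to sketch; below is a NumPy version with purely illustrative weight and widths, not the fitted clinical parameters from the paper:

```python
import numpy as np

def lateral_dose(r, w=0.1, sigma_core=4.0, sigma_halo=15.0):
    # Double-Gaussian lateral dose: a narrow core plus a wide, low-weight
    # halo. w and the sigmas (mm) are illustrative, not fitted values.
    def g(r, s):                               # normalised 2-D Gaussian, radial form
        return np.exp(-r**2 / (2.0 * s**2)) / (2.0 * np.pi * s**2)
    return (1.0 - w) * g(r, sigma_core) + w * g(r, sigma_halo)
```

Since both components are normalised, the profile integrates to unit deposited energy over the transverse plane regardless of the halo weight, so adding the second Gaussian redistributes dose into the halo without changing the total.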
NASA Astrophysics Data System (ADS)
Callari, C.; Federico, F.
2000-04-01
Laboratory consolidation of structured clayey soils is analysed in this paper. The research is carried out by two different methods. The first one treats the soil as an isotropic homogeneous equivalent Double Porosity (DP) medium. The second method rests on the extensive application of the Finite Element Method (FEM) to combinations of different soils, composing 2D or fully 3D ordered structured media that schematically discretize the complex material. Two reference problems, representing typical situations of 1D laboratory consolidation of structured soils, are considered. For each problem, the solution is obtained through integration of the equations governing the consolidation of the DP medium as well as via the FEM applied to the ordered schemes composed of different materials. The presence of conventional experimental devices to ensure the drainage of the sample is taken into account through appropriate boundary conditions. Comparison of FEM results with theoretical results clearly points out the ability of the DP model to represent consolidation processes of structurally complex soils. Limits of applicability of the DP model may arise when the rate of fluid exchange between the two porous systems is represented through oversimplified relations. Results of computations, obtained by assigning reasonable values to the meso-structural and experimental apparatus parameters, point out that a partially efficient drainage apparatus strongly influences the distribution along the sample and the time evolution of the interstitial water pressure acting in both systems of pores. Data from consolidation tests in a Rowe's cell on samples of artificially fissured clays reported in the literature are compared with the analytical and numerical results, showing significant agreement.
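A lumped two-compartment caricature of the double-porosity idea: fissures drain to the boundary while exchanging fluid with the matrix pores. All coefficients below are illustrative, and the linear exchange law is exactly the kind of simplified relation whose limits the paper discusses.

```python
def consolidate(p_f=1.0, p_m=1.0, c_drain=0.5, c_exch=0.1, dt=0.01, steps=10000):
    # Explicit-Euler integration of a toy lumped double-porosity model:
    # fissure pressure p_f drains directly; matrix pressure p_m drains
    # only by exchange with the fissures (linear exchange law).
    # All coefficients are illustrative, not calibrated values.
    history = []
    for _ in range(steps):
        q = c_exch * (p_m - p_f)              # matrix -> fissure exchange flux
        p_f += dt * (-c_drain * p_f + q)      # fissures drain to the boundary
        p_m += dt * (-q)                      # matrix drains only via fissures
        history.append((p_f, p_m))
    return history
```

Early on the fissure pressure drops well below the matrix pressure, qualitatively reproducing the two-timescale dissipation that the DP model is designed to capture.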
An equilibrium double-twist model for the radial structure of collagen fibrils.
Brown, Aidan I; Kreplak, Laurent; Rutenberg, Andrew D
2014-11-14
Mammalian tissues contain networks and ordered arrays of collagen fibrils originating from the periodic self-assembly of helical 300 nm long tropocollagen complexes. The fibril radius is typically between 25 and 250 nm, and tropocollagen at the surface appears to exhibit a characteristic twist-angle with respect to the fibril axis. Similar fibril radii and twist-angles at the surface are observed in vitro, suggesting that these features are controlled by a similar self-assembly process. In this work, we propose a physical mechanism of equilibrium radius control for collagen fibrils based on a radially varying double-twist alignment of tropocollagen within a collagen fibril. The free energy of alignment is similar to that of liquid crystalline blue phases, and we employ analytic Euler-Lagrange methods and numerical free-energy minimization to determine the twist-angle between the molecular axis and the fibril axis along the radial direction. Competition between the different elastic energy components, together with a surface energy, determines the equilibrium radius and twist-angle at the fibril surface. A simplified model with a twist-angle that is linear with radius is a reasonable approximation in some parameter regimes, and explains a power-law dependence of radius and twist-angle at the surface as parameters are varied. Fibril radius and twist-angle at the surface corresponding to an equilibrium free-energy minimum are consistent with existing experimental measurements of collagen fibrils. Remarkably, in the experimental regime, all of our model parameters are important for controlling equilibrium structural parameters of collagen fibrils. PMID:25238208
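The "twist-angle linear in radius" approximation lends itself to a tiny numerical sketch. The free-energy form and all constants below are invented for illustration; the actual model uses liquid-crystal elastic constants rather than this toy functional.

```python
import numpy as np

def energy(a, R=1.0, psi_surf=0.3, k_twist=1.0, k_bend=0.5, gamma=0.2):
    # Toy free energy for the linear ansatz psi(r) = a * r: a twist term that
    # prefers the surface angle psi_surf at r = R, a bending penalty, and a
    # surface term. Functional form and constants are invented here.
    r = np.linspace(1e-6, R, 2001)
    f = k_twist * (a * r - psi_surf * r / R) ** 2 + k_bend * (a * r) ** 2
    integrand = f * r                          # radial area element of the cylinder
    dr = r[1] - r[0]
    return float(np.sum((integrand[1:] + integrand[:-1]) * 0.5 * dr)) + gamma * R

# Brute-force minimisation over the twist slope a.
a_grid = np.linspace(0.0, 1.0, 2001)
a_star = a_grid[int(np.argmin([energy(a) for a in a_grid]))]
```

For this toy functional the competition between the twist and bending terms has the closed-form minimiser a* = k_twist * psi_surf / (k_twist + k_bend) = 0.2, which the grid search recovers.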
Gallmeier, F. X.; Iverson, E. B.; Lu, W.; Baxter, D. V.; Muhrer, G.; Ansell, S.
2016-01-08
Neutron transport simulation codes are an indispensable tool used for the design and construction of modern neutron scattering facilities and instrumentation. It has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modelled by the existing codes. Particularly, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4 and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential ingredients for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. Furthermore, we have also generated silicon scattering kernels for single crystals of definable orientation with respect to an incoming neutron beam. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cut-off at locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon/void layers. Finally, the convoluted moderator experiments described by Iverson et al. were simulated and we find satisfactory agreement between the measurement and the results of the simulations.
Gallmeier, Franz X; Iverson, Erik B; Lu, Wei; Baxter, David V; Muhrer, Guenter; Ansell, Stuart
2016-01-01
Neutron transport simulation codes are an indispensable tool used for the design and construction of modern neutron scattering facilities and instrumentation. Recently, it has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modelled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4 and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential ingredients for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. We have also generated silicon scattering kernels for single crystals of definable orientation with respect to an incoming neutron beam. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cut-off at locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon/void layers. Finally, the convoluted moderator experiments described by Iverson et al. were simulated and we find satisfactory agreement between the measurement and the results of the simulations.
NASA Astrophysics Data System (ADS)
Gallmeier, F. X.; Iverson, E. B.; Lu, W.; Baxter, D. V.; Muhrer, G.; Ansell, S.
2016-04-01
Neutron transport simulation codes are indispensable tools for the design and construction of modern neutron scattering facilities and instrumentation. Recently, it has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modeled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4, and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential phenomena for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. We have also generated silicon scattering kernels for single crystals of definable orientation. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cut-off from locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon and void layers. Finally we simulated the convoluted moderator experiments described by Iverson et al. and found satisfactory agreement between the measurements and the simulations performed with the tools we have developed.
Sannino, Annalisa
2016-03-01
This study explores what human conduct looks like when research embraces uncertainty and distances itself from the dominant methodological demands of control and predictability. The context is the waiting experiment originally designed in Kurt Lewin's research group, discussed by Vygotsky as an instance among a range of experiments related to his notion of double stimulation. Little attention has been paid to this experiment, despite its great heuristic potential for charting the terrain of uncertainty and agency in experimental settings. Behind the notion of double stimulation lies Vygotsky's distinctive view of human beings' ability to intentionally shape their actions. Accordingly, human beings in situations of uncertainty and cognitive incongruity can rely on artifacts which serve the function of auxiliary motives and which help them undertake volitional actions. A double stimulation model depicting how such actions emerge is tested in a waiting experiment conducted with collectives, in contrast with a previous waiting experiment conducted with individuals. The model, validated in the waiting experiment with individual participants, applies only to a limited extent to the collectives. The analysis shows the extent to which double stimulation takes place in the waiting experiment with collectives, the differences between the two experiments, and what implications can be drawn for an expanded view on experiments.
NASA Astrophysics Data System (ADS)
Qian, Shan-Jie
2015-05-01
The mechanism of formation for double-peaked optical outbursts observed in blazar OJ 287 is studied. It is shown that they could be explained in terms of a lighthouse effect for superluminal optical knots ejected from the center of the galaxy that move along helical magnetic fields. It is assumed that the orbital motion of the secondary black hole in the supermassive binary black hole system induces the 12-year quasi-periodicity in major optical outbursts by the interaction with the disk around the primary black hole. This interaction between the secondary black hole and the disk of the primary black hole (e.g. tidal effects or magnetic coupling) excites or injects plasmons (or relativistic plasmas plus magnetic field) into the jet which form superluminal knots. These knots are assumed to move along helical magnetic field lines to produce the optical double-peaked outbursts by the lighthouse effect. The four double-peaked outbursts observed in 1972, 1983, 1995 and 2005 are simulated using this model. It is shown that such lighthouse models are quite plausible and feasible for fitting the double-flaring behavior of the outbursts. The main requirement may be that in OJ 287 there exists a rather long (~40-60 pc) highly collimated zone, where the lighthouse effect occurs.
FULLY CONVOLUTIONAL NETWORKS FOR MULTI-MODALITY ISOINTENSE INFANT BRAIN IMAGE SEGMENTATION
Nie, Dong; Wang, Li; Gao, Yaozong; Shen, Dinggang
2016-01-01
The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making the tissue segmentation very challenging. The existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on single T1, T2 or fractional anisotropy (FA) modality or their simply-stacked combinations without fully exploring the multi-modality information. To address the challenge, in this paper, we propose to use fully convolutional networks (FCNs) for the segmentation of isointense phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image, and then fuse their high-layer features together for final segmentation. Specifically, we run a separate convolution-pooling stream for each of the T1, T2, and FA images, and then combine the resulting features in a high layer to generate the final segmentation maps as the outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to performance improvement.
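The fuse-at-a-high-layer idea, as opposed to stacking the raw inputs, can be sketched with untrained stand-in streams in NumPy. Random filters and a random 1x1 classifier replace the learned FCN layers here; only the shapes and the fusion point are meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)

def stream(img, n_filters=4):
    # Stand-in for one modality's convolution-pooling stream: random 3x3
    # valid convolutions (untrained; illustrates shapes, not learning).
    k = rng.standard_normal((n_filters, 3, 3))
    H, W = img.shape
    out = np.zeros((n_filters, H - 2, W - 2))
    for f in range(n_filters):
        for i in range(H - 2):
            for j in range(W - 2):
                out[f, i, j] = np.sum(img[i:i + 3, j:j + 3] * k[f])
    return out

t1, t2, fa = (rng.standard_normal((16, 16)) for _ in range(3))
fused = np.concatenate([stream(t1), stream(t2), stream(fa)])   # high-layer fusion
w = rng.standard_normal((3, 12))              # 1x1 classifier: WM / GM / CSF scores
scores = np.einsum('kc,chw->khw', w, fused)
labels = scores.argmax(axis=0)                # per-pixel tissue label map
```

Each modality keeps its own stream until the concatenation, so the classifier sees learned (here, random) high-level features from all three images rather than a single stack of raw intensities.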
NASA Astrophysics Data System (ADS)
Cho, Woong; Suh, Tae-Suk; Park, Jeong-Hoon; Xing, Lei; Lee, Jeong-Woo
2012-12-01
A collapsed cone convolution algorithm was applied to a treatment planning system for the calculation of dose distributions. The distribution of beam fluences was determined using a three-source model by considering the source strengths of the primary beam, the beam scattered from the primary collimators, and an extra beam scattered from extra structures in the gantry head of the radiotherapy treatment machine. The distribution of the total energy released per unit mass (TERMA) was calculated from the distribution of the fluence by considering several physical effects such as the emission of poly-energetic photon spectra, the attenuation of the beam fluence in a medium, the horn effect, the beam-softening effect, and beam transmission through collimators or multi-leaf collimators. The distribution of the doses was calculated by using the convolution of the distribution of the TERMA and the poly-energetic kernel. The distribution of the kernel was approximated to several tens of collapsed cone lines to express the energies transferred by the electrons that originated from the interactions between the photons and the medium. The implemented algorithm was validated by comparing the calculated percentage depth doses (PDDs) and dose profiles with the measured PDDs and relevant profiles. In addition, the dose distribution for an irregular-shaped radiation field was verified by comparing the calculated doses with the measured doses obtained via EDR2 film dosimetry and with the calculated doses obtained using a different treatment planning system based on the pencil beam algorithm (Eclipse, Varian, Palo Alto, USA). The majority of the calculated doses for the PDDs, the profiles, and the irregular-shaped field showed good agreement with the measured doses to within a 2% dose difference, except in the build-up regions. The implemented algorithm was proven to be efficient and accurate for clinical purposes in radiation therapy, and it was found to be easily implementable in
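The TERMA-then-kernel-convolution step reduces, in one dimension, to a short sketch. The attenuation coefficient and kernel shape below are illustrative, not a commissioned beam model, and the full algorithm performs this superposition along collapsed cone lines in 3D.

```python
import numpy as np

# 1-D sketch of dose = TERMA convolved with a kernel: fluence attenuates
# exponentially, TERMA follows the fluence, and dose is TERMA convolved
# with a forward-peaked energy-deposition kernel. Constants are illustrative.
z = np.arange(0, 300)                          # depth (mm)
mu = 0.005                                     # toy attenuation coefficient (1/mm)
terma = np.exp(-mu * z)

offsets = np.arange(-20, 81)                   # kernel support around interaction site
kernel = np.exp(-np.abs(offsets) / 10.0) * (offsets >= 0)   # forward-directed
kernel = kernel / kernel.sum()                 # conserve deposited energy

dose = np.convolve(terma, kernel)[20:20 + z.size]   # shift to align kernel origin
```

The convolution reproduces the familiar build-up region: the dose maximum sits below the surface even though the TERMA is maximal at depth zero.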
Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks
Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni
2015-01-01
Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient’s response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a “radiomics” approach whereby a large amount of quantitative features is automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models. PMID:26355298
Kromer, M.; Sim, S. A.; Fink, M.; Roepke, F. K.; Seitenzahl, I. R.; Hillebrandt, W.
2010-08-20
In the double-detonation scenario for Type Ia supernovae, it is suggested that a detonation initiates in a shell of helium-rich material accreted from a companion star by a sub-Chandrasekhar-mass white dwarf. This shell detonation drives a shock front into the carbon-oxygen white dwarf that triggers a secondary detonation in the core. The core detonation results in a complete disruption of the white dwarf. Earlier studies concluded that this scenario has difficulties in accounting for the observed properties of Type Ia supernovae since the explosion ejecta are surrounded by the products of explosive helium burning in the shell. Recently, however, it was proposed that detonations might be possible for much less massive helium shells than previously assumed (Bildsten et al.). Moreover, it was shown that even detonations of these minimum helium shell masses robustly trigger detonations of the carbon-oxygen core (Fink et al.). Therefore, it is possible that the impact of the helium layer on observables is less than previously thought. Here, we present time-dependent multi-wavelength radiative transfer calculations for models with minimum helium shell mass and derive synthetic observables for both the optical and γ-ray spectral regions. These differ strongly from those found in earlier simulations of sub-Chandrasekhar-mass explosions in which more massive helium shells were considered. Our models predict light curves that cover both the range of brightnesses and the rise and decline times of observed Type Ia supernovae. However, their colors and spectra do not match the observations. In particular, their B - V colors are generally too red. We show that this discrepancy is mainly due to the composition of the burning products of the helium shell of the Fink et al. models which contain significant amounts of titanium and chromium. Using a toy model, we also show that the burning products of the helium shell depend crucially on its initial composition. This leads us
Sivachenko, Anna; Gordon, Hannah B.; Kimball, Suzanne S.; Gavin, Erin J.; Bonkowsky, Joshua L.; Letsou, Anthea
2016-01-01
Debilitating neurodegenerative conditions with metabolic origins affect millions of individuals worldwide. Still, for most of these neurometabolic disorders there are neither cures nor disease-modifying therapies, and novel animal models are needed for elucidation of disease pathology and identification of potential therapeutic agents. To date, metabolic neurodegenerative disease has been modeled in animals with only limited success, in part because existing models constitute analyses of single mutants and have thus overlooked potential redundancy within metabolic gene pathways associated with disease. Here, we present the first analysis of a very-long-chain acyl-CoA synthetase (ACS) double mutant. We show that the Drosophila bubblegum (bgm) and double bubble (dbb) genes have overlapping functions, and that the consequences of double knockout of both bubblegum and double bubble in the fly brain are profound, affecting behavior and brain morphology, and providing the best paradigm to date for an animal model of adrenoleukodystrophy (ALD), a fatal childhood neurodegenerative disease associated with the accumulation of very-long-chain fatty acids. Using this more fully penetrant model of disease to interrogate brain morphology at the level of electron microscopy, we show that dysregulation of fatty acid metabolism via disruption of ACS function in vivo is causal of neurodegenerative pathologies that are evident in both neuronal cells and their supporting cell populations, and leads ultimately to lytic cell death in affected areas of the brain. Finally, in an extension of our model system to the study of human disease, we describe our identification of an individual with leukodystrophy who harbors a rare mutation in SLC27a6 (encoding a very-long-chain ACS), a human homolog of bgm and dbb. PMID:26893370
Aquifer response to stream-stage and recharge variations. II. Convolution method and applications
Barlow, P.M.; DeSimone, L.A.; Moench, A.F.
2000-01-01
In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to stream-stage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of water table and associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site. The streambank-leakance parameter may be considered to be a general (or lumped
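The convolution method itself is compact: superpose the unit step-response function scaled by successive stage increments. Below is a NumPy sketch with an assumed exponential step response; the paper's analytical step-response functions for confined, leaky, and water-table aquifers are more elaborate.

```python
import numpy as np

def aquifer_head(stage, step_response):
    # Convolution method: head change = stage *increments* convolved with
    # the unit step-response function (linear superposition in time).
    dH = np.diff(stage, prepend=0.0)
    return np.convolve(dH, step_response)[: stage.size]

t = np.arange(100.0)
U = 1.0 - np.exp(-t / 5.0)                 # assumed exponential step response
stage = np.where(t >= 10, 1.0, 0.0)        # stream stage rises by 1 m at t = 10
h = aquifer_head(stage, U)                 # aquifer head at the observation point
```

The same convolution accepts arbitrary stage hydrographs, which is what lets programs built on it estimate seepage rates and bank storage from measured stage records.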
Digital Elevation Models Aid the Analysis of Double Layered Ejecta (DLE) Impact Craters on Mars
NASA Astrophysics Data System (ADS)
Mouginis-Mark, P. J.; Boyce, J. M.; Garbeil, H.
2014-12-01
Considerable debate has recently taken place concerning the origin of the inner and outer ejecta layers of double layered ejecta (DLE) craters on Mars. For craters in the diameter range ~10 to ~25 km, the inner ejecta layer of DLE craters displays characteristic grooves extending from the rim crest, which have led investigators to propose three hypotheses for their formation: (1) deposition of the primary ejecta and subsequent surface scouring by either atmospheric vortices or a base surge; (2) emplacement through a landslide of the near-rim crest ejecta; and (3) instabilities (similar to Görtler vortices) generated by high flow rates and high granular temperatures. Critical to discriminating between these models is the topographic expression of both the ejecta layer and the groove geometry. To address this problem, we have made several digital elevation models (DEMs) from CTX and HiRISE stereo pairs using the Ames Stereo Pipeline at scales of 24 m/pixel and 1 m/pixel, respectively. These DEMs allow several key observations to be made that bear directly upon the origin of the grooves associated with DLE craters: (1) Grooves formed on the sloping ejecta layer surfaces right up to the preserved crater rim; (2) There is clear evidence that grooves traverse the topographic boundary between the inner and outer ejecta layers; and (3) There are at least two different sets of radial grooves, with smaller grooves imprinted upon the larger grooves. There are "deep-wide" grooves that have a width of ~200 m and a depth of ~10 m, and there are "shallow-narrow" grooves with a width of <50 m and depth <5 m. These two scales of grooves are not consistent with a formation mechanism analogous to a landslide. Two different sets of grooves would imply that two different flow depths would have to exist simultaneously if the grooves were formed by shear within the flow, something that is not physically possible. All three observations can only be consistent with a model of groove formation
Quang, Daniel; Xie, Xiaohui
2016-06-20
Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ.
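The motif-capturing role of DanQ's convolution layer can be illustrated with a minimal NumPy sketch: a one-hot encoded DNA sequence is scanned by a kernel, and the ReLU'd dot product peaks where the motif occurs. The "TATA" kernel below is a hypothetical example for illustration, not a filter learned by DanQ.

```python
import numpy as np

def one_hot(seq):
    """One-hot encode a DNA string into a 4 x L array (rows: A, C, G, T)."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = np.zeros((4, len(seq)))
    for i, base in enumerate(seq):
        x[idx[base], i] = 1.0
    return x

def conv_motif_scan(x, kernels):
    """Slide each 4 x k motif kernel along the sequence and record the
    ReLU'd match score at every valid position (a 1-D convolution)."""
    n_k, _, k = kernels.shape
    length = x.shape[1] - k + 1
    out = np.zeros((n_k, length))
    for m in range(n_k):
        for i in range(length):
            out[m, i] = max(0.0, float(np.sum(kernels[m] * x[:, i:i + k])))
    return out

# hypothetical kernel that matches the motif "TATA" exactly
kernels = one_hot("TATA")[np.newaxis, :, :]
scores = conv_motif_scan(one_hot("GGTATACC"), kernels)
# scores peaks at position 2, where "TATA" begins
```

In the full model, many such learned kernels feed a bidirectional LSTM that models dependencies between the motif activations along the sequence.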
NASA Astrophysics Data System (ADS)
Qiu, Linjing; Liu, Xiaodong
2016-04-01
Increases in the atmospheric CO2 concentration affect both the global climate and plant metabolism, particularly for high-altitude ecosystems. Because of the limitations of field experiments, it is difficult to evaluate the responses of vegetation to CO2 increases and separate the effects of CO2 and associated climate change using direct observations at a regional scale. Here, we used the Community Earth System Model (CESM, version 1.0.4) to examine these effects. Initiated from bare ground, we simulated the vegetation composition and productivity under two CO2 concentrations (367 and 734 ppm) and associated climate conditions to separate the comparative contributions of doubled CO2 and CO2-induced climate change to the vegetation dynamics on the Tibetan Plateau (TP). The results revealed whether the individual effect of doubled CO2 and its induced climate change or their combined effects caused a decrease in the foliage projective cover (FPC) of C3 arctic grass on the TP. Both doubled CO2 and climate change had a positive effect on the FPC of the temperate and tropical tree plant functional types (PFTs) on the TP, but doubled CO2 led to FPC decreases of C4 grass and broadleaf deciduous shrubs, whereas the climate change resulted in FPC decrease in C3 non-arctic grass and boreal needleleaf evergreen trees. Although the combination of the doubled CO2 and associated climate change increased the area-averaged leaf area index (LAI), the effect of doubled CO2 on the LAI increase (95 %) was larger than the effect of CO2-induced climate change (5 %). Similarly, the simulated gross primary productivity (GPP) and net primary productivity (NPP) were primarily sensitive to the doubled CO2, compared with the CO2-induced climate change, which alone increased the regional GPP and NPP by 251.22 and 87.79 g C m-2 year-1, respectively. Regionally, the vegetation response was most noticeable in the south-eastern TP. Although both doubled CO2 and associated climate change had a
The electronic states of a double carbon vacancy defect in pyrene: a model study for graphene.
Machado, Francisco B C; Aquino, Adélia J A; Lischka, Hans
2015-05-21
The electronic states occurring in a double vacancy defect for graphene nanoribbons have been calculated in detail based on a pyrene model. Extended ab initio calculations using the multireference configuration interaction (MRCI) method have been performed to describe in a balanced way the manifold of electronic states derived from the dangling bonds created by initial removal of two neighboring carbon atoms from the graphene network. In total, this study took into account the characterization of 16 electronic states (eight singlets and eight triplets) considering unrelaxed and relaxed defect structures. The ground state was found to be of ^1A_g character with around 50% closed-shell character. The geometry optimization process leads to the formation of two five-membered rings in a pentagon-octagon-pentagon (5-8-5) structure. The closed-shell character increases thereby to ∼70%; the analysis of unpaired density shows only small contributions, confirming the chemical stability of that entity. For the unrelaxed structure the first five excited states (^3B_3g, ^3B_2u, ^3B_1u, ^3A_u and ^1A_u) are separated from the ground state by less than 2.5 eV. For comparison, unrestricted density functional theory (DFT) calculations using several types of functionals have been performed within different symmetry subspaces defined by the open-shell orbitals. Comparison with the MRCI results gave good agreement in terms of finding the ^1A_g state as the ground state and in assigning the lowest excited states. Linear interpolation curves between the unrelaxed and relaxed defect structures also showed good agreement between the two classes of methods, opening up the possibility of using extended nanoflakes for multistate investigations at the DFT level. PMID:25905682
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; Limbrick, Daniel B.; Black, Jeffrey D.
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. An accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
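The dual-source idea can be sketched directly: each source follows the classic double-exponential pulse shape, and the two currents add at the struck node. The rise and fall time constants and peak amplitudes below are hypothetical placeholders, not parameters extracted in the paper.

```python
import math

def double_exp(t, i_peak, tau_rise, tau_fall):
    """Classic double-exponential current pulse; zero before the strike."""
    if t <= 0.0:
        return 0.0
    return i_peak * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

def dual_double_exp(t, prompt, tail):
    """Two double-exponential sources in parallel: their currents add."""
    return double_exp(t, *prompt) + double_exp(t, *tail)

# hypothetical parameters: a fast prompt component plus a slower diffusion tail
prompt = (1.0e-3, 5e-12, 50e-12)   # (peak current A, tau_rise s, tau_fall s)
tail = (0.2e-3, 50e-12, 500e-12)
i_at_100ps = dual_double_exp(100e-12, prompt, tail)  # total injected current
```

A single double-exponential cannot reproduce both the prompt spike and the long tail at once, which is why the paper argues for the parallel pair.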
NASA Astrophysics Data System (ADS)
Wang, Hailong; Ho, Derek Y. H.; Lawton, Wayne; Wang, Jiao; Gong, Jiangbin
2013-11-01
Recent studies have established that, in addition to the well-known kicked-Harper model (KHM), an on-resonance double-kicked rotor (ORDKR) model also has Hofstadter's butterfly Floquet spectrum, with strong resemblance to the standard Hofstadter spectrum that is a paradigm in studies of the integer quantum Hall effect. Earlier it was shown that the quasienergy spectra of these two dynamical models (i) can exactly overlap with each other if an effective Planck constant takes irrational multiples of 2π and (ii) will be different if the same parameter takes rational multiples of 2π. This work makes detailed comparisons between these two models, with an effective Planck constant given by 2πM/N, where M and N are coprime and odd integers. It is found that the ORDKR spectrum (with two periodic kicking sequences having the same kick strength) has one flat band and N-1 nonflat bands with the largest bandwidth decaying in a power law as ˜KN+2, where K is a kick strength parameter. The existence of a flat band is strictly proven and the power-law scaling, numerically checked for a number of cases, is also analytically proven for a three-band case. By contrast, the KHM does not have any flat band and its bandwidths scale linearly with K. This is shown to result in dramatic differences in dynamical behavior, such as transient (but extremely long) dynamical localization in ORDKR, which is absent in the KHM. Finally, we show that despite these differences, there exist simple extensions of the KHM and ORDKR model (upon introducing an additional periodic phase parameter) such that the resulting extended KHM and ORDKR model are actually topologically equivalent, i.e., they yield exactly the same Floquet-band Chern numbers and display topological phase transitions at the same kick strengths. A theoretical derivation of this topological equivalence is provided. These results are also of interest to our current understanding of quantum-classical correspondence considering that the
Cho, Edward Namkyu; Shin, Yong Hyeon; Yun, Ilgu
2014-11-07
A compact quantum correction model for a symmetric double gate (DG) metal-oxide-semiconductor field-effect transistor (MOSFET) is investigated. The compact quantum correction model is proposed from the concepts of the threshold voltage shift (ΔV_TH^QM) and the gate capacitance (C_g) degradation. First of all, ΔV_TH^QM induced by quantum mechanical (QM) effects is modeled. The C_g degradation is then modeled by introducing the inversion layer centroid. With ΔV_TH^QM and the C_g degradation, the QM effects are implemented in the previously reported classical model, and a comparison between the proposed quantum correction model and numerical simulation results is presented. Based on the results, the proposed quantum correction model is applicable to the compact model of the DG MOSFET.
NASA Technical Reports Server (NTRS)
Kuan, Gary M.; Dekens, Frank G.
2006-01-01
The Space Interferometry Mission (SIM) is a microarcsecond interferometric space telescope that requires picometer level precision measurements of its truss and interferometer baselines. Single-gauge metrology errors due to non-ideal physical characteristics of corner cubes reduce the angular measurement capability of the science instrument. Specifically, the non-common vertex error (NCVE) of a shared vertex, double corner cube introduces micrometer level single-gauge errors in addition to errors due to dihedral angles and reflection phase shifts. A modified SIM Kite Testbed containing an articulating double corner cube is modeled and the results are compared to the experimental testbed data. The results confirm modeling capability and viability of calibration techniques.
Pietrobon, D; Caplan, S R
1986-11-18
The results of double-inhibitor and uncoupler-inhibitor titrations have been simulated and analyzed with a linear model of delocalized protonic coupling using linear nonequilibrium thermodynamics. A detailed analysis of the changes of the intermediate delta muH induced by different combinations of inhibitors of the proton pumps has been performed. It is shown that with linear flow-force relationships the published experimental results of uncoupler-inhibitor titrations are not necessarily inconsistent with, and those of double-inhibitor titrations are inconsistent with, a delocalized chemiosmotic model of energy coupling in the presence of a negligible leak. Also shown and discussed are how the results are affected by a nonnegligible leak and to what extent the shape of the titration curves can be used to discriminate between localized and delocalized mechanisms of energy coupling.
Davy, John L
2010-02-01
This paper presents a revised theory for predicting the sound insulation of double leaf cavity walls that removes an approximation, which is usually made when deriving the sound insulation of a double leaf cavity wall above the critical frequencies of the wall leaves due to the airborne transmission across the wall cavity. This revised theory is also used as a correction below the critical frequencies of the wall leaves instead of a correction due to Sewell [(1970). J. Sound Vib. 12, 21-32]. It is found necessary to include the "stud" borne transmission of the window frames when modeling wide air gap double glazed windows. A minimum value of stud transmission is introduced for use with resilient connections such as steel studs. Empirical equations are derived for predicting the effective sound absorption coefficient of wall cavities without sound absorbing material. The theory is compared with experimental results for double glazed windows and gypsum plasterboard cavity walls with and without sound absorbing material in their cavities. The overall mean, standard deviation, maximum, and minimum of the differences between experiment and theory are -0.6 dB, 3.1 dB, 10.9 dB at 1250 Hz, and -14.9 dB at 160 Hz, respectively. PMID:20136207
An iterated three-layer model of the double layer with permanent dipoles
NASA Astrophysics Data System (ADS)
Macdonald, J. Ross; Liu, S. H.
1983-03-01
There does not exist a theory of the ionic double layer at a completely blocking metal electrode in liquid electrolytes which is adequate in the charge/potential region where ions and solvent molecules begin to approach saturated conditions. Under these conditions, a continuum theory, such as that of Gouy and Chapman (GC), becomes entirely inadequate. Here the problem is attacked in a semi-discrete way by first partitioning the space charge region into layers parallel to the planar blocking electrode. Each layer is part of a cubic lattice with lattice-site spacing determined by the pure solvent concentration. Lattice sites may be occupied by ions of either sign or by solvent molecules, taken as spheres having a permanent dipole moment. The solvent molecule finite-length dipoles are then approximated by slabs of constant point-dipole polarization. Thus each of the planes parallel to the electrode is a locus of ion centers, and the polarization is accounted for by equal and opposite charge layers equidistant on either side of an ionic charge layer. The mean polarization and ionic concentration in each three-layer region are determined self-consistently by free energy minimization, and electrostatic equations are employed to couple the electrical conditions in one layer to those adjacent. This ion-dipole model (IDM) is solved self-consistently for arbitrary molarity in two regimes: the weak-field situation where the electrode charge approaches zero, and the arbitrary field-strength regime. In the first case, an exact, closed-form solution is obtained which reduces to that of GC in the appropriate limit, but numerical analysis is required in the second situation. The present treatment provides a more realistic account of the electrical effects of discrete solvent dipoles than do those treatments, such as the GC model, which represent them entirely by a background, non-saturable, or even saturable, bulk dielectric constant. Here polarization saturation enters naturally
NASA Astrophysics Data System (ADS)
Liu, Zi-Xin; Wen, Sheng-Hui; Li, Ming
2008-06-01
A combination of the iterative perturbation theory (IPT) of the dynamical mean field theory (DMFT) and the coherent-potential approximation (CPA) is generalized to the double exchange model with orbital degeneracy. The Hubbard interaction and the off-diagonal components of the hopping matrix t_ij^mn (m ≠ n) are considered in our calculation of the spectrum and optical conductivity. The numerical results show that the effects of the non-diagonal hopping matrix elements are important.
Accardi, Antonio; Barth, Ingo; Kühn, Oliver; Manz, Jörn
2010-10-28
Quantum dynamics simulations of double proton transfer (DPT) in the model porphine, starting from a nonequilibrium initial state, demonstrate that a switch from synchronous (or concerted) to sequential (or stepwise or successive) breaking and making of two bonds is possible. For this proof of principle, we employ the simple model of Smedarchina, Z.; Siebrand, W.; Fernández-Ramos, A. J. Chem. Phys. 2007, 127, 174513, with reasonable definitions for the domains D for the reactant R, the product P, the saddle point SP2 which is crossed during synchronous DPT, and two intermediates I = I(1) + I(2) for two alternative routes of sequential DPT. The wavepacket dynamics is analyzed in terms of various properties, from qualitative conclusions based on the patterns of the densities and flux densities to quantitative results for the time evolutions of the populations or probabilities P(D)(t) of the domains D = R, P, SP2, and I, and the associated net fluxes F(D)(t) as well as the domain-to-domain (DTD) fluxes F(D1,D2) between neighboring domains D1 and D2. Accordingly, the initial synchronous mechanism of the first forward reaction is due to the directions of various momenta, which are imposed on the wavepacket by the L-shaped part of the steep repulsive wall of the potential energy surface (PES), close to the minimum for the reactant. At the same time, these momenta cause initial squeezing followed by rapid dispersion of the representative wavepacket. The switch from the synchronous to the sequential mechanism is called indirect, because it is mediated by two effects: first, the wavepacket dispersion; second, relief reflections of the broadened wavepacket from wide regions of the inverse L-shaped steep repulsive wall of the PES close to the minimum for the product, preferably to the domains I = I(1) + I(2) for the sequential DPT during the first back reaction, and also during the second forward reaction, etc. Our analysis also discovers a variety of minor effects, such as
Double-porosity models for a fissured groundwater reservoir with fracture skin.
Moench, A.F.
1984-01-01
Theories of flow to a well in a double-porosity groundwater reservoir are modified to incorporate effects of a thin layer of low-permeability material or fracture skin that may be present at fracture-block interfaces as a result of mineral deposition or alteration. The commonly used theory for flow in double-porosity formations that is based upon the assumption of pseudo-steady state block-to-fissure flow is shown to be a special case of the theory presented in this paper. The latter is based on the assumption of transient block-to-fissure flow with fracture skin.-from Author
ERIC Educational Resources Information Center
Jaubert, Jean-Noël; Privat, Romain
2014-01-01
The double-tangent construction of coexisting phases is an elegant approach to visualize all the multiphase binary systems that satisfy the equality of chemical potentials and to select the stable state. In this paper, we show how to perform the double-tangent construction of coexisting phases for binary systems modeled with the gamma-phi…
Convoluted nozzle design for the RL10 derivative 2B engine
NASA Technical Reports Server (NTRS)
1985-01-01
The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump cooled metal nozzle and radiation cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.
2014-01-01
This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.
Toward an optimal convolutional neural network for traffic sign recognition
NASA Astrophysics Data System (ADS)
Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec
2015-12-01
Convolutional Neural Networks (CNN) beat the human performance on the German Traffic Sign Benchmark competition. Both the winner and the runner-up teams trained CNNs to recognize 43 traffic signs. However, both networks are not computationally efficient since they have many free parameters and they use highly computational activation functions. In this paper, we propose a new architecture that reduces the number of parameters by 27% and 22% compared with the two networks. Furthermore, our network uses Leaky Rectified Linear Units (ReLU) as the activation function, which only needs a few operations to produce the result. Specifically, compared with the hyperbolic tangent and rectified sigmoid activation functions utilized in the two networks, Leaky ReLU needs only one multiplication operation, which makes it computationally much more efficient than the two other functions. Our experiments on the German Traffic Sign Benchmark dataset show a 0.6% improvement on the best reported classification accuracy while reducing the overall number of parameters by 85% compared with the winner network in the competition.
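The cost argument for Leaky ReLU is easy to make concrete: the negative branch costs a single multiplication, whereas tanh or a rectified sigmoid needs exponential evaluations. A minimal sketch (the slope 0.01 is a common default, not necessarily the value the authors used):

```python
import math

def leaky_relu(x, alpha=0.01):
    """Identity on the positive side; a single multiply on the negative side."""
    return x if x > 0.0 else alpha * x

def tanh_act(x):
    """For comparison: tanh requires exponential evaluations per call."""
    return math.tanh(x)

activations = [leaky_relu(v) for v in (-2.0, 0.0, 3.0)]
```

Unlike a plain ReLU, the small negative slope keeps a gradient flowing for negative inputs, avoiding permanently "dead" units during training.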
Convolution neural-network-based detection of lung structures
NASA Astrophysics Data System (ADS)
Hasegawa, Akira; Lo, Shih-Chung B.; Freedman, Matthew T.; Mun, Seong K.
1994-05-01
Chest radiography is one of the most fundamental and widely used techniques in diagnostic imaging. Nowadays, with the advent of digital radiology, digital medical image processing techniques for digital chest radiographs have attracted considerable attention, and several studies on computer-aided diagnosis (CADx) as well as on conventional image processing techniques for chest radiographs have been reported. In the automatic diagnostic process for chest radiographs, it is important to outline the areas of the lungs, the heart, and the diaphragm. This is because the original chest radiograph is composed of important anatomic structures and, without knowing the exact positions of the organs, automatic diagnosis may result in unexpected detections. The automatic extraction of an anatomical structure from digital chest radiographs can be a useful tool for (1) the evaluation of heart size, (2) automatic detection of interstitial lung diseases, (3) automatic detection of lung nodules, and (4) data compression, etc. Based on the clearly defined boundaries of the heart area, rib spaces, rib positions, and rib cage extracted, one should be able to use this information to facilitate the tasks of CADx on chest radiographs. In this paper, we present an automatic scheme for the detection of the lung field from chest radiographs by using a shift-invariant convolution neural network. A novel algorithm for smoothing the boundaries of the lungs is also presented.
Cell osmotic water permeability of isolated rabbit proximal convoluted tubules.
Carpi-Medina, P; González, E; Whittembury, G
1983-05-01
Cell osmotic water permeability, Pcos, of the peritubular aspect of the proximal convoluted tubule (PCT) was measured from the time course of cell volume changes subsequent to the sudden imposition of an osmotic gradient, ΔCio, across the cell membrane of PCT that had been dissected and mounted in a chamber. The possibilities of artifact were minimized: the bath was vigorously stirred, the solutions could be 95% changed within 0.1 s, and small osmotic gradients (10-20 mosM) were used. Thus, the osmotically induced water flow was a linear function of ΔCio, and the effect of the 70-μm-thick unstirred layers was negligible. In addition, data were extrapolated to ΔCio = 0. Pcos for PCT was 41.6 (±3.5) × 10⁻⁴ cm³ · s⁻¹ · osM⁻¹ per cm² of peritubular basal area. The standing-gradient osmotic theory for transcellular osmosis is incompatible with this value. Published values for Pcos of PST are 25.1 × 10⁻⁴, and for the transepithelial permeability Peos the values are 64 × 10⁻⁴ for PCT and 94 × 10⁻⁴ for PST, in the same units. These results indicate that there is room for paracellular water flow in both nephron segments and that the magnitude of the transcellular and paracellular water flows may vary from one segment of the proximal tubule to another. PMID:6846543
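The extrapolation to ΔCio = 0 amounts to a linear fit of the apparent permeability against the imposed gradient, with the intercept taken as the gradient-free value. A minimal sketch with hypothetical data (the abstract reports only the extrapolated result):

```python
def fit_slope_intercept(x, y):
    """Ordinary least-squares fit y = a*x + b; the intercept b is the
    value extrapolated to x = 0 (here, zero osmotic gradient)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical apparent-Pcos readings (same units as the abstract) at
# three gradient magnitudes, drifting linearly with the gradient:
gradients = [10.0, 15.0, 20.0]          # mosM
apparent_pcos = [36.6, 34.1, 31.6]      # x 10^-4 cm^3 s^-1 osM^-1
slope, pcos_at_zero = fit_slope_intercept(gradients, apparent_pcos)
```

With these made-up numbers the intercept recovers 41.6, matching the style of result quoted in the abstract.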
Adapting line integral convolution for fabricating artistic virtual environment
NASA Astrophysics Data System (ADS)
Lee, Jiunn-Shyan; Wang, Chung-Ming
2003-04-01
Vector fields occur not only in scientific applications but also in treasured art such as sculpture and painting. Artists depict our natural environment by stressing valued directional features in addition to color and shape information. Line integral convolution (LIC), developed for imaging vector fields in scientific visualization, has the potential to produce directional imagery. In this paper we present several techniques that exploit LIC to generate impressionistic images forming an artistic virtual environment. We take advantage of the directional information given by a photograph and incorporate several refinements, including a non-photorealistic shading technique and statistical detail control. In particular, the non-photorealistic shading technique blends cool and warm colors into the photograph to imitate artists' painting conventions, and a statistical technique controls the integral length according to local image variance to preserve detail. Furthermore, we propose a method for generating a series of mip-maps that exhibit constant stroke width under multi-resolution viewing and achieve frame coherence in an interactive walkthrough system. The experimental results demonstrate both satisfying emulation and efficient computation; consequently, the proposed technique supports a wide range of non-photorealistic rendering (NPR) applications, such as interactive virtual environments with artistic perception.
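The core of LIC can be sketched as follows: each output pixel averages an input (noise) texture along a short streamline traced through the vector field in both directions. This is a minimal, unoptimized sketch (fixed-length streamlines, nearest-neighbour sampling), not the paper's implementation, which adds variance-driven length control and NPR shading:

```python
import math

def lic(noise, vx, vy, length=10):
    """Minimal line integral convolution: each output pixel is the mean
    of the noise texture sampled along a streamline of the vector field
    traced forward and backward from that pixel."""
    h, w = len(noise), len(noise[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for direction in (1, -1):          # forward, then backward
                x, y = j + 0.5, i + 0.5        # start at the pixel centre
                for _ in range(length):
                    ii, jj = math.floor(y), math.floor(x)  # nearest sample
                    if not (0 <= ii < h and 0 <= jj < w):
                        break                  # streamline left the image
                    total += noise[ii][jj]
                    count += 1
                    u, v = vx[ii][jj], vy[ii][jj]
                    norm = math.hypot(u, v) or 1.0  # unit-speed step
                    x += direction * u / norm
                    y += direction * v / norm
            out[i][j] = total / count
    return out
```

On a uniform field the convolution smears the noise into strokes along the flow direction, which is what produces the directional, painterly look.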
Multi-modal vertebrae recognition using Transformed Deep Convolution Network.
Cai, Yunliang; Landis, Mark; Laidley, David T; Kornecki, Anat; Lum, Andrea; Li, Shuo
2016-07-01
Automatic vertebra recognition, including the identification of vertebra locations and naming in multiple image modalities, is in high demand in spinal clinical diagnosis, where large amounts of imaging data from various modalities are frequently and interchangeably used. However, the recognition is challenging due to variations in MR/CT appearance and in the shape/pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called the Transformed Deep Convolution Network (TDCN). This architecture can fuse image features from different modalities without supervision and automatically rectify the pose of the vertebra. The fusion of MR and CT image features improves the discriminative power of the feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images with different contrasts, resolutions, and protocols, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experimental results show that our method outperforms existing detection methods and provides fully automatic location+naming+pose recognition for routine clinical practice. PMID:27104497
A deep convolutional neural network for recognizing foods
NASA Astrophysics Data System (ADS)
Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec
2015-12-01
Controlling food intake is an effective step individuals can take to tackle the worldwide obesity problem. This is achievable by developing a smartphone application that is able to recognize foods and compute their calories. State-of-the-art methods are chiefly based on hand-crafted feature extraction methods such as HOG and Gabor. Recent advances in large-scale object recognition datasets such as ImageNet have revealed that deep Convolutional Neural Networks (CNNs) possess more representation power than hand-crafted features. The main challenge with CNNs is to find the appropriate architecture for each problem. In this paper, we propose a deep CNN which consists of 769,988 parameters. Our experiments show that the proposed CNN outperforms the state-of-the-art methods and improves the best result of traditional methods by 17%. Moreover, using an ensemble of two separately trained CNNs, we are able to improve the classification performance by 21.5%.
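The architecture behind the 769,988-parameter figure is not detailed in the abstract, but counts of this kind are tallied layer by layer; a sketch of the usual bookkeeping (the layer shapes below are purely illustrative, not the paper's):

```python
def conv_params(k, c_in, c_out):
    # Each of the c_out filters has k*k*c_in weights plus one bias.
    return (k * k * c_in + 1) * c_out

def dense_params(n_in, n_out):
    # A fully connected layer: one weight per input per unit, plus biases.
    return (n_in + 1) * n_out

# Illustrative tally for a toy network (3x3 convs, then a classifier):
total = (conv_params(3, 3, 32)      # RGB input -> 32 feature maps
         + conv_params(3, 32, 64)   # 32 -> 64 feature maps
         + dense_params(256, 10))   # flattened features -> 10 classes
```

Reducing filter counts or kernel sizes shrinks these products directly, which is how parameter reductions like the 27%, 22%, and 85% figures above are achieved.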
Convolutional networks for fast, energy-efficient neuromorphic computing
Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.
2016-01-01
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489
Deep convolutional neural networks for classifying GPR B-scans
NASA Astrophysics Data System (ADS)
Besaw, Lance E.; Stimac, Philip J.
2015-05-01
Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster, and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on field operatives. Recent advancements in machine learning, specifically deep learning with artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antenna handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.
Method for Viterbi decoding of large constraint length convolutional codes
NASA Astrophysics Data System (ADS)
Hsu, In-Shek; Truong, Trieu-Kie; Reed, Irving S.; Jing, Sun
1988-05-01
A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a design parameter and K is the constraint length. The surviving path at the end of each NK interval is then selected from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the NK interval, to read out the stored branch decisions of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message before selecting the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
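The trellis search and trace-back can be sketched with a toy rate-1/2, constraint-length K=3 code with generators (7, 5) octal. For simplicity this sketch keeps all survivor branches and performs a single trace-back at the end of the message, rather than the paper's block-wise trace-back every NK time units (noted in a comment); the decoding logic is otherwise standard:

```python
G = [0b111, 0b101]       # generator polynomials (7, 5) octal
K = 3                    # constraint length
NSTATES = 1 << (K - 1)   # 4 trellis states

def encode(bits):
    state = 0
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state              # shift register contents
        out.extend(bin(g & reg).count("1") % 2 for g in G)
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    INF = float("inf")
    metric = [0.0] + [INF] * (NSTATES - 1)        # start in state 0
    history = []  # survivor branches; a block scheme would trace back
                  # and flush this every N*K steps instead of keeping it all
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new = [INF] * NSTATES
        back = [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                expected = [bin(g & reg).count("1") % 2 for g in G]
                m = metric[s] + sum(x != y for x, y in zip(expected, r))
                ns = reg >> 1
                if m < new[ns]:
                    new[ns] = m
                    back[ns] = (s, b)             # survivor branch into ns
        metric = new
        history.append(back)
    # Trace back from the best final state to recover the message bits.
    s = min(range(NSTATES), key=lambda i: metric[i])
    decoded = []
    for back in reversed(history):
        s, b = back[s]
        decoded.append(b)
    return decoded[::-1]
```

Because the code's free distance is 5, the toy decoder corrects isolated channel bit errors, which a test can exercise by flipping one received bit.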
Method for Viterbi decoding of large constraint length convolutional codes
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)
1988-01-01
A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a design parameter and K is the constraint length. The surviving path at the end of each NK interval is then selected from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the NK interval, to read out the stored branch decisions of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message before selecting the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
Kano, Shinya; Maeda, Kosuke; Majima, Yutaka; Tanaka, Daisuke; Sakamoto, Masanori; Teranishi, Toshiharu
2015-10-07
We present the analysis of chemically assembled double-dot single-electron transistors using the orthodox model, taking offset charges into account. First, we fabricate chemically assembled single-electron transistors (SETs) consisting of two Au nanoparticles between electroless Au-plated nanogap electrodes. Then, extraordinarily stable Coulomb diamonds in the double-dot SETs are analyzed using the orthodox model, by considering offset charges on the respective quantum dots. We determine the equivalent circuit parameters from the Coulomb diamonds and the drain current vs. drain voltage curves of the SETs. The accuracies of the capacitances and offset charges on the quantum dots are within ±10% and ±0.04e (where e is the elementary charge), respectively. The parameters can be explained by the geometrical structures of the SETs observed in scanning electron microscopy images. Using this approach, we are able to understand the spatial characteristics of the double quantum dots, such as the relative distance from the gate electrode and the conditions for adsorption between the nanogap electrodes.
NASA Astrophysics Data System (ADS)
Kano, Shinya; Maeda, Kosuke; Tanaka, Daisuke; Sakamoto, Masanori; Teranishi, Toshiharu; Majima, Yutaka
2015-10-01
We present the analysis of chemically assembled double-dot single-electron transistors using the orthodox model, taking offset charges into account. First, we fabricate chemically assembled single-electron transistors (SETs) consisting of two Au nanoparticles between electroless Au-plated nanogap electrodes. Then, extraordinarily stable Coulomb diamonds in the double-dot SETs are analyzed using the orthodox model, by considering offset charges on the respective quantum dots. We determine the equivalent circuit parameters from the Coulomb diamonds and the drain current vs. drain voltage curves of the SETs. The accuracies of the capacitances and offset charges on the quantum dots are within ±10% and ±0.04e (where e is the elementary charge), respectively. The parameters can be explained by the geometrical structures of the SETs observed in scanning electron microscopy images. Using this approach, we are able to understand the spatial characteristics of the double quantum dots, such as the relative distance from the gate electrode and the conditions for adsorption between the nanogap electrodes.
NASA Astrophysics Data System (ADS)
Kanemura, Shinya; Kaneta, Kunio; Machida, Naoki; Odori, Shinya; Shindou, Tetsuo
2016-07-01
In composite Higgs models, originally proposed by Georgi and Kaplan, the Higgs boson is a pseudo Nambu-Goldstone boson (pNGB) of the spontaneous breaking of a global symmetry. In the minimal version of such models, the global SO(5) symmetry is spontaneously broken to SO(4), and the pNGBs form an isospin doublet field, which corresponds to the Higgs doublet in the Standard Model (SM). The predicted coupling constants of the Higgs boson can in general deviate from the SM predictions, depending on the compositeness parameter; the deviation pattern is also determined by the details of the matter sector. We comprehensively study how the model can be tested via measuring single and double production processes of the Higgs boson at the LHC and future electron-positron colliders. The possibility of distinguishing the matter sector among the minimal composite Higgs models is also discussed. In addition, we point out differences in the cross section of double Higgs boson production from the predictions of other new physics models.
Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes
Houshmand, Monireh; Hosseini-Khayat, Saied
2011-02-15
Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a ''pearl-necklace'' encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
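The final step described above, taking the weight of the longest path in the weighted graph as the minimal memory requirement, is a standard longest-path computation on a weighted DAG. A sketch of that computation only (the construction of the graph from the noncommutative gate strings is the paper's contribution and is not reproduced here; the example graph is illustrative):

```python
from collections import defaultdict, deque

def longest_path_weight(n, edges):
    """Weight of the longest path in a weighted DAG with nodes 0..n-1
    and (u, v, w) edges, via dynamic programming in topological order
    (Kahn's algorithm)."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    dist = [0] * n                      # best path weight ending at node
    q = deque(i for i in range(n) if indeg[i] == 0)
    best = 0
    while q:
        u = q.popleft()
        for v, w in adj[u]:
            if dist[u] + w > dist[v]:
                dist[v] = dist[u] + w
            best = max(best, dist[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return best

# Illustrative dependency graph: the heaviest chain 0 -> 1 -> 2 -> 3
# would correspond to the minimal memory in the paper's formulation.
memory = longest_path_weight(4, [(0, 1, 2), (1, 2, 3), (0, 2, 1), (2, 3, 4)])
```

This runs in time linear in the number of nodes and edges, consistent with the polynomial-time complexity claimed in the abstract.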
Gopishankar, N; Bisht, R K
2014-06-01
Purpose: To perform a dosimetric evaluation of the convolution algorithm in Gamma Knife (Perfexion model) using a solid acrylic anthropomorphic phantom. Methods: An in-house developed acrylic phantom with an ion chamber insert was used for this purpose. The middle insert was designed to accept the ion chamber from the top (head) as well as from the bottom (neck) of the phantom, allowing measurements at two different positions. A Leksell frame fixed to the phantom simulated patient treatment. Prior to the dosimetric study, the Hounsfield units and electron density of the acrylic material were incorporated into the calibration curve in the TPS for the convolution algorithm calculation. A CT scan of the phantom with ion chamber (PTW Freiburg, 0.125 cc) was obtained with the following scanning parameters: tube voltage 110 kV, slice thickness 1 mm, and FOV 240 mm. Three separate single-shot plans were generated in the LGP TPS (Version 10.1) with 16 mm, 8 mm, and 4 mm collimators, respectively, for both ion chamber positions. Both TMR10 and convolution-algorithm-based planning (CABP) were used for dose calculation. A dose of 6 Gy at the 100% isodose was prescribed at the centre of the ion chamber visible in the CT scan. The phantom with ion chamber was positioned on the treatment couch for dose delivery. Results: The ion chamber measured dose was 5.98 Gy for the 16 mm collimator shot plan, a deviation of less than 1% for the convolution algorithm, whereas with TMR10 the measured dose was 5.6 Gy. For the 8 mm and 4 mm collimator plans, only 3.86 Gy and 2.18 Gy, respectively, were delivered at the TPS-calculated time for CABP. Conclusion: CABP is expected to predict the delivery time accurately for all collimators, but significant variation in measured dose was observed for the 8 mm and 4 mm collimators, which may be due to a collimator-size effect. Metal artifacts caused by the pins and frame on the CT scan may also play a role in misinterpreting CABP. The study carried out requires further investigation.
NASA Astrophysics Data System (ADS)
Liu, Qing; He, Ya-Ling
2015-11-01
In this paper, a double multiple-relaxation-time lattice Boltzmann model is developed for simulating transient solid-liquid phase change problems in porous media at the representative elementary volume scale. The model uses two different multiple-relaxation-time lattice Boltzmann equations, one for the flow field and the other for the temperature field with nonlinear latent heat source term. The model is based on the generalized non-Darcy formulation, and the solid-liquid interface is traced through the liquid fraction which is determined by the enthalpy-based method. The present model is validated by numerical simulations of conduction melting in a semi-infinite space, solidification in a semi-infinite corner, and convection melting in a square cavity filled with porous media. The numerical results demonstrate the efficiency and accuracy of the present model for simulating transient solid-liquid phase change problems in porous media.
Le, Guigao; Zhang, Junfeng
2011-05-01
In this paper, we propose a general Poisson-Boltzmann model for electric double layer (EDL) analysis with the position dependence of the dielectric permittivity considered. This model provides physically reasonable property profiles in the EDL region, and it is then utilized to investigate the depletion layer effect on EDL structure and interaction near hydrophobic surfaces. Our results show that both the electric potential and the interaction pressure between surfaces decrease due to the lower permittivity in the depletion layer. The reduction becomes more pronounced at larger variation magnitude and range. This trend is in general agreement with that observed from the previous stepwise model; however, that model overestimates the influence of the permittivity variation effect. For a thin depletion layer and a relatively thick EDL, our calculation indicates that the permittivity variation effect on the EDL can usually be neglected. Furthermore, our model can be readily extended to study the permittivity variation in the EDL due to ion accumulation and hydration in the EDL region.
NASA Astrophysics Data System (ADS)
Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž
2015-03-01
The paper presents a computationally efficient method for solving the time-dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called the Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step-function boundary value problem. This approach enables modelling the concentration distribution in the granular particles for arbitrary time-dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method achieves faster computational times than finite volume/difference methods and the Padé approximation at the same accuracy of the results. It is also demonstrated that all three addressed methods feature higher accuracy compared to the quasi-steady polynomial approaches when applied to simulate the current density variations typical of mobile/automotive applications. The proposed approach can thus be considered one of the key innovative methods enabling real-time capability of multi-particle electrochemical battery models featuring spatially and temporally resolved particle concentration profiles.
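The superposition idea behind DTC can be sketched for any linear time-invariant system: the response to an arbitrary, piecewise-constant exchange flux is the sum of scaled, shifted copies of the step response. In this sketch a first-order analytical response stands in for the spherical-diffusion step response used in the paper:

```python
import math

def step_response(t, tau=1.0):
    # Analytical step response of a first-order linear system, used
    # here as a stand-in for the granule-diffusion step response.
    return 1.0 - math.exp(-t / tau) if t >= 0 else 0.0

def dtc_response(times, flux_steps, tau=1.0):
    """Discrete temporal convolution: superpose one scaled, shifted
    copy of the step response per change in the exchange flux.
    flux_steps is a list of (t_k, delta_u_k) flux changes, which need
    not be known in advance -- new steps can simply be appended."""
    return [sum(du * step_response(t - tk, tau) for tk, du in flux_steps)
            for t in times]

# Flux steps to 1 at t=0, then to 2 at t=1 (a change of +1):
out = dtc_response([0.5, 1.0, 2.0], [(0.0, 1.0), (1.0, 1.0)])
```

Because each time point costs only a sum over past flux changes, no spatial grid has to be advanced in time, which is the source of the speed advantage over finite volume/difference schemes.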
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Chen, Baode; Einaudi, Franco (Technical Monitor)
2001-01-01
It has been known for more than a decade that an aqua-planet model with globally uniform sea surface temperature and solar insolation angle can generate an ITCZ (intertropical convergence zone). Previous studies have shown that the ITCZ under such model settings can be changed between a single ITCZ over the equator and a double ITCZ straddling the equator through one of several measures. These measures include switching to a different cumulus parameterization scheme, changes within the cumulus parameterization scheme, and changes in other aspects of the model design such as horizontal resolution. In this paper an interpretation of these findings is offered. The ITCZ settles at the latitude where two types of attraction on the ITCZ, both due to the earth's rotation, balance. The first type is equator-ward, directly related to the earth's rotation, and thus not sensitive to model design changes. The second type is poleward, related to the convective circulation, and thus sensitive to model design changes. Due to the shape of the attractors, the balance of the two types of attraction is reached either at the equator or more than 10 degrees away from it. The former case results in a single ITCZ over the equator and the latter in a double ITCZ straddling the equator.
NASA Astrophysics Data System (ADS)
Yan, Liang; Li, Wei; Jiao, Zongxia; Chen, I.-Ming
2015-12-01
The space utilization of the linear switched reluctance machine is relatively low, which unavoidably constrains the improvement of system output performance. The objective of this paper is to propose a novel tubular linear switched reluctance motor with double excitation windings. The employment of double excitation helps to increase the electromagnetic force of the system. Furthermore, the installation of windings on both the stator and the mover makes the structure more compact and increases the system force density. The design concept and operating principle are presented. Following that, the major structural parameters of the system are determined. Subsequently, the electromagnetic force and reluctance are formulated analytically based on equivalent magnetic circuits, and the result is validated with numerical computation. Then, a research prototype is developed, and experiments are conducted on the system output performance. It shows that the proposed design of electric linear machine can achieve higher thrust force than conventional linear switched reluctance machines.
Structural optimization and model fabrication of a double-ring deployable antenna truss
NASA Astrophysics Data System (ADS)
Dai, Lu; Guan, Fuling; Guest, James K.
2014-02-01
This paper explores the design of a new type of deployable antenna system composed of a double-ring deployable truss, prestressed cable nets, and a metallic reflector mesh. The primary novelty is the double-ring deployable truss, which is found to significantly enhance the stiffness of the entire antenna over single-ring systems with relatively low mass gain. Structural optimization was used to minimize the system mass subject to constraints on system stiffness and member section availability. Both genetic algorithms (GA) and gradient-based optimizers are employed. The optimized system results were obtained and incorporated into a 4.2-m scaled system prototype, which was then experimentally tested for dynamic properties. Practical considerations such as the maximum number of truss sides and their effects on system performances were also discussed.
Convolutional neural networks for P300 detection with application to brain-computer interfaces.
Cecotti, Hubert; Gräser, Axel
2011-03-01
A Brain-Computer Interface (BCI) is a specific type of human-computer interface that enables direct communication between humans and computers by analyzing brain measurements. Oddball paradigms are used in BCI to generate event-related potentials (ERPs), like the P300 wave, on targets selected by the user. A P300 speller is based on this principle, where the detection of P300 waves allows the user to write characters. The P300 speller is composed of two classification problems. The first classification is to detect the presence of a P300 in the electroencephalogram (EEG). The second corresponds to the combination of different P300 responses for determining the right character to spell. A new method for the detection of P300 waves is presented. This model is based on a convolutional neural network (CNN). The topology of the network is adapted to the detection of P300 waves in the time domain. Seven classifiers based on the CNN are proposed: four single classifiers with different feature sets and three multiclassifiers. These models are tested and compared on Data set II of the third BCI competition. The best result is obtained with a multiclassifier solution, with a recognition rate of 95.5 percent, without channel selection before the classification. The proposed approach also provides a new way of analyzing brain activity through the receptive fields of the CNN models.
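The second classification stage, combining P300 detector outputs across flashes to pick a character, is commonly implemented by summing the scores per row and per column of the speller matrix and taking the intersection of the winners. A minimal sketch of that standard speller step (this is the conventional combination rule, not the paper's CNN):

```python
def spell_character(scores, matrix):
    """Combine per-flash P300 classifier scores.

    `scores` maps ("row", r) or ("col", c) to the list of detector
    scores collected over repeated flashes of that row/column.  The
    row and column with the highest summed score intersect at the
    character the user attended to."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    best_row = max(range(n_rows), key=lambda r: sum(scores[("row", r)]))
    best_col = max(range(n_cols), key=lambda c: sum(scores[("col", c)]))
    return matrix[best_row][best_col]

# Toy 2x2 speller matrix with hypothetical detector scores:
matrix = [["A", "B"], ["C", "D"]]
scores = {("row", 0): [0.1, 0.2], ("row", 1): [0.9, 0.8],
          ("col", 0): [0.7, 0.6], ("col", 1): [0.1, 0.2]}
picked = spell_character(scores, matrix)
```

Summing over repeated flashes averages out single-trial detection noise, which is why spellers trade typing speed for accuracy by repeating each row/column flash several times.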
Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification
Hou, Le; Samaras, Dimitris; Kurc, Tahsin M.; Gao, Yi; Davis, James E.; Saltz, Joel H.
2016-01-01
Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN.
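The aggregation step can be illustrated with the simplest possible fusion rule, a (optionally weighted) average of patch-level class probabilities followed by an argmax. The paper instead trains a decision-fusion model and uses an EM procedure to locate discriminative patches; this sketch reproduces only the basic idea that a slide-level call is assembled from many patch-level predictions:

```python
def fuse_patch_predictions(patch_probs, weights=None):
    """Aggregate patch-level class-probability vectors into a single
    slide-level class index by weighted averaging.  Uniform weights
    treat every patch as equally discriminative; a learned fusion
    model would supply non-uniform weights instead."""
    n = len(patch_probs)            # number of patches
    k = len(patch_probs[0])         # number of classes
    if weights is None:
        weights = [1.0] * n
    total = sum(weights)
    fused = [sum(w * p[c] for w, p in zip(weights, patch_probs)) / total
             for c in range(k)]
    return max(range(k), key=lambda c: fused[c])

# Three hypothetical patches voting over two cancer subtypes:
label = fuse_patch_predictions([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7]])
```

Down-weighting non-discriminative patches (e.g. background or stroma) is exactly the gap the paper's learned fusion and EM-based patch selection address.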
Age, double porosity, and simple reaction modifications for the MOC3D ground-water transport model
Goode, Daniel J.
1999-01-01
This report documents modifications for the MOC3D ground-water transport model to simulate (a) ground-water age transport; (b) double-porosity exchange; and (c) simple but flexible retardation, decay, and zero-order growth reactions. These modifications are incorporated in MOC3D version 3.0. MOC3D simulates the transport of a single solute using the method-of-characteristics numerical procedure. The age of ground water, that is, the time since recharge to the saturated zone, can be simulated using the transport model with an additional source term of unit strength, corresponding to the rate of aging. The output concentrations of the model are in this case the ages at all locations in the model. Double porosity generally refers to a separate immobile-water phase within the aquifer that does not contribute to ground-water flow but can affect solute transport through diffusive exchange. The solute mass exchange rate between the flowing water in the aquifer and the immobile-water phase is the product of the concentration difference between the two phases and a linear exchange coefficient. Conceptually, double porosity can approximate the effects of dead-end pores in a granular porous medium, or matrix diffusion in a fractured-rock aquifer. Options are provided for decay and zero-order growth reactions within the immobile-water phase. The simple reaction terms here extend the original model, which included decay and retardation. With these extensions, (a) the retardation factor can vary spatially within each model layer, (b) the decay rate coefficient can vary spatially within each model layer and can be different for the dissolved and sorbed phases, and (c) a zero-order growth reaction is added that can vary spatially and can be different in the dissolved and sorbed phases. The decay and growth reaction terms can also change in time to account for changing geochemical conditions during transport. The report includes a description of the theoretical basis of the model, a
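The double-porosity exchange described above, a mass-transfer rate equal to the concentration difference between the mobile and immobile phases times a linear exchange coefficient, can be sketched with an explicit-Euler time step. The exchange coefficient, step size, and the assumption of equal phase volumes are illustrative (MOC3D weights each phase by its porosity):

```python
def double_porosity_step(c_mobile, c_immobile, alpha, dt):
    """One explicit-Euler step of linear double-porosity exchange.

    The exchange rate is alpha * (c_mobile - c_immobile); mass leaving
    the mobile phase enters the immobile phase, so total solute mass is
    conserved (equal phase volumes assumed for this sketch)."""
    q = alpha * (c_mobile - c_immobile)
    return c_mobile - q * dt, c_immobile + q * dt

# Solute initially confined to the flowing water equilibrates with the
# immobile phase (alpha and dt are illustrative values):
cm, ci = 1.0, 0.0
for _ in range(200):
    cm, ci = double_porosity_step(cm, ci, alpha=0.5, dt=0.1)
```

The concentration difference decays geometrically by a factor (1 - 2*alpha*dt) per step, so both phases approach the common equilibrium value of 0.5 while the total mass stays fixed.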
Theory of wave propagation in partially saturated double-porosity rocks: a triple-layer patchy model
NASA Astrophysics Data System (ADS)
Sun, Weitao; Ba, Jing; Carcione, José M.
2016-04-01
Wave-induced local fluid flow is known as a key mechanism to explain the intrinsic wave dissipation in fluid-saturated rocks. Understanding the relationship between the acoustic properties of rocks and fluid patch distributions is important for interpreting observed seismic wave phenomena. A triple-layer patchy (TLP) model is proposed to describe the P-wave dissipation process in a double-porosity medium saturated with two immiscible fluids. The double-porosity rock consists of a solid matrix with a unique host porosity and inclusions which contain the second type of pores. The two immiscible fluids are arranged in concentric spherical patches, where the inner pocket and the outer sphere are saturated with different fluids. The kinetic and dissipation energy functions of local fluid flow (LFF) in the inner pocket are formulated through oscillations in spherical coordinates. The wave propagation equations of the TLP model are based on Biot's theory and the corresponding Lagrangian equations. The P-wave dispersion and attenuation caused by the Biot friction mechanism and the local fluid flow (related to the pore structure and the fluid distribution) are obtained by a plane-wave analysis from the Christoffel equations. Numerical examples and laboratory measurements indicate that P-wave dispersion and attenuation are significantly influenced by the spatial distributions of both the solid heterogeneity and the fluid saturation. The TLP model is in reasonably good agreement with White's and Johnson's models. However, differences in phase velocity suggest that the heterogeneities associated with double porosity and dual-fluid distribution should be taken into account when describing P-wave dispersion and attenuation in partially saturated rocks.
NASA Astrophysics Data System (ADS)
Hawcroft, Matt; Haywood, Jim M.; Collins, Mat; Jones, Andy; Jones, Anthony C.; Stephens, Graeme
2016-06-01
A causal link has been invoked between inter-hemispheric albedo, cross-equatorial energy transport and the double-Intertropical Convergence Zone (ITCZ) bias in climate models. Southern Ocean cloud biases are a major determinant of inter-hemispheric albedo biases in many models, including HadGEM2-ES, a fully coupled model with a dynamical ocean. In this study, targeted albedo corrections are applied in the Southern Ocean to explore the dynamical response to artificially reducing these biases. The Southern Hemisphere jet increases in strength in response to the increased tropical-extratropical temperature gradient, with increased energy transport into the mid-latitudes in the atmosphere, but no improvement is observed in the double-ITCZ bias or atmospheric cross-equatorial energy transport, a finding which supports other recent work. The majority of the adjustment in energy transport in the tropics is achieved in the ocean, with the response further limited to the Pacific Ocean. As a result, the frequently argued teleconnection between the Southern Ocean and tropical precipitation biases is muted. Further experiments in which tropical longwave biases are also reduced do not yield improvement in the representation of the tropical atmosphere. These results suggest that the dramatic improvements in tropical precipitation that have been shown in previous studies may be a function of the lack of dynamical ocean and/or the simplified hemispheric albedo bias corrections applied in that work. It further suggests that efforts to correct the double ITCZ problem in coupled models that focus on large-scale energetic controls will prove fruitless without improvements in the representation of atmospheric processes.
Parker, M. M.; Court, D. A.; Preiter, K.; Belfort, M.
1996-01-01
Many group I introns encode endonucleases that promote intron homing by initiating a double-strand break-mediated homologous recombination event. A td intron-phage λ model system was developed to analyze exon homology effects on intron homing and determine the role of the λ 5'-3' exonuclease complex (Redαβ) in the repair event. Efficient intron homing depended on exon lengths in the 35- to 50-bp range, although homing levels remained significantly elevated above nonbreak-mediated recombination with as little as 10 bp of flanking homology. Although precise intron insertion was demonstrated with extremely limiting exon homology, the complete absence of one exon produced illegitimate events on the side of heterology. Interestingly, intron inheritance was unaffected by the presence of extensive heterology at the double-strand break in wild-type λ, provided that sufficient homology between donor and recipient was present distal to the heterologous sequences. However, these events involving heterologous ends were absolutely dependent on an intact Red exonuclease system. Together these results indicate that heterologous sequences can participate in double-strand break-mediated repair and imply that intron transposition to heteroallelic sites might occur at break sites within regions of limited or no homology. PMID:8807281
NASA Astrophysics Data System (ADS)
Capuano, Paolo; De Lauro, Enza; De Martino, Salvatore; Falanga, Mariarosaria; Petrosino, Simona
2015-04-01
One of the main challenges in the volcano-seismological literature is to locate and characterize the source of volcano-tectonic seismic activity. This requires identifying at least the onsets of the main phases, i.e. the body waves. Many efforts have been made to solve the problem of a clear separation of P and S phases, both from a theoretical point of view and by developing numerical algorithms suitable for specific cases (see, e.g., Küperkoch et al., 2012). Recently, a robust automatic procedure has been implemented for extracting the prominent seismic waveforms from continuously recorded signals, thus allowing the main phases to be picked. The intuitive notion of maximum non-Gaussianity is exploited by adopting techniques involving higher-order statistics in the frequency domain, i.e., Convolutive Independent Component Analysis (CICA). This technique is successful in the case of the blind source separation of convolutive mixtures. In a seismological framework, indeed, seismic signals are regarded as the convolution of a source function with the path, site, and instrument responses. In addition, time-delayed versions of the same source exist, due to multipath propagation typically caused by reverberations from some obstacle. In this work, we focus on the Volcano Tectonic (VT) activity at Campi Flegrei Caldera (Italy) during the 2006 ground uplift (Ciaramella et al., 2011). The activity was characterized by approximately 300 low-magnitude VT earthquakes (Md < 2; for the definition of duration magnitude, see Petrosino et al. 2008). Most of them were concentrated in distinct seismic sequences with hypocenters mainly clustered beneath the Solfatara-Accademia area, at depths ranging between 1 and 4 km b.s.l. The obtained results show the clear separation of P and S phases: the technique not only allows the identification of the S-P time delay, giving the timing of both phases, but also provides the independent waveforms of the P and S phases. This is an enormous
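The blind-source-separation principle behind CICA can be illustrated with a deliberately simplified sketch: instantaneous (non-convolutive) two-channel separation by whitening followed by a rotation that maximizes non-Gaussianity, here measured by excess kurtosis. The square-wave and sawtooth "sources" and the mixing matrix are invented for illustration; the actual CICA algorithm operates on convolutive mixtures in the frequency domain and is considerably more involved.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000)
s1 = np.sign(np.sin(2 * np.pi * 13 * t))     # square-wave "source"
s2 = ((7 * t) % 1.0) * 2 - 1                 # sawtooth "source"
S = np.vstack([s1, s2])
A = np.array([[0.7, 0.4], [0.3, 0.8]])       # hypothetical mixing matrix
X = A @ S                                     # observed mixtures

# Whiten the mixtures (zero mean, identity covariance).
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / Xc.shape[1]
d, E = np.linalg.eigh(cov)
Z = np.diag(d ** -0.5) @ E.T @ Xc

# After whitening, separation reduces to finding a rotation angle; pick the
# one maximizing summed absolute excess kurtosis (a non-Gaussianity measure).
def kurt(y):
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3.0

best = max(np.linspace(0.0, np.pi / 2, 1800),
           key=lambda th: abs(kurt(np.cos(th) * Z[0] + np.sin(th) * Z[1]))
                        + abs(kurt(-np.sin(th) * Z[0] + np.cos(th) * Z[1])))
R = np.array([[np.cos(best), np.sin(best)], [-np.sin(best), np.cos(best)]])
Y = R @ Z    # recovered sources, up to order, sign, and scale
```

The recovered rows of `Y` match the original sources up to the usual ICA ambiguities (ordering, sign, scale), which is why seismological applications pair the separation step with polarization or timing analysis.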
NASA Astrophysics Data System (ADS)
Capuano, P.; De Lauro, E.; De Martino, S.; Falanga, M.
2016-04-01
This work is devoted to the analysis of seismic signals continuously recorded at Campi Flegrei Caldera (Italy) during the entire year 2006. The radiation pattern associated with the Long-Period energy release is investigated. We adopt an innovative Independent Component Analysis algorithm for convolutive seismic series, adapted and improved to give automatic procedures for detecting seismic events often buried in the high-level ambient noise. The extracted waveforms, characterized by an improved signal-to-noise ratio, allow the recognition of Long-Period precursors, evidencing that the seismic activity accompanying the mini-uplift crisis (in 2006), which climaxed in the three days from 26-28 October, had already started at the beginning of October and lasted until mid-November. Hence, a more complete seismic catalog is provided, which can be used to properly quantify the seismic energy release. To better ground our results, we first check the robustness of the method by comparing it with other blind source separation methods based on higher-order statistics; secondly, we reconstruct the radiation patterns of the extracted Long-Period events in order to link the individuated signals directly to the sources. We take advantage of Convolutive Independent Component Analysis, which provides basic signals along the three directions of motion so that a direct polarization analysis can be performed with no other filtering procedures. We show that the extracted signals are mainly composed of P waves with radial polarization pointing to the seismic source of the main LP swarm, i.e., a small area in the Solfatara, even for the small events that both precede and follow the main activity. From a dynamical point of view, they can be described by two degrees of freedom, indicating a low level of complexity associated with the vibrations from a superficial hydrothermal system. Our results allow us to move towards a full description of the complexity of
Fabrizio, Mary C.; Nichols, James D.; Hines, James E.; Swanson, Bruce L.; Schram, Stephen T.
1999-01-01
Data from mark-recapture studies are used to estimate population rates such as exploitation, survival, and growth. Many of these applications assume negligible tag loss, so tag shedding can be a significant problem. Various tag shedding models have been developed for use with data from double-tagging experiments, including models to estimate constant instantaneous rates, time-dependent rates, and type I and II shedding rates. In this study, we used conditional (on recaptures) multinomial models implemented using the program SURVIV (G.C. White. 1983. J. Wildl. Manage. 47: 716-728) to estimate tag shedding rates of lake trout (Salvelinus namaycush) and explore various potential sources of variation in these rates. We applied the models to data from several long-term double-tagging experiments with Lake Superior lake trout and estimated shedding rates for anchor tags in hatchery-reared and wild fish and for various tag types applied in these experiments. Estimates of annual tag retention rates for lake trout were fairly high (80-90%), but we found evidence (among wild fish only) that retention rates may be significantly lower in the first year due to type I losses. Annual retention rates for some tag types varied between male and female fish, but there was no consistent pattern across years. Our estimates of annual tag retention rates will be used in future studies of survival rates for these fish.
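The logic behind such double-tagging estimators can be shown with a minimal sketch (this is the classical moment estimator, not the conditional multinomial models fitted with SURVIV in the study): if each tag is retained independently with probability R, then among recaptured fish P(both tags) = R² and P(exactly one tag) = 2R(1 - R), so R can be estimated from the counts of double-tag and single-tag recaptures. The recapture counts below are hypothetical, chosen to land in the 80-90% range reported in the abstract.

```python
def retention_estimate(double_tag: int, single_tag: int) -> float:
    """Moment estimator of annual tag retention from a double-tagging study.

    With independent shedding, P(both) = R^2 and P(one) = 2R(1-R), so
    R_hat = 2D / (2D + S) for D double-tag and S single-tag recaptures.
    """
    return 2 * double_tag / (2 * double_tag + single_tag)

# Hypothetical recapture counts for illustration.
print(round(retention_estimate(80, 40), 3))   # -> 0.8
```

Time-dependent and type I/II shedding models generalize this by letting R vary with time at liberty, which is what the multinomial likelihood framework accommodates.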
NASA Astrophysics Data System (ADS)
Gupta, R. P.; Banerjee, Malay; Chandra, Peeyush
2014-07-01
The present study investigates a prey-predator-type model for conservation of ecological resources through taxation with nonlinear harvesting. The model uses the harvesting function proposed by Agnew (1979) [1], which accounts for the handling time of the catch and also the competition between standard vessels being utilized for harvesting of resources. In this paper we consider a three-dimensional dynamic effort prey-predator model with Holling type-II functional response. The conditions for uniform persistence of the model have been derived. The existence and stability of a bifurcating periodic solution through Hopf bifurcation have been examined for a particular set of parameter values. Using numerical examples it is shown that the system admits periodic, quasi-periodic and chaotic solutions. It is observed that the system exhibits a period-doubling route to chaos with respect to tax. Many forms of complexity such as chaotic bands (including periodic windows, period-doubling bifurcations, period-halving bifurcations and attractor crisis) and chaotic attractors have been observed. Sensitivity analysis is carried out and it is observed that the solutions are highly sensitive to the initial conditions. Pontryagin's Maximum Principle has been used to obtain an optimal tax policy to maximize the monetary social benefit as well as conservation of the ecosystem.
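The core dynamics can be sketched with a two-species Holling type-II prey-predator system integrated by a simple Euler scheme. All parameter values are hypothetical, and the study's third (dynamic effort) equation and taxation term are omitted for brevity; this only illustrates the functional-response structure.

```python
import numpy as np

# Holling type-II prey-predator model, Euler integration (illustrative only).
r, K = 1.0, 1.0        # prey intrinsic growth rate and carrying capacity
a, b = 1.0, 1.0        # attack rate and handling-time coefficient
c, d = 1.0, 0.3        # conversion efficiency and predator death rate

def step(x, y, dt=0.01):
    fx = a * x / (1.0 + b * x)                 # Holling type-II response
    dx = r * x * (1.0 - x / K) - fx * y        # prey equation
    dy = c * fx * y - d * y                    # predator equation
    return x + dt * dx, y + dt * dy

x, y = 0.5, 0.3
traj = []
for _ in range(5000):                          # 50 time units
    x, y = step(x, y)
    traj.append((x, y))
```

For these (invented) parameters the trajectory settles toward a coexistence equilibrium; varying the harvesting/tax terms of the full three-dimensional model is what produces the period-doubling route to chaos described above.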
NASA Astrophysics Data System (ADS)
Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.
1997-07-01
A numerical model of HgCdTe solidification was implemented using the finite element code FIDAP. Model verification was done using both experimental data and numerical test problems. The model was used to evaluate possible effects of double-diffusive convection in the molten material and of the microgravity level on the concentration distribution in the solidified HgCdTe. Particular attention was paid to the incorporation of the HgCdTe phase diagram. It was found that below a critical microgravity amplitude, the maximum convective velocity in the melt is virtually independent of the microgravity vector orientation. Good agreement between the predicted interface shape and an interface obtained experimentally by quenching was achieved. The results of the numerical modeling are presented in the form of a video film.
A convolution-superposition dose calculation engine for GPUs
Hissoiny, Sami; Ozell, Benoit; Despres, Philippe
2010-03-15
Purpose: Graphics processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single-threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x, or 0.58 versus 16.6 s per beam in single-threaded mode. An acceleration factor of 46x has been obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions also have been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results potentially can have a significant impact on complex dose delivery techniques requiring intensive dose calculations such as intensity-modulated radiation therapy (IMRT) and arc therapy. They are also relevant for adaptive radiation therapy where dose results must be obtained rapidly.
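The convolution step at the heart of a convolution/superposition dose engine can be sketched on the CPU: dose is the convolution of a TERMA distribution with an energy-deposition kernel. The exponential "TERMA" and Gaussian "kernel" below are invented stand-ins, and the paper's beam hardening, kernel tilting, and raytracing are omitted; the point is only the FFT-based convolution that such engines accelerate.

```python
import numpy as np

# Toy 2-D TERMA: exponentially attenuated fluence (attenuation mu is invented).
n, mu = 64, 0.05
depth = np.arange(n)
terma = np.exp(-mu * depth)[:, None] * np.ones((n, n))

# Toy energy-deposition kernel: normalized 17x17 Gaussian.
yy, xx = np.mgrid[-8:9, -8:9]
kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))
kernel /= kernel.sum()

# Zero-pad both arrays so FFT (circular) convolution reproduces the linear
# "same"-mode convolution, then slice out the centered result.
pad = np.zeros((n + 16, n + 16))
pad[:n, :n] = terma
kpad = np.zeros_like(pad)
kpad[:17, :17] = kernel
dose = np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(kpad)).real[8:8 + n, 8:8 + n]
```

In a real engine the kernel varies with density and tilts with beam direction, which is why raytracing dominates the cost and why the GPU mapping pays off.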
Deep convolutional networks for pancreas segmentation in CT imaging
NASA Astrophysics Data System (ADS)
Roth, Holger R.; Farag, Amal; Lu, Le; Turkbey, Evrim B.; Summers, Ronald M.
2015-03-01
Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving high accuracies when compared to state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that the classification features used are trained directly from the imaging data. We present a fully automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated, using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger than 0.5 are retained. These retained superpixels serve as a highly sensitive initial input, covering the pancreas and its surroundings, to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve a maximum average Dice score of 68% +/- 10% (range, 43-80%) in testing. This shows promise for accurate pancreas segmentation using a deep learning approach and compares favorably with state-of-the-art methods.
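The coarse-to-fine step (retaining superpixels whose probability exceeds 0.5) can be sketched with synthetic data. The superpixel map and patch confidences below are random stand-ins; SLIC, the random-forest cascade, and the ConvNet itself are not reproduced, only the aggregation-and-threshold logic.

```python
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 5, size=(32, 32))      # fake superpixel map, ids 0..4
patch_prob = rng.random((32, 32))               # fake patch-level confidences
patch_prob[labels == 2] = 0.9                   # make one superpixel "pancreas-like"

# Average patch confidences within each superpixel, then keep regions > 0.5.
sp_prob = {sp: patch_prob[labels == sp].mean() for sp in np.unique(labels)}
retained = [sp for sp, p in sp_prob.items() if p > 0.5]
```

The retained regions are deliberately over-inclusive (highly sensitive); the ConvNet stage then sharpens each region's pancreas probability.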
NASA Astrophysics Data System (ADS)
Cruz-Roa, Angel; Arévalo, John; Judkins, Alexander; Madabhushi, Anant; González, Fabio
2015-12-01
Convolutional neural networks (CNN) have been very successful at addressing different computer vision tasks thanks to their ability to learn image representations directly from large amounts of labeled data. Features learned from a dataset can be used to represent images from a different dataset via an approach called transfer learning. In this paper we apply transfer learning to the challenging task of medulloblastoma tumor differentiation. We compare two different CNN models which were previously trained in two different domains (natural and histopathology images). The first CNN is a state-of-the-art approach in computer vision, a large and deep CNN with 16 layers, the Visual Geometry Group (VGG) CNN. The second (IBCa-CNN) is a 2-layer CNN trained for invasive breast cancer tumor classification. Both CNNs are used as visual feature extractors of histopathology image regions of anaplastic and non-anaplastic medulloblastoma tumors from digitized whole-slide images. The features from the two models are used, separately, to train a softmax classifier to discriminate between anaplastic and non-anaplastic medulloblastoma image regions. Experimental results show that the transfer learning approach produces competitive results in comparison with state-of-the-art approaches for IBCa detection. Results also show that features extracted from the IBCa-CNN yield better performance than features extracted from the VGG-CNN. The former obtains 89.8% while the latter obtains 76.6% in terms of average accuracy.
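The final stage described above, a softmax classifier trained on fixed CNN features, can be sketched in a few lines. The 64-dimensional "features" here are synthetic stand-ins for the VGG/IBCa-CNN activations, and the labels are made linearly separable for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 64
X = rng.normal(size=(n, d))                     # stand-in CNN feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)         # synthetic binary labels

# Train a softmax (multinomial logistic) classifier by plain gradient descent
# on the cross-entropy loss; the CNN weights stay frozen in transfer learning.
W = np.zeros((d, 2))
for _ in range(300):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(2)[y]
    W -= 0.1 * X.T @ (p - onehot) / n           # cross-entropy gradient step

acc = (np.argmax(X @ W, axis=1) == y).mean()
```

Because only the small softmax head is trained, this setup needs far less labeled data than training a CNN end-to-end, which is the practical appeal of transfer learning in histopathology.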
Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data
NASA Astrophysics Data System (ADS)
Anirudh, Rushil; Thiagarajan, Jayaraman J.; Bremer, Timo; Kim, Hyojin
2016-03-01
Early detection of lung nodules is currently one of the most effective ways to predict and treat lung cancer. As a result, the past decade has seen a lot of focus on computer-aided diagnosis (CAD) of lung nodules, whose goal is to efficiently detect and segment lung nodules and classify them as being benign or malignant. Effective detection of such nodules remains a challenge due to their arbitrariness in shape, size and texture. In this paper, we propose to employ 3D convolutional neural networks (CNN) to learn highly discriminative features for nodule detection in lieu of hand-engineered ones such as geometric shape or texture. While 3D CNNs are promising tools to model the spatio-temporal statistics of data, they are limited by their need for detailed 3D labels, which can be prohibitively expensive to obtain when compared to 2D labels. Existing CAD methods rely on detailed nodule labels to train models, which is unrealistic and time-consuming. To alleviate this challenge, we propose a solution wherein the expert needs to provide only a point label, i.e., the central pixel of the nodule, and its largest expected size. We use unsupervised segmentation to grow out a 3D region, which is used to train the CNN. Using experiments on the SPIE-LUNGx dataset, we show that the network trained using these weak labels can produce reasonably low false positive rates with a high sensitivity, even in the absence of accurate 3D labels.
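The weak-label idea, growing a 3D region from a single point label within a maximum expected radius, can be sketched as a flood fill over voxels above an intensity threshold. The synthetic volume, threshold, and radius below are invented; the paper's unsupervised segmentation is more involved than this.

```python
import numpy as np
from collections import deque

# Synthetic volume with one bright spherical "nodule" of radius 6.
vol = np.zeros((32, 32, 32))
zz, yy, xx = np.mgrid[:32, :32, :32]
blob = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 6 ** 2
vol[blob] = 1.0

def grow(volume, seed, thresh=0.5, max_radius=10):
    """6-connected flood fill from a point label, capped at max_radius."""
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]
                    and volume[nz, ny, nx] > thresh
                    and (nz - seed[0]) ** 2 + (ny - seed[1]) ** 2
                        + (nx - seed[2]) ** 2 <= max_radius ** 2):
                grown[nz, ny, nx] = True
                q.append((nz, ny, nx))
    return grown

region = grow(vol, (16, 16, 16))   # seed = the expert's point label
```

The grown region then serves as a proxy 3D label for CNN training, trading label precision for a drastic reduction in annotation effort.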
Wang, Hainan; Thiele, Alexander; Pilon, Laurent
2013-11-15
This paper presents a generalized modified Poisson–Nernst–Planck (MPNP) model derived from first principles based on excess chemical potential and Langmuir activity coefficient to simulate electric double-layer dynamics in asymmetric electrolytes. The model accounts simultaneously for (1) asymmetric electrolytes with (2) multiple ion species, (3) finite ion sizes, and (4) Stern and diffuse layers along with Ohmic potential drop in the electrode. It was used to simulate cyclic voltammetry (CV) measurements for binary asymmetric electrolytes. The results demonstrated that the current density increased significantly with decreasing ion diameter and/or increasing valency |z_{i}| of either ion species. By contrast, the ion diffusion coefficients affected the CV curves and capacitance only at large scan rates. Dimensional analysis was also performed, and 11 dimensionless numbers were identified to govern the CV measurements of the electric double layer in binary asymmetric electrolytes between two identical planar electrodes of finite thickness. A self-similar behavior was identified for the electric double-layer integral capacitance estimated from CV measurement simulations. Two regimes were identified by comparing the half cycle period τ_{CV} and the “RC time scale” τ_{RC} corresponding to the characteristic time of ions’ electrodiffusion. For τ_{RC} ≪ τ_{CV}, quasi-equilibrium conditions prevailed and the capacitance was diffusion-independent while for τ_{RC} ≫ τ_{CV}, the capacitance was diffusion-limited. The effect of the electrode was captured by the dimensionless electrode electrical conductivity representing the ratio of characteristic times associated with charge transport in the electrolyte and that in the electrode. The model developed here will be useful for simulating and designing various practical electrochemical, colloidal, and biological systems for a wide range of applications.
A white-box model of S-shaped and double S-shaped single-species population growth.
Kalmykov, Lev V; Kalmykov, Vyacheslav L
2015-01-01
Complex systems may be mechanistically modelled by white-box modelling using logical deterministic individual-based cellular automata. Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on first principles) and grey-box (mixtures of phenomenological and mechanistic models). Most basic ecological models are of black-box type, including the Malthusian, Verhulst, and Lotka-Volterra models. In black-box models, the individual-based (mechanistic) mechanisms of population dynamics remain hidden. Here we mechanistically model the S-shaped and double S-shaped population growth of vegetatively propagated rhizomatous lawn grasses. Using purely logical deterministic individual-based cellular automata we create a white-box model. From a general physical standpoint, the vegetative propagation of plants is an analogue of excitation propagation in excitable media. Using the Monte Carlo method, we investigate the role of the initial positioning of an individual in the habitat. We have investigated mechanisms of single-species population growth limited by habitat size, intraspecific competition, regeneration time and fecundity of individuals under two types of boundary conditions and two levels of fecundity. In addition, we have compared the S-shaped and J-shaped population growth. We consider this white-box modelling approach as a method of artificial intelligence which works as automatic hyper-logical inference from the first principles of the studied subject. This approach is promising for direct mechanistic insight into the nature of complex systems. PMID:26038717
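The white-box idea can be illustrated with a minimal deterministic cellular automaton: on a bounded habitat, each occupied site vegetatively colonizes its four neighbours at every step. The grid size and the one-step regeneration time are hypothetical simplifications of the model described above; the point is that purely local, logical rules already yield an S-shaped occupancy curve without any fitted logistic equation.

```python
import numpy as np

n = 21
grid = np.zeros((n, n), dtype=bool)
grid[n // 2, n // 2] = True                 # a single founding individual

occupancy = [int(grid.sum())]
for _ in range(25):
    spread = grid.copy()
    spread[1:, :]  |= grid[:-1, :]          # colonize the von Neumann
    spread[:-1, :] |= grid[1:, :]           # (4-cell) neighbourhood
    spread[:, 1:]  |= grid[:, :-1]
    spread[:, :-1] |= grid[:, 1:]
    grid = spread
    occupancy.append(int(grid.sum()))
# occupancy rises slowly, accelerates, then saturates at the habitat size.
```

Adding a regeneration delay or a second colonization wave to such rules is what produces the double S-shaped curves studied in the paper.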
Testing of and model development for double-walled thermal tubular
Satchwell, R.M.; Johnson, L.A. Jr.
1992-08-01
Insulated tubular products have become essential for use in steam injection projects. In a steam injection project, steam is created at the surface by either steam boilers or generators. During this process, steam travels from a boiler through surface lines to the wellhead, down the wellbore to the sandface, and into the reservoir. For some projects to be an economic success, costs must be reduced and oil recoveries must be increased by reducing heat losses in the wellbore. With reduced heat losses, steam generation costs are lowered and higher quality steam can be injected into the formation. To address this need, work under this project consisted of the design and construction of a thermal flow loop, testing of a double-walled tubular product manufactured by Inter-Mountain Pipe Company, and the development and verification of a thermal hydraulic numerical simulator for steam injection. Four different experimental configurations of the double-walled pipe were tested. These configurations included: (1) bare pipe case, (2) bare pipe case with an applied annular vacuum, (3) insulated annular pipe case, and (4) insulated annular pipe case with an applied annular vacuum. Both the pipe body and coupling were tested with each configuration. The results of the experimental tests showed that the Inter-Mountain Pipe Company double-walled pipe body achieved a 98 percent reduction in heat loss when insulation was applied to the annular portion of the pipe. The application of insulation to the annular portion of the coupling reduced the heat losses by only 6 percent. In tests that specified the use of a vacuum in the annular portion of the pipe, leaks were detected and the vacuum could not be held.
Double-blind evaluation of the DKL LifeGuard Model 2
Murray, D.W.; Spencer, F.W.; Spencer, D.D.
1998-05-01
On March 20, 1998, Sandia National Laboratories performed a double-blind test of the DKL LifeGuard human presence detector and tracker. The test was designed to allow the device to search for individuals well within the product's published operational parameters. The Test Operator of the DKL LifeGuard was provided by the manufacturer and was a high-ranking member of DKL management. The test was developed and implemented to verify the performance of the device as specified by the manufacturer. The device failed to meet its published specifications and it performed no better than random chance.
Sources of DNA Double-Strand Breaks and Models of Recombinational DNA Repair
Mehta, Anuja; Haber, James E.
2014-01-01
DNA is subject to many endogenous and exogenous insults that impair DNA replication and proper chromosome segregation. DNA double-strand breaks (DSBs) are one of the most toxic of these lesions and must be repaired to preserve chromosomal integrity. Eukaryotes are equipped with several different, but related, repair mechanisms involving homologous recombination, including single-strand annealing, gene conversion, and break-induced replication. In this review, we highlight the chief sources of DSBs and crucial requirements for each of these repair processes, as well as the methods to identify and study intermediate steps in DSB repair by homologous recombination. PMID:25104768
Boore, David M.; Di Alessandro, Carola; Abrahamson, Norman A.
2014-01-01
The stochastic method of simulating ground motions requires the specification of the shape and scaling with magnitude of the source spectrum. The spectral models commonly used are either single-corner-frequency or double-corner-frequency models, but the latter have no flexibility to vary the high-frequency spectral levels for a specified seismic moment. Two generalized double-corner-frequency ω2 source spectral models are introduced, one in which two spectra are multiplied together, and another where they are added. Both models have a low-frequency dependence controlled by the seismic moment, and a high-frequency spectral level controlled by the seismic moment and a stress parameter. A wide range of spectral shapes can be obtained from these generalized spectral models, which makes them suitable for inversions of data to obtain spectral models that can be used in ground-motion simulations in situations where adequate data are not available for purely empirical determinations of ground motions, as in stable continental regions. As an example of the use of the generalized source spectral models, data from up to 40 stations from seven events, plus response spectra at two distances and two magnitudes from recent ground-motion prediction equations, were inverted to obtain the parameters controlling the spectral shapes, as well as a finite-fault factor that is used in point-source, stochastic-method simulations of ground motion. The fits to the data are comparable to or even better than those from finite-fault simulations, even for sites close to large earthquakes.
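The two ways of combining corner frequencies described above can be sketched numerically. The exact exponents and corner-frequency scaling of the published generalized models are not reproduced here; each factor below is a generic omega-squared corner term, so the shapes are qualitative only.

```python
import numpy as np

def corner(f, fc):
    """Generic single-corner term: flat at low f, f^-2 falloff above fc."""
    return 1.0 / (1.0 + (f / fc) ** 2)

def multiplied(f, m0, fa, fb):
    # Product of two corner terms, each contributing half the total falloff,
    # so the high-frequency decay is still omega-squared overall.
    return m0 * np.sqrt(corner(f, fa)) * np.sqrt(corner(f, fb))

def added(f, m0, fa, fb, eps=0.5):
    # Weighted sum of two single-corner omega-squared spectra; the weight
    # eps (invented here) shifts spectral level between the two corners.
    return m0 * ((1 - eps) * corner(f, fa) + eps * corner(f, fb))

f = np.logspace(-2, 2, 400)
m0, fa, fb = 1.0, 0.1, 1.0        # illustrative moment and corner frequencies
Sm = multiplied(f, m0, fa, fb)
Sa = added(f, m0, fa, fb)
```

Both forms tend to m0 at low frequency, while the intermediate-band level between the corners differs, which is the flexibility the generalized models exploit when fitting high-frequency spectral levels independently of seismic moment.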
Rohmer, Thierry; Lang, Christina; Gärtner, Wolfgang; Hughes, Jon; Matysik, Jörg
2010-01-01
Difference patterns of (13)C NMR chemical shifts for the protonation of a free model compound in organic solution, as reported in the literature (M. Stanek, K. Grubmayr [1998] Chem. Eur. J. 4, 1653-1659), were compared with changes in the protonation state occurring during holophytochrome assembly from phycocyanobilin (PCB) and the apoprotein. Both processes induce identical changes in the NMR signals, indicating that the assembly process is linked to protonation of the chromophore, yielding a cationic cofactor in a heterogeneous, quasi-liquid protein environment. The identity of both difference patterns implies that the protonation of a model compound in solution causes a partial stretching of the geometry of the macrocycle as found in the protein. In fact, the similarity of the difference pattern within the bilin family for identical chemical transformations represents a basis for future theoretical analysis. On the other hand, the change of the (13)C NMR chemical shift pattern upon the Pr --> Pfr photoisomerization is very different from that of the free model compound upon ZZZ --> ZZE photoisomerization. Hence, the character of the double-bond isomerization in phytochrome is essentially different from that of a classical photoinduced double-bond isomerization, emphasizing the role of the protein environment in the modulation of this light-induced process. PMID:20492561
NASA Astrophysics Data System (ADS)
Cheltsov, I. A.
2004-10-01
For a singular double cover of P^3 ramified in a sextic with double line, its birational maps into Fano 3-folds with canonical singularities, elliptic fibrations, and fibrations on surfaces of Kodaira dimension zero are described.
Schuurs, A H B; van Loveren, C
2002-04-01
Double teeth are not really rare, but it is still enigmatic why and how they develop. Based upon the clinical, morphological and anatomical appearance and the number of teeth in mouths with double teeth, double teeth are labelled as products of 'fusion' or 'clefting', but criteria for attaching such etiological names are lacking. It is assumed that heredity is involved in the development of double teeth. An attempt is therefore made to explain why only one of a homozygotic twin pair had a double tooth. PMID:11982209
Tosun, İsmail
2012-01-01
The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests were analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model; each model resulted in a coefficient of determination (R2) above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients. PMID:22690177
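The physisorption inference drawn from the D-R isotherm rests on the mean free energy relation E = 1/√(2K), where K is the fitted D-R constant. A minimal sketch (the K value below is illustrative, chosen only to land near the reported ~7 kJ/mol, and is not the fitted constant from the study):

```python
import math

def dr_mean_free_energy(K):
    # mean free energy of adsorption from the D-R constant K (mol^2/J^2):
    # E = 1 / sqrt(2K), returned in J/mol
    return 1.0 / math.sqrt(2.0 * K)

K = 1.0e-8  # mol^2/J^2, illustrative value
E_kJ = dr_mean_free_energy(K) / 1000.0  # J/mol -> kJ/mol
# E below about 8 kJ/mol is conventionally read as physisorption;
# 8-16 kJ/mol is usually attributed to ion exchange
```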
Sharples, Adam P; Al-Shanti, Nasser; Lewis, Mark P; Stewart, Claire E
2011-12-01
Ageing skeletal muscle displays declines in size, strength, and functional capacity. Given the acknowledged role that the systemic environment plays in reduced regeneration (Conboy et al. [2005] Nature 433: 760-764), the role of resident satellite cells (termed myoblasts upon activation) is relatively dismissed, yet multiple cellular divisions in vivo throughout the lifespan could also impact on muscular deterioration. Using a model of multiple population doublings (MPD) in vitro thus provided a system in which to investigate the direct impact of extensive cell duplications on muscle cell behavior. C2C12 mouse skeletal myoblasts (CON) were used fresh or following 58 population doublings (MPD). As a result of multiple divisions, reduced morphological and biochemical (creatine kinase, CK) differentiation were observed. Furthermore, MPD cells had significantly more cells in the S phase and fewer cells in the G1 phase of the cell cycle versus CON following serum withdrawal. These results suggest that continued cycling rather than G1 exit, and thus reduced differentiation (myotube atrophy), occurs in MPD muscle cells. These changes were underpinned by significant reductions in transcript expression of IGF-I and the myogenic regulatory factors myoD and myogenin, together with elevated IGFBP5. Signaling studies showed that decreased differentiation in MPD was associated with decreased phosphorylation of Akt, and with later increased phosphorylation of JNK1/2. Chemical inhibition of JNK1/2 (SP600125) in MPD cells increased IGF-I expression (non-significantly) but did not enhance differentiation. This study provides a potential model and molecular mechanisms for deterioration in differentiation capacity in skeletal muscle cells as a consequence of multiple population doublings that would potentially contribute to the ageing process. PMID:21826704
de Roos, Albert DG
2007-01-01
Background: It is generally believed that life first evolved from single-stranded RNA (ssRNA) that both stored genetic information and catalyzed the reactions required for self-replication. Presentation of the hypothesis: By modeling early genome evolution on the engineering paradigm design-by-contract, an alternative scenario is presented in which life started with the appearance of double-stranded RNA (dsRNA) as an informational storage molecule while catalytic single-stranded RNA was derived from this dsRNA template later in evolution. Testing the hypothesis: It was investigated whether this scenario could be implemented mechanistically by starting with abiotic processes. Double-stranded RNA could be formed abiotically by hybridization of oligoribonucleotides that are subsequently non-enzymatically ligated into a double-stranded chain. Thermal cycling driven by the diurnal temperature cycles could then replicate this dsRNA when strands of dsRNA separate and later rehybridize and ligate to reform dsRNA. A temperature-dependent partial replication of specific regions of dsRNA could produce the first template-based generation of catalytic ssRNA, similar to the developmental gene transcription process. Replacement of these abiotic processes by enzymatic processes would guarantee functional continuity. Further transition from a dsRNA to a dsDNA world could be based on minor mutations in template and substrate recognition sites of an RNA polymerase and would leave all existing processes intact. Implications of the hypothesis: Modeling evolution on a design pattern, the 'dsRNA first' hypothesis can provide an alternative mechanistic evolutionary scenario for the origin of our genome that preserves functional continuity. Reviewers: This article was reviewed by Anthony Poole, Eugene Koonin and Eugene Shakhnovich. PMID:17466073
Stacchiotti, Alessandra; Favero, Gaia; Giugno, Lorena; Lavazza, Antonio; Reiter, Russel J; Rodella, Luigi Fabrizio; Rezzani, Rita
2014-01-01
Obesity is a common and complex health problem which impacts crucial organs; it is also considered an independent risk factor for chronic kidney disease. Few studies have analyzed the consequences of obesity in the renal proximal convoluted tubules, which are the major tubules involved in reabsorptive processes. For optimal performance of the kidney, energy is primarily provided by mitochondria. Melatonin, an indoleamine and antioxidant, has been identified in mitochondria, and there is considerable evidence regarding its essential role in the prevention of oxidative mitochondrial damage. In this study we evaluated the mechanism(s) of mitochondrial alterations in an animal model of obesity (ob/ob mice) and describe the beneficial effects of melatonin treatment on mitochondrial morphology and dynamics as influenced by mitofusin-2 and the intrinsic apoptotic cascade. Melatonin dissolved in 1% ethanol was added to the drinking water from postnatal weeks 5-13; the calculated dose of melatonin intake was 100 mg/kg body weight/day. Compared to control mice, obesity-related morphological alterations were apparent in the proximal tubules, which contained round mitochondria with irregular, short cristae, and cells with an elevated apoptotic index. Melatonin supplementation in obese mice changed mitochondrial shape and cristae organization in the proximal tubules and enhanced mitofusin-2 expression, which in turn modulated the progression of the mitochondria-driven intrinsic apoptotic pathway. These changes possibly aid in reducing renal failure. The melatonin-mediated changes indicate its potential protective use against renal morphological damage and dysfunction associated with obesity and metabolic disease.
Convolution effect on TCR log response curve and the correction method for it
NASA Astrophysics Data System (ADS)
Chen, Q.; Liu, L. J.; Gao, J.
2016-09-01
Through-casing resistivity (TCR) logging has been successfully used in production wells for the dynamic monitoring of oil pools and the distribution of residual oil, but its limited vertical resolution has reduced its efficiency in identifying thin beds. The vertical resolution is limited by the distortion of the vertical response of TCR logging, which was studied in this work. It was found that the vertical response curve of TCR logging is the convolution of the true formation resistivity with the convolution function of the TCR logging tool. Due to this convolution effect, the measurement error at thin beds can reach 30% or more, so the information from a thin bed is likely to be masked. The convolution function of the TCR logging tool was obtained in both continuous and discrete form in this work. Through a modified Lyle-Kalman deconvolution method, the true formation resistivity can be optimally estimated, so this inverse algorithm can correct the error caused by the convolution effect and thus improve the vertical resolution of the TCR logging tool for identifying thin beds.
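The convolution picture above can be illustrated with a toy discrete model. The kernel below is a hypothetical tool response, not the actual TCR convolution function, and plain Tikhonov-regularized least squares stands in for the modified Lyle-Kalman method:

```python
import numpy as np

# hypothetical, normalized tool response; the real TCR convolution
# function must be derived from the tool physics as in the paper
kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

def forward(profile, kernel):
    # measured log = true formation profile convolved with the tool response
    return np.convolve(profile, kernel, mode="same")

def deconvolve_ls(measured, kernel, lam=1e-6):
    # regularized least squares: build the convolution matrix column by
    # column, then solve (A^T A + lam I) x = A^T y
    n = len(measured)
    A = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        A[:, i] = np.convolve(e, kernel, mode="same")
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ measured)

# thin resistive bed of 50 ohm-m in a 10 ohm-m background
true = np.full(21, 10.0)
true[10] = 50.0
measured = forward(true, kernel)          # the bed reads only 26 ohm-m
restored = deconvolve_ls(measured, kernel)
```

With this kernel the thin bed is underestimated by nearly half in the raw log, while the inversion restores it, mirroring the resolution improvement described in the abstract.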
NASA Astrophysics Data System (ADS)
Yu, C.; Xue, X.; Dou, X.; Wu, J.
2015-12-01
The adjustment of the gravity wave parameterization associated with model convection has made possible the spontaneous generation of the quasi-biennial oscillation (QBO) in the Whole Atmosphere Community Climate Model (WACCM 4.0), although there are some mismatches when compared with observations. The parameterization is based on Lindzen's linear saturation theory, which can better describe inertia-gravity waves (IGW) by taking the Coriolis effects into consideration. In this work we improve the parameterization by importing a more realistic double-Gaussian-distribution IGW spectrum, calculated from tropical radiosonde observations. A series of WACCM simulations are performed to determine the relationship between the period and amplitude of equatorial zonal wind oscillations and the features of the parameterized IGW. All of these simulations are capable of generating equatorial wind oscillations in the stratosphere using the standard spatial resolution settings. The period of the oscillation is inversely associated with the strength of the IGW forcing, but the central values of the double Gaussian distribution of IGW influence both the magnitude and the period of the oscillation. In fact, the eastward and westward IGWs affect the amplitude of the QBO wind, respectively, and the strength of the IGW forcing determines the acceleration rate of the QBO wind. Furthermore, stronger IGW forcing can lead to deeper propagation of the QBO phase, which can extend the lowest altitude of constant zonal wind amplitudes to about 100 hPa.
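The double-Gaussian source spectrum can be sketched as two opposing lobes in phase-speed space. The central phase speeds, width, and amplitude below are illustrative placeholders, not the radiosonde-derived values used in the study:

```python
import numpy as np

def igw_momentum_flux(c, c_east=25.0, c_west=-25.0, width=10.0, amp=1.0):
    # double-Gaussian source spectrum versus zonal phase speed c (m/s):
    # a lobe of eastward-propagating waves carrying positive momentum flux
    # and a lobe of westward waves carrying negative flux
    return amp * (np.exp(-((c - c_east) / width) ** 2)
                  - np.exp(-((c - c_west) / width) ** 2))
```

Per the abstract, shifting the central values changes both the QBO amplitude and period, while the overall forcing strength (amp here) sets the acceleration rate of the QBO wind.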
NASA Astrophysics Data System (ADS)
Bende, Attila; Bogár, Ferenc; Ladik, János
2013-04-01
Using the Hartree-Fock crystal orbital method, band structures of poly(G˜-C˜) and poly(A˜-T˜) were calculated (G˜, etc. denotes a nucleotide) including water molecules and Na+ ions. Due to the close packing of DNA in the nucleosomes, the motion of the double helix and the water molecules around it is strongly restricted, and therefore the band picture can be used. The mobilities were calculated from the highest filled bands. The hole mobilities increase with decreasing temperature. They are of the same order of magnitude as those of poly(A˜) and poly(T˜). For poly(G˜) the result is ~5 times larger than in the poly(G˜-C˜) case.
Model-free test of local-density mean-field behavior in electric double layers
NASA Astrophysics Data System (ADS)
Giera, Brian; Henson, Neil; Kober, Edward M.; Squires, Todd M.; Shell, M. Scott
2013-07-01
We derive a self-similarity criterion that must hold if a planar electric double layer (EDL) can be captured by a local-density approximation (LDA), without specifying any specific LDA. Our procedure generates a similarity coordinate from EDL profiles (measured or computed), and all LDA EDL profiles for a given electrolyte must collapse onto a master curve when plotted against this similarity coordinate. Noncollapsing profiles imply the inability of any LDA theory to capture EDLs in that electrolyte. We demonstrate our approach with molecular simulations, which reveal dilute electrolytes to collapse onto a single curve, and semidilute ions to collapse onto curves specific to each electrolyte, except where size-induced correlations arise.
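The collapse test itself is easy to phrase generically. The sketch below assumes the similarity coordinate has already been computed for each EDL profile (the construction in the paper is not reproduced here); it only measures how well the rescaled profiles fall onto one master curve:

```python
import numpy as np

def collapse_residual(curves):
    # curves: list of (s, y) arrays, where s is the similarity coordinate
    # computed from each profile and y the rescaled quantity. Interpolate
    # every curve onto a shared grid over the overlapping s-range and
    # return the maximum vertical spread; a near-zero spread is consistent
    # with some LDA, while a large spread rules every LDA out.
    s_lo = max(s.min() for s, _ in curves)
    s_hi = min(s.max() for s, _ in curves)
    grid = np.linspace(s_lo, s_hi, 200)
    ys = np.array([np.interp(grid, s, y) for s, y in curves])
    return float(np.max(ys.max(axis=0) - ys.min(axis=0)))
```

Applied to the simulation data described above, dilute-electrolyte profiles would give a small residual on a single master curve, while the semidilute profiles collapse only within each electrolyte.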
Two Fermions in a Double Well: Exploring a Fundamental Building Block of the Hubbard Model
NASA Astrophysics Data System (ADS)
Murmann, Simon; Bergschneider, Andrea; Klinkhamer, Vincent M.; Zürn, Gerhard; Lompe, Thomas; Jochim, Selim
2015-02-01
We have prepared two ultracold fermionic atoms in an isolated double-well potential and obtained full control over the quantum state of this system. In particular, we can independently control the interaction strength between the particles, their tunneling rate between the wells and the tilt of the potential. By introducing repulsive (attractive) interparticle interactions we have realized the two-particle analog of a Mott-insulating (charge-density-wave) state. We have also spectroscopically observed how second-order tunneling affects the energy of the system. This work realizes the first step of a bottom-up approach to deterministically create a single-site addressable realization of a ground-state Fermi-Hubbard system.
Sabtaji, Agung E-mail: agung.sabtaji@bmkg.go.id; Nugraha, Andri Dian
2015-04-24
The West Papua region has fairly high seismicity due to its tectonic setting and many inland faults. In addition, the region has unique and complex tectonic conditions, which lead to a high potential seismic hazard. Precise earthquake hypocenter locations are very important, as they provide high-quality earthquake parameter information and constraints on the subsurface structure of this region to society. We derived a 1-D P-wave velocity model using the earthquake data catalog from BMKG for April 2009 up to March 2014 around the West Papua region. The obtained 1-D seismic velocity model was then used as input for improving hypocenter locations using the double-difference method. The relocated hypocenters show fairly clearly the pattern of intraslab earthquakes beneath the New Guinea Trench (NGT). The relocated hypocenters related to the inland faults are also observed to cluster more tightly around the faults.
NASA Technical Reports Server (NTRS)
Hyer, M. W.
1980-01-01
The determination of the stress distribution in the inner lap of double-lap, double-bolt joints using photoelastic models of the joint is discussed. The principal idea is to fabricate the inner lap of a photoelastic material and to use a photoelastically insensitive material for the two outer laps. With this setup, polarized light transmitted through the stressed model responds principally to the stressed inner lap. The model geometry, the procedures for making and testing the model, and test results are described.
Double Shell Tank (DST) Hydroxide Depletion Model for Carbon Dioxide Absorption
Ogden, D. M.; Kirch, N. W.
2007-10-31
This document develops a supernatant hydroxide ion depletion model based on mechanistic principles, specifically a mechanistic model of carbon dioxide absorption. The report also benchmarks the model against historical tank supernatant hydroxide data and vapor space carbon dioxide data. A comparison of the newly generated mechanistic model with previously applied empirical hydroxide depletion equations is also performed.
NASA Astrophysics Data System (ADS)
Chiang, T. K.; Chen, M. L.
2007-03-01
Based on the fully two-dimensional (2D) solution of Poisson's equation in both the silicon film and the insulator layer, a compact analytical threshold voltage model that accounts for the fringing field effect in short-channel symmetrical double-gate (SDG) MOSFETs has been developed. Using the new model, an analysis combining fringing-induced-barrier-lowering (FIBL)-enhanced short-channel effects and high-k gate dielectrics assesses their overall impact on SDG MOSFET scaling. It is found that for the same equivalent oxide thickness, a gate insulator with a high-k dielectric constant, which maintains a larger characteristic length, allows less design space than SiO2 to sustain the same FIBL-induced threshold voltage degradation.
NASA Astrophysics Data System (ADS)
Niu, Yong; Su, Weiguo
2016-06-01
A line spring model is developed for analyzing the fracture problem of cracked metallic plate repaired with the double-sided adhesively bonded composite patch. The restraining action of the bonded patch is modeled as continuous distributed linear springs bridging the crack faces provided that the cracked plate is subjected to extensional load. The effective spring constant is determined from 1-D bonded joint theory. The hyper-singular integral equation (HSIE), which can be solved using the second kind Chebyshev polynomial expansion method, is applied to determine the crack opening displacements (COD) and the crack tip stress intensity factors (SIF) of the repaired cracked plate. The numerical result of SIF for the crack-tip correlates very well with the finite element (FE) computations based on the virtual crack closure technique (VCCT). The present analysis approaches and mathematical techniques are critical to the successful design, analysis and implementation of crack patching.
Disrupted bandcount doubling in an AC-DC boost PFC circuit modeled by a time varying map
NASA Astrophysics Data System (ADS)
Avrutin, Viktor; Zhusubaliyev, Zhanybai T.; El Aroudi, Abdelali; Fournier-Prunaret, Danièle; Garcia, Germain; Mosekilde, Erik
2016-02-01
Power factor correction converters are used in many applications as AC-DC power supplies aiming at maintaining a near-unity power factor. Systems of this type are known to exhibit nonlinear phenomena such as sub-harmonic oscillations and chaotic regimes that cannot be described by traditional averaged models. In this paper, we derive a time-varying discrete-time map modeling the behavior of a power factor correction AC-DC boost converter. This map is derived in closed form and is able to faithfully reproduce the system behavior under realistic conditions. In the chaotic regime the map exhibits a sequence of bifurcations similar to a bandcount doubling cascade on the low-frequency scale. However, the observed scenario appears in some sense incomplete, with gaps in the bifurcation diagram whose appearance, to our knowledge, has never been reported before. We show that these gaps are caused by high-frequency oscillations.
Li, Haiyan; He, Yijun; Wang, Wenguang
2009-01-01
The convolution between co-polarization amplitude-only data is studied to improve ship detection performance. The different statistical behaviors of ships and the surrounding ocean are characterized by a two-dimensional convolution function (2D-CF) between different polarization channels. The convolution value of the ocean decreases relative to the initial data, while that of ships increases; therefore the contrast of ships to ocean is increased. The opposite variation trends of ocean and ships can distinguish high-intensity ocean clutter from ships' signatures. The new criterion can generally avoid the false detections made by a constant false alarm rate detector. Our new ship detector is compared with other polarimetric approaches, and the results confirm the robustness of the proposed method.
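One plausible reading of a cross-channel statistic like the 2D-CF is a windowed product of the two co-polarization amplitude images. The sketch below is a hedged toy version under that assumption; the exact definition in the paper may differ:

```python
import numpy as np

def cross_channel_cf(hh, vv, win=3):
    # windowed sum of the product of the two co-polarization amplitude
    # images: a local cross-channel correlation. Coherent bright returns
    # present in both channels (ships) reinforce, while weak ocean
    # amplitudes shrink under the product.
    pad = win // 2
    hhp = np.pad(hh, pad)
    vvp = np.pad(vv, pad)
    out = np.zeros(hh.shape)
    for i in range(hh.shape[0]):
        for j in range(hh.shape[1]):
            out[i, j] = np.sum(hhp[i:i + win, j:j + win]
                               * vvp[i:i + win, j:j + win])
    return out

# toy scene: weak ocean amplitude in both channels, one bright ship pixel
hh = np.full((9, 9), 0.5)
vv = np.full((9, 9), 0.5)
hh[4, 4] = 5.0
vv[4, 4] = 5.0
cf = cross_channel_cf(hh, vv)  # the ship stands out against the ocean
```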
Wu, Xuecheng; Wu, Yingchun; Yang, Jing; Wang, Zhihua; Zhou, Binwu; Gréhan, Gérard; Cen, Kefa
2013-05-20
Application of the modified convolution method to reconstruct digital inline holograms of particles illuminated by an elliptical Gaussian beam is investigated. Based on an analysis of the formation of the particle hologram using the Collins formula, the convolution method is modified to compensate for the astigmatism by adding two scaling factors. Both simulated and experimental holograms of transparent droplets and opaque particles are used to test the algorithm, and the reconstructed images are compared with those from FRFT reconstruction. Results show that the modified convolution method can accurately reconstruct the particle image. This method has the advantage that the reconstructed images at different depth positions have the same size and resolution as the hologram. This work shows that digital inline holography has great potential for particle diagnostics in curved containers.
Xie, Tao; Qin, Zhi-Zhen; Zhou, Rui; Zhao, Ying; Du, Guan-hua
2015-04-01
A double-target high-throughput screening model for xanthine oxidase inhibitors and superoxide anion scavengers was established. In the xanthine oxidase reaction system, WST-1 serves as the probe for superoxide anion generation, and the product uric acid serves as the indicator of xanthine oxidase activity. Using a SpectraMax M5 multi-mode microplate reader, the changes in the concentrations of these indicators were monitored, and the factors influencing this reaction system were studied to establish the high-throughput screening model. The model was then validated with positive control drugs. In the final reaction system, the volume is 50 μL and the concentrations of xanthine oxidase, xanthine and WST-1 are 4 mU/mL, 250 μmol/L and 100 μmol/L, respectively. The Z'-factor of the model for xanthine oxidase inhibitors is 0.5374 with an S/N of 47.5199; the Z'-factor of the model for superoxide anion scavengers is 0.5074 with an S/N of 5.3889. This model for xanthine oxidase inhibitors and superoxide anion scavengers shows good stability, low reagent consumption and good repeatability, and it can be widely applied in high-throughput screening research.
Perez-Carrasco, Jose Antonio; Acha, Begona; Serrano, Carmen; Camunas-Mesa, Luis; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2010-04-01
Address-event representation (AER) is an emergent hardware technology which shows a high potential for providing in the near future a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level event-based frameless manner. As a result, vision processing is practically simultaneous to vision sensing, since there is no need to wait for sensing full frames. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolutional chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in a near future we may witness the appearance of large scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large scale networks using a custom made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.
Punctured Parallel and Serial Concatenated Convolutional Codes for BPSK/QPSK Channels
NASA Technical Reports Server (NTRS)
Acikel, Omer Fatih
1999-01-01
As available bandwidth for communication applications becomes scarce, bandwidth-efficient modulation and coding schemes become ever more important. Since their discovery in 1993, turbo codes (parallel concatenated convolutional codes) have been the center of attention in the coding community because of their bit error rate performance near the Shannon limit. Serial concatenated convolutional codes have also been shown to be as powerful as turbo codes. In this dissertation, we introduce algorithms for designing bandwidth-efficient rate r = k/(k + 1), k = 2, 3, ..., 16, parallel and rate 3/4, 7/8, and 15/16 serial concatenated convolutional codes via puncturing for BPSK/QPSK (Binary Phase Shift Keying/Quadrature Phase Shift Keying) channels. Both parallel and serial concatenated convolutional codes initially have a steep bit error rate versus signal-to-noise ratio slope (called the "cliff region"). However, this steep slope changes to a moderate slope with increasing signal-to-noise ratio, where the slope is characterized by the weight spectrum of the code. The region after the cliff region is called the "error rate floor", which dominates the behavior of these codes at moderate to high signal-to-noise ratios. Our goal is to design high-rate parallel and serial concatenated convolutional codes while minimizing the error rate floor effect. The design algorithm includes an interleaver enhancement procedure and finds the polynomial sets (only for parallel concatenated convolutional codes) and the puncturing schemes that achieve the lowest bit error rate performance around the floor for the code rates of interest.
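Puncturing itself is simple to illustrate. The sketch below uses the classic (7, 5) octal rate-1/2 constituent encoder and a generic textbook puncturing pattern; the dissertation's optimized patterns and polynomial sets are not reproduced here:

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    # rate-1/2 feedforward convolutional encoder with the classic
    # (7, 5) octal generators; outputs are interleaved g1, g2, g1, g2, ...
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

def puncture(coded, pattern):
    # delete the coded bits where the repeating pattern holds a 0;
    # pattern [1, 1, 0, 1, 1, 0] keeps 4 of every 6 bits, turning the
    # rate-1/2 mother code into a rate-3/4 code
    reps = -(-len(coded) // len(pattern))
    mask = (pattern * reps)[:len(coded)]
    return [c for c, keep in zip(coded, mask) if keep]
```

The design problem the dissertation addresses is choosing which bits to delete so that the punctured code's weight spectrum, and hence its error floor, is as favorable as possible; this sketch makes no such optimization.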
ERIC Educational Resources Information Center
Cepeda-Cuervo, Edilberto; Núñez-Antón, Vicente
2013-01-01
In this article, a proposed Bayesian extension of the generalized beta spatial regression models is applied to the analysis of the quality of education in Colombia. We briefly revise the beta distribution and describe the joint modeling approach for the mean and dispersion parameters in the spatial regression models' setting. Finally, we…
Gieseking, Rebecca L; Ratner, Mark A; Schatz, George C
2016-07-01
Quantum mechanical studies of Ag nanoclusters have shown that plasmonic behavior can be modeled in terms of excited states where collectivity among single excitations leads to strong absorption. However, new computational approaches are needed to provide understanding of plasmonic excitations beyond the single-excitation level. We show that semiempirical INDO/CI approaches with appropriately selected parameters reproduce the TD-DFT optical spectra of various closed-shell Ag clusters. The plasmon-like states with strong optical absorption comprise linear combinations of many singly excited configurations that contribute additively to the transition dipole moment, whereas all other excited states show significant cancellation among the contributions to the transition dipole moment. The computational efficiency of this approach allows us to investigate the role of double excitations at the INDO/SDCI level. The Ag cluster ground states are stabilized by slight mixing with doubly excited configurations, but the plasmonic states generally retain largely singly excited character. The consideration of double excitations in all cases improves the agreement of the INDO/CI absorption spectra with TD-DFT, suggesting that the SDCI calculation effectively captures some of the ground-state correlation implicit in DFT. These results provide the first evidence to support the commonly used assumption that single excitations are in many cases sufficient to describe the optical spectra of plasmonic excitations quantum mechanically.
Combined inhibition of MEK and Aurora A kinase in KRAS/PIK3CA double-mutant colorectal cancer models
Davis, S. Lindsey; Robertson, Kelli M.; Pitts, Todd M.; Tentler, John J.; Bradshaw-Pierce, Erica L.; Klauck, Peter J.; Bagby, Stacey M.; Hyatt, Stephanie L.; Selby, Heather M.; Spreafico, Anna; Ecsedy, Jeffrey A.; Arcaroli, John J.; Messersmith, Wells A.; Tan, Aik Choon; Eckhardt, S. Gail
2015-01-01
Aurora A kinase and MEK inhibitors induce different, and potentially complementary, effects on the cell cycle of malignant cells, suggesting a rational basis for utilizing these agents in combination. In this work, the combination of an Aurora A kinase and MEK inhibitor was evaluated in pre-clinical colorectal cancer models, with a focus on identifying a subpopulation in which it might be most effective. Increased synergistic activity of the drug combination was identified in colorectal cancer cell lines with concomitant KRAS and PIK3CA mutations. Anti-proliferative effects were observed upon treatment of these double-mutant cell lines with the drug combination, and tumor growth inhibition was observed in double-mutant human tumor xenografts, though effects were variable within this subset. Additional evaluation suggests that degree of G2/M delay and p53 mutation status affect apoptotic activity induced by combination therapy with an Aurora A kinase and MEK inhibitor in KRAS and PIK3CA mutant colorectal cancer. Overall, in vitro and in vivo testing was unable to identify a subset of colorectal cancer that was consistently responsive to the combination of a MEK and Aurora A kinase inhibitor. PMID:26136684