Domoshnitsky, Alexander; Maghakyan, Abraham; Berezansky, Leonid
2017-01-01
In this paper a method is proposed for studying the stability of the equation [Formula: see text], which does not explicitly include the first derivative. We demonstrate that although the corresponding ordinary differential equation [Formula: see text] is not exponentially stable, the delay equation can be exponentially stable.
Rotating flow of a nanofluid due to an exponentially stretching surface with suction
NASA Astrophysics Data System (ADS)
Salleh, Siti Nur Alwani; Bachok, Norfifah; Arifin, Norihan Md
2017-08-01
An analysis of the rotating nanofluid flow past an exponentially stretched surface in the presence of suction is presented in this work. Three different types of nanoparticles, namely copper, titania and alumina, are considered. The governing partial differential equations are reduced to a system of ordinary differential equations using similarity transformations in exponential form, and the resulting system is solved numerically with a shooting method in the Maple software. The physical effects of the rotation, suction and nanoparticle volume fraction parameters on the rotating flow and heat transfer phenomena are investigated and described in detail through graphs. Dual solutions are found to appear when the governing parameters reach a certain range.
1989-08-01
Random variables from the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -λ ln(…) [the remainder of the formula is garbled in the source]. Random variables from the conditional Weibull distribution are likewise generated using the inverse transform method, and normal variates are generated using a standard normal transformation together with the inverse transform method. (Page B-3, Appendix 3, "Distributions Supported by the Model"; the final generation step, "(1) Generate Y …", is truncated in the source.)
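The inverse transform steps the (partially garbled) record above describes can be sketched directly. The samplers below are the standard unconditional exponential and Weibull forms, used here as an assumption since the report's conditional formulas are not recoverable from the source.

```python
import math
import random

def exponential_inverse_transform(lam, u=None):
    """Sample from Exp(rate=lam) via X = -ln(1 - U)/lam with U ~ U(0,1)."""
    if u is None:
        u = random.random()          # step (1): generate U ~ U(0,1)
    return -math.log(1.0 - u) / lam  # step (2): invert the exponential CDF

def weibull_inverse_transform(scale, shape, u=None):
    """Sample from Weibull(scale, shape) via X = scale*(-ln(1 - U))**(1/shape)."""
    if u is None:
        u = random.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)
```

Both samplers invert the cumulative distribution function in closed form, which is what makes the inverse transform method attractive for these two distributions.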
Bishai, David; Opuni, Marjorie
2009-01-01
Background: Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods: We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box-Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best-fitting value of λ for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same-year GDP per capita against Box-Cox transformed models. Results: Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is best modelled neither as a logarithmic nor as a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion: The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
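The Box-Cox fitting step can be illustrated with SciPy. The IMR series themselves are not reproduced here, so a synthetic log-normal series (for which the true λ is 0) stands in; `boxcox_llf` supplies the profile log-likelihood needed for the chi-squared test of λ = 0.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for an IMR series: log-normal data, for which the
# true Box-Cox parameter is lambda = 0 (i.e., the log transform is best).
x = np.exp(rng.normal(loc=3.0, scale=0.5, size=5000))

y, lam = stats.boxcox(x)             # MLE of lambda, plus transformed data
# Likelihood-ratio test of H0: lambda = 0 against the fitted lambda;
# compare lr to the chi-squared(1) critical value (3.84 at the 5% level)
llf_hat = stats.boxcox_llf(lam, x)
llf_log = stats.boxcox_llf(0.0, x)
lr = 2 * (llf_hat - llf_log)
```

Running the same test against λ = 1 (`boxcox_llf(1.0, x)`) reproduces the paper's linear-model comparison.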
Malachowski, George C; Clegg, Robert M; Redford, Glen I
2007-12-01
A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
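The authors' discrete-Chebyshev solver is not reproduced here; as a sketch of the same idea of a noniterative, single-pass fit of multi-exponential data, the code below uses Prony-style linear prediction instead (an assumed stand-in model: a sum of p real exponentials sampled on a uniform grid).

```python
import numpy as np

def prony_decay_rates(y, dt, p):
    """Estimate p exponential decay rates from uniform samples y, noniteratively.

    Fits the linear prediction y[n] = -a1*y[n-1] - ... - ap*y[n-p] by least
    squares; the roots z_i of z^p + a1*z^(p-1) + ... + ap then give the
    continuous-time rates r_i = -ln(z_i)/dt.
    """
    n = len(y)
    A = np.column_stack([y[p - 1 - k : n - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -y[p:], rcond=None)
    z = np.roots(np.concatenate(([1.0], a))).astype(complex)
    return np.sort((-np.log(z) / dt).real)

t = 0.05 * np.arange(100)
y = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t)  # two-exponential signal
rates = prony_decay_rates(y, 0.05, 2)                # recovers ~[0.5, 2.0]
```

Like the Chebyshev-transform approach, this reduces exponential fitting to a single linear solve plus a root-finding step, with no iterative nonlinear optimization.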
Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry
Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.
2014-01-01
Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However, in shales, substantial hydrogen content is associated with both solids and fluids, and signals from both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for the analysis of NMR relaxometry results, is applied to data containing Gaussian decays, it can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-exponential (SGE) inversion method to simulated data and to measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method, and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in the material, medical and food sciences.
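The SGE inversion itself is a regularized inverse problem; as a minimal stand-in, the sketch below fits a single Gaussian plus a single exponential decay to synthetic relaxation data with `scipy.optimize.curve_fit`. The amplitudes and time constants are illustrative, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def sge_decay(t, a_g, t2_g, a_e, t2_e):
    """Gaussian decay (rigid solid phase) plus exponential decay (mobile fluid)."""
    return a_g * np.exp(-(t / t2_g) ** 2) + a_e * np.exp(-t / t2_e)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)
truth = (1.0, 0.05, 0.5, 0.4)        # illustrative amplitudes / T2 constants
y = sge_decay(t, *truth) + 0.005 * rng.standard_normal(t.size)

# joint fit of both decay shapes, analogous in spirit to the SGE idea
popt, _ = curve_fit(sge_decay, t, y, p0=(0.8, 0.1, 0.4, 0.3))
```

Fitting both functional forms simultaneously is what prevents the short-time Gaussian component from being misattributed to unphysically short exponential T2 values.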
One-way transformation of information
Cooper, James A.
1989-01-01
Method and apparatus are provided for one-way transformation of data according to multiplication and/or exponentiation modulo a prime number. An implementation of the invention permits the one-way residue transformation, useful in encryption and similar applications, to be implemented by n-bit computers with substantially no increase in difficulty or complexity over a natural transformation thereby, using a modulus which is a power of two.
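The core operation, exponentiation modulo a prime, is easy to state. The sketch below is a generic discrete-exponential one-way map; it does not reproduce the patent's power-of-two-modulus construction, and the modulus and base are illustrative choices only.

```python
# One-way map x -> g^x mod p: cheap to evaluate, hard to invert
# (inverting it is the discrete logarithm problem for well-chosen p and g).
P = 2**127 - 1   # a Mersenne prime, used here only as an example modulus
G = 3            # example base; not a vetted generator for real cryptography

def one_way(x: int) -> int:
    return pow(G, x, P)   # Python's built-in fast modular exponentiation
```

Evaluating `one_way` costs O(log x) modular multiplications via square-and-multiply, while no efficient general inversion is known; that asymmetry is what "one-way" refers to.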
NASA Astrophysics Data System (ADS)
Shaharuz Zaman, Azmanira; Aziz, Ahmad Sukri Abd; Ali, Zaileha Md
2017-09-01
The double-slip effect on the magnetohydrodynamic boundary layer flow over an exponentially stretching sheet with suction/blowing, radiation, chemical reaction and a heat source is presented in this analysis. By using the similarity transformation, the governing partial differential equations of momentum, energy and concentration are transformed into nonlinear ordinary differential equations. These equations are solved using the Runge-Kutta-Fehlberg method with a shooting technique in the MAPLE software environment. The effects of the various parameters on the velocity, temperature and concentration profiles are graphically presented and discussed.
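The shooting technique used in boundary layer problems of this kind reduces the BVP to root-finding on an unknown initial slope. As a minimal sketch (for the classical Blasius boundary layer f''' + ½ f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1, not this paper's double-slip equations):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]   # f''' = -0.5 f f''

def shoot(fpp0, eta_max=10.0):
    """Integrate from the wall with trial curvature f''(0) = fpp0 and
    return the residual of the far-field condition f'(inf) = 1."""
    sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, fpp0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[1, -1] - 1.0

fpp0 = brentq(shoot, 0.1, 1.0)   # ~0.33206, the classical Blasius constant
```

The same pattern (integrate an IVP with a trial initial condition, then drive the far-field residual to zero) underlies the Runge-Kutta-Fehlberg shooting solves reported in this and the related stretching-sheet abstracts.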
Improved technique for one-way transformation of information
Cooper, J.A.
1987-05-11
Method and apparatus are provided for one-way transformation of data according to multiplication and/or exponentiation modulo a prime number. An implementation of the invention permits the one-way residue transformation, useful in encryption and similar applications, to be implemented by n-bit computers with substantially no increase in difficulty or complexity over a natural transformation thereby, using a modulus which is a power of two. 9 figs.
NASA Astrophysics Data System (ADS)
Liao, Feng; Zhang, Luming; Wang, Shanshan
2018-02-01
In this article, we formulate an efficient and accurate numerical method for approximations of the coupled Schrödinger-Boussinesq (SBq) system. The main features of our method are: (i) the application of a time-splitting Fourier spectral method for the Schrödinger-like equation in the SBq system, and (ii) the use of an exponential wave integrator Fourier pseudospectral method for the spatial derivatives in the Boussinesq-like equation. The scheme is fully explicit and efficient due to the fast Fourier transform. Numerical examples are presented to show the efficiency and accuracy of our method.
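A time-splitting Fourier spectral step of the kind used for the Schrödinger-like part can be sketched in 1D. This is a single linear Schrödinger equation with an illustrative harmonic potential, not the coupled SBq system; grid sizes and the time step are arbitrary.

```python
import numpy as np

N, L, dt = 256, 40.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # spectral wavenumbers
V = 0.5 * x**2                                  # illustrative potential
u = np.exp(-(x**2)).astype(complex)             # initial wave packet

def strang_step(u):
    u = np.exp(-0.5j * dt * V) * u                               # half potential step
    u = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(u))   # kinetic step, exact in Fourier space
    return np.exp(-0.5j * dt * V) * u                            # second half potential step

for _ in range(200):
    u = strang_step(u)
# each substep multiplies by unit-modulus phases, so the splitting is unitary
# and the discrete L2 norm of u is conserved to rounding error
```

The kinetic substep is solved exactly in Fourier space, which is what makes such schemes unconditionally stable for the linear part and "fully explicit and efficient due to the fast Fourier transform."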
1987-09-01
inverse transform method to obtain unit-mean exponential random variables, where Vi is the jth random number in the sequence of a stream of uniform random...numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. X(b,c,d) = - P(b,c,d...Defender ,C * P(b,c,d) We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in
Fourier Transforms of Pulses Containing Exponential Leading and Trailing Profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warshaw, S I
2001-07-15
In this monograph we discuss a class of pulse shapes that have exponential rise and fall profiles, and evaluate their Fourier transforms. Such pulses can be used as models for time-varying processes that produce an initial exponential rise and end with the exponential decay of a specified physical quantity. Unipolar examples of such processes include the voltage record of an increasingly rapid charge followed by a damped discharge of a capacitor bank, and the amplitude of an electromagnetic pulse produced by a nuclear explosion. Bipolar examples include acoustic N waves propagating for long distances in the atmosphere that have resulted from explosions in the air, and sonic booms generated by supersonic aircraft. These bipolar pulses have leading and trailing edges that appear to be exponential in character. To the author's knowledge the Fourier transforms of such pulses are not generally well-known or tabulated in Fourier transform compendia, and it is the purpose of this monograph to derive and present these transforms. These Fourier transforms are related to a definite integral of a ratio of exponential functions, whose evaluation we carry out in considerable detail. From this result we derive the Fourier transforms of other related functions. In all Figures showing plots of calculated curves, the actual numbers used for the function parameter values and dependent variables are arbitrary and non-dimensional, and are not identified with any particular physical phenomenon or model.
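For the simplest unipolar pulse with exponential rise and fall, f(t) = e^(-αt) - e^(-βt) for t ≥ 0 with β > α, the transform is elementary and easy to check numerically. This is a standard special case shown for orientation, not the monograph's general result.

```python
import numpy as np

alpha, beta = 1.0, 5.0     # fall and rise rates (beta > alpha), illustrative

def pulse(t):
    return np.exp(-alpha * t) - np.exp(-beta * t)

def pulse_ft(w):
    """Analytic transform with the convention F(w) = INT_0^inf f(t) e^{-iwt} dt."""
    return 1.0 / (alpha + 1j * w) - 1.0 / (beta + 1j * w)

# numerical check at w = 2 by direct trapezoidal quadrature
t = np.linspace(0.0, 40.0, 400001)
w = 2.0
g = pulse(t) * np.exp(-1j * w * t)
num = np.sum(g[:-1] + g[1:]) * 0.5 * (t[1] - t[0])
```

The analytic form makes the asymptotics transparent: |F(w)| falls off like 1/w at high frequency, and the two poles at w = iα and w = iβ encode the fall and rise time scales.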
Similarity-transformed equation-of-motion vibrational coupled-cluster theory.
Faucheaux, Jacob A; Nooijen, Marcel; Hirata, So
2018-02-07
A similarity-transformed equation-of-motion vibrational coupled-cluster (STEOM-XVCC) method is introduced as a one-mode theory with an effective vibrational Hamiltonian, which is similarity transformed twice so that its lower-order operators are dressed with higher-order anharmonic effects. The first transformation uses an exponential excitation operator, defining the equation-of-motion vibrational coupled-cluster (EOM-XVCC) method, and the second uses an exponential excitation-deexcitation operator. From diagonalization of this doubly similarity-transformed Hamiltonian in the small one-mode excitation space, the method simultaneously computes accurate anharmonic vibrational frequencies of all fundamentals, which have unique significance in vibrational analyses. We establish a diagrammatic method of deriving the working equations of STEOM-XVCC and prove their connectedness and thus size-consistency as well as the exact equality of its frequencies with the corresponding roots of EOM-XVCC. We furthermore elucidate the similarities and differences between electronic and vibrational STEOM methods and between STEOM-XVCC and vibrational many-body Green's function theory based on the Dyson equation, which is also an anharmonic one-mode theory. The latter comparison inspires three approximate STEOM-XVCC methods utilizing the common approximations made in the Dyson equation: the diagonal approximation, a perturbative expansion of the Dyson self-energy, and the frequency-independent approximation. The STEOM-XVCC method including up to the simultaneous four-mode excitation operator in a quartic force field and its three approximate variants are formulated and implemented in computer codes with the aid of computer algebra, and they are applied to small test cases with varied degrees of anharmonicity.
NASA Astrophysics Data System (ADS)
Schanz, Martin; Ye, Wenjing; Xiao, Jinyou
2016-04-01
Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh compared to the convolution quadrature method to obtain the same level of accuracy. If fast methods like the fast multipole method are further used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
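The exponential-window idea, a shifted Bromwich line evaluated with a discrete Fourier sum, can be sketched as follows. The parameters σ, T and N are illustrative choices, and this scalar demo is not the paper's BEM setting.

```python
import numpy as np

def ilt_exponential_window(F, t, sigma=0.5, T=20.0, N=2**14):
    """Numerically invert a Laplace transform F(s) for 0 < t < T.

    Uses f(t) = (e^{sigma*t}/pi) * Re INT_0^inf F(sigma + i*w) e^{i*w*t} dw,
    discretized with spacing dw = 2*pi/T; the damping factor e^{-sigma*t}
    (the 'exponential window') controls aliasing from the periodization.
    """
    t = np.atleast_1d(np.asarray(t, dtype=float))
    dw = 2.0 * np.pi / T
    w = dw * np.arange(N)
    Fv = F(sigma + 1j * w)
    Fv[0] *= 0.5                              # trapezoidal half weight at w = 0
    phases = np.exp(1j * np.outer(w, t))      # (N, len(t)) Fourier kernel
    return np.exp(sigma * t) / np.pi * dw * (Fv @ phases).real

# check against F(s) = 1/(s+1)^2, whose exact inverse is f(t) = t*e^{-t}
f1 = ilt_exponential_window(lambda s: 1.0 / (s + 1.0) ** 2, 1.0)
```

Increasing σ suppresses aliasing (error ~ e^(-σT)) but amplifies truncation and rounding error through the e^(σt) factor, which is the trade-off behind the "different complex frequencies" compared in the paper.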
First-order analytic propagation of satellites in the exponential atmosphere of an oblate planet
NASA Astrophysics Data System (ADS)
Martinusi, Vladimir; Dell'Elce, Lamberto; Kerschen, Gaëtan
2017-04-01
The paper offers the fully analytic solution to the motion of a satellite orbiting under the influence of the two major perturbations, due to the oblateness and the atmospheric drag. The solution is presented in a time-explicit form, and takes into account an exponential distribution of the atmospheric density, an assumption that is reasonably close to reality. The approach involves two essential steps. The first one concerns a new approximate mathematical model that admits a closed-form solution with respect to a set of new variables. The second step is the determination of an infinitesimal contact transformation that allows one to navigate between the new and the original variables. This contact transformation is obtained in exact form, and afterwards a Taylor series approximation is proposed in order to make all the computations explicit. The aforementioned transformation accommodates both perturbations, improving the accuracy of the orbit predictions by one order of magnitude with respect to the case when the atmospheric drag is absent from the transformation. Numerical simulations are performed for a low Earth orbit starting at an altitude of 350 km, and they show that the incorporation of drag terms into the contact transformation generates an error reduction by a factor of 7 in the position vector. The proposed method aims at improving the accuracy of analytic orbit propagation and transforming it into a viable alternative to the computationally intensive numerical methods.
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Hoggan, Philip
2003-01-01
This review on molecular integrals for large electronic systems (MILES) places the problem of analytical integration over exponential-type orbitals (ETOs) in a historical context. After reference to the pioneering work, particularly by Barnett, Shavitt and Yoshimine, it focuses on recent progress towards rapid and accurate analytic solutions of MILES over ETOs. Software such as the hydrogenlike wavefunction package Alchemy by Yoshimine and collaborators is described. The review focuses on convergence acceleration of these highly oscillatory integrals and in particular it highlights suitable nonlinear transformations. Work by Levin and Sidi is described and applied to MILES. A step-by-step description of progress in the use of nonlinear transformation methods to obtain efficient codes is provided. The recent approach developed by Safouhi is also presented. The current state of the art in this field is summarized to show that ab initio analytical work over ETOs is now a viable option.
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by the power transformation Y^(ν) having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation generalizes the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hounkonnou, Mahouton Norbert; Nkouankam, Elvis Benzo Ngompe
2010-10-15
From the realization of the q-oscillator algebra in terms of a generalized derivative, we compute the matrix elements from deformed exponential functions and deduce generating functions associated with Rogers-Szegő polynomials, as well as their relevant properties. We also compute the matrix elements associated with the (p,q)-oscillator algebra (a generalization of the q-oscillator algebra) and perform the Fourier-Gauss transform of a generalization of the deformed exponential functions.
Finite Nilpotent BRST Transformations in Hamiltonian Formulation
NASA Astrophysics Data System (ADS)
Rai, Sumit Kumar; Mandal, Bhabani Prasad
2013-10-01
We consider the finite field-dependent BRST (FFBRST) transformations in the context of the Hamiltonian formulation using the Batalin-Fradkin-Vilkovisky method. The non-trivial Jacobian of such transformations is calculated in extended phase space. The contribution from the Jacobian can be written as the exponential of some local functional of the fields, which can be added to the effective Hamiltonian of the system. Thus, FFBRST transformations in the Hamiltonian formulation with extended phase space also connect different effective theories. We establish this result with the help of two explicit examples. We also show that the FFBRST transformation is similar to a canonical transformation in the sector of the Lagrange multiplier and its corresponding momentum.
NASA Astrophysics Data System (ADS)
Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin
2015-03-01
Reliability allocation of computer numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed functions are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as an example to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
Glick, S J; Hawkins, W G; King, M A; Penney, B C; Soares, E J; Byrne, C L
1992-01-01
The application of stationary restoration techniques to SPECT images assumes that the modulation transfer function (MTF) of the imaging system is shift invariant. It was hypothesized that using intrinsic attenuation correction (i.e., methods which explicitly invert the exponential radon transform) would yield a three-dimensional (3-D) MTF which varies less with position within the transverse slices than the combined conjugate view two-dimensional (2-D) MTF varies with depth. Thus the assumption of shift invariance would become less of an approximation for 3-D post- than for 2-D pre-reconstruction restoration filtering. SPECT acquisitions were obtained from point sources located at various positions in three differently shaped, water-filled phantoms. The data were reconstructed with intrinsic attenuation correction, and 3-D MTFs were calculated. Four different intrinsic attenuation correction methods were compared: (1) exponentially weighted backprojection, (2) a modified exponentially weighted backprojection as described by Tanaka et al. [Phys. Med. Biol. 29, 1489-1500 (1984)], (3) a Fourier domain technique as described by Bellini et al. [IEEE Trans. ASSP 27, 213-218 (1979)], and (4) the circular harmonic transform (CHT) method as described by Hawkins et al. [IEEE Trans. Med. Imag. 7, 135-148 (1988)]. The dependence of the 3-D MTF obtained with these methods, on point source location within an attenuator, and on shape of the attenuator, was studied. These 3-D MTFs were compared to: (1) those MTFs obtained with no attenuation correction, and (2) the depth dependence of the arithmetic mean combined conjugate view 2-D MTFs.(ABSTRACT TRUNCATED AT 250 WORDS)
On the efficacy of procedures to normalize Ex-Gaussian distributions.
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2014-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, transformation with parameter λ = -1 leads to the best results.
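The effect of the recommended λ = -1 (reciprocal-type Box-Cox) transformation can be reproduced with SciPy's Ex-Gaussian distribution (`exponnorm`). The RT-like parameter values below are illustrative, not the paper's simulation settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Ex-Gaussian RTs: mu = 0.4 s, sigma = 0.05 s, tau = 0.25 s (shape K = tau/sigma)
rt = stats.exponnorm.rvs(5.0, loc=0.4, scale=0.05, size=10000, random_state=rng)

# Box-Cox transform with lambda = -1: (x^(-1) - 1)/(-1) = 1 - 1/x
boxcox_neg1 = 1.0 - 1.0 / rt

skew_raw = stats.skew(rt)            # strongly positive for Ex-Gaussian data
skew_tr = stats.skew(boxcox_neg1)    # much closer to zero after the transform
```

The reciprocal compresses the long right tail of the RT distribution far more than the left side, which is why this member of the Box-Cox family performs well on positively skewed data.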
Image reconstruction from cone-beam projections with attenuation correction
NASA Astrophysics Data System (ADS)
Weng, Yi
1997-07-01
In single photon emission computed tomography (SPECT) imaging, photon attenuation within the body is a major factor contributing to the quantitative inaccuracy in measuring the distribution of radioactivity. Cone-beam SPECT provides improved sensitivity for imaging small organs. This thesis extends the results for 2D parallel-beam and fan-beam geometry to 3D parallel-beam and cone-beam geometries in order to derive filtered backprojection reconstruction algorithms for the 3D exponential parallel-beam transform and for the exponential cone-beam transform with sampling on a sphere. An exact inversion formula for the 3D exponential parallel-beam transform is obtained and is extended to the 3D exponential cone-beam transform. Sampling on a sphere is not useful clinically, and current cone-beam tomography, with the focal point traversing a planar orbit, does not acquire sufficient data to give an accurate reconstruction. Thus a data acquisition method was developed that obtains complete data for cone-beam SPECT by simultaneously rotating the gamma camera and translating the patient bed, so that cone-beam projections can be obtained with the focal point traversing a helix that surrounds the patient. First, an implementation of Grangeat's algorithm for helical cone-beam projections was developed without attenuation correction. A fast new rebinning scheme was developed that uses all of the detected data to reconstruct the image and properly normalizes any multiply scanned data. In the case of attenuation, no theorem analogous to Tuy's has been proven. We hypothesized that an artifact-free reconstruction could be obtained even if the cone-beam data are attenuated, provided the imaging orbit satisfies Tuy's condition and the exact attenuation map is known. Cone-beam emission data were acquired by using a circle-and-line and a helix orbit on a clinical SPECT system.
An iterative conjugate gradient reconstruction algorithm was used to reconstruct projection data with a known attenuation map. The quantitative accuracy of the attenuation-corrected emission reconstruction was significantly improved.
Slip Effects On MHD Three Dimensional Flow Of Casson Fluid Over An Exponentially Stretching Surface
NASA Astrophysics Data System (ADS)
Madhusudhana Rao, B.; Krishna Murthy, M.; Sivakumar, N.; Rushi Kumar, B.; Raju, C. S. K.
2018-04-01
Heat and mass transfer effects on MHD three-dimensional flow of a Casson fluid over an exponentially stretching surface with slip conditions are examined. The similarity transformations are used to convert the governing equations into a set of nonlinear ordinary differential equations, which are solved numerically using a fourth-order Runge-Kutta method along with a shooting technique. The effects of the Casson parameter, Hartmann number, heat source/sink, chemical reaction and slip factors on velocity, temperature and concentration are shown graphically. The skin friction coefficient and the Nusselt number are examined numerically.
NASA Astrophysics Data System (ADS)
Isa, Siti Suzilliana Putri Mohamed; Arifin, Norihan Md.; Nazar, Roslinda; Bachok, Norfifah; Ali, Fadzilah Md
2017-12-01
A theoretical study that describes the magnetohydrodynamic mixed convection boundary layer flow with heat transfer over an exponentially stretching sheet with an exponential temperature distribution has been presented herein. This study is conducted in the presence of convective heat exchange at the surface and its surroundings. The system is controlled by viscous dissipation and internal heat generation effects. The governing nonlinear partial differential equations are converted into ordinary differential equations by a similarity transformation. The converted equations are then solved numerically using the shooting method. The results related to skin friction coefficient, local Nusselt number, velocity and temperature profiles are presented for several sets of values of the parameters. The effects of the governing parameters on the features of the flow and heat transfer are examined in detail in this study.
Model-free and analytical EAP reconstruction via spherical polar Fourier diffusion MRI.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2010-01-01
How to estimate the diffusion Ensemble Average Propagator (EAP) from DWI signals in q-space is an open problem in the diffusion MRI field. Many methods have been proposed to estimate the Orientation Distribution Function (ODF), which is used to describe fiber directions. However, the ODF is just one of the features of the EAP. Compared with the ODF, the EAP carries the full information about the diffusion process, which reflects the complex tissue microstructure. Diffusion Orientation Transform (DOT) and Diffusion Spectrum Imaging (DSI) are two important methods to estimate the EAP from the signal. However, DOT is based on a mono-exponential assumption, and DSI requires many samples and very large b-values. In this paper, we propose Spherical Polar Fourier Imaging (SPFI), a novel model-free, fast, robust, analytical EAP reconstruction method that requires almost no assumptions about the data and does not need many samples. SPFI naturally combines DWI signals with different b-values. It is an analytical linear transformation from the q-space signal to the EAP profile represented by Spherical Harmonics (SH). We validated the proposed method on synthetic data, phantom data and real data. It works well in all experiments, especially for data with low SNR, low anisotropy, and non-exponential decay.
Zhukovsky, K
2014-01-01
We present a general method of operational nature to analyze and obtain solutions for a variety of equations of mathematical physics and related mathematical problems. We construct inverse differential operators and produce operational identities, involving inverse derivatives and families of generalised orthogonal polynomials, such as Hermite and Laguerre polynomial families. We develop the methodology of inverse and exponential operators, employing them for the study of partial differential equations. Advantages of the operational technique, combined with the use of integral transforms, generating functions with exponentials and their integrals, for solving a wide class of partial derivative equations, related to heat, wave, and transport problems, are demonstrated.
On the efficacy of procedures to normalize Ex-Gaussian distributions
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2015-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed; hence it is widely acknowledged that the normality assumption is not met for RT data. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, the transformation with parameter λ = -1 leads to the best results. PMID:25709588
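The λ = -1 Box-Cox case singled out in the abstract can be sketched in a few lines. The Ex-Gaussian parameters below (normal μ = 400, σ = 40, exponential τ = 150, in ms) are illustrative assumptions for RT-like data, not the paper's simulation settings.

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox power transform; x must be strictly positive."""
    if lam == 0:
        return np.log(x)
    return (x**lam - 1.0) / lam

def skewness(x):
    """Sample skewness (third standardized moment)."""
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean()**1.5

rng = np.random.default_rng(0)
# Ex-Gaussian sample: a normal plus an independent exponential component
rt = rng.normal(400.0, 40.0, 10_000) + rng.exponential(150.0, 10_000)

transformed = box_cox(rt, -1.0)   # lambda = -1, i.e. a shifted reciprocal
```

The reciprocal-type transform compresses the long right tail, pulling the sample skewness toward zero.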
Historical remarks on exponential product and quantum analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Masuo
2015-03-10
The exponential product formula [1, 2] was substantially introduced in physics by the present author [2]. Its systematic applications to quantum Monte Carlo methods [3] were first performed [4, 5] in 1977. Many interesting applications [6] of the quantum-classical correspondence (namely, the S-T transformation) have been reported. Systematic higher-order decomposition formulae were also discovered by the present author [7-11], using the recursion scheme [7, 9]. Physically speaking, these exponential product formulae play a conceptual role of separation of procedures [3, 14]. Mathematical aspects of these formulae have been integrated in quantum analysis [15], in which non-commutative differential calculus is formulated and a more general quantum Taylor expansion formula is given. This yields many useful operator expansion formulae, such as the Feynman expansion formula and the resolvent expansion. Irreversibility and entropy production are also studied using quantum analysis [15].
Sparsity-based Poisson denoising with dictionary learning.
Giryes, Raja; Elad, Michael
2014-12-01
The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist to convert Poisson noise into additive independent identically distributed (i.i.d.) Gaussian noise, for which many effective algorithms are available. However, in the low-SNR regime these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results at low SNR.
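One well-known member of the "Gaussianizing" family of transformations alluded to here is the Anscombe transform, which stabilizes the variance of Poisson counts to approximately one at high SNR. A minimal numpy sketch; the photon-count levels are illustrative assumptions.

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson(lam) counts to approximately
    unit-variance Gaussian noise when lam is large enough (high SNR)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(1)
variances = {}
for lam in (20.0, 100.0):            # reasonably high photon counts
    counts = rng.poisson(lam, 100_000)
    variances[lam] = anscombe(counts).var()
```

At low counts (say λ < 2) the stabilized variance drifts well away from one, which is exactly the regime the abstract says requires working with the true noise statistics instead.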
Stuebner, Michael; Haider, Mansoor A
2010-06-18
A new and efficient method for numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method are illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage.
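The separability trick behind the O(N) update rule is generic: an exponential kernel lets the entire history integral collapse into a single state variable updated from the previous step only. The sketch below uses a single relaxation time and a plain rectangle-rule discretization, not the paper's Gauss-Legendre BPVE formulation.

```python
import numpy as np

def direct(f, dt, tau):
    """O(N^2): re-sum the full history at every time step."""
    n = len(f)
    out = np.empty(n)
    for i in range(n):
        k = np.arange(i + 1)
        out[i] = np.sum(np.exp(-(i - k) * dt / tau) * f[k]) * dt
    return out

def recursive(f, dt, tau):
    """O(N): exponential separability -> one-state recursive update."""
    decay = np.exp(-dt / tau)
    out = np.empty(len(f))
    s = 0.0
    for i, fi in enumerate(f):
        s = decay * s + fi * dt     # needs only the previous step's state
        out[i] = s
    return out

rng = np.random.default_rng(2)
f = rng.standard_normal(200)
a = direct(f, dt=0.01, tau=0.5)
b = recursive(f, dt=0.01, tau=0.5)
```

Both routines evaluate the same discrete hereditary sum; only their cost differs.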
Scalability, Complexity and Reliability in Quantum Information Processing
2007-03-01
hidden subgroup framework to abelian groups which are not finitely generated. An extension of the basic algorithm breaks the Buchmann-Williams...finding short lattice vectors. In [2], we showed that the generalization of the standard method --- random coset state preparation followed by Fourier...sampling --- required exponential time for sufficiently non-abelian groups, including the symmetric group, at least when the Fourier transforms are
NASA Astrophysics Data System (ADS)
Wang, Xiaoqiang; Ju, Lili; Du, Qiang
2016-07-01
The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
On E-discretization of tori of compact simple Lie groups. II
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Juránek, Michal
2017-10-01
Ten types of discrete Fourier transforms of Weyl orbit functions are developed. Generalizing one-dimensional cosine, sine, and exponential, each type of the Weyl orbit function represents an exponential symmetrized with respect to a subgroup of the Weyl group. Fundamental domains of even affine and dual even affine Weyl groups, governing the argument and label symmetries of the even orbit functions, are determined. The discrete orthogonality relations are formulated on finite sets of points from the refinements of the dual weight lattices. Explicit counting formulas for the number of points of the discrete transforms are deduced. Real-valued Hartley orbit functions are introduced, and all ten types of the corresponding discrete Hartley transforms are detailed.
NASA Astrophysics Data System (ADS)
Shi, R.; Sun, Z.
2018-04-01
GF-3 synthetic aperture radar (SAR) images are rich in information and have obvious sparse features. However, the speckle appears in the GF-3 SAR images due to the coherent imaging system and it hinders the interpretation of images seriously. Recently, Shearlet is applied to the image processing with its best sparse representation. A new Shearlet-transform-based method is proposed in this paper based on the improved non-local means. Firstly, the logarithmic operation and the non-subsampled Shearlet transformation are applied to the GF-3 SAR image. Secondly, in order to solve the problems that the image details are smoothed overly and the weight distribution is affected by the speckle, a new non-local means is used for the transformed high frequency coefficient. Thirdly, the Shearlet reconstruction is carried out. Finally, the final filtered image is obtained by an exponential operation. Experimental results demonstrate that, compared with other despeckling methods, the proposed method can suppress the speckle effectively in homogeneous regions and has better capability of edge preserving.
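The log, transform, denoise, reconstruct, exp sequence described above is a homomorphic filtering pipeline. The skeleton can be sketched as follows; a plain separable Gaussian blur stands in for the non-subsampled Shearlet and improved non-local-means stages of the paper, and the gamma speckle model is an illustrative assumption.

```python
import numpy as np

def homomorphic_despeckle(img, sigma=2.0):
    """Skeleton of the log -> denoise -> exp pipeline; a separable
    Gaussian blur is a stand-in for the Shearlet/NLM denoising stage."""
    log_img = np.log(img + 1e-6)      # multiplicative speckle -> additive
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    kern = np.exp(-t**2 / (2 * sigma**2))
    kern /= kern.sum()
    smooth = lambda m: np.convolve(m, kern, mode="same")
    tmp = np.apply_along_axis(smooth, 0, log_img)   # blur columns
    den = np.apply_along_axis(smooth, 1, tmp)       # blur rows
    return np.exp(den)                # exponential operation restores scale

rng = np.random.default_rng(5)
clean = np.ones((64, 64)) * 10.0
speckled = clean * rng.gamma(4.0, 1.0 / 4.0, clean.shape)  # unit-mean speckle
filtered = homomorphic_despeckle(speckled)
```

On this constant test image the filter should strongly reduce the speckle variance away from the borders (the naive "same"-mode convolution distorts the edges, which a real implementation would pad properly).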
Initial conditions and degrees of freedom of non-local gravity
NASA Astrophysics Data System (ADS)
Calcagni, Gianluca; Modesto, Leonardo; Nardelli, Giuseppe
2018-05-01
We prove the equivalence between non-local gravity with an arbitrary form factor and a non-local gravitational system with an extra rank-2 symmetric tensor. Thanks to this reformulation, we use the diffusion-equation method to transform the dynamics of renormalizable non-local gravity with exponential operators into a higher-dimensional system local in spacetime coordinates. This method, first illustrated with a scalar field theory and then applied to gravity, allows one to solve the Cauchy problem and count the number of initial conditions and of non-perturbative degrees of freedom, which is finite. In particular, the non-local scalar and gravitational theories with exponential operators are both characterized by four initial conditions in any dimension and, respectively, by one and eight degrees of freedom in four dimensions. The fully covariant equations of motion are written in a form convenient to find analytic non-perturbative solutions.
Time-domain full waveform inversion using instantaneous phase information with damping
NASA Astrophysics Data System (ADS)
Luo, Jingrui; Wu, Ru-Shan; Gao, Fuchun
2018-06-01
In time domain, the instantaneous phase can be obtained from the complex seismic trace using Hilbert transform. The instantaneous phase information has great potential in overcoming the local minima problem and improving the result of full waveform inversion. However, the phase wrapping problem, which comes from numerical calculation, prevents its application. In order to avoid the phase wrapping problem, we choose to use the exponential phase combined with the damping method, which gives instantaneous phase-based multi-stage inversion. We construct the objective functions based on the exponential instantaneous phase, and also derive the corresponding gradient operators. Conventional full waveform inversion and the instantaneous phase-based inversion are compared with numerical examples, which indicates that in the case without low frequency information in seismic data, our method is an effective and efficient approach for initial model construction for full waveform inversion.
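The wrap-free use of the exponential phase exp(iφ) can be sketched with a numpy-only analytic-signal computation. The FFT-based Hilbert construction below is the standard one, not necessarily the authors' implementation, and the test trace is an illustrative assumption.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: keep DC and Nyquist, double the
    positive frequencies, zero the negative ones (discrete Hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
trace = np.cos(2 * np.pi * 30.0 * t)   # 30 whole cycles on the grid
z = analytic_signal(trace)
# exp(i*phi) is single-valued even where phi itself would wrap at +-pi
exp_phase = z / np.abs(z)
```

Working with exp(iφ) rather than φ = arctan(...) is precisely what sidesteps the ±π wrapping discontinuities mentioned in the abstract.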
Rapid multi-modality preregistration based on SIFT descriptor.
Chen, Jian; Tian, Jie
2006-01-01
This paper describes a scale invariant feature transform (SIFT) method for rapid preregistration of medical images. The technique originates from Lowe's method, wherein preregistration is achieved by matching corresponding keypoints between two images. The computational complexity is reduced when the SIFT preregistration method, with its O(n) cost, is applied before refined registration. The SIFT features are highly distinctive, invariant to image scaling and rotation, and partially invariant to changes in illumination and contrast; they are robust and repeatable for coarsely matching two images. We also altered the descriptor so that our method can handle multimodality preregistration.
NASA Astrophysics Data System (ADS)
Rinzema, K.; Hoenders, B. J.; Ferwerda, H. A.
1997-07-01
We present a method to determine the back-reflected radiance from an isotropically scattering half-space with matched boundary. This method has the advantage that it leads very quickly to the relevant equations, the numerical solution of which is also quite easy. Essentially, the method is derived from a mathematical criterion that effectively forbids the existence of solutions to the transport equation which grow exponentially as one moves away from the surface and deeper into the medium. Preliminary calculations for infinitely wide beams yield results which agree very well with what is found in the literature.
NASA Astrophysics Data System (ADS)
Sagheer, M.; Bilal, M.; Hussain, S.; Ahmed, R. N.
2018-03-01
This article examines a mathematical model to analyze the rotating flow of three-dimensional water based nanofluid over a convectively heated exponentially stretching sheet in the presence of transverse magnetic field with additional effects of thermal radiation, Joule heating and viscous dissipation. Silver (Ag), copper (Cu), copper oxide (CuO), aluminum oxide (Al 2 O 3 ) and titanium dioxide (TiO 2 ) have been taken under consideration as the nanoparticles and water (H 2 O) as the base fluid. Using suitable similarity transformations, the governing partial differential equations (PDEs) of the modeled problem are transformed to the ordinary differential equations (ODEs). These ODEs are then solved numerically by applying the shooting method. For the particular situation, the results are compared with the available literature. The effects of different nanoparticles on the temperature distribution are also discussed graphically and numerically. It is witnessed that the skin friction coefficient is maximum for silver based nanofluid. Also, the velocity profile is found to diminish for the increasing values of the magnetic parameter.
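The shooting idea used in several of the abstracts above (reduce a boundary-value problem to root-finding on an unknown initial slope) can be shown on a toy problem. The ODE y'' = -y with y(0) = 0 and y(π/2) = 1 is an illustrative stand-in for the transformed nanofluid equations; its exact shooting answer is y'(0) = 1.

```python
import numpy as np

def rk4(f, y0, t0, t1, steps=200):
    """Classical fourth-order Runge-Kutta integrator."""
    h = (t1 - t0) / steps
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

def residual(slope):
    """Integrate y'' = -y from y(0)=0, y'(0)=slope; miss at y(pi/2)=1."""
    ode = lambda t, y: np.array([y[1], -y[0]])
    return rk4(ode, [0.0, slope], 0.0, np.pi / 2)[0] - 1.0

lo, hi = 0.0, 2.0                      # bracket for the unknown slope
for _ in range(60):                    # bisection on the shooting residual
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
slope = 0.5 * (lo + hi)                # converges to y'(0) = 1
```

Production shooting codes typically replace bisection with Newton or secant iterations on the same residual, but the structure is identical.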
2007-03-01
Quadrature QPSK Quadrature Phase-Shift Keying RV Random Variable SHAC Single-Hop-Observation Auto- Correlation SINR Signal-to-Interference...The fast Fourier transform ( FFT ) accumulation method and the strip spectral correlation algorithm subdivide the support region in the bi-frequency...diamond shapes, while the strip spectral correlation algorithm subdivides the region into strips. Each strip covers a number of the FFT accumulation
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
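Of the methods listed above, simple exponential smoothing is compact enough to sketch directly; it maintains a single level that is updated toward each new observation, and its multi-step forecast is flat. The smoothing constant α = 0.3 and the series values are illustrative assumptions.

```python
import numpy as np

def ses_forecast(y, alpha, horizon):
    """Simple exponential smoothing: level <- alpha*y + (1-alpha)*level,
    with a flat h-step-ahead forecast equal to the final level."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1.0 - alpha) * level
    return np.full(horizon, level)

series = np.array([5.0, 5.2, 4.9, 5.1, 5.0, 5.3, 4.8])
fc = ses_forecast(series, alpha=0.3, horizon=4)
```

The flat forecast is why such methods are paired with an external seasonal decomposition, as the abstract describes, when the data have a strong monthly cycle.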
Polar exponential sensor arrays unify iconic and Hough space representation
NASA Technical Reports Server (NTRS)
Weiman, Carl F. R.
1990-01-01
The log-polar coordinate system, inherent in both polar exponential sensor arrays and log-polar remapped video imagery, is identical to the coordinate system of its corresponding Hough transform parameter space. The resulting unification of iconic and Hough domains simplifies computation for line recognition and eliminates the slope quantization problems inherent in the classical Cartesian Hough transform. The geometric organization of the algorithm is more amenable to massively parallel architectures than that of the Cartesian version. The neural architecture of the human visual cortex meets the geometric requirements to execute 'in-place' log-Hough algorithms of the kind described here.
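The key property of the log-polar mapping described above, that scalings and rotations about the origin become simple shifts, can be checked in a few lines.

```python
import numpy as np

def to_log_polar(x, y):
    """Log-polar coordinates (u, v) = (ln r, theta): scaling the image
    shifts u, rotating it shifts v."""
    return np.log(np.hypot(x, y)), np.arctan2(y, x)

u1, v1 = to_log_polar(3.0, 4.0)   # r = 5
u2, v2 = to_log_polar(6.0, 8.0)   # same direction, scaled by 2
```

The scale factor appears purely as a shift of ln 2 along the radial axis, which is what lets line parameters accumulate without the slope-quantization artifacts of a Cartesian Hough transform.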
New correction procedures for the fast field program which extend its range
NASA Technical Reports Server (NTRS)
West, M.; Sack, R. A.
1990-01-01
A fast field program (FFP) algorithm was developed based on the method of Lee et al., for the prediction of sound pressure level from low frequency, high intensity sources. In order to permit accurate predictions at distances greater than 2 km, new correction procedures have had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth dependent Green's function. The distance response is then obtained as the sum of these transforms and the Fast Fourier Transformation (FFT) of the residual k dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, which has resulted in a substantial reduction in computation time.
Atomic Gaussian type orbitals and their Fourier transforms via the Rayleigh expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yükçü, Niyazi
Gaussian type orbitals (GTOs), which are one of the types of exponential type orbitals (ETOs), are usually used as basis functions in multi-center atomic and molecular integrals to better understand the physical and chemical properties of matter. In the Fourier transform method (FTM), the basis functions themselves are not simple to manipulate mathematically, but their Fourier transforms are easier to use. In this work, with the help of the FTM, the Rayleigh expansion and some properties of unnormalized GTOs, we present new mathematical results for the Fourier transform of GTOs in terms of Laguerre polynomials, hypergeometric and Whittaker functions. Physical and analytical properties of GTOs are discussed and some numerical results are given in a table. Finally, we compare our mathematical results with other known literature results using a computer program, and details of the evaluation are presented.
NASA Astrophysics Data System (ADS)
Qarib, Hossein; Adeli, Hojjat
2015-12-01
In this paper the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative three-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately, and it also estimates the damping exponents. The proposed adaptive filtration method does not include any frequency-domain manipulation; consequently, the time-domain signal is not distorted by forward and inverse transformations.
Zooming in on vibronic structure by lowest-value projection reconstructed 4D coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, Elad
2018-05-01
A fundamental goal of chemical physics is an understanding of microscopic interactions in liquids at and away from equilibrium. In principle, this microscopic information is accessible by high-order and high-dimensionality nonlinear optical measurements. Unfortunately, the time required to execute such experiments increases exponentially with the dimensionality, while the signal decreases exponentially with the order of the nonlinearity. Recently, we demonstrated a non-uniform acquisition method based on radial sampling of the time-domain signal [W. O. Hutson et al., J. Phys. Chem. Lett. 9, 1034 (2018)]. The four-dimensional spectrum was then reconstructed by filtered back-projection using an inverse Radon transform. Here, we demonstrate an alternative reconstruction method based on the statistical analysis of different back-projected spectra which results in a dramatic increase in sensitivity and at least a 100-fold increase in dynamic range compared to conventional uniform sampling and Fourier reconstruction. These results demonstrate that alternative sampling and reconstruction methods enable applications of increasingly high-order and high-dimensionality methods toward deeper insights into the vibronic structure of liquids.
Lorenzo, C F; Hartley, T T; Malti, R
2013-05-13
A new and simplified method for the solution of linear constant coefficient fractional differential equations of any commensurate order is presented. The solutions are based on the R-function and on specialized Laplace transform pairs derived from the principal fractional meta-trigonometric functions. The new method simplifies the solution of such fractional differential equations and presents the solutions in the form of real functions as opposed to fractional complex exponential functions, and thus is directly applicable to real-world physics.
NASA Astrophysics Data System (ADS)
Cuahutenango-Barro, B.; Taneco-Hernández, M. A.; Gómez-Aguilar, J. F.
2017-12-01
Analytical solutions of the wave equation with bi-fractional-order and frictional memory kernel of Mittag-Leffler type are obtained via Caputo-Fabrizio fractional derivative in the Liouville-Caputo sense. Through the method of separation of variables and Laplace transform method we derive closed-form solutions and establish fundamental solutions. Special cases with homogeneous Dirichlet boundary conditions and nonhomogeneous initial conditions, as well as for the external force are considered. Numerical simulations of the special solutions were done and novel behaviors are obtained.
Nie, Zhen-yuan; Liu, Hong-chang; Xia, Jin-lan; Zhu, Hong-rui; Ma, Chen-yan; Zheng, Lei; Zhao, Yi-dong; Qiu, Guan-zhou
2014-10-01
The utilization of amorphous μ-S and orthorhombic α-S8 by the thermoacidophile Sulfobacillus thermosulfidooxidans was investigated for the first time in terms of cell growth and sulfur oxidation behavior. The morphology and surface sulfur speciation transformation were evaluated using scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), Raman spectroscopy and sulfur K-edge X-ray absorption near edge structure (XANES) spectroscopy. The results showed that the strain grown on μ-S entered the exponential phase more slowly (about 1 day later), but grew faster in the exponential phase and attained a higher maximal cell density and lower pH than on α-S8. After bio-corrosion, both sulfur samples were evidently eroded, but only the μ-S surface presented much porosity, while α-S8 remained smooth. μ-S began to be gradually converted into α-S8 from day 2, when the bacterial cells entered the exponential phase, with a final composition of 62.3% μ-S and 37.7% α-S8 on day 4 at the stationary phase. α-S8 was not found to transform into other species in the experiments with or without bacteria. These data indicate that S. thermosulfidooxidans oxidizes amorphous μ-S faster than orthorhombic α-S8, but that chain-like μ-S is transformed into cyclic α-S8 by S. thermosulfidooxidans.
Determining XV-15 aeroelastic modes from flight data with frequency-domain methods
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.; Tischler, Mark B.
1993-01-01
The XV-15 tilt-rotor wing has six major aeroelastic modes that are close in frequency. To precisely excite individual modes during flight test, dual flaperon exciters with automatic frequency-sweep controls were installed. The resulting structural data were analyzed in the frequency domain (Fourier transformed). All spectral data were computed using chirp z-transforms. Modal frequencies and damping were determined by fitting curves to frequency-response magnitude and phase data. The results given in this report are for the XV-15 with its original metal rotor blades. Also, frequency and damping values are compared with theoretical predictions made using two different programs, CAMRAD and ASAP. The frequency-domain data-analysis method proved to be very reliable and adequate for tracking aeroelastic modes during flight-envelope expansion. This approach required less flight-test time and yielded mode estimations that were more repeatable, compared with the exponential-decay method previously used.
Repressing the effects of variable speed harmonic orders in operational modal analysis
NASA Astrophysics Data System (ADS)
Randall, R. B.; Coats, M. D.; Smith, W. A.
2016-10-01
Discrete frequency components such as machine shaft orders can disrupt the operation of normal Operational Modal Analysis (OMA) algorithms. With constant-speed machines, they have been removed using time synchronous averaging (TSA). This paper compares two approaches for varying-speed machines. In one method, signals are transformed into the order domain and, after the removal of shaft-speed-related components by a cepstral notching method, are transformed back to the time domain to allow normal OMA. In the other, simpler approach, an exponential short-pass lifter is applied directly to the time-domain cepstrum to enhance the modal information at the expense of other disturbances. For simulated gear signals with speed variations of both ±5% and ±15%, the simpler approach was found to give better results. The TSA method is shown not to work in either case. The paper compares the results with those obtained using stationary random excitation.
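An exponential short-pass lifter of the kind mentioned above can be sketched as a weighting of the real cepstrum: low quefrencies (the smooth spectral envelope carrying modal information) are kept, and higher-quefrency content from discrete-frequency disturbances is attenuated. The decay constant τ and the test signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def exp_shortpass_lifter(x, tau):
    """Weight the real cepstrum by exp(-q/tau), then return the
    corresponding smoothed magnitude spectrum."""
    n = len(x)
    spectrum = np.fft.fft(x)
    cepstrum = np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real
    q = np.minimum(np.arange(n), n - np.arange(n))  # symmetric quefrency
    lifted = cepstrum * np.exp(-q / tau)
    log_mag = np.fft.fft(lifted).real
    return np.exp(log_mag)

rng = np.random.default_rng(3)
t = np.arange(2048) / 2048
sig = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(2048)
smooth_mag = exp_shortpass_lifter(sig, tau=20.0)
```

The exponential window has the useful side effect of adding a known, removable amount of damping to the enhanced modal peaks, which is one reason it is favored over a hard cutoff.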
A note on large gauge transformations in double field theory
Naseer, Usman
2015-06-03
Here, we give a detailed proof of the conjecture by Hohm and Zwiebach in double field theory. Our result implies that their proposal for large gauge transformations in terms of the Jacobian matrix for coordinate transformations is, as required, equivalent to the standard exponential map associated with the generalized Lie derivative along a suitable parameter.
Greene, Samuel M; Batista, Victor S
2017-09-12
We introduce the "tensor-train split-operator Fourier transform" (TT-SOFT) method for simulations of multidimensional nonadiabatic quantum dynamics. TT-SOFT is essentially the grid-based SOFT method implemented in dynamically adaptive tensor-train representations. In the same spirit of all matrix product states, the tensor-train format enables the representation, propagation, and computation of observables of multidimensional wave functions in terms of the grid-based wavepacket tensor components, bypassing the need of actually computing the wave function in its full-rank tensor product grid space. We demonstrate the accuracy and efficiency of the TT-SOFT method as applied to propagation of 24-dimensional wave packets, describing the S 1 /S 2 interconversion dynamics of pyrazine after UV photoexcitation to the S 2 state. Our results show that the TT-SOFT method is a powerful computational approach for simulations of quantum dynamics of polyatomic systems since it avoids the exponential scaling problem of full-rank grid-based representations.
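The grid-based SOFT propagation that TT-SOFT compresses can be sketched in one dimension: each Strang-split step applies half a potential kick on the position grid, a full kinetic drift on the momentum grid via the FFT, and another half kick. The harmonic potential and grid parameters below are illustrative assumptions (hbar = m = ω = 1), not the pyrazine model of the paper.

```python
import numpy as np

# 1D grid, harmonic potential V = x^2/2
n, L = 256, 20.0
dx = L / n
x = (np.arange(n) - n // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
V, dt = 0.5 * x**2, 0.01

def soft_step(psi):
    """One Strang-split SOFT step: half kick in V, drift in T = k^2/2
    on the momentum grid (via FFT), half kick in V."""
    psi = np.exp(-0.5j * V * dt) * psi
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
    return np.exp(-0.5j * V * dt) * psi

# displaced Gaussian (coherent state); exact norm is 1
psi = np.exp(-0.5 * (x - 1.0) ** 2) / np.pi**0.25
for _ in range(500):
    psi = soft_step(psi)
norm = np.sum(np.abs(psi) ** 2) * dx   # unitary steps preserve the norm
```

In d dimensions the grid has n^d points, which is the exponential scaling the tensor-train representation is designed to avoid.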
NASA Astrophysics Data System (ADS)
Kamimoto, Shingo; Kawai, Takahiro; Koike, Tatsuya
2016-12-01
Inspired by the symbol calculus of linear differential operators of infinite order applied to the Borel transformed WKB solutions of simple-pole type equation [Kamimoto et al. (RIMS Kôkyûroku Bessatsu B 52:127-146, 2014)], which is summarized in Section 1, we introduce in Section 2 the space of simple resurgent functions depending on a parameter with an infra-exponential type growth order, and then we define the assigning operator A which acts on the space and produces resurgent functions with essential singularities. In Section 3, we apply the operator A to the Borel transforms of the Voros coefficient and its exponentiation for the Whittaker equation with a large parameter so that we may find the Borel transforms of the Voros coefficient and its exponentiation for the boosted Whittaker equation with a large parameter. In Section 4, we use these results to find the explicit form of the alien derivatives of the Borel transformed WKB solutions of the boosted Whittaker equation with a large parameter. The results in this paper manifest the importance of resurgent functions with essential singularities in developing the exact WKB analysis, the WKB analysis based on the resurgent function theory. It is also worth emphasizing that the concrete form of essential singularities we encounter is expressed by the linear differential operators of infinite order.
Qi, Donglian; Liu, Meiqin; Qiu, Meikang; Zhang, Senlin
2010-08-01
This brief studies exponential H(infinity) synchronization of a class of general discrete-time chaotic neural networks with external disturbance. On the basis of the drive-response concept and H(infinity) control theory, and using a Lyapunov-Krasovskii (or Lyapunov) functional, state feedback controllers are established that not only guarantee exponentially stable synchronization between two general chaotic neural networks with or without time delays, but also reduce the effect of external disturbance on the synchronization error to a minimal H(infinity) norm constraint. The proposed controllers can be obtained by solving convex optimization problems represented by linear matrix inequalities. Most discrete-time chaotic systems with or without time delays, such as Hopfield neural networks, cellular neural networks, bidirectional associative memory networks, recurrent multilayer perceptrons, Cohen-Grossberg neural networks, Chua's circuits, etc., can be transformed into this general chaotic neural network, so that H(infinity) synchronization controllers can be designed in a unified way. Finally, some illustrative examples with their simulations have been utilized to demonstrate the effectiveness of the proposed methods.
Exponential Decay of Dispersion-Managed Solitons for General Dispersion Profiles
NASA Astrophysics Data System (ADS)
Green, William R.; Hundertmark, Dirk
2016-02-01
We show that any weak solution of the dispersion management equation describing dispersion-managed solitons together with its Fourier transform decay exponentially. This strong regularity result extends a recent result of Erdoğan, Hundertmark, and Lee in two directions, to arbitrary non-negative average dispersion and, more importantly, to rather general dispersion profiles, which cover most, if not all, physically relevant cases.
A Novel Method for Age Estimation in Solar-Type Stars Through GALEX FUV Magnitudes
NASA Astrophysics Data System (ADS)
Ho, Kelly; Subramonian, Arjun; Smith, Graeme; Shouru Shieh
2018-01-01
Utilizing an inverse association known to exist between Galaxy Evolution Explorer (GALEX) far ultraviolet (FUV) magnitudes and the chromospheric activity of F, G, and K dwarfs, we explored a method of age estimation in solar-type stars through GALEX FUV magnitudes. Sample solar-type star data were collected from refereed publications and filtered by B-V and absolute visual magnitude to ensure similarities in temperature and luminosity to the Sun. We determined FUV-B and calculated a residual index Q for all the stars, using the temperature-induced upper bound on FUV-B as the fiducial. Plotting current age estimates for the stars against Q, we discovered a strong and significant association between the variables. By applying a log-linear transformation to the data to produce a strong correlation between Q and log_e(Age), we confirmed the association between Q and age to be exponential. Thus, least-squares regression was used to generate an exponential model relating Q to age in solar-type stars, which can be used by astronomers. The Q-method of stellar age estimation is simple and more efficient than existing spectroscopic methods and has applications to galactic archaeology and stellar chemical composition analysis.
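The log-linear fitting step described above can be sketched on synthetic data: an exponential model Age = A * exp(b * Q) becomes linear after taking logarithms, so ordinary least squares recovers its parameters. The coefficients and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical exponential age-activity model Age = A * exp(b * Q)
rng = np.random.default_rng(0)
true_A, true_b = 1.2, 0.9
Q = rng.uniform(0.0, 3.0, 50)                    # residual index values
age = true_A * np.exp(true_b * Q) * rng.lognormal(0.0, 0.05, 50)  # noisy ages (Gyr)

# Log-linear transformation: ln(Age) = ln(A) + b * Q, then least squares
b_fit, lnA_fit = np.polyfit(Q, np.log(age), 1)
A_fit = np.exp(lnA_fit)

print(f"fitted: Age = {A_fit:.2f} * exp({b_fit:.2f} * Q)")
```

With multiplicative (lognormal) noise, fitting in log space is also the statistically natural choice, since the residuals become additive and homoscedastic.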
The Movement to Transform High School. Forum Report
ERIC Educational Resources Information Center
Frey, Susan
2005-01-01
Although society has changed exponentially over the past 100 years, secondary schools have remained largely static, according to Gerald Hayward, who moderated EdSource's 28th Annual Forum, "Shaking up the Status Quo: The Movement to Transform High School," held in March 2005. Calling high schools difficult, complicated, and expensive,…
A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms
Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine
2010-01-01
Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive, and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying the two techniques of fitness sharing and elitism. Two NSCT-based methods are proposed for registration. A comparative study is established between these methods and a wavelet-based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its search speeding-up property. Simulation results clearly show that both proposed techniques are really promising methods for image registration compared to the wavelet approach, while the second technique has led to the best performance results of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work perfectly well for multi-temporal satellite images as well, even in the presence of noise. PMID:22163672
NASA Astrophysics Data System (ADS)
Khan, Najeeb Alam; Saeed, Umair Bin; Sultan, Faqiha; Ullah, Saif; Rehman, Abdul
2018-02-01
This study deals with the investigation of boundary layer flow of a fourth grade fluid and heat transfer over an exponentially stretching sheet. For analyzing two heating processes, namely, (i) prescribed surface temperature (PST) and (ii) prescribed heat flux (PHF), the temperature distribution in the fluid has been considered. Suitable transformations associated with the velocity components and temperature have been employed to reduce the nonlinear model equations to a system of ordinary differential equations. The flow and temperature fields are revealed by solving these reduced nonlinear equations through an effective analytical method. The important findings in this analysis are the effects of the viscoelastic, cross-viscous, third grade fluid, and fourth grade fluid parameters on the constructed analytical expression for the velocity profile. Likewise, the heat transfer properties are studied for the Prandtl and Eckert numbers.
Detecting metrologically useful asymmetry and entanglement by a few local measurements
NASA Astrophysics Data System (ADS)
Zhang, Chao; Yadin, Benjamin; Hou, Zhi-Bo; Cao, Huan; Liu, Bi-Heng; Huang, Yun-Feng; Maity, Reevu; Vedral, Vlatko; Li, Chuan-Feng; Guo, Guang-Can; Girolami, Davide
2017-10-01
Important properties of a quantum system are not directly measurable, but they can be disclosed by how fast the system changes under controlled perturbations. In particular, asymmetry and entanglement can be verified by reconstructing the state of a quantum system. Yet, this usually requires experimental and computational resources which increase exponentially with the system size. Here we show how to detect metrologically useful asymmetry and entanglement by a limited number of measurements. This is achieved by studying how they affect the speed of evolution of a system under a unitary transformation. We show that the speed of multiqubit systems can be evaluated by measuring a set of local observables, providing exponential advantage with respect to state tomography. Indeed, the presented method requires neither the knowledge of the state and the parameter-encoding Hamiltonian nor global measurements performed on all the constituent subsystems. We implement the detection scheme in an all-optical experiment.
NASA Astrophysics Data System (ADS)
Chen, Siyu; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Han, Yu; Yan, Bin
2016-10-01
X-ray computed tomography (CT) has been extensively applied in industrial non-destructive testing (NDT). However, in practical applications, the polychromaticity of the X-ray beam often results in beam hardening problems for image reconstruction. The beam hardening artifacts, which manifest as cupping, streaks and flares, not only degrade the image quality but also disturb subsequent analyses. Unfortunately, conventional CT scanning requires that the scanned object be completely covered by the field of view (FOV); state-of-the-art beam hardening correction methods only consider the ideal scanning configuration and often suffer problems in interior tomography due to projection truncation. To address this problem, this paper proposes a beam hardening correction method based on the Radon inversion transform for interior tomography. Experimental results show that, compared to conventional correction algorithms, the proposed approach achieves excellent performance in both beam hardening artifact reduction and truncation artifact suppression. Therefore, the presented method is of both theoretical and practical significance for artifact correction in industrial CT.
Comparative study on thermodynamic characteristics of AgCuZnSn brazing alloys
NASA Astrophysics Data System (ADS)
Wang, Xingxing; Li, Shuai; Peng, Jin
2018-01-01
AgCuZnSn brazing alloys were prepared based on the BAg50CuZn filler metal through an electroplating diffusion process and a melt alloying method. The thermodynamics of phase transformations of these fillers were analyzed by non-isothermal differentiation and integration methods of thermal analysis kinetics. In this study, it was demonstrated that as the Sn content increased, the reaction fraction integral curves of the AgCuZnSn fillers from solid to liquid became straighter at the endothermic peak. At the same Sn content, the reaction fraction integral curve of the Sn-plated filler metal was straighter, and the phase transformation activation energy was higher, compared to the traditional silver filler metal. At 7.2 wt% Sn content, the activation energies and pre-exponential factors of the two fillers reached their maxima, and the phase transformation rate equations of the Sn-plated silver filler and the traditional filler were determined as k = 1.41 × 10^32 exp(−5.56 × 10^5/RT) and k = 7.29 × 10^20 exp(−3.64 × 10^5/RT), respectively.
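The quoted rate equations are of Arrhenius form k = A * exp(−Ea/(R*T)), so the phase-transformation rate at any temperature follows directly. A minimal sketch, assuming the activation energies are in J/mol and picking illustrative temperatures near a typical brazing range:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T)); Ea in J/mol, T in K."""
    return A * math.exp(-Ea / (R * T))

# Pre-exponential factors and activation energies quoted in the abstract
sn_plated   = (1.41e32, 5.56e5)   # Sn-plated silver filler
traditional = (7.29e20, 3.64e5)   # traditional filler

for name, (A, Ea) in [("Sn-plated", sn_plated), ("traditional", traditional)]:
    for T in (900.0, 1000.0, 1100.0):   # illustrative temperatures, K
        print(f"{name:12s} T = {T:.0f} K   k = {arrhenius(A, Ea, T):.3e}")
```

The higher activation energy of the Sn-plated filler makes its rate constant far more temperature-sensitive, which is the practical meaning of the comparison in the abstract.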
Algebraic approach to electronic spectroscopy and dynamics.
Toutounji, Mohamad
2008-04-28
Lie algebra, Zassenhaus, and parameter differentiation techniques are utilized to break up the exponential of a bilinear Hamiltonian operator into a product of noncommuting exponential operators, by virtue of the theory of Wei and Norman [J. Math. Phys. 4, 575 (1963); Proc. Am. Math. Soc. 15, 327 (1964)]. There are three different ways to find the Zassenhaus exponents, namely, binomial expansion, the Suzuki formula, and q-exponential transformation. A fourth, and most reliable, method is provided. Since the linearly displaced and distorted (curvature change upon excitation/emission) Hamiltonian and the spin-boson Hamiltonian may be classified as bilinear Hamiltonians, the presented algebraic algorithm (exponential operator disentanglement exploiting the six-dimensional Lie algebra case) should be useful in spin-boson problems. Only the linearly displaced and distorted Hamiltonian exponential is treated here. While the spin-boson model is used here only as a demonstration of the idea, the present approach is more general and powerful than the specific example treated. The optical linear dipole moment correlation function is algebraically derived using the above mentioned methods and coherent states. Coherent states are eigenvectors of the bosonic lowering operator a and not of the raising operator a†. While exp(a†) translates coherent states, the operation of exp(a†a†) on coherent states has always been a challenge, as a† has no eigenvectors. Three approaches to that operation, and the results, are provided. Linear absorption spectra are derived, calculated, and discussed. The linear dipole moment correlation function for the pure quadratic coupling case is expressed in terms of Legendre polynomials to better show the even vibronic transitions in the absorption spectrum. Comparison of the present line shapes to those calculated by other methods is provided.
Franck-Condon factors for both linear and quadratic couplings are exactly accounted for by the linear absorption spectra calculated here. This new methodology should easily pave the way to calculating the four-point correlation function, F(τ1, τ2, τ3, τ4), from which the optical nonlinear response function may be obtained, since evaluating F(τ1, τ2, τ3, τ4) amounts to evaluating the optical linear dipole moment correlation function iteratively over different time intervals, which should allow calculating various optical nonlinear temporal/spectral signals.
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
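The inverse transform method behind such generators is easy to sketch for the exponential variate: if U is uniform on (0,1), then X = −ln(1−U)/λ is exponentially distributed. This is a generic Python illustration of the technique, not the RANVAR BASIC listing itself.

```python
import math
import random

def exponential_variate(lam, u=None):
    """Inverse transform method: invert the CDF F(x) = 1 - exp(-lam*x),
    so X = -ln(1 - U) / lam with U ~ Uniform(0, 1)."""
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / lam

random.seed(1)
lam = 2.0
samples = [exponential_variate(lam) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(f"sample mean {mean:.3f} vs theoretical 1/lambda = {1 / lam:.3f}")
```

The same recipe works for any distribution with an invertible CDF, which is why a short program can cover several of the variates listed above.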
Deterministic theory of Monte Carlo variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueki, T.; Larsen, E.W.
1996-12-31
The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
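A toy sketch of the exponential transform as a variance-reduction device (not Dwivedi's full angular-biasing scheme): estimate the probability that an exponentially distributed free path exceeds a slab thickness d, once by analog sampling and once by sampling from an exponentially stretched path-length density while carrying the Monte Carlo weight w. All numbers are illustrative assumptions.

```python
import math
import random

random.seed(42)
sigma, d, n = 1.0, 5.0, 50_000
exact = math.exp(-sigma * d)          # P(free path > d) for rate sigma

# Analog Monte Carlo: score 1 if the sampled path penetrates the slab
analog = [1.0 if random.expovariate(sigma) > d else 0.0 for _ in range(n)]

# Exponential transform: sample from a smaller rate sigma* (longer paths),
# with weight w = (sigma/sigma*) * exp(-(sigma - sigma*) * x) to stay unbiased
sigma_star = 0.2
biased = []
for _ in range(n):
    x = random.expovariate(sigma_star)
    w = (sigma / sigma_star) * math.exp(-(sigma - sigma_star) * x)
    biased.append(w if x > d else 0.0)

def mean_var(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, v

m_a, v_a = mean_var(analog)
m_b, v_b = mean_var(biased)
print(f"exact {exact:.5f}  analog {m_a:.5f} (var {v_a:.2e})  "
      f"transformed {m_b:.5f} (var {v_b:.2e})")
```

Both estimators are unbiased, but the transformed one pushes samples into the rare penetrating region and compensates with the weight, cutting the per-sample variance substantially for deep penetration.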
Innovation Symposium 2017: Transforming the Organization
2017-05-31
ones. Innovation can also take the form of discontinuing an inefficient or out-of-date service, system, or process.3 An Exponential Organization...compelling reasons for this goes beyond the burning of fossil fuels to the fact that an electric car has 90 percent fewer parts, making it exponentially...winner, in a showdown of trivial expertise. In 2016, Watson diagnosed a woman’s rare form of leukemia in just 10 minutes after doctors had spent
Transient photoresponse in amorphous In-Ga-Zn-O thin films under stretched exponential analysis
NASA Astrophysics Data System (ADS)
Luo, Jiajun; Adler, Alexander U.; Mason, Thomas O.; Bruce Buchholz, D.; Chang, R. P. H.; Grayson, M.
2013-04-01
We investigated transient photoresponse and Hall effect in amorphous In-Ga-Zn-O thin films and observed a stretched exponential response which allows characterization of the activation energy spectrum with only three fit parameters. Measurements of as-grown films and 350 K annealed films were conducted at room temperature by recording conductivity, carrier density, and mobility over day-long time scales, both under illumination and in the dark. Hall measurements verify approximately constant mobility, even as the photoinduced carrier density changes by orders of magnitude. The transient photoconductivity data fit well to a stretched exponential during both illumination and dark relaxation, but with slower response in the dark. The inverse Laplace transforms of these stretched exponentials yield the density of activation energies responsible for transient photoconductivity. An empirical equation is introduced, which determines the linewidth of the activation energy band from the stretched exponential parameter β. Dry annealing at 350 K is observed to slow the transient photoresponse.
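The three-parameter stretched-exponential description can be illustrated on synthetic data: for a decay n(t) = n0 * exp(−(t/τ)^β), the standard double-log linearization ln(−ln(n/n0)) = β ln(t) − β ln(τ) reduces the fit to a straight line. The film parameters below are hypothetical, not measured values from the paper.

```python
import numpy as np

# Hypothetical stretched-exponential relaxation parameters (seconds, unitless)
true_tau, true_beta, n0 = 3.0e3, 0.6, 1.0
t = np.logspace(1, 5, 60)                     # 10 s .. 1e5 s
n = n0 * np.exp(-(t / true_tau) ** true_beta)

# Double-log linearization: ln(-ln(n/n0)) is linear in ln(t) with slope beta
y = np.log(-np.log(n / n0))
beta_fit, c = np.polyfit(np.log(t), y, 1)
tau_fit = np.exp(-c / beta_fit)               # intercept c = -beta * ln(tau)

print(f"beta = {beta_fit:.3f}, tau = {tau_fit:.1f} s")
```

On real (noisy) data one would fit the nonlinear form directly, but the linearized plot remains a quick diagnostic of whether a single stretched exponential describes the transient at all.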
The many faces of the quantum Liouville exponentials
NASA Astrophysics Data System (ADS)
Gervais, Jean-Loup; Schnittger, Jens
1994-01-01
First, it is proven that the three main operator approaches to the quantum Liouville exponentials—that is, the one of Gervais-Neveu (more recently developed further by Gervais), Braaten-Curtright-Ghandour-Thorn, and Otto-Weigt—are equivalent, since they are related by simple basis transformations in the Fock space of the free field depending upon the zero mode only. Second, the GN-G expressions for quantum Liouville exponentials, where the U_q(sl(2)) quantum-group structure is manifest, are shown to be given by q-binomial sums over powers of the chiral fields in the J = 1/2 representation. Third, the Liouville exponentials are expressed as operator tau functions, whose chiral expansion exhibits a q-Gauss decomposition, which is the direct quantum analogue of the classical solution of Leznov and Saveliev. It involves q-exponentials of quantum-group generators with group "parameters" equal to chiral components of the quantum metric. Fourth, we point out that the OPE of the J = 1/2 Liouville exponential provides the quantum version of the Hirota bilinear equation.
Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan
2018-02-01
X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fails. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
High Efficiency Transformation of Cultured Tobacco Cells 1
An, Gynheung
1985-01-01
Tobacco calli were transformed at levels up to 50% by cocultivation of tobacco cultured cells with Agrobacterium tumefaciens harboring the binary transfer-DNA vector, pGA472, containing a kanamycin resistance marker. Transformation frequency was dependent on the physiological state of the tobacco cells, the nature of the Agrobacterium strain and, less so, on the expression of the vir genes of the tumor-inducing plasmid. Maximum transformation frequency was obtained with exponentially growing plant cells, suggesting that rapid growth of plant cells is an essential factor for efficient transformation of higher plants. PMID:16664453
An implicit fast Fourier transform method for integration of the time dependent Schrodinger equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riley, M.E.; Ritchie, A.B.
1997-12-31
One finds that the conventional exponentiated split operator procedure is subject to difficulties when solving the time-dependent Schrodinger equation for Coulombic systems. By rearranging the kinetic and potential energy terms in the temporal propagator of the finite difference equations, one can find a propagation algorithm for three dimensions that looks much like the Crank-Nicolson and alternating direction implicit methods for one- and two-space-dimensional partial differential equations. The authors report investigations of this novel implicit split operator procedure. The results look promising for a purely numerical approach to certain electron quantum mechanical problems. A charge exchange calculation is presented as an example of the power of the method.
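The Crank-Nicolson idea the authors build on can be sketched in one space dimension: the Cayley form of the propagator, U = (I + i dt/2 H)^(-1)(I − i dt/2 H), is unitary for Hermitian H, so the wavefunction norm is conserved exactly. The grid, potential, and time step below are illustrative choices, not the paper's.

```python
import numpy as np

nx, dx, dt = 400, 0.1, 0.01                      # grid and step (atomic units)
x = (np.arange(nx) - nx // 2) * dx
V = 0.5 * x**2                                   # illustrative harmonic well

# Hamiltonian H = -0.5 d^2/dx^2 + V via central differences
H = np.diag(1.0 / dx**2 + V)
off = -0.5 / dx**2 * np.ones(nx - 1)
H += np.diag(off, 1) + np.diag(off, -1)

# Crank-Nicolson (Cayley) propagator, built once and reused each step
I = np.eye(nx)
U = np.linalg.solve(I + 0.5j * dt * H, I - 0.5j * dt * H)

psi = np.exp(-0.5 * (x - 1.0) ** 2).astype(complex)   # displaced Gaussian
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)         # normalize

for _ in range(100):                              # 100 implicit time steps
    psi = U @ psi

norm = np.sum(np.abs(psi) ** 2) * dx
print(f"norm after 100 steps: {norm:.12f}")
```

In production codes one exploits the tridiagonal structure instead of forming a dense propagator, and the alternating-direction idea mentioned above extends this step to two and three dimensions.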
Belkić, Dzevad
2006-12-21
This study deals with the most challenging numerical aspect of solving the quantification problem in magnetic resonance spectroscopy (MRS). The primary goal is to investigate whether it is feasible to carry out a rigorous computation within finite arithmetic to reconstruct exactly all the machine-accurate input spectral parameters of every resonance from a synthesized noiseless time signal. We also consider simulated time signals embedded in random Gaussian distributed noise of a level comparable to the weakest resonances in the corresponding spectrum. The present choice for this high-resolution task in MRS is the fast Padé transform (FPT). All the sought spectral parameters (complex frequencies and amplitudes) can unequivocally be reconstructed from a given input time signal by using the FPT. Moreover, the present computations demonstrate that the FPT can achieve spectral convergence, which represents an exponential convergence rate as a function of the signal length for a fixed bandwidth. Such an extraordinary feature equips the FPT with exemplary high-resolution capabilities that are, in fact, theoretically unlimited. This is illustrated in the present study by the exact reconstruction (within machine accuracy) of all the spectral parameters from an input time signal comprised of 25 harmonics, i.e. complex damped exponentials, including those for tightly overlapped and nearly degenerate resonances whose chemical shifts differ by an exceedingly small fraction of only 10^(-11) ppm. Moreover, without exhausting even a quarter of the full signal length, the FPT is shown to retrieve exactly all the input spectral parameters defined with 12 digits of accuracy. Specifically, we demonstrate that when the FPT is close to the convergence region, an unprecedented phase transition occurs, since literally a few additional signal points are sufficient to reach the full 12-digit accuracy with an exponentially fast rate of convergence.
This is the critical proof-of-principle for the high-resolution power of the FPT for machine accurate input data. Furthermore, it is proven that the FPT is also a highly reliable method for quantifying noise-corrupted time signals reminiscent of those encoded via MRS in clinical neuro-diagnostics.
A Model and a Metric for the Analysis of Status Attainment Processes. Discussion Paper No. 492-78.
ERIC Educational Resources Information Center
Sorensen, Aage B.
This paper proposes a theory of the status attainment process, and specifies it in a mathematical model. The theory justifies a transformation of the conventional status scores to a metric that produces an exponential distribution of attainments, and a transformation of educational attainments to a metric that reflects the competitive advantage…
NASA Astrophysics Data System (ADS)
Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping
2017-11-01
A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The novel solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several examples of applications are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficient. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.
NASA Astrophysics Data System (ADS)
Qin, Zhang-jian; Chen, Chuan; Luo, Jun-song; Xie, Xing-hong; Ge, Liang-quan; Wu, Qi-fan
2018-04-01
In the development of nuclear spectroscopy, it is usual practice to improve spectrum quality by designing a good shaping filter to increase the signal-to-noise ratio. Another method is proposed in this paper, based on discriminating pulse shape and discarding bad pulses whose shape is distorted as a result of abnormal noise, unusual ballistic deficit or severe pulse pile-up. An Exponentially Decaying Pulse (EDP) generated in a nuclear particle detector can be transformed into a Mexican Hat Wavelet Pulse (MHWP), and the derivation of the transform is given. After the transform is performed, the baseline drift is removed in the new MHWP. Moreover, the MHWP shape can be discriminated with three parameters: the time difference between the two minima of the MHWP, and the two ratios of the amplitudes of the two minima to the amplitude of the maximum in the MHWP. A new type of nuclear spectroscopy system was implemented based on the new digital shaping filter, and Gamma-ray spectra were acquired with a variety of pulse-shape discrimination levels. The results show that the energy resolution and the peak-to-Compton ratio were both improved after the pulse-shape discrimination method was used.
Research on Palmprint Identification Method Based on Quantum Algorithms
Zhang, Zhanzhan
2014-01-01
Quantum image recognition is a technology that uses quantum algorithms to process image information. It can obtain a better effect than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm gets a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation due to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
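The square-root-of-N query count of Grover's algorithm can be checked with a small statevector simulation: after about (π/4)√N oracle calls, nearly all amplitude concentrates on the marked item (a classical scan needs ~N lookups). N and the marked index below are arbitrary illustrative choices.

```python
import numpy as np

# Statevector simulation of Grover's search on N items
N, marked = 16, 11
psi = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~ sqrt(N) oracle calls
for _ in range(iterations):
    psi[marked] *= -1.0                     # oracle: flip the marked amplitude
    psi = 2.0 * psi.mean() - psi            # diffusion: inversion about the mean

p_success = psi[marked] ** 2
print(f"{iterations} Grover iterations, P(marked) = {p_success:.3f}")
```

For N = 16 this is 3 iterations versus up to 16 classical lookups; the gap grows as √N for larger databases, which is the speed-up claimed in the abstract.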
Calculation of Rate Spectra from Noisy Time Series Data
Voelz, Vincent A.; Pande, Vijay S.
2011-01-01
As the resolution of experiments to measure folding kinetics continues to improve, it has become imperative to avoid bias that may come with fitting data to a predetermined mechanistic model. Towards this end, we present a rate spectrum approach to analyze timescales present in kinetic data. Computing rate spectra of noisy time series data via numerical discrete inverse Laplace transform is an ill-conditioned inverse problem, so a regularization procedure must be used to perform the calculation. Here, we show the results of different regularization procedures applied to noisy multi-exponential and stretched exponential time series, as well as data from time-resolved folding kinetics experiments. In each case, the rate spectrum method recapitulates the relevant distribution of timescales present in the data, with different priors on the rate amplitudes naturally corresponding to common biases toward simple phenomenological models. These results suggest an attractive alternative to the “Occam’s razor” philosophy of simply choosing models with the fewest number of relaxation rates. PMID:22095854
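A sketch of the regularized rate-spectrum calculation: fit y(t) = Σ_j x_j exp(−k_j t) with x_j ≥ 0 on a fixed grid of candidate rates, using a Tikhonov penalty and a FISTA-style projected-gradient solver. This is a generic stand-in for the inverse-Laplace step, not the authors' exact regularization procedure; data and the regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.01, 10.0, 200)
y = 1.0 * np.exp(-0.5 * t) + 0.5 * np.exp(-5.0 * t)   # two "true" rates
y += rng.normal(0.0, 0.002, t.size)                   # small additive noise

k_grid = np.logspace(-2, 2, 80)                       # candidate rate grid
K = np.exp(-np.outer(t, k_grid))                      # discrete Laplace kernel

# Minimize 0.5*||K x - y||^2 + 0.5*lam*||x||^2 subject to x >= 0 (FISTA)
lam = 1e-3
L = np.linalg.norm(K, 2) ** 2 + lam                   # Lipschitz constant
x = np.zeros(k_grid.size)
z = x.copy()
tk = 1.0
for _ in range(5000):
    grad = K.T @ (K @ z - y) + lam * z
    x_new = np.maximum(z - grad / L, 0.0)             # project onto x >= 0
    tk_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * tk * tk))
    z = x_new + ((tk - 1.0) / tk_new) * (x_new - x)   # momentum step
    x, tk = x_new, tk_new

resid = np.linalg.norm(K @ x - y) / np.linalg.norm(y)
print(f"relative residual {resid:.4f}")
```

The recovered x is a discrete rate spectrum; different priors (Tikhonov here, entropy or smoothness elsewhere) correspond to the regularization choices discussed in the abstract.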
NASA Astrophysics Data System (ADS)
Nur Wahida Khalili, Noran; Aziz Samson, Abdul; Aziz, Ahmad Sukri Abdul; Ali, Zaileha Md
2017-09-01
In this study, the problem of MHD boundary layer flow past an exponentially stretching sheet with chemical reaction and radiation effects and a heat sink is studied. The governing system of PDEs is transformed into a system of ODEs. Then, the system is solved numerically using the Runge-Kutta-Fehlberg fourth-fifth order (RKF45) method available in MAPLE 15 software. The numerical results obtained are presented graphically for the velocity, temperature and concentration. The effects of various parameters are studied and analyzed. The numerical values for the local Nusselt number, skin friction coefficient and local Sherwood number are tabulated and discussed. The study shows that various parameters have a significant effect on the profiles of the fluid flow. It is observed that the reaction rate parameter affects the concentration profiles significantly, and the concentration boundary layer thickness decreases as the reaction rate parameter increases. The analysis is validated by comparison with results from previous work and is found to be in good agreement.
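The numerical strategy (integrate the transformed boundary-value ODEs by shooting on the unknown wall derivatives) can be illustrated on the classical Blasius equation f''' + 0.5 f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1, a much simpler stand-in for the paper's MHD system; RK4 plus bisection replaces Maple's RKF45 here.

```python
import numpy as np

def shoot(s, eta_max=10.0, n=2000):
    """Integrate the Blasius system with f''(0) = s by RK4; return f'(eta_max)."""
    h = eta_max / n
    y = np.array([0.0, 0.0, s])                   # [f, f', f'']
    def deriv(y):
        return np.array([y[1], y[2], -0.5 * y[0] * y[2]])
    for _ in range(n):
        k1 = deriv(y)
        k2 = deriv(y + 0.5 * h * k1)
        k3 = deriv(y + 0.5 * h * k2)
        k4 = deriv(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[1]                                   # f'(eta_max), target value 1

lo, hi = 0.1, 1.0                                 # bracket for the wall shear f''(0)
for _ in range(40):                               # bisection on f'(inf) - 1
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid

f2_wall = 0.5 * (lo + hi)
print(f"f''(0) = {f2_wall:.6f}")                  # known value is about 0.332057
```

The MHD problem adds magnetic, reaction and radiation parameters to the right-hand sides and shoots on several wall derivatives at once, but the structure of the computation is the same.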
NASA Astrophysics Data System (ADS)
Jaradat, H. M.; Syam, Muhammed; Jaradat, M. M. M.; Mustafa, Zead; Moman, S.
2018-03-01
In this paper, we investigate the multiple soliton solutions and multiple singular soliton solutions of a class of fifth order nonlinear evolution equations with time-dependent coefficients, using the simplified bilinear method based on a transformation combined with Hirota's bilinear sense. In addition, we present an analysis of some parameters such as the soliton amplitude and the characteristic line. Several equations in the literature, such as the Caudrey-Dodd-Gibbon and Sawada-Kotera equations, are special cases of the class we discuss. Comparisons with several methods in the literature, such as the Helmholtz solution of the inverse variational problem, the rational exponential function method, the tanh method, the homotopy perturbation method, the exp-function method, and the coth method, are made. From these comparisons, we conclude that the proposed method is efficient and our solutions are correct. It is worth mentioning that the proposed method can solve many physical problems.
NASA Astrophysics Data System (ADS)
Monovasilis, Theodore; Kalogiratou, Zacharoula; Simos, T. E.
2014-10-01
In this work we derive exponentially fitted symplectic Runge-Kutta-Nyström (RKN) methods from symplectic exponentially fitted partitioned Runge-Kutta (PRK) methods (for the approximate solution of general problems of this category see [18]-[40] and references therein). We construct RKN methods from PRK methods with up to five stages and fourth algebraic order.
Zhang, Ling
2017-01-01
The main purpose of this paper is to investigate the strong convergence and exponential mean-square stability of the exponential Euler method applied to semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation converges to the analytic solution of SLSDDEs with strong order [Formula: see text]. On the one hand, the classical stability theorem for SLSDDEs is given by Lyapunov functions; in this paper, however, we study the exponential mean-square stability of the exact solution to SLSDDEs by using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size; in this article we show, by the property of the logarithmic norm, that the explicit exponential Euler method for SLSDDEs shares the same stability for any step size.
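A schematic one-step sketch of an exponential Euler scheme for a semi-linear SDDE of the form dX(t) = [aX(t) + f(X(t - tau))]dt + g(X(t - tau))dW(t): the linear part is integrated exactly through exp(a*h), while the delayed drift and diffusion are frozen over the step. The function names and the exact placement of the exponential factor are illustrative, not necessarily the paper's scheme:

```python
import numpy as np

def exponential_euler(a, f, g, x0, tau, T, h, rng=None):
    # Scalar semi-linear SDDE with constant initial history x0 on [-tau, 0].
    # One variant of the exponential Euler step:
    #   X_{n+1} = exp(a*h) * (X_n + f(X_{n-m})*h + g(X_{n-m})*dW_n)
    rng = rng or np.random.default_rng(0)
    m = int(round(tau / h))           # delay measured in steps
    n = int(round(T / h))
    x = np.empty(n + 1)
    x[0] = x0
    E = np.exp(a * h)                 # exact factor for the linear part
    for k in range(n):
        xd = x[max(k - m, 0)]         # delayed state (history = x0 for t < 0)
        dW = np.sqrt(h) * rng.standard_normal()
        x[k + 1] = E * (x[k] + f(xd) * h + g(xd) * dW)
    return x
```

With the delayed drift and noise switched off, the scheme reproduces the exact exponential decay of the linear part for any step size, which is the intuition behind its unconditional mean-square stability.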
NASA Astrophysics Data System (ADS)
Sravanthi, C. S.; Gorla, R. S. R.
2018-02-01
The aim of this paper is to study the effects of chemical reaction and heat source/sink on a steady MHD (magnetohydrodynamic) two-dimensional mixed convective boundary layer flow of a Maxwell nanofluid over a porous exponentially stretching sheet in the presence of suction/blowing. Convective boundary conditions of temperature and nanoparticle concentration are employed in the formulation. Similarity transformations are used to convert the governing partial differential equations into non-linear ordinary differential equations. The resulting non-linear system has been solved analytically using an efficient technique, namely: the homotopy analysis method (HAM). Expressions for velocity, temperature and nanoparticle concentration fields are developed in series form. Convergence of the constructed solution is verified. A comparison is made with the available results in the literature and our results are in very good agreement with the known results. The obtained results are presented through graphs for several sets of values of the parameters and salient features of the solutions are analyzed. Numerical values of the local skin-friction, Nusselt number and nanoparticle Sherwood number are computed and analyzed.
Difference in Dwarf Galaxy Surface Brightness Profiles as a Function of Environment
NASA Astrophysics Data System (ADS)
Lee, Youngdae; Park, Hong Soo; Kim, Sang Chul; Moon, Dae-Sik; Lee, Jae-Joon; Kim, Dong-Jin; Cha, Sang-Mok
2018-05-01
We investigate surface brightness profiles (SBPs) of dwarf galaxies in field, group, and cluster environments. With deep BV I images from the Korea Microlensing Telescope Network Supernova Program, SBPs of 38 dwarfs in the NGC 2784 group are fitted by a single-exponential or double-exponential model. We find that 53% of the dwarfs are fitted with single-exponential profiles (“Type I”), while 47% of the dwarfs show double-exponential profiles; 37% of all dwarfs have smaller sizes for the outer part than the inner part (“Type II”), while 10% have a larger outer than inner part (“Type III”). We compare these results with those in the field and in the Virgo cluster, where the SBP types of 102 field dwarfs are compiled from a previous study and the SBP types of 375 cluster dwarfs are measured using SDSS r-band images. As a result, the distributions of SBP types are different in the three environments. Common SBP types for the field, the NGC 2784 group, and the Virgo cluster are Type II, Type I and II, and Type I and III profiles, respectively. After comparing the sizes of dwarfs in different environments, we suggest that since the sizes of some dwarfs are changed due to environmental effects, SBP types are capable of being transformed and the distributions of SBP types in the three environments are different. We discuss possible environmental mechanisms for the transformation of SBP types. Based on data collected at KMTNet Telescopes and SDSS.
Direct measurement of cyclotron coherence times of high-mobility two-dimensional electron gases.
Wang, X; Hilton, D J; Reno, J L; Mittleman, D M; Kono, J
2010-06-07
We have observed long-lived (approximately 30 ps) coherent oscillations of charge carriers due to cyclotron resonance (CR) in high-mobility two-dimensional electrons in GaAs in perpendicular magnetic fields using time-domain terahertz spectroscopy. The observed coherent oscillations were fitted well by sinusoids with exponentially-decaying amplitudes, through which we were able to provide direct and precise measures for the decay times and oscillation frequencies simultaneously. This method thus overcomes the CR saturation effect, which is known to prevent determination of true CR linewidths in high-mobility electron systems using Fourier-transform infrared spectroscopy.
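The fitting step described above, extracting a decay time and oscillation frequency simultaneously from a damped sinusoid, can be sketched with a least-squares fit on synthetic data (the amplitude, 30 ps decay time, and frequency below are illustrative stand-ins for the measured time-domain signal):

```python
import numpy as np
from scipy.optimize import curve_fit

# Exponentially decaying sinusoid: the model used to fit coherent
# cyclotron oscillations; A, tau, f, phi are all free parameters.
def damped_sine(t, A, tau, f, phi):
    return A * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 2000)                       # time axis, "ps"
y = damped_sine(t, 1.0, 30.0, 0.35, 0.2) \
    + 0.01 * rng.standard_normal(t.size)                # synthetic signal

# Nonlinear least squares recovers decay time and frequency together,
# avoiding the linewidth ambiguities of a frequency-domain fit.
popt, _ = curve_fit(damped_sine, t, y, p0=[1.0, 20.0, 0.3, 0.0])
A_fit, tau_fit, f_fit, phi_fit = popt
```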
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan
X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe the dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. This paper proposes an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. In conclusion, using XPCS data measured from colloidal gels, it is demonstrated that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
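The inverse-transform idea can be sketched as a non-negative least-squares decomposition of a multi-exponential decay onto a log-spaced grid of candidate decay times; CONTIN adds a smoothness regularizer on top of this, which the bare sketch below omits, and the data here are synthetic rather than measured correlation functions:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic bimodal decay: equal weight on decay times 1 and 10,
# standing in for a heterogeneous correlation function.
t = np.linspace(0.05, 50.0, 400)
g = 0.5 * np.exp(-t / 1.0) + 0.5 * np.exp(-t / 10.0)

taus = np.logspace(-1, 2, 31)                # candidate decay times
G = np.exp(-t[:, None] / taus[None, :])      # exponential "design matrix"
w, resid = nnls(G, g)                        # non-negative weight spectrum
```

On noiseless data the weight spectrum concentrates near the two true time scales, which is exactly the "hidden multimodal dynamics" a single stretched-exponential fit would blur together.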
NASA Astrophysics Data System (ADS)
Bostrom, G.; Atkinson, D.; Rice, A.
2015-04-01
Cavity ringdown spectroscopy (CRDS) uses the exponential decay constant of light exiting a high-finesse resonance cavity to determine analyte concentration, typically via absorption. We present a high-throughput data acquisition system that determines the decay constant in near real time using the discrete Fourier transform algorithm on a field programmable gate array (FPGA). A commercially available, high-speed, high-resolution, analog-to-digital converter evaluation board system is used as the platform for the system, after minor hardware and software modifications. The system outputs decay constants at maximum rate of 4.4 kHz using an 8192-point fast Fourier transform by processing the intensity decay signal between ringdown events. We present the details of the system, including the modifications required to adapt the evaluation board to accurately process the exponential waveform. We also demonstrate the performance of the system, both stand-alone and incorporated into our existing CRDS system. Details of FPGA, microcontroller, and circuitry modifications are provided in the Appendix and computer code is available upon request from the authors.
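The core numerical idea, recovering an exponential decay constant from a Fourier transform of the ringdown, can be illustrated in a few lines. For y(t) = exp(-gamma*t) the continuous transform is 1/(gamma + i*omega), so gamma = -omega*Re(Y)/Im(Y) at any frequency; a single DFT bin of a well-sampled record approximates this up to O(dt) bias (the published FPGA pipeline applies further corrections). Parameter values below are illustrative:

```python
import numpy as np

def decay_rate_from_fft(y, dt, k=1):
    # Estimate gamma from one DFT bin of an exponential decay record.
    Y = np.fft.rfft(y)
    omega = 2.0 * np.pi * k / (len(y) * dt)
    return -omega * Y[k].real / Y[k].imag

gamma_true = 2.0
dt, n = 1e-4, 100_000                 # record spans ~20 decay times
t = np.arange(n) * dt
gamma_est = decay_rate_from_fft(np.exp(-gamma_true * t), dt)
```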
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
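The thin-layer approximations mentioned above can be sketched with a toy two-stream system dI/dtau = A I, whose layer propagator is the matrix exponential expm(A*tau); scipy's expm uses a Padé approximation internally, which we compare against a truncated Taylor series (the matrix A is illustrative, not a transfer matrix from the paper):

```python
import numpy as np
from scipy.linalg import expm

# Toy two-stream transfer matrix (upwelling/downwelling coupling);
# values are illustrative only.
A = np.array([[-1.0, 0.3],
              [-0.3, 1.0]])
tau = 0.05                                  # optically thin layer

def taylor_expm(M, terms=8):
    # Truncated Taylor series exp(M) = sum_k M^k / k!
    out, term = np.eye(len(M)), np.eye(len(M))
    for j in range(1, terms):
        term = term @ M / j
        out = out + term
    return out

P_pade = expm(A * tau)                      # scipy: Pade-based propagator
P_taylor = taylor_expm(A * tau)             # series-based propagator
```

For small ||A*tau|| the two agree to near machine precision, which is why both are viable for optically thin layers.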
Ye, Jun
2016-01-01
An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set, and then the characteristics of INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper firstly introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selecting problem of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.
Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook
2015-01-01
Discrete survival data are routinely encountered in many fields of study including behavior science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in details to further demonstrate the proposed methodology. PMID:25772374
Tensor tomography on Cartan–Hadamard manifolds
NASA Astrophysics Data System (ADS)
Lehtonen, Jere; Railo, Jesse; Salo, Mikko
2018-04-01
We study the geodesic x-ray transform on Cartan–Hadamard manifolds, generalizing the x-ray transforms on Euclidean and hyperbolic spaces that arise in medical and seismic imaging. We prove solenoidal injectivity of this transform acting on functions and tensor fields of any order. The functions are assumed to be exponentially decaying if the sectional curvature is bounded, and polynomially decaying if the sectional curvature decays at infinity. This work extends the results of Lehtonen (2016 arXiv:1612.04800) to dimensions n ≥ 3 and to the case of tensor fields of any order.
Exponential integrators in time-dependent density-functional calculations
NASA Astrophysics Data System (ADS)
Kidd, Daniel; Covington, Cody; Varga, Kálmán
2017-12-01
The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods is less enhanced, but they still match or outperform the best of the conventional methods tested.
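The integrating-factor idea can be shown on a minimal scalar analogue of a stiff linear-plus-nonlinear problem, y' = i*omega*y - i*|y|^2*y (a zero-dimensional cousin of the nonlinear Schrödinger right-hand side; parameters are illustrative). The substitution u = exp(-i*omega*t)*y removes the stiff linear term exactly, leaving only u' = -i*|u|^2*u to step with Euler:

```python
import numpy as np

omega, h, n = 50.0, 1e-3, 1000
y0 = 1.0 + 0.0j
# Exact solution: amplitude is conserved, phase rotates at omega - |y0|^2.
exact = y0 * np.exp(1j * (omega - abs(y0) ** 2) * n * h)

# Plain forward Euler on the full stiff right-hand side.
y = y0
for _ in range(n):
    y = y + h * (1j * omega * y - 1j * abs(y) ** 2 * y)
err_euler = abs(y - exact)

# Integrating factor: u = exp(-i*omega*t) * y obeys u' = -i*|u|^2*u,
# so only the slow nonlinear part is stepped with Euler.
u = y0
for _ in range(n):
    u = u + h * (-1j * abs(u) ** 2 * u)
err_if = abs(u * np.exp(1j * omega * n * h) - exact)
```

Treating the stiff linear part exactly through the exponential factor is what buys the orders-of-magnitude accuracy gain for nonlinearity-driven dynamics.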
Statistical transformation and the interpretation of inpatient glucose control data.
Saulnier, George E; Castro, Janna C; Cook, Curtiss B
2014-03-01
To introduce a statistical method of assessing hospital-based non-intensive care unit (non-ICU) inpatient glucose control. Point-of-care blood glucose (POC-BG) data from hospital non-ICUs were extracted for January 1 through December 31, 2011. Glucose data distribution was examined before and after Box-Cox transformations and compared to normality. Different subsets of data were used to establish upper and lower control limits, and exponentially weighted moving average (EWMA) control charts were constructed from June, July, and October data as examples to determine if out-of-control events were identified differently in nontransformed versus transformed data. A total of 36,381 POC-BG values were analyzed. In all 3 monthly test samples, glucose distributions in nontransformed data were skewed but approached a normal distribution once transformed. Interpretation of out-of-control events from EWMA control chart analyses also revealed differences. In the June test data, an out-of-control process was identified at sample 53 with nontransformed data, whereas the transformed data remained in control for the duration of the observed period. Analysis of July data demonstrated an out-of-control process sooner in the transformed (sample 55) than nontransformed (sample 111) data, whereas for October, transformed data remained in control longer than nontransformed data. Statistical transformations increase the normal behavior of inpatient non-ICU glycemic data sets. The decision to transform glucose data could influence the interpretation and conclusions about the status of inpatient glycemic control. Further study is required to determine whether transformed versus nontransformed data influence clinical decisions or evaluation of interventions.
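The two steps described, a maximum-likelihood Box-Cox transform followed by an EWMA control chart on the transformed series, can be sketched on synthetic data (lognormal draws stand in for POC-BG readings; the EWMA weight 0.2 is a common textbook choice, not the paper's):

```python
import numpy as np
from scipy import stats

# Synthetic right-skewed "glucose" values (mg/dL scale is illustrative).
rng = np.random.default_rng(7)
glucose = rng.lognormal(mean=5.0, sigma=0.3, size=2000)

# Box-Cox with lmbda=None picks lambda by maximum likelihood;
# for lognormal-like data the estimate sits near 0 (the log transform).
transformed, lam = stats.boxcox(glucose)

def ewma(x, w=0.2):
    # Exponentially weighted moving average: z_i = w*x_i + (1-w)*z_{i-1}.
    z = np.empty_like(x)
    z[0] = x[0]
    for i in range(1, len(x)):
        z[i] = w * x[i] + (1.0 - w) * z[i - 1]
    return z

chart = ewma(transformed)
```

Control limits would then be drawn around `chart` under near-normal assumptions, which is exactly why transforming first matters.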
Harrison, John A
2008-09-04
RHF/aug-cc-pVnZ, UHF/aug-cc-pVnZ, and QCISD/aug-cc-pVnZ, n = 2-5, potential energy curves of H2 X ¹Σg⁺ are analyzed by Fourier transform methods after transformation to a new coordinate system via an inverse hyperbolic cosine coordinate mapping. The Fourier frequency domain spectra are interpreted in terms of underlying mathematical behavior giving rise to distinctive features. There is a clear difference between the underlying mathematical nature of the potential energy curves calculated at the HF and full-CI levels. The method is particularly suited to the analysis of potential energy curves obtained at the highest levels of theory because the Fourier spectra are observed to be of a compact nature, with the envelope of the Fourier frequency coefficients decaying in magnitude in an exponential manner. The finite number of Fourier coefficients required to describe the CI curves allows for an optimum sampling strategy to be developed, corresponding to that required for exponential and geometric convergence. The underlying random numerical noise due to the finite convergence criterion is also a clearly identifiable feature in the Fourier spectrum. The methodology is applied to the analysis of MRCI potential energy curves for the ground and first excited states of HX (X = H-Ne). All potential energy curves exhibit structure in the Fourier spectrum consistent with the existence of resonances. The compact nature of the Fourier spectra following the inverse hyperbolic cosine coordinate mapping is highly suggestive that there is some advantage in viewing the chemical bond as having an underlying hyperbolic nature.
A quasiparticle-based multi-reference coupled-cluster method.
Rolik, Zoltán; Kállay, Mihály
2014-10-07
The purpose of this paper is to introduce a quasiparticle-based multi-reference coupled-cluster (MRCC) approach. The quasiparticles are introduced via a unitary transformation which allows us to represent a complete active space reference function and other elements of an orthonormal multi-reference (MR) basis in a determinant-like form. The quasiparticle creation and annihilation operators satisfy the fermion anti-commutation relations. On the basis of these quasiparticles, a generalization of the normal-ordered operator products for the MR case can be introduced as an alternative to the approach of Mukherjee and Kutzelnigg [Recent Prog. Many-Body Theor. 4, 127 (1995); Mukherjee and Kutzelnigg, J. Chem. Phys. 107, 432 (1997)]. Based on the new normal ordering any quasiparticle-based theory can be formulated using the well-known diagram techniques. Beyond the general quasiparticle framework we also present a possible realization of the unitary transformation. The suggested transformation has an exponential form where the parameters, holding exclusively active indices, are defined in a form similar to the wave operator of the unitary coupled-cluster approach. The definition of our quasiparticle-based MRCC approach strictly follows the form of the single-reference coupled-cluster method and retains several of its beneficial properties. Test results for small systems are presented using a pilot implementation of the new approach and compared to those obtained by other MR methods.
Discrete sudden perturbation theory for inelastic scattering. I. Quantum and semiclassical treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cross, R.J.
1985-12-01
A double perturbation theory is constructed to treat rotationally and vibrationally inelastic scattering. It uses both the elastic scattering from the spherically averaged potential and the infinite-order sudden (IOS) approximation as the unperturbed solutions. First, a standard perturbation expansion is done to express the radial wave functions in terms of the elastic wave functions. The resulting coupled equations are transformed to the discrete-variable representation where the IOS equations are diagonal. Then, the IOS solutions are removed from the equations which are solved by an exponential perturbation approximation. The results for Ar+N2 are very much more accurate than the IOS and somewhat more accurate than a straight first-order exponential perturbation theory. The theory is then converted into a semiclassical, time-dependent form by using the WKB approximation. The result is an integral of the potential times a slowly oscillating factor over the classical trajectory. A method of interpolating the result is given so that the calculation is done at the average velocity for a given transition. With this procedure, the semiclassical version of the theory is more accurate than the quantum version and very much faster. Calculations on Ar+N2 show the theory to be much more accurate than the infinite-order sudden (IOS) approximation and the exponential time-dependent perturbation theory.
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where it is assumed that cells transform through a lag phase before entering the exponential phase of growth; and parallel, where it is assumed that lag and exponential phases develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
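A serial lag-then-growth variant of this model class can be simulated directly: each founder cell waits through an exponentially distributed lag and its lineage then grows as a pure-birth (Yule) process at constant rate. All parameter values and the exponential lag distribution are illustrative choices, not the paper's fitted model:

```python
import numpy as np

def grow(n0=10, r=1.0, lag_rate=5.0, t_end=2.0, rng=None):
    # Simulate total cell count at t_end: n0 founders, each with an
    # exponential lag, then Yule growth (division rate r per cell).
    rng = rng or np.random.default_rng()
    total = 0
    for _ in range(n0):
        t = rng.exponential(1.0 / lag_rate)      # lag before growth starts
        n = 1
        while True:
            # With n cells, the next division arrives at rate r * n.
            t += rng.exponential(1.0 / (r * n))
            if t > t_end:
                break
            n += 1
        total += n
    return total

rng = np.random.default_rng(11)
counts = [grow(rng=rng) for _ in range(300)]     # replicate trials
```

The spread of `counts` across replicates is the inherent growth variability the risk assessment needs; a Weibull fit to the relative growth would be the paper's approximating step.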
Baecklund transformation, Lax pair, and solutions for the Caudrey-Dodd-Gibbon equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qu Qixing; Sun Kun; Jiang Yan
2011-01-15
By using Bell polynomials and symbolic computation, we investigate the Caudrey-Dodd-Gibbon equation analytically. Through a generalization of Bell polynomials, its bilinear form is derived, based on which the periodic wave solution and soliton solutions are presented. The soliton solutions with graphic analysis are also given. Furthermore, the Baecklund transformation and Lax pair are derived via Bell's exponential polynomials. Finally, the Ablowitz-Kaup-Newell-Segur system is constructed.
Vaurio, Rebecca G; Simmonds, Daniel J; Mostofsky, Stewart H
2009-10-01
One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses that divide variability into normal and exponential components and Fast Fourier transform (FFT) that allow for detailed examination of the frequency of responses in the exponential distribution. Prior studies of ADHD using these methods have produced variable results, potentially related to differences in task demand. The present study sought to examine the profile of RT variability in ADHD using two Go/No-go tasks with differing levels of cognitive demand. A total of 140 children (57 with ADHD and 83 typically developing controls), ages 8-13 years, completed both a "simple" Go/No-go task and a more "complex" Go/No-go task with increased working memory load. Repeated measures ANOVA of ex-Gaussian functions revealed for both tasks children with ADHD demonstrated increased variability in both the normal/Gaussian (significantly elevated sigma) and the exponential (significantly elevated tau) components. In contrast, FFT analysis of the exponential component revealed a significant task x diagnosis interaction, such that infrequent slow responses in ADHD differed depending on task demand (i.e., for the simple task, increased power in the 0.027-0.074 Hz frequency band; for the complex task, decreased power in the 0.074-0.202 Hz band). The ex-Gaussian findings revealing increased variability in both the normal (sigma) and exponential (tau) components for the ADHD group, suggest that both impaired response preparation and infrequent "lapses in attention" contribute to increased variability in ADHD. FFT analyses reveal that the periodicity of intermittent lapses of attention in ADHD varies with task demand. The findings provide further support for intra-individual variability as a candidate intermediate endophenotype of ADHD.
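The ex-Gaussian decomposition described above separates RT variability into a normal component (mu, sigma) and an exponential tail (tau). A moment-based estimator makes the split concrete: mean = mu + tau, variance = sigma^2 + tau^2, skewness = 2*tau^3/var^(3/2). The sketch below uses synthetic RTs with illustrative parameters (studies typically use maximum likelihood rather than moments):

```python
import numpy as np
from scipy import stats

def exgauss_moments(x):
    # Method-of-moments ex-Gaussian estimates (mu, sigma, tau).
    m, v, g1 = np.mean(x), np.var(x), stats.skew(x)
    tau = (g1 / 2.0) ** (1.0 / 3.0) * np.sqrt(v)
    sigma2 = max(v - tau ** 2, 0.0)
    return m - tau, np.sqrt(sigma2), tau

# Synthetic reaction times: Gaussian core plus exponential "lapse" tail.
mu, sigma, tau = 400.0, 40.0, 150.0          # ms; illustrative values
rng = np.random.default_rng(3)
rt = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

mu_hat, sigma_hat, tau_hat = exgauss_moments(rt)
```

An elevated `tau_hat` is the signature of the infrequent slow responses ("lapses") that the FFT analysis then localizes in frequency.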
NASA Astrophysics Data System (ADS)
Shields, M. R.; Bianchi, T. S.; Osburn, C. L.; Kinsey, J. D.; Ziervogel, K.; Schnetzer, A.
2017-12-01
The origin and mechanisms driving the formation of fluorescent dissolved organic matter (FDOM) in the open ocean remain unclear. Although recent studies have attempted to deconvolve the chemical composition and source of marine FDOM, these studies have been qualitative in nature. Here, we investigate these transformations using a more quantitative biomarker approach in a controlled growth and degradation experiment. In this experiment, a natural assemblage of phytoplankton was collected off the coast of North Carolina and incubated within roller bottles containing 0.2 µm-filtered North Atlantic surface water amended with f/2 nutrients. Samples were collected at the beginning (day 0), during exponential growth (day 13), stationary (day 20), and degradation (day 62) phases of the phytoplankton incubation. Amino acids, amino sugars, and phenolic compounds of the dissolved organic matter (DOM) were measured in conjunction with enzyme assays and bacterial counts to track shifts in OM quality as FDOM formed and was then transformed throughout the experiment. The results from the chemical analyses showed that the OM composition changed significantly from the initial and exponential phases to the stationary and degradation phases of the experiment. The percentage of aromatic amino acids to the total amino acid pool increased significantly during the exponential phase of phytoplankton growth, but then decreased significantly during the stationary and degradation phases. This increase was positively correlated with the fractional contribution of the protein-like peak in fluorescence to the total FDOM fluorescence. An increase in the concentration of amino acid degradation products during the stationary and degradation phases suggests that compositional changes in OM were driven by microbial transformation. This was further supported by a concurrent increase in total enzyme activity and an increase in "humic-like" components of the FDOM.
These findings link the properties and formation of FDOM to the overall quality and diagenetic state of marine OM and to the marine carbon and nitrogen cycles.
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Editor)
1988-01-01
The present conference discusses topics in pattern-recognition correlator architectures, digital stereo systems, geometric image transformations and their applications, topics in pattern recognition, filter algorithms, object detection and classification, shape representation techniques, and model-based object recognition methods. Attention is given to edge-enhancement preprocessing using liquid crystal TVs, massively-parallel optical data base management, three-dimensional sensing with polar exponential sensor arrays, the optical processing of imaging spectrometer data, hybrid associative memories and metric data models, the representation of shape primitives in neural networks, and the Monte Carlo estimation of moment invariants for pattern recognition.
Framework for analyzing ecological trait-based models in multidimensional niche spaces
NASA Astrophysics Data System (ADS)
Biancalani, Tommaso; DeVille, Lee; Goldenfeld, Nigel
2015-05-01
We develop a theoretical framework for analyzing ecological models with a multidimensional niche space. Our approach relies on the fact that ecological niches are described by sequences of symbols, which allows us to include multiple phenotypic traits. Ecological drivers, such as competitive exclusion, are modeled by introducing the Hamming distance between two sequences. We show that a suitable transform diagonalizes the community interaction matrix of these models, making it possible to predict the conditions for niche differentiation and, close to the instability onset, the asymptotically long time population distributions of niches. We exemplify our method using the Lotka-Volterra equations with an exponential competition kernel.
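The diagonalizing transform mentioned above can be demonstrated directly: a competition kernel that depends only on the Hamming distance between binary niche sequences is a convolution on the group (Z_2)^n, so the Walsh-Hadamard transform diagonalizes it. The exponential kernel and sequence length below are illustrative:

```python
import numpy as np

n = 4
N = 2 ** n
idx = np.arange(N)

# Hamming distance between integer-encoded binary sequences i and j.
ham = np.array([[bin(i ^ j).count("1") for j in idx] for i in idx])
K = np.exp(-ham)                       # exponential competition kernel

# Walsh-Hadamard matrix via Kronecker construction.
H = np.array([[1]])
for _ in range(n):
    H = np.kron(H, np.array([[1, 1], [1, -1]]))

D = H @ K @ H / N                      # should be diagonal
off = D - np.diag(np.diag(D))
```

The diagonal entries of `D` are the kernel's eigenvalues; their signs control which niche-space modes go unstable, i.e., the conditions for niche differentiation.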
Patel, Mainak; Rangan, Aaditya
2017-08-07
Infant rats randomly cycle between the sleeping and waking states, which are tightly correlated with the activity of mutually inhibitory brainstem sleep and wake populations. Bouts of sleep and wakefulness are random; from P2-P10, sleep and wake bout lengths are exponentially distributed with increasing means, while during P10-P21, the sleep bout distribution remains exponential while the distribution of wake bouts gradually transforms to power law. The locus coeruleus (LC), via an undeciphered interaction with sleep and wake populations, has been shown experimentally to be responsible for the exponential to power law transition. Concurrently during P10-P21, the LC undergoes striking physiological changes - the LC exhibits strong global 0.3 Hz oscillations up to P10, but the oscillation frequency gradually rises and synchrony diminishes from P10-P21, with oscillations and synchrony vanishing at P21 and beyond. In this work, we construct a biologically plausible Wilson-Cowan-style model consisting of the LC along with sleep and wake populations. We show that external noise and strong reciprocal inhibition can lead to switching between sleep and wake populations and exponentially distributed sleep and wake bout durations as during P2-P10, with the parameters of inhibition between the sleep and wake populations controlling mean bout lengths. Furthermore, we show that the changing physiology of the LC from P10-P21, coupled with reciprocal excitation between the LC and wake population, can explain the shift from exponential to power law of the wake bout distribution. To our knowledge, this is the first study that proposes a plausible biological mechanism, which incorporates the known changing physiology of the LC, for tying the developing sleep-wake circuit and its interaction with the LC to the transformation of sleep and wake bout dynamics from P2-P21. Copyright © 2017 Elsevier Ltd. All rights reserved.
Xenakis, Nancy
2018-07-01
Since the U.S. Congress's 2010 passage of the Affordable Care Act and the creation of numerous care coordination programs, Mount Sinai Hospital's Department of Social Work Services has experienced exponential growth. The Department is deeply committed to recruiting and developing the most talented social workers to best meet the needs of patients and family caregivers and to serve as integral, valued members of interdisciplinary care teams. Traditional learning methods are insufficient for a staff of hundreds, given the changes in health care and the complexity of the work. This necessitates the use of new training and education methods to maintain the quality of professional development. This article provides an overview of the Department's strategy and creation of a professional development learning platform to transform clinical social work practice. It reviews various education models that utilize an e-learning management system and case studies using standardized patients. These models demonstrate innovative learning approaches for both new and experienced social workers in health care. The platform's successes and challenges and recommendations for future development and sustainability are outlined.
Modeling the Gross-Pitaevskii Equation Using the Quantum Lattice Gas Method
NASA Astrophysics Data System (ADS)
Oganesov, Armen
We present an improved Quantum Lattice Gas (QLG) algorithm as a mesoscopic unitary perturbative representation of the mean field Gross-Pitaevskii (GP) equation for Bose-Einstein Condensates (BECs). The method employs an interleaved sequence of unitary collide and stream operators. QLG is applicable to many different scalar potentials in the weak interaction regime and has been used to model the Korteweg-de Vries (KdV), Burgers and GP equations. It can be implemented on both quantum and classical computers and is extremely scalable. We present results for 1D soliton solutions with positive and negative internal interactions, as well as vector solitons with inelastic scattering. In higher dimensions we look at the behavior of vortex ring reconnection. A further improvement is considered with a proper operator splitting technique via a Fourier transformation. This is particularly well suited to quantum computers, since the quantum FFT is exponentially faster than its classical counterpart, which involves non-local data on the entire lattice (the quantum FFT is the backbone of Shor's algorithm for quantum factorization). We also present an imaginary time method in which we transform the Schrödinger equation into a diffusion equation for recovering ground state initial conditions of a quantum system suitable for the QLG algorithm.
Li, Xiaofan; Fang, Jian-An; Li, Huiyuan
2017-09-01
This paper investigates master-slave exponential synchronization for a class of complex-valued memristor-based neural networks with time-varying delays via discontinuous impulsive control. Firstly, the master and slave complex-valued memristor-based neural networks with time-varying delays are translated to two real-valued memristor-based neural networks. Secondly, an impulsive control law is constructed and utilized to guarantee master-slave exponential synchronization of the neural networks. Thirdly, the master-slave synchronization problems are transformed into the stability problems of the master-slave error system. By employing linear matrix inequality (LMI) technique and constructing an appropriate Lyapunov-Krasovskii functional, some sufficient synchronization criteria are derived. Finally, a numerical simulation is provided to illustrate the effectiveness of the obtained theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Exponential Boundary Observers for Pressurized Water Pipe
NASA Astrophysics Data System (ADS)
Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel
2015-11-01
This paper deals with state estimation on a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws with three known boundary measurements. Our objective is to estimate the fourth boundary variable, which will be useful for leakage detection. Two approaches are studied. Firstly, the distributed hyperbolic equations are discretized through a finite-difference scheme. By using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving Linear Matrix Inequalities (LMIs). Secondly, the distributed hyperbolic system is preserved for state estimation. After state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed. An exponential Lyapunov function is used to prove the stability of the resulting estimation error. The performance of the two observers is demonstrated on a simulated water pipe prototype.
The Translated Dowling Polynomials and Numbers.
Mangontarum, Mahid M; Macodi-Ringia, Amila P; Abdulcarim, Normalah S
2014-01-01
More properties of the translated Whitney numbers of the second kind, such as the horizontal generating function, explicit formula, and exponential generating function, are proposed. Using the translated Whitney numbers of the second kind, we define the translated Dowling polynomials and numbers. Basic properties such as exponential generating functions and explicit formulas for the translated Dowling polynomials and numbers are obtained. Convexity, integral representation, and other interesting identities are also investigated and presented. We show that the properties obtained are generalizations of some of the known results involving the classical Bell polynomials and numbers. Lastly, we establish the Hankel transform of the translated Dowling numbers.
a Unified Matrix Polynomial Approach to Modal Identification
NASA Astrophysics Data System (ADS)
Allemang, R. J.; Brown, D. L.
1998-04-01
One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. In particular, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), the polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and the complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.
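The complex-exponential family of methods above rests on one idea: a free-decay signal that is a sum of complex exponentials satisfies a linear recurrence whose characteristic roots are the system poles. A minimal Prony-style sketch of that idea (illustrative signal, not any of the cited implementations):

```python
import numpy as np

def lsce_poles(x, order):
    """Estimate discrete-time poles of a signal modeled as a sum of
    complex exponentials, via a least-squares AR (linear prediction) fit."""
    n = len(x)
    # Linear prediction: x[k] = -sum_{i=1}^{p} a_i * x[k-i], rows k = p..n-1
    A = np.column_stack([x[order - i - 1 : n - i - 1] for i in range(order)])
    b = -x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Poles are the roots of z^p + a_1 z^(p-1) + ... + a_p
    return np.roots(np.concatenate(([1.0], a)))

# Damped cosine = two conjugate poles at 0.9 * exp(+/- 0.3i)
n = np.arange(100)
x = 0.9**n * np.cos(0.3 * n)
poles = lsce_poles(x, 2)
```

Pole magnitude gives damping, pole angle gives damped natural frequency; real algorithms add model-order selection and use the redundant-data LS/TLS/SVD variants the abstract lists.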
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Jin-zhong, E-mail: viewsino@163.com; Yao, Shu-zhen; Zhang, Zhong-ping
2013-03-15
With the help of complexity indices, we quantitatively studied multifractals, frequency distributions, and linear and nonlinear characteristics of geochemical data for exploration of the Daijiazhuang Pb-Zn deposit. Furthermore, we derived productivity differentiation models of elements from thermodynamics and self-organized criticality of metallogenic systems. With respect to frequency distributions and multifractals, only Zn in rocks and most elements except Sb in secondary media, which had been derived mainly from weathering and alluviation, exhibit nonlinear distributions. The relations of productivity to concentrations of metallogenic elements and paragenic elements in rocks and those of elements strongly leached in secondary media can be seen as linear addition of exponential functions with a characteristic weak chaos. The relations of associated elements such as Mo, Sb, and Hg in rocks and other elements in secondary media can be expressed as an exponential function, and the relations of one-phase self-organized geological or metallogenic processes can be represented by a power function, each representing secondary chaos or strong chaos. For secondary media, exploration data of most elements should be processed using nonlinear mathematical methods or should be transformed to linear distributions before processing using linear mathematical methods.
TIME-DOMAIN METHODS FOR DIFFUSIVE TRANSPORT IN SOFT MATTER
Fricks, John; Yao, Lingxing; Elston, Timothy C.; Forest, M. Gregory
2015-01-01
Passive microrheology [12] utilizes measurements of noisy, entropic fluctuations (i.e., diffusive properties) of micron-scale spheres in soft matter to infer bulk frequency-dependent loss and storage moduli. Here, we are concerned exclusively with diffusion of Brownian particles in viscoelastic media, for which the Mason-Weitz theoretical-experimental protocol is ideal, and the more challenging inference of bulk viscoelastic moduli is decoupled. The diffusive theory begins with a generalized Langevin equation (GLE) with a memory drag law specified by a kernel [7, 16, 22, 23]. We start with a discrete formulation of the GLE as an autoregressive stochastic process governing microbead paths measured by particle tracking. For the inverse problem (recovery of the memory kernel from experimental data) we apply time series analysis (maximum likelihood estimators via the Kalman filter) directly to bead position data, an alternative to formulas based on mean-squared displacement statistics in frequency space. For direct modeling, we present statistically exact GLE algorithms for individual particle paths as well as statistical correlations for displacement and velocity. Our time-domain methods rest upon a generalization of well-known results for a single-mode exponential kernel [1, 7, 22, 23] to an arbitrary M-mode exponential series, for which the GLE is transformed to a vector Ornstein-Uhlenbeck process. PMID:26412904
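The key reduction above — an exponential-kernel GLE maps to a (vector) Ornstein-Uhlenbeck process — can be illustrated in its simplest single-mode form. The sketch below is an exact AR(1) discretization of an OU velocity process with illustrative parameters, not the authors' Kalman-filter estimator:

```python
import numpy as np

# Single-mode illustration: an OU process discretized exactly as an AR(1)
# recursion (the M-mode kernel case stacks M such modes into a vector process).
rng = np.random.default_rng(1)
gamma, sigma, dt, n = 1.0, 1.0, 0.05, 200_000

a = np.exp(-gamma * dt)          # one-step decay factor
v = np.empty(n)
v[0] = 0.0
noise = rng.standard_normal(n - 1)
for i in range(n - 1):
    # exact update: stationary variance sigma**2, autocorrelation a**lag
    v[i + 1] = a * v[i] + sigma * np.sqrt(1 - a * a) * noise[i]

var_hat = v.var()
acf10 = np.corrcoef(v[:-10], v[10:])[0, 1]   # should be near exp(-10*gamma*dt)
```

The exponentially decaying autocorrelation is exactly the memory structure that maximum likelihood via the Kalman filter exploits on bead-position time series.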
NASA Technical Reports Server (NTRS)
Straton, Jack C.
1989-01-01
The Fourier transform of the multicenter product of N 1s hydrogenic orbitals and M Coulomb or Yukawa potentials is given as an (M+N-1)-dimensional Feynman integral with external momenta and shifted coordinates. This is accomplished through the introduction of an integral transformation, in addition to the standard Feynman transformation for the denominators of the momentum representation of the terms in the product, which moves the resulting denominator into an exponential. This allows the angular dependence of the denominator to be combined with the angular dependence in the plane waves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorbachev, D V; Ivanov, V I
Gauss and Markov quadrature formulae with nodes at zeros of eigenfunctions of a Sturm-Liouville problem, which are exact for entire functions of exponential type, are established. They generalize quadrature formulae involving zeros of Bessel functions, which were first designed by Frappier and Olivier. Bessel quadratures correspond to the Fourier-Hankel integral transform. Some other examples, connected with the Jacobi integral transform, Fourier series in Jacobi orthogonal polynomials and the general Sturm-Liouville problem with regular weight are also given. Bibliography: 39 titles.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
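The core of the scheme — fix exponents on a geometric sequence, then solve a linear least-squares problem for the coefficients — can be sketched as follows. The target function, base exponent, and ratio here are illustrative, not those of the report:

```python
import numpy as np

# Approximate a smooth decaying function by a sum of exponentials whose
# exponents form a geometric sequence; with the exponents fixed, the
# coefficients follow from ordinary linear least squares.
x = np.linspace(0.0, 10.0, 400)
target = 1.0 / (1.0 + x)                    # illustrative algebraic kernel part

b0, r, m = 0.1, 2.0, 8                      # base exponent, ratio, term count
exponents = b0 * r ** np.arange(m)          # 0.1, 0.2, 0.4, ..., 12.8
A = np.exp(-np.outer(x, exponents))         # design matrix, columns exp(-b_i x)
coef, *_ = np.linalg.lstsq(A, target, rcond=None)

max_err = np.max(np.abs(A @ coef - target))
```

Increasing the number of terms (or tuning the ratio) trades computing cost for accuracy, which is the trade-off the abstract refers to.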
Saulnier, George E; Castro, Janna C; Cook, Curtiss B
2014-05-01
Glucose control can be problematic in critically ill patients. We evaluated the impact of statistical transformation on interpretation of intensive care unit inpatient glucose control data. Point-of-care blood glucose (POC-BG) data derived from patients in the intensive care unit for 2011 was obtained. Box-Cox transformation of POC-BG measurements was performed, and distribution of data was determined before and after transformation. Different data subsets were used to establish statistical upper and lower control limits. Exponentially weighted moving average (EWMA) control charts constructed from April, October, and November data determined whether out-of-control events could be identified differently in transformed versus nontransformed data. A total of 8679 POC-BG values were analyzed. POC-BG distributions in nontransformed data were skewed but approached normality after transformation. EWMA control charts revealed differences in projected detection of out-of-control events. In April, an out-of-control process resulting in the lower control limit being exceeded was identified at sample 116 in nontransformed data but not in transformed data. October transformed data detected an out-of-control process exceeding the upper control limit at sample 27 that was not detected in nontransformed data. Nontransformed November results remained in control, but transformation identified an out-of-control event less than 10 samples into the observation period. Using statistical methods to assess population-based glucose control in the intensive care unit could alter conclusions about the effectiveness of care processes for managing hyperglycemia. Further study is required to determine whether transformed versus nontransformed data change clinical decisions about the interpretation of care or intervention results. © 2014 Diabetes Technology Society.
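A sketch of the monitoring workflow described above, on synthetic skewed data (not the POC-BG measurements): Box-Cox transform toward normality, then an EWMA control chart on the transformed series. The smoothing constant and limits are conventional choices, not the study's settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
raw = rng.lognormal(mean=5.0, sigma=0.35, size=500)   # skewed, glucose-like scale

transformed, lam = stats.boxcox(raw)                  # MLE choice of lambda

lam_ewma = 0.2                                        # EWMA smoothing constant
mu, sd = transformed.mean(), transformed.std(ddof=1)
z = np.empty_like(transformed)
z[0] = mu
for t in range(1, len(transformed)):
    z[t] = lam_ewma * transformed[t] + (1 - lam_ewma) * z[t - 1]

# asymptotic 3-sigma control limits for the EWMA statistic
half_width = 3 * sd * np.sqrt(lam_ewma / (2 - lam_ewma))
ucl, lcl = mu + half_width, mu - half_width
out_of_control = np.flatnonzero((z > ucl) | (z < lcl))
```

Running the chart on `raw` versus `transformed` is exactly the comparison the study makes: skewness distorts the limits, so alarms can appear or disappear after transformation.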
[Application of exponential smoothing method in prediction and warning of epidemic mumps].
Shi, Yun-ping; Ma, Jia-qi
2010-06-01
To analyze daily data on epidemic mumps in a province from 2004 to 2008 and to set up an exponential smoothing model for prediction. Epidemic mumps in 2008 was predicted and warned against by calculating a 7-day moving summation and removing the weekend effect from the daily reported mumps cases during 2005-2008, and by applying exponential smoothing to the data from 2005 to 2007. The Holt-Winters exponential smoothing model performed well: warning sensitivity was 76.92%, specificity was 83.33%, and the timely-warning rate was 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
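The pre-processing step above can be sketched directly: a 7-day moving summation flattens the weekly reporting cycle, after which an exponential smoother tracks the series. The counts and smoothing constant are illustrative, not the provincial surveillance data:

```python
import numpy as np

def moving_7day_sum(daily_counts):
    """7-day moving summation; removes the weekend reporting effect."""
    return np.convolve(daily_counts, np.ones(7), mode="valid")

def exp_smooth(series, alpha=0.3):
    """Single exponential smoothing."""
    s = np.empty(len(series))
    s[0] = series[0]
    for t in range(1, len(series)):
        s[t] = alpha * series[t] + (1 - alpha) * s[t - 1]
    return s

# a strictly weekly pattern (low weekend reporting) repeated for 8 weeks
daily = np.array([12, 15, 9, 11, 14, 4, 3] * 8, dtype=float)
weekly_sum = moving_7day_sum(daily)   # constant: the weekly cycle cancels
smoothed = exp_smooth(weekly_sum)
```

On real data the summed series is not constant, and a warning fires when new observations exceed the smoothed forecast by a chosen threshold.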
The exponential behavior and stabilizability of the stochastic magnetohydrodynamic equations
NASA Astrophysics Data System (ADS)
Wang, Huaqiao
2018-06-01
This paper studies the two-dimensional stochastic magnetohydrodynamic equations which are used to describe the turbulent flows in magnetohydrodynamics. The exponential behavior and the exponential mean square stability of the weak solutions are proved by the application of energy method. Furthermore, we establish the pathwise exponential stability by using the exponential mean square stability. When the stochastic perturbations satisfy certain additional hypotheses, we can also obtain pathwise exponential stability results without using the mean square stability.
Local perturbations perturb—exponentially-locally
NASA Astrophysics Data System (ADS)
De Roeck, W.; Schütz, M.
2015-06-01
We elaborate on the principle that for gapped quantum spin systems with local interaction, "local perturbations [in the Hamiltonian] perturb locally [the groundstate]." This principle was established by Bachmann et al. [Commun. Math. Phys. 309, 835-871 (2012)], relying on the "spectral flow technique" or "quasi-adiabatic continuation" [M. B. Hastings, Phys. Rev. B 69, 104431 (2004)] to obtain locality estimates with sub-exponential decay in the distance to the spatial support of the perturbation. We use ideas of Hamza et al. [J. Math. Phys. 50, 095213 (2009)] to obtain similarly a transformation between gapped eigenvectors and their perturbations that is local with exponential decay. This allows us to improve locality bounds on the effect of perturbations on the low-lying states in certain gapped models with a unique "bulk ground state" or "topological quantum order." We also give some estimates on the exponential decay of correlations in models with impurities where some relevant correlations decay faster than one would naively infer from the global gap of the system, as one also expects in disordered systems with a localized groundstate.
DNA Microarrays for Aptamer Identification and Structural Characterization
2012-09-01
appropriate vector (which has a unique set of factors affecting cloning efficiency) and transformed into competent bacterial cells to spatially...
2) Tuerk, C. and Gold, L., "Systematic Evolution of Ligands by Exponential Enrichment: RNA Ligands to Bacteriophage T4 DNA Polymerase."
Using the fast fourier transform in binding free energy calculations.
Nguyen, Trung Hai; Zhou, Huan-Xiang; Minh, David D L
2018-04-30
According to implicit ligand theory, the standard binding free energy is an exponential average of the binding potential of mean force (BPMF), which is itself an exponential average of the interaction energy between the unbound ligand ensemble and a rigid receptor. Here, we use the fast Fourier transform (FFT) to efficiently evaluate BPMFs by calculating interaction energies when rigid ligand configurations from the unbound ensemble are discretely translated across rigid receptor conformations. Results for standard binding free energies between T4 lysozyme and 141 small organic molecules are in good agreement with previous alchemical calculations based on (1) a flexible complex (R ≈ 0.9 for 24 systems) and (2) a flexible ligand with multiple rigid receptor configurations (R ≈ 0.8 for 141 systems). While the FFT is routinely used for molecular docking, to our knowledge this is the first time that the algorithm has been used for rigorous binding free energy calculations. © 2017 Wiley Periodicals, Inc.
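The FFT trick above — interaction energies for every rigid translation in one cross-correlation — can be sketched on a 1-D toy grid with hypothetical energies (real calculations use 3-D grids and physical force fields):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
receptor = rng.normal(size=n)          # receptor potential sampled on a grid
ligand = np.zeros(n)
ligand[:4] = [1.0, 0.5, 0.5, 1.0]      # ligand "occupancy" footprint

# E[t] = sum_i ligand[i] * receptor[i + t] for all cyclic translations t,
# computed in O(n log n) via the cross-correlation theorem
energies_fft = np.fft.ifft(np.conj(np.fft.fft(ligand)) * np.fft.fft(receptor)).real

# brute-force O(n^2) check of the same quantity
energies_direct = np.array([np.dot(ligand, np.roll(receptor, -t)) for t in range(n)])

# exponential (Boltzmann) average over translations -> a BPMF-like free energy
beta = 1.0
bpmf = -np.log(np.mean(np.exp(-beta * energies_fft))) / beta
```

By Jensen's inequality the exponential average never exceeds the arithmetic mean energy, since low-energy poses dominate.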
SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Duan, J; Popple, R
2014-06-01
Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles moved through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for finding the coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
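A minimal PSO sketch of the fitting step (not the authors' implementation, and with synthetic data in place of the published radial dose values): particles encode the four bi-exponential coefficients and follow their personal and global best positions:

```python
import numpy as np

rng = np.random.default_rng(4)
r = np.linspace(0.25, 10.0, 40)
true = np.array([1.2, 0.4, -0.2, 1.5])          # a1, b1, a2, b2 (illustrative)

def model(p, r):
    # bi-exponential radial dose model g(r) = a1*exp(-b1*r) + a2*exp(-b2*r)
    return p[0] * np.exp(-p[1] * r) + p[2] * np.exp(-p[3] * r)

data = model(true, r)

def cost(p):
    return np.sum((model(p, r) - data) ** 2)

n_particles, n_iter, lo, hi = 40, 400, -3.0, 3.0
pos = rng.uniform(lo, hi, (n_particles, 4))
vel = np.zeros((n_particles, 4))
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()
gbest_cost = initial_best = pbest_cost.min()

w, c1, c2 = 0.72, 1.5, 1.5                      # standard PSO constants
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 4))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)            # simple boundary handling
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    if pbest_cost.min() < gbest_cost:
        gbest_cost = pbest_cost.min()
        gbest = pbest[pbest_cost.argmin()].copy()
```

The tri-exponential case is identical with a six-dimensional particle.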
ERIC Educational Resources Information Center
Gilbride, Dennis; Stensrud, Robert
2008-01-01
The gap (structural hole) between the manner in which rehabilitation agencies and business are structured, organized and managed has grown exponentially over the past 10-20 years. Three key changes have radically transformed American business: the globalization of financial capital and competition, the information technology revolution, and the…
Uses and misuses of compositional data in sedimentology
NASA Astrophysics Data System (ADS)
Tolosana-Delgado, Raimon
2012-12-01
This paper serves two goals. The first part shows how mass evolution processes of different nature become undistinguishable once we take a size-limited, noisy sample of its compositional fingerprint: processes of exponential decay, mass mixture and complementary accumulation are simulated, and then samples contaminated with noise are extracted. The aim of this exercise is to illustrate the limitations of typical graphical representations and statistical methods when dealing with compositional data, i.e. data in percentages, concentrations or proportions. The second part presents a series of concepts, tools and methods to represent and statistically treat a compositional data set attending to these limitations. The aim of this second part is to offer a state-of-the-art Compositional Data Analysis. This includes: descriptive statistics and graphics (the biplot); ternary diagrams with confidence regions for the mean; regression and ANalysis-Of-VAriance models to explain compositional variability; and the use of compositional information to predict environmental covariables or discriminate between groups. All these tools share a four-step algorithm: (1) transform compositions with an invertible log-ratio transformation; (2) apply a statistical method to the transformed scores; (3) back-transform the results to compositions; and (4) interpret results in relative terms. Using these techniques, a data set of sand petrographic composition has been analyzed, highlighting that: finer sands are richer in single-crystal grains in relation to polycrystalline grains, and that grain-size accounts for almost all compositional variability; a stronger water flow (river discharge) favors mica grains against quartz or rock fragment grains, possibly due to hydrodynamic sorting effects; a higher relief ratio implies shorter residence times, which may favor survival of micas and rock fragments, relatively more labile grains.
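The four-step log-ratio algorithm above can be sketched with the centred log-ratio (clr) transform; the sample compositions are illustrative, not the paper's petrographic data:

```python
import numpy as np

def closure(x):
    """Rescale parts so each composition sums to 1."""
    return x / x.sum(axis=-1, keepdims=True)

def clr(x):
    """Step 1: centred log-ratio transform (invertible log-ratio)."""
    lx = np.log(x)
    return lx - lx.mean(axis=-1, keepdims=True)

def clr_inv(y):
    """Step 3: back-transform clr scores to a composition."""
    return closure(np.exp(y))

comps = closure(np.array([[60.0, 30.0, 10.0],
                          [55.0, 35.0, 10.0],
                          [70.0, 20.0, 10.0]]))   # e.g. three-part grain counts

# Step 2: a statistic in transformed space (here the mean), then back-transform.
mean_comp = clr_inv(clr(comps).mean(axis=0))
```

Step 4 is interpretive: `mean_comp` is the compositional (geometric) mean, to be read in relative terms, unlike the arithmetic mean of raw percentages.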
Characterization of Titan III-D Acoustic Pressure Spectra by Least-Squares Fit to Theoretical Model
1980-01-01
P(f) for a set value of P0 and f0. The inverse transform was taken and the result multiplied by a decaying exponential which modelled the envelope of...
C   IF=0  FORWARD TRANSFORM
C   IF=1  INVERSE TRANSFORM
C   M=0   XREAL AND XIMAG RETURNED AS REAL AND IMAG. FOR FORWARD XFORMS
C   M=1   XREAL AND XIMAG RETURNED AS MAGNITUDE AND PHASE (PHASE IN DEGREES)
C   M=2   XREAL RETURNED AS 'PSD', XIMAG = 0.
C         HERE 'PSD' MEANS SUM OF N VALUES OF XREAL = MEAN SQUARE OF INPUT
C   FOR INVERSE ...
NASA Technical Reports Server (NTRS)
Fadel, G. M.
1991-01-01
The point exponential approximation method was introduced by Fadel et al. (Fadel, 1990), and tested on structural optimization problems with stress and displacement constraints. The reports in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two point exponential approximation method when highly non-linear constraints are used.
Algebraic approach to electronic spectroscopy and dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toutounji, Mohamad
Lie algebra, Zassenhaus, and parameter differentiation techniques are utilized to break up the exponential of a bilinear Hamiltonian operator into a product of noncommuting exponential operators by virtue of the theory of Wei and Norman [J. Math. Phys. 4, 575 (1963); Proc. Am. Math. Soc., 15, 327 (1964)]. There are three different ways to find the Zassenhaus exponents, namely, binomial expansion, Suzuki formula, and q-exponential transformation. A fourth, and most reliable, method is provided. Since the linearly displaced and distorted (curvature change upon excitation/emission) Hamiltonian and the spin-boson Hamiltonian may be classified as bilinear Hamiltonians, the presented algebraic algorithm (exponential operator disentanglement exploiting the six-dimensional Lie algebra case) should be useful in spin-boson problems. Only the linearly displaced and distorted Hamiltonian exponential is treated here. While the spin-boson model is used here only as a demonstration of the idea, the approach herein is more general and powerful than the specific example treated. The optical linear dipole moment correlation function is algebraically derived using the above mentioned methods and coherent states. Coherent states are eigenvectors of the bosonic lowering operator a and not of the raising operator a^+. While exp(a^+) translates coherent states, the operation of exp(a^+ a^+) on coherent states has always been a challenge, as a^+ has no eigenvectors. Three approaches, and the results, of that operation are provided. Linear absorption spectra are derived, calculated, and discussed. The linear dipole moment correlation function for the pure quadratic coupling case is expressed in terms of Legendre polynomials to better show the even vibronic transitions in the absorption spectrum. Comparison of the present line shapes to those calculated by other methods is provided.
Franck-Condon factors for both linear and quadratic couplings are exactly accounted for by the herein calculated linear absorption spectra. This new methodology should easily pave the way to calculating the four-point correlation function, F(τ_1, τ_2, τ_3, τ_4), from which the optical nonlinear response function may be procured, as evaluating F(τ_1, τ_2, τ_3, τ_4) is only evaluating the optical linear dipole moment correlation function iteratively over different time intervals, which should allow calculating various optical nonlinear temporal/spectral signals.
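The leading Zassenhaus splitting that such disentanglement schemes refine can be checked numerically. The sketch below uses illustrative 2x2 matrices, not the bosonic operators of the abstract, to verify e^{t(X+Y)} = e^{tX} e^{tY} e^{-(t^2/2)[X,Y]} + O(t^3):

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting matrices and their commutator [X, Y] = XY - YX
X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])
comm = X @ Y - Y @ X

t = 0.01
exact = expm(t * (X + Y))
plain = expm(t * X) @ expm(t * Y)                   # error O(t^2)
zassenhaus = plain @ expm(-(t ** 2 / 2.0) * comm)   # first Zassenhaus exponent: error O(t^3)

err_plain = np.linalg.norm(exact - plain)
err_zass = np.linalg.norm(exact - zassenhaus)
```

Each further Zassenhaus exponent pushes the error one order higher in t, which is what makes the disentangled product usable for correlation functions.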
Zhu, Chaoyuan; Lin, Sheng Hsien
2006-07-28
Unified semiclassical solution for general nonadiabatic tunneling between two adiabatic potential energy surfaces is established by employing the unified semiclassical solution for pure nonadiabatic transition [C. Zhu, J. Chem. Phys. 105, 4159 (1996)] with a certain symmetry transformation. This symmetry comes from a detailed analysis of the reduced scattering matrix for the Landau-Zener type of crossing as a special case of nonadiabatic transition and nonadiabatic tunneling. The traditional classification of crossing and noncrossing types of nonadiabatic transition can be quantitatively defined by the rotation angle of the adiabatic-to-diabatic transformation, and this rotation angle enters the analytical solution for general nonadiabatic tunneling. Two-state exponential potential models are employed for numerical tests, and the calculations from the present general nonadiabatic tunneling formula are demonstrated to be in very good agreement with the results from exact quantum mechanical calculations. The present general nonadiabatic tunneling formula can be incorporated with various mixed quantum-classical methods for modeling electronically nonadiabatic processes in photochemistry.
A spectral boundary integral equation method for the 2-D Helmholtz equation
NASA Technical Reports Server (NTRS)
Hu, Fang Q.
1994-01-01
In this paper, we present a new numerical formulation of solving the boundary integral equations reformulated from the Helmholtz equation. The boundaries of the problems are assumed to be smooth closed contours. The solution on the boundary is treated as a periodic function, which is in turn approximated by a truncated Fourier series. A Fourier collocation method is followed in which the boundary integral equation is transformed into a system of algebraic equations. It is shown that in order to achieve spectral accuracy for the numerical formulation, the nonsmoothness of the integral kernels, associated with the Helmholtz equation, must be carefully removed. The emphasis of the paper is on investigating the essential elements of removing the nonsmoothness of the integral kernels in the spectral implementation. The present method is robust for a general boundary contour. Aspects of efficient implementation of the method using FFT are also discussed. A numerical example of wave scattering is given in which the exponential accuracy of the present numerical method is demonstrated.
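The exponential (spectral) accuracy demonstrated above rests on a standard fact: the Fourier coefficients of a smooth, analytic periodic function decay exponentially in the mode number. A quick numerical illustration with an arbitrary analytic periodic function (not the paper's scattering problem):

```python
import numpy as np

n = 64
theta = 2 * np.pi * np.arange(n) / n
f = 1.0 / (2.0 + np.cos(theta))     # smooth, periodic, analytic test function

c = np.fft.fft(f) / n               # Fourier coefficients c_k
mag = np.abs(c[: n // 2])

# Geometric decay: every 4 extra modes shrink |c_k| by the same factor,
# here (2 - sqrt(3))^4, i.e. roughly 1/194.
ratio_a = mag[8] / mag[4]
ratio_b = mag[12] / mag[8]
```

A truncated series therefore converges exponentially in the number of modes, provided the integral kernels are smoothed as the paper emphasizes; any remaining nonsmoothness degrades the decay to algebraic.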
Exploring Cloud Computing for Distance Learning
ERIC Educational Resources Information Center
He, Wu; Cernusca, Dan; Abdous, M'hammed
2011-01-01
The use of distance courses in learning is growing exponentially. To better support faculty and students for teaching and learning, distance learning programs need to constantly innovate and optimize their IT infrastructures. The new IT paradigm called "cloud computing" has the potential to transform the way that IT resources are utilized and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saran, A.; Pazzaglia, S.; Coppola, M.
1991-06-01
We have investigated the effect of fission-spectrum neutron dose fractionation on neoplastic transformation of exponentially growing C3H 10T1/2 cells. Total doses of 10.8, 27, 54, and 108 cGy were given in single doses or in five equal fractions delivered at 24-h intervals in the biological channel of the RSV-TAPIRO reactor at CRE-Casaccia. Both cell inactivation and neoplastic transformation were more effectively induced by fission neutrons than by 250-kVp X rays. No significant effect on cell survival or neoplastic transformation was observed with split doses compared to single doses of fission-spectrum neutrons. Neutron RBE values relative to X rays determined frommore » data for survival and neoplastic transformation were comparable.« less
On the performance of exponential integrators for problems in magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas; Tokman, Mayya; Loffeld, John
2017-02-01
Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
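The building block behind such integrators (EPIRK schemes are far more elaborate) is the exponential-Euler step: for y' = Ay + b, the update y_{n+1} = e^{hA} y_n + h φ1(hA) b with φ1(z) = (e^z - 1)/z is exact for this linear, constant-forcing problem regardless of how stiff A is. A sketch with an illustrative stiff diagonal system:

```python
import numpy as np
from scipy.linalg import expm

A = np.diag([-1.0, -1000.0])     # stiff: timescales differ by a factor of 1000
b = np.array([1.0, 2.0])
y0 = np.array([1.0, 1.0])
h, steps = 0.1, 10               # h is 50x beyond explicit-Euler's stability limit

ehA = expm(h * A)
step_b = np.linalg.solve(A, (ehA - np.eye(2)) @ b)   # equals h * phi1(hA) @ b
y = y0.copy()
for _ in range(steps):
    y = ehA @ y + step_b

# analytic solution y(t) = e^{tA}(y0 + A^{-1}b) - A^{-1}b
Ainv_b = np.linalg.solve(A, b)
y_exact = expm(h * steps * A) @ (y0 + Ainv_b) - Ainv_b
```

For nonlinear problems like MHD, the forcing b is replaced by the nonlinear remainder evaluated along the step, which is where the EPIRK machinery comes in.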
Effects of random aspects of cutting tool wear on surface roughness and tool life
NASA Astrophysics Data System (ADS)
Nabil, Ben Fredj; Mabrouk, Mohamed
2006-10-01
The effects of random aspects of cutting tool flank wear on surface roughness and on tool lifetime, when turning AISI 1045 carbon steel, were studied in this investigation. It was found that the standard deviations corresponding to tool flank wear and to surface roughness increase exponentially with cutting time. Under cutting conditions that correspond to finishing operations, no significant differences were found between the calculated values of the capability index Cp at the steady-state region of tool flank wear, whether obtained using the best-fit method, the Box-Cox transformation, or the assumption that the surface roughness data are normally distributed. Hence, a cutting tool lifetime can be established that simultaneously respects the desired average surface roughness and the required capability index.
Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Forys, John W., Jr.
1986-01-01
Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
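The two recommended smoothers are standard textbook methods. Below is a minimal sketch of Brown's one-parameter linear (double) exponential smoothing; the circulation counts and the smoothing constant are illustrative, not taken from the study:

```python
def brown_linear_forecast(series, alpha, horizon=1):
    """Brown's one-parameter linear (double) exponential smoothing forecast."""
    s1 = s2 = series[0]                      # initialize both smoothed statistics
    for y in series[1:]:
        s1 = alpha * y + (1 - alpha) * s1    # single smoothing
        s2 = alpha * s1 + (1 - alpha) * s2   # double smoothing
    level = 2 * s1 - s2                      # estimated current level
    trend = alpha / (1 - alpha) * (s1 - s2)  # estimated slope per period
    return level + trend * horizon

# made-up monthly circulation counts with a linear trend
history = [1000.0 + 20.0 * m for m in range(48)]
next_month = brown_linear_forecast(history, alpha=0.5, horizon=1)
```

On trending data like this, the double-smoothed forecast extrapolates the level plus the estimated per-period trend, which is exactly why Brown's method outperformed single exponential smoothing on non-stationary circulation series.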
Aggarwal, Ankush
2017-08-01
Motivated by the well-known result that stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space, which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based upon the new insight, we also propose a transformed parameter space which will allow for rational parameter comparison and avoid misleading conclusions regarding soft tissue mechanics.
Gong, Shuqing; Yang, Shaofu; Guo, Zhenyuan; Huang, Tingwen
2018-06-01
The paper is concerned with the synchronization problem of inertial memristive neural networks with time-varying delay. First, by choosing a proper variable substitution, inertial memristive neural networks described by second-order differential equations can be transformed into first-order differential equations. Then, a novel controller with a linear diffusive term and a discontinuous sign term is designed. By using the controller, sufficient conditions for assuring the global exponential synchronization of the drive and response neural networks are derived based on Lyapunov stability theory and some inequality techniques. Finally, several numerical simulations are provided to substantiate the effectiveness of the theoretical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyüre, B.; Márkus, B. G.; Bernáth, B.
2015-09-15
We present a novel method to determine the resonant frequency and quality factor of microwave resonators which is faster, more stable, and conceptually simpler than existing techniques. The microwave resonator is pumped with microwave radiation at a frequency away from its resonance. It then emits exponentially decaying radiation at its eigenfrequency when the excitation is rapidly switched off. The emitted microwave signal is down-converted with a microwave mixer, digitized, and its Fourier transform (FT) directly yields the resonance curve in a single shot. Being an FT-based method, this technique possesses the Fellgett (multiplex) and Connes (accuracy) advantages, and it conceptually mimics pulsed nuclear magnetic resonance. We also establish a novel benchmark to compare the accuracy of the different approaches to microwave resonator measurement. This shows that the present method has accuracy similar to the existing ones, which are based on sweeping or modulating the frequency of the microwave radiation.
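The single-shot readout can be sketched numerically: after switch-off, the emission is an exponentially decaying sinusoid whose Fourier transform is the Lorentzian resonance curve, from which the peak gives the resonant frequency and the full width at half maximum gives Q. All parameter values below are illustrative (kHz-scale rather than microwave) and not taken from the paper:

```python
import numpy as np

# Illustrative stand-in for the experiment: after the pump is switched off,
# the resonator emits a decaying oscillation at its eigenfrequency f0 with
# decay time tau (for which theory gives Q = pi * f0 * tau).
f0, tau, fs, T = 2000.0, 0.05, 20000.0, 1.0    # Hz, s, samples/s, record length (s)
t = np.arange(0.0, T, 1.0 / fs)
emission = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)

spectrum = np.abs(np.fft.rfft(emission)) ** 2  # single-shot power spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_peak = freqs[np.argmax(spectrum)]            # resonant frequency from the FT peak

above_half = freqs[spectrum >= spectrum.max() / 2]
fwhm = above_half.max() - above_half.min()     # Lorentzian full width at half maximum
Q_est = f_peak / fwhm                          # quality factor estimate
```

The 1 Hz bin spacing of the 1 s record limits how finely the width is resolved, which mirrors the accuracy trade-offs the paper benchmarks against sweep-based methods.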
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the more recent one to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)_12 model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rate for the three methods was 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in the planning of blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
Working Virtually: Transforming the Mobile Workplace. 2nd Edition
ERIC Educational Resources Information Center
Hoefling, Trina
2017-01-01
Remote working is the new reality, and transactional work--provided by freelancers, contract employees or consultants--has increased exponentially. It is forecast that as much as half the labor force will be working independently and virtually by 2020. Most organizations are still grappling with how to effectively manage their virtual staff and…
Toward Transformation: Digital Tools for Online Dance Pedagogy
ERIC Educational Resources Information Center
Parrish, Mila
2016-01-01
Media advances have changed the ways in which we interact, communicate, teach, and learn. The growth of telecommunication, video sharing sites, specifically YouTube, and social media, have exponentially increased the number of people interested in dance and dance education. Technology presents new ways for students to think about their learning,…
A Guide to Help Consumers Choose Apps and Avoid App Overload
ERIC Educational Resources Information Center
Schuster, Ellen; Zimmerman, Lynda
2014-01-01
Mobile technology has transformed the way consumers access and use information. The exponential growth of mobile apps makes finding suitable, easy-to-use nutrition and health-related apps challenging. A guide for consumers helps them ask important questions before downloading apps. The guide can be adapted for other Extension disciplines.
Observation of amorphous to crystalline phase transformation in Te substituted Sn-Sb-Se thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chander, Ravi, E-mail: rcohri@yahoo.com
2015-05-15
Thin films of the Sn-Sb-Se-Te (8 ≤ x ≤ 14) chalcogenide system were prepared by a thermal evaporation technique using melt-quenched bulk samples. The as-prepared thin films were found to be amorphous, as evidenced by X-ray diffraction studies. Resistivity measurements showed an exponential decrease with temperature up to a critical temperature (the transition temperature), beyond which a sharp decrease was observed; with further increase in temperature, the resistivity again decreased exponentially with a different activation energy. The transition temperature showed a decreasing trend with tellurium content in the sample. Resistivity measurements during the cooling run showed no abrupt change in resistivity. The resistivity measurements of annealed films likewise did not show any abrupt change, revealing the structural transformation occurring in the material. The transition width increased with tellurium content in the sample. The resistivity ratio showed a two-order-of-magnitude improvement for the sample with higher tellurium content. The transition temperature observed in this system was found to be considerably lower than that of the already commercialized Ge-Sb-Te system for optical and electronic memories.
Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters
Landowne, David; Yuan, Bin; Magleby, Karl L.
2013-01-01
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
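The strategy described above can be caricatured with an EM-style maximum-likelihood update of the component areas on a fixed, logarithmically spaced grid of time constants, followed by removal of negligible components. This is a simplified stand-in for the authors' full procedure (synthetic data, no merging of closely spaced components):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic dwell times: two exponential components, tau = 1 and 10 (areas 0.7 / 0.3)
n = 20000
is_fast = rng.random(n) < 0.7
dwells = np.where(is_fast, rng.exponential(1.0, n), rng.exponential(10.0, n))

# start with many logarithmically spaced time constants so no component is missed
taus = np.logspace(-1, 2, 16)
basis = np.exp(-dwells[:, None] / taus) / taus   # component densities, taus held fixed
areas = np.full(taus.size, 1.0 / taus.size)

for _ in range(400):                             # maximum-likelihood (EM) area updates
    resp = areas * basis
    resp /= resp.sum(axis=1, keepdims=True)      # responsibility of each component
    areas = resp.mean(axis=0)

areas[areas < 1e-3] = 0.0                        # drop exponentials with negligible area
areas /= areas.sum()
fast_area = areas[(taus > 0.3) & (taus < 3.5)].sum()   # grouped area near tau ~ 1
```

Because the time constants stay fixed and only the areas are re-estimated, each update is a simple weighted average, and no starting guesses for the number or position of the exponentials are needed.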
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Rabotnov's fractional exponential integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying the Laplace and Fourier transforms. The simplified equations for the originals are written by using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation makes it possible to formulate the explicit model using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the possible time domains.
NASA Astrophysics Data System (ADS)
Ahmad, Rida; Mustafa, M.; Hayat, T.; Alsaedi, A.
2016-06-01
Recent advancements in nanotechnology have led to the discovery of new generation coolants known as nanofluids. Nanofluids possess novel and unique characteristics which are fruitful in numerous cooling applications. The current work is undertaken to address the heat transfer in MHD three-dimensional flow of a magnetic nanofluid (ferrofluid) over a bidirectional exponentially stretching sheet. The base fluid is considered as water containing magnetite (Fe3O4) nanoparticles. An exponentially varying surface temperature distribution is accounted for. The problem formulation is presented through the Maxwell models for the effective electrical conductivity and effective thermal conductivity of the nanofluid. Similarity transformations give rise to a coupled non-linear differential system which is solved numerically. Appreciable growth in the convective heat transfer coefficient is observed when the nanoparticle volume fraction is augmented. The temperature exponent parameter serves to enhance the heat transfer from the surface. Moreover, the skin friction coefficient is directly proportional to both the magnetic field strength and the nanoparticle volume fraction.
Li, Yuelin; Jiang, Zhang; Lin, Xiao -Min; ...
2015-01-30
Many potential industrial, medical, and environmental applications of metal nanorods rely on the physics and resultant kinetics and dynamics of the interaction of these particles with light. We report a surprising kinetics transition in the global melting of a femtosecond laser-driven gold nanorod aqueous colloidal suspension. At low laser intensity, the melting exhibits stretched exponential kinetics, which abruptly transforms into compressed exponential kinetics when the laser intensity is raised. It is found that the relative formation and reduction rates of intermediate shapes play a key role in the transition. Supported by both molecular dynamics simulations and a kinetic model, the behavior is traced back to the persistent heterogeneous nature of the shape dependence of the energy uptake, dissipation, and melting of individual nanoparticles. These results could have significant implications for various applications, such as water purification and electrolytes for energy storage, that involve heat transport between metal nanorod ensembles and surrounding solvents.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. Coefficients for 8, 12, 24, and 72 term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
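The core step, fitting the coefficients of a geometrically spaced exponential sum by least squares, can be sketched as follows. The target function 1/sqrt(1+u^2), the exponent multiplier, and the grid are all illustrative choices, not the report's tabulated values:

```python
import numpy as np

def fit_exponential_sum(f, n_terms=8, c0=0.1, ratio=2.0, u_max=20.0, m=400):
    """Least-squares fit f(u) ~ sum_k a_k * exp(-c0 * ratio**k * u) on [0, u_max].
    The exponents form a geometric sequence, so only the coefficients a_k are
    unknown and the fit is an ordinary linear least-squares problem."""
    u = np.linspace(0.0, u_max, m)
    exponents = c0 * ratio ** np.arange(n_terms)
    design = np.exp(-np.outer(u, exponents))     # m x n_terms design matrix
    coeffs, *_ = np.linalg.lstsq(design, f(u), rcond=None)
    return exponents, coeffs

# illustrative algebraic factor of the kind that appears in such kernels
f = lambda u: 1.0 / np.sqrt(1.0 + u * u)
exponents, coeffs = fit_exponential_sum(f)
u = np.linspace(0.0, 20.0, 400)
max_err = np.abs(np.exp(-np.outer(u, exponents)) @ coeffs - f(u)).max()
```

Because the exponents are fixed in advance, the fit stays linear; in the report's fully automated version the exponent multiplier is also optimized by least squares.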
Method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1972-01-01
Two computer programs developed according to two general types of exponential models for conducting nonlinear exponential regression analysis are described. Least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. Program is written in FORTRAN 5 for the Univac 1108 computer.
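The described linearization of the nonlinear problem by a Taylor series expansion is the classical Gauss-Newton iteration. A minimal Python sketch (not the original FORTRAN programs) for one common exponential model, y = a*exp(b*x), with made-up data:

```python
import numpy as np

def fit_exponential(x, y, a=1.0, b=0.0, iters=50):
    """Gauss-Newton least squares for y ~ a*exp(b*x): linearize the model by a
    first-order Taylor expansion in (a, b) and solve the resulting linear
    least-squares problem repeatedly."""
    for _ in range(iters):
        model = a * np.exp(b * x)
        residual = y - model
        J = np.column_stack([np.exp(b * x),   # d(model)/da
                             x * model])      # d(model)/db
        da, db = np.linalg.lstsq(J, residual, rcond=None)[0]
        a, b = a + da, b + db
    return a, b

x = np.linspace(0.0, 2.0, 40)
y = 3.0 * np.exp(-1.5 * x)          # noiseless synthetic data
a_hat, b_hat = fit_exponential(x, y)
```

Each iteration solves a small linear system built from the Jacobian, which is exactly the "linearize by Taylor expansion, then apply least squares" procedure the abstract describes.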
How to decompose arbitrary continuous-variable quantum operations.
Sefi, Seckin; van Loock, Peter
2011-10-21
We present a general, systematic, and efficient method for decomposing any given exponential operator of bosonic mode operators, describing an arbitrary multimode Hamiltonian evolution, into a set of universal unitary gates. Although our approach is mainly oriented towards continuous-variable quantum computation, it may be used more generally whenever quantum states are to be transformed deterministically, e.g., in quantum control, discrete-variable quantum computation, or Hamiltonian simulation. We illustrate our scheme by presenting decompositions for various nonlinear Hamiltonians including quartic Kerr interactions. Finally, we conclude with two potential experiments utilizing offline-prepared optical cubic states and homodyne detections, in which quantum information is processed optically or in an atomic memory using quadratic light-atom interactions.
NASA Astrophysics Data System (ADS)
Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly
2016-03-01
This study quantitatively evaluated the performance of the exponential transformation of the free-response operating characteristic curve (EFROC) metric, with the Channelized Hotelling Observer (CHO) as a reference. The CHO has been used for image quality assessment of reconstruction algorithms and imaging systems, and it is often applied to cases in which the signal location is known. The CHO also requires a large set of images to estimate the covariance matrix. In terms of clinical applications, this assumption and requirement may be unrealistic. The newly developed location-unknown EFROC detectability metric is estimated from the confidence scores reported by a model observer. Unlike the CHO, EFROC does not require a channelization step and is a non-parametric detectability metric. There are few quantitative studies available on the application of the EFROC metric, most of which are based on simulation data. This study investigated the EFROC metric using experimental CT data. A phantom with four low contrast objects, 3 mm (14 HU), 5 mm (7 HU), 7 mm (5 HU) and 10 mm (3 HU), was scanned at dose levels ranging from 25 mAs to 270 mAs and reconstructed using filtered backprojection. The area-under-the-curve values for the CHO (AUC) and EFROC (AFE) were plotted with respect to dose level. The number of images required to estimate the non-parametric AFE metric was calculated for varying tasks and found to be less than the number of images required for parametric CHO estimation. The AFE metric was found to be more sensitive to changes in dose than the CHO metric. This increased sensitivity and the assumption of unknown signal location may be useful for investigating and optimizing CT imaging methods. Future work is required to validate the AFE metric against human observers.
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
Exponential integral (also known as well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, namely Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
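As a flavor of the sampling idea, here is a minimal Latin-Hypercube-style estimate of the well function E1(u) after mapping the integral to (0, 1]; the particular transform and sample size are illustrative and are not the paper's exact schemes:

```python
import numpy as np

def well_function_lhs(u, n=10000, seed=0):
    """Sampling-based estimate of the well function W(u) = E1(u).
    With the substitution s = 1/v, E1(u) = integral_0^1 exp(-u/v)/v dv; a
    Latin-Hypercube-style stratified sample places one uniform draw in each
    of n equal-probability bins of (0, 1] and averages the integrand."""
    rng = np.random.default_rng(seed)
    v = (np.arange(n) + rng.random(n)) / n
    v = np.clip(v, 1e-12, 1.0)               # guard against a draw at exactly 0
    return np.mean(np.exp(-u / v) / v)

w1 = well_function_lhs(1.0)                  # tabulated value: E1(1) ≈ 0.219384
```

Stratifying the unit interval removes most of the variance of plain Monte Carlo, which is why structured designs such as OA and OA-LH converge faster still.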
Designing the optimal shutter sequences for the flutter shutter imaging method
NASA Astrophysics Data System (ADS)
Jelinek, Jan
2010-04-01
Acquiring iris or face images of moving subjects at larger distances using a flash to prevent motion blur quickly runs into eye safety concerns as the acquisition distance is increased. For that reason the flutter shutter method recently proposed by Raskar et al. has generated considerable interest in the biometrics community. The paper concerns the design of shutter sequences that produce the best images. The number of possible sequences grows exponentially in both the subject's motion velocity and the desired exposure value, with the majority of them being useless. Because the exact solution leads to an intractable mixed integer programming problem, we propose an approximate solution based on pre-screening the sequences according to the distribution of roots in their Fourier transform. A very fast algorithm utilizing Jury's criterion allows the testing to be done without explicitly computing the roots, making the approach practical for moderately long sequences.
General simulation algorithm for autocorrelated binary processes.
Serinaldi, Francesco; Lombardo, Federico
2017-02-01
The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.
2018-03-01
We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
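The trapezoidal method with a double exponential transformation mentioned above is, for integrals over (-1, 1), the tanh-sinh rule. A minimal sketch on a smooth test integrand (a toy stand-in, not a Feynman loop integral):

```python
import numpy as np

def tanh_sinh(f, n=40, h=0.1):
    """Trapezoidal rule after the double exponential (tanh-sinh) substitution
    x = tanh((pi/2)*sinh(t)) for an integral over (-1, 1); the transformed
    integrand decays double-exponentially in t, so the plain trapezoid sum
    converges extremely fast for smooth integrands."""
    t = h * np.arange(-n, n + 1)
    g = 0.5 * np.pi * np.sinh(t)
    x = np.tanh(g)
    w = 0.5 * np.pi * np.cosh(t) / np.cosh(g) ** 2   # dx/dt
    return h * np.sum(w * f(x))

# smooth test integrand: integral of 1/(1+x^2) over (-1, 1) equals pi/2
val = tanh_sinh(lambda x: 1.0 / (1.0 + x * x))
```

The same substitution pushes endpoint singularities out to infinity where the weights vanish double-exponentially, which is what makes it attractive as a building block for iterated multi-loop integration.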
Lie algebraic similarity transformed Hamiltonians for lattice model systems
NASA Astrophysics Data System (ADS)
Wahlen-Strothman, Jacob M.; Jiménez-Hoyos, Carlos A.; Henderson, Thomas M.; Scuseria, Gustavo E.
2015-01-01
We present a class of Lie algebraic similarity transformations generated by exponentials of two-body on-site Hermitian operators whose Hausdorff series can be summed exactly without truncation. The correlators are defined over the entire lattice and include the Gutzwiller factor n_{i↑}n_{i↓}, and two-site products of density (n_{i↑}+n_{i↓}) and spin (n_{i↑}-n_{i↓}) operators. The resulting non-Hermitian many-body Hamiltonian can be solved in a biorthogonal mean-field approach with polynomial computational cost. The proposed similarity transformation generates locally weighted orbital transformations of the reference determinant. Although the energy of the model is unbound, projective equations in the spirit of coupled cluster theory lead to well-defined solutions. The theory is tested on the one- and two-dimensional repulsive Hubbard model where it yields accurate results for small and medium sized interaction strengths.
Jia, Xianbo; Lin, Xinjian; Chen, Jichen
2017-11-02
Current genome walking methods are very time consuming, and many produce non-specific amplification products. To amplify the flanking sequences that are adjacent to Tn5 transposon insertion sites in Serratia marcescens FZSF02, we developed a genome walking method based on TAIL-PCR. This PCR method added a 20-cycle linear amplification step before the exponential amplification step to increase the concentration of the target sequences. Products of the linear amplification and the exponential amplification were diluted 100-fold to decrease the concentration of the templates that cause non-specific amplification. Fast DNA polymerase with a high extension speed was used in this method, and an amplification program was used to rapidly amplify long specific sequences. With this linear and exponential TAIL-PCR (LETAIL-PCR), we successfully obtained products larger than 2 kb from Tn5 transposon insertion mutant strains within 3 h. This method can be widely used in genome walking studies to amplify unknown sequences that are adjacent to known sequences.
USDA-ARS?s Scientific Manuscript database
An exponential increase in our understanding of genomes, proteomes, and metabolomes provides greater impetus to address critical biotechnological issues such as sustainable production of biofuels and bio-based chemicals and, in particular, the development of improved microbial biocatalysts for use i...
Transformative Role of Epigenetics in Child Development Research: Commentary on the Special Section
ERIC Educational Resources Information Center
Keating, Daniel P.
2016-01-01
Lester, Conradt, and Marsit (2016) have assembled a set of articles that bring to readers of "Child Development" the scope and impact of the exponentially growing research on epigenetics and child development. This commentary aims to place this work in a broader context of theory and research by (a) providing a conceptual framework for…
On zero variance Monte Carlo path-stretching schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lux, I.
1983-08-01
A zero variance path-stretching biasing scheme proposed for a special case by Dwivedi is derived in full generality. The procedure turns out to be the generalization of the exponential transform. It is shown that the biased game can be interpreted as an analog simulation procedure, thus saving some computational effort in comparison with the corresponding nonanalog game.
From Research to Praxis: Empowering Trinidadian Primary School Teachers via Action Research
ERIC Educational Resources Information Center
Bissessar, Charmaine S.
2015-01-01
A rapidly growing body of research illustrates the symbiotic relationship shared by action research, andragogy, reflective praxis, and transformative learning. This paper contains a narrative review of 83 action research papers submitted to the researcher as part of the fulfilment of the Bachelor of Education degree from April 2011 to May 2013.…
Bannon, Catherine C; Campbell, Douglas A
2017-01-01
Diatoms are marine primary producers that sink in part due to the density of their silica frustules. Sinking of these phytoplankters is crucial both for the biological pump that sequesters carbon to the deep ocean and for the life strategy of the organism. Sinking rates have previously been measured through settling columns, or with fluorimeters or video microscopy arranged perpendicularly to the direction of sinking. These side-view techniques require large volumes of culture and specialized equipment, and are difficult to scale up to multiple simultaneous measures for screening. We established a method for parallel, large-scale analysis of multiple phytoplankton sinking rates through top-view monitoring of chlorophyll a fluorescence in microtitre well plates. We verified the method through experimental analysis of known factors that influence sinking rates, including exponential versus stationary growth phase in species of different cell sizes: Thalassiosira pseudonana CCMP1335, chain-forming Skeletonema marinoi RO5A and Coscinodiscus radiatus CCMP312. We fit decay curves to an algebraic transform of the decrease in fluorescence signal as cells sank away from the fluorometer detector, and then used minimal mechanistic assumptions to extract a sinking rate (m d-1) using an RStudio script, SinkWORX. We thereby detected significant differences in sinking rates as larger diatom cells sank faster than smaller cells, and cultures in stationary phase sank faster than those in exponential phase. Our sinking rate estimates accord well with literature values from previously established methods. This well plate-based method can operate as a high-throughput integrative phenotypic screen for factors that influence sinking rates, including macromolecular allocations, nutrient availability or uptake rates, chain length or cell size, degree of silicification, and progression through growth stages. Alternatively, the approach can be used to phenomically screen libraries of mutants.
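A toy version of the fluorescence-decay analysis, assuming (purely for illustration, this is not the SinkWORX model or its parameters) that fluorescence decays exponentially as cells sink out of a detection volume of a given depth:

```python
import numpy as np

def sinking_rate(t_days, fluor, depth_m):
    """Assumed toy model: cells sinking at a constant speed v leave a detection
    volume of depth `depth_m`, so fluorescence decays roughly as
    F0 * exp(-v * t / depth_m); a log-linear fit of the signal then yields
    v in m/d."""
    slope, _ = np.polyfit(t_days, np.log(fluor), 1)
    return -slope * depth_m

t = np.linspace(0.0, 2.0, 20)                 # days
signal = 100.0 * np.exp(-0.5 * t)             # synthetic decay: v = 0.5 m/d, depth 1 m
v_hat = sinking_rate(t, signal, depth_m=1.0)
```

In the actual screen each well of the plate yields one such decay curve, so fitting them in a loop gives the parallel, per-culture sinking rates the method was designed for.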
NASA Astrophysics Data System (ADS)
Krishna, P. Mohan; Sandeep, N.; Sharma, Ram Prakash
2017-05-01
This paper presents the two-dimensional magnetohydrodynamic Carreau fluid flow over plane and parabolic regions in the presence of buoyancy and exponential heat source effects. Soret and Dufour effects are used to examine the heat and mass transfer process. The system of ODEs is obtained by utilizing similarity transformations. An RK-based shooting process is employed to generate the numerical solutions. The impact of different parameters of interest on the flow, concentration and thermal fields is characterized graphically. Tabular results are presented to discuss the wall friction and the reduced Nusselt and Sherwood numbers. It is seen that the flow, thermal and concentration boundary layers of the plane and parabolic flows of Carreau fluid are non-uniform.
Wave Diagnostics in Geophysics: Algorithmic Extraction of Atmosphere Disturbance Modes
NASA Astrophysics Data System (ADS)
Leble, S.; Vereshchagin, S.
2018-04-01
The problem of diagnostics in geophysics is discussed and a proposal based on the dynamic projecting operators technique is formulated. The general exposition is demonstrated by an example of a symbolic algorithm for the wave and entropy modes in the exponentially stratified atmosphere. The novel technique is developed as a discrete version of the evolution operator and the corresponding projectors via the discrete Fourier transformation. Its explicit realization for directed modes in an exponential one-dimensional atmosphere is presented via the corresponding projection operators in discrete form, in terms of matrices with a prescribed action on arrays formed from observation tables. A simulation based on oppositely directed (upward and downward) wave train solutions is performed, and the extraction of the modes from a mixture is illustrated.
Camera flash heating of a three-layer solid composite: An approximate solution
NASA Astrophysics Data System (ADS)
Jibrin, Sani; Moksin, Mohd Maarof; Husin, Mohd Shahril; Zakaria, Azmi; Hassan, Jumiah; Talib, Zainal Abidin
2014-03-01
Camera flash heating and the subsequent thermal wave propagation in a solid composite material are studied using the Laplace transform technique. Full-field rear-surface temperatures for single-layer, two-layer and three-layer solid composites are obtained directly from Laplace transform conversion tables, as opposed to the tedious inversion process of the integral transform method. This is achieved by first expressing the hyperbolic-transcendental equation in terms of negative exponentials of the square root of s/α and expanding it in a series by the binomial theorem. Electrophoretic deposition (EPD) and dip coating were used to prepare three-layer solid composites consisting of ZnO/Cu/ZnO and starch/Al/starch, respectively. About 0.5 ml of deionized water enclosed within an air-tight aluminium container served as the third three-layer sample (Al/water/Al). Thermal diffusivity experiments were carried out on all three samples. Using the scaled Levenberg-Marquardt algorithm, the approximate temperature curve for the three-layer solid composite was fitted to the corresponding experimental result. The agreement between the theoretical curve and the experimental data, as well as between the thermal diffusivity values obtained here for ZnO, aluminium and deionized water and those found in the literature, is very good.
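In the single-layer limit, the negative-exponential series reduces to the classical flash-method result, which is easy to evaluate directly. The sketch below uses illustrative parameter values (not those of the samples above) to check the familiar half-rise-time relation for the normalized rear-surface temperature.

```python
import math

def rear_temperature(t, alpha, L, nterms=60):
    """Normalized rear-surface temperature rise of a flash-heated single slab."""
    w = alpha * t / L ** 2
    return 1.0 + 2.0 * sum((-1) ** n * math.exp(-n * n * math.pi ** 2 * w)
                           for n in range(1, nterms + 1))

alpha, L = 1e-6, 2e-3                               # assumed diffusivity (m^2/s), thickness (m)
t_half = 1.38 * L ** 2 / (math.pi ** 2 * alpha)     # classical half-rise-time relation
print(round(rear_temperature(t_half, alpha, L), 3))  # → 0.505, i.e. ≈ 0.5
```

Inverting this relation, a measured half-rise time yields the thermal diffusivity, which is the quantity extracted from the fits above.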
Variable mass pendulum behaviour processed by wavelet analysis
NASA Astrophysics Data System (ADS)
Caccamo, M. T.; Magazù, S.
2017-01-01
The present work highlights how wavelet analysis can be an effective tool for characterizing the motion of a variable mass pendulum, furnishing information on the time evolution of the oscillation spectral content. In particular, the wavelet transform is applied to process the motion of a hung funnel that loses fine sand at an exponential rate; it is shown how, in contrast to the Fourier transform, which furnishes only an average frequency value for the motion, the wavelet approach makes it possible to perform a joint time-frequency analysis. The work is addressed to undergraduate and graduate students.
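A joint time-frequency picture of a drifting oscillation can be obtained with a few lines of numpy. The fragment below is a minimal Morlet continuous wavelet transform with assumed parameters, standing in for the authors' processing chain; the chirp is a synthetic surrogate for the pendulum signal.

```python
import numpy as np

def cwt_morlet(x, dt, freqs, w0=6.0):
    t = np.arange(len(x)) * dt
    out = np.empty((len(freqs), len(x)), complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                 # scale matched to frequency f
        tau = (t - t[len(t) // 2]) / s
        wavelet = np.exp(1j * w0 * tau - tau ** 2 / 2)
        out[i] = np.convolve(x, np.conj(wavelet[::-1]), mode="same") / np.sqrt(s)
    return out

dt = 0.01
t = np.arange(0, 10, dt)
x = np.cos(2 * np.pi * (2.0 - 0.05 * t) * t)     # instantaneous frequency 2 - 0.1 t Hz
freqs = np.linspace(0.5, 3.0, 60)
power = np.abs(cwt_morlet(x, dt, freqs)) ** 2
ridge = freqs[power.argmax(axis=0)]              # dominant frequency at each time
print(bool(ridge[100] > ridge[-100]))            # frequency decreases in time: True
```

A Fourier power spectrum of the same signal would show only a broad band between the initial and final frequencies; the ridge of the scalogram resolves when each frequency occurs.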
Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan
2018-01-01
X-ray photon correlation spectroscopy (XPCS) and dynamic light scattering (DLS) both reveal dynamics using coherent scattering, but X-rays permit investigation of dynamics in a much more diverse array of materials. Heterogeneous dynamics occur in many such materials, and we show how classic tools employed in the analysis of heterogeneous DLS dynamics extend to XPCS, revealing additional information that conventional Kohlrausch exponential fitting obscures. This work presents the software implementation of inverse transform analysis of XPCS data called CONTIN XPCS, an extension of traditional CONTIN that accommodates dynamics encountered in equilibrium XPCS measurements. PMID:29875507
Kinetics of the mechanochemical synthesis of alkaline-earth metal amides
NASA Astrophysics Data System (ADS)
Garroni, Sebastiano; Takacs, Laszlo; Leng, Haiyan; Delogu, Francesco
2014-07-01
A phenomenological framework is developed to model the kinetics of the formation of alkaline-earth metal amides by the ball milling induced reaction of their hydrides with gaseous ammonia. It is shown that the exponential character of the kinetic curves is modulated by the increase of the total volume of the powder inside the reactor due to the substantially larger molar volume of the products compared to the reactants. It is claimed that the volume of powder effectively processed during each collision connects the transformation rate to the physical and chemical processes underlying the mechanochemical transformations.
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
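The node-wise ℓ1-regularized regressions can be sketched with a generic proximal-gradient (ISTA) lasso solver. The fragment below is an illustration on synthetic Gaussian data with assumed regularization settings, not the authors' estimator.

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient descent."""
    n, p = X.shape
    beta = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(200)
beta = lasso_ista(X, y, lam=0.1)
print(np.nonzero(np.abs(beta) > 0.05)[0])    # → [0 3]
```

Running one such regression per node and reading the sparsity pattern of the coefficients is the standard route to recovering the neighbourhood structure of a graphical model.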
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.
1987-01-01
An exponential finite difference algorithm, as first presented by Bhattacharya for one-dimensional steady-state heat conduction in Cartesian coordinates, has been extended. The finite difference algorithm developed was used to solve the diffusion equation in one-dimensional cylindrical coordinates and applied to two- and three-dimensional problems in Cartesian coordinates. The method was also used to solve nonlinear partial differential equations in one (Burgers' equation) and two (boundary layer equations) dimensional Cartesian coordinates. Predicted results were compared to exact solutions where available, or to results obtained by other numerical methods. It was found that the exponential finite difference method produced results that were more accurate than those obtained by other numerical methods, especially during the initial transient portion of the solution. Other applications of the exponential finite difference technique included unsteady one-dimensional heat transfer with temperature-varying thermal conductivity and the development of the temperature field in a laminar Couette flow.
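The scheme itself is compact. The sketch below applies a Bhattacharya-style exponential update to the 1D heat equation u_t = α u_xx on an illustrative grid; the test problem and parameters are assumptions, and positivity of u is required by the exponential form.

```python
import numpy as np

def exponential_fd(u, r, steps):
    """Exponential update u_new = u * exp(r * laplacian / u) for u_t = alpha*u_xx,
    with u > 0 assumed and fixed Dirichlet boundary values."""
    u = u.copy()
    for _ in range(steps):
        lap = u[:-2] - 2.0 * u[1:-1] + u[2:]
        u[1:-1] = u[1:-1] * np.exp(r * lap / u[1:-1])
    return u

nx, alpha, dt, steps = 51, 1.0, 5e-5, 400
x = np.linspace(0.0, 1.0, nx)
r = alpha * dt / (x[1] - x[0]) ** 2            # r = 0.125, within the stability range
u0 = 1.0 + 0.5 * np.sin(np.pi * x)             # u = 1 held on both boundaries
u = exponential_fd(u0, r, steps)
exact = 1.0 + 0.5 * np.sin(np.pi * x) * np.exp(-np.pi ** 2 * alpha * dt * steps)
print(round(float(np.abs(u - exact).max()), 5))   # small discretization error
```

For small r·lap/u the exponential update reduces to the classical explicit (FTCS) scheme, which is why the two agree closely away from sharp transients.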
CD8 Memory Cells Develop Unique DNA Repair Mechanisms Favoring Productive Division.
Galgano, Alessia; Barinov, Aleksandr; Vasseur, Florence; de Villartay, Jean-Pierre; Rocha, Benedita
2015-01-01
Immune responses are efficient because the rare antigen-specific naïve cells are able to proliferate extensively and accumulate upon antigen stimulation. Moreover, differentiation into memory cells actually increases T cell accumulation, indicating improved productive division in secondary immune responses. These properties raise an important paradox: how do T cells survive the DNA lesions necessarily induced during their extensive division without undergoing transformation? Here we present the first data addressing the DNA damage responses (DDRs) of CD8 T cells in vivo during exponential expansion in primary and secondary responses in mice. We show that during exponential division CD8 T cells engage unique DDRs, which are not present in other exponentially dividing cells, in T lymphocytes after UV or X irradiation, or in non-metastatic tumor cells. While in other cell types a single DDR pathway is affected, all DDR pathways and cell cycle checkpoints are affected in dividing CD8 T cells. All DDR pathways collapse in secondary responses in the absence of CD4 help, and CD8 T cells are driven to compulsive suicidal divisions preventing the propagation of DNA lesions. In contrast, in the presence of CD4 help all the DDR pathways are upregulated, resembling those present in metastatic tumors. However, this upregulation is present only during the expansion phase; i.e., their dependence on antigen stimulation prevents CD8 transformation. These results explain how CD8 T cells maintain genome integrity in spite of their extensive division, and highlight the fundamental role of DDRs in the efficiency of CD8 immune responses.
Wang, M D; Fan, W H; Qiu, W S; Zhang, Z L; Mo, Y N; Qiu, F
2014-06-01
We present here an exponential function that transforms the Abbreviated Injury Scale (AIS). It is called the Exponential Injury Severity Score (EISS), and it significantly outperforms the venerable but dated New Injury Severity Score (NISS) and Injury Severity Score (ISS) as a predictor of mortality. The EISS is defined by transforming each AIS severity score s (1-6) into 3 raised to the power of (s - 2) and then summing the three most severe injuries (i.e., the highest transformed values), regardless of body region. EISS values were calculated for every patient in two large independent data sets: 3,911 and 4,129 patients treated during a 6-year period at Class A tertiary hospitals in China. The power of the EISS to predict mortality was then compared with previously calculated NISS values for the same patients in each of the two data sets. We found that the EISS is more predictive of survival [Zhejiang: area under the receiver operating characteristic curve (AUC): NISS = 0.932, EISS = 0.949, P = 0.0115; Liaoning: AUC: NISS = 0.924, EISS = 0.942, P = 0.0139]. Moreover, the EISS provides a better fit throughout its entire range of prediction (Hosmer-Lemeshow statistic for Zhejiang: NISS = 21.86, P = 0.0027, EISS = 13.52, P = 0.0604; Liaoning: NISS = 23.27, P = 0.0015, EISS = 15.55, P = 0.0164). The EISS may be used as the standard summary measure of human trauma.
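The transform is straightforward to compute. A minimal sketch following the definition as stated (the worked example values are hypothetical):

```python
def eiss(ais_scores):
    """Sum 3**(s - 2) over the three most severe AIS scores, any body region."""
    return sum(3 ** (s - 2) for s in sorted(ais_scores, reverse=True)[:3])

# Hypothetical patient with AIS injuries 5, 4, 4, 2: 27 + 9 + 9 = 45
print(eiss([5, 4, 4, 2]))  # → 45
```

The exponential weighting makes a single critical injury (AIS 5 contributes 27) dominate several moderate ones, which is the intended contrast with the squared-AIS weighting of the ISS and NISS.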
Optimal Alignment of Structures for Finite and Periodic Systems.
Griffiths, Matthew; Niblett, Samuel P; Wales, David J
2017-10-10
Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and can perform unreliably. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch and bound algorithm, locates the global minimum RMSD deterministically in polynomial time. The run time increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms. The expected run time for Go-PERMDIST is longer than FASTOVERLAP for periodic systems. For finite clusters, the FASTOVERLAP algorithm is competitive with existing algorithms. The expected run time for Go-PERMDIST to find the global RMSD between two structures deterministically is generally longer than for existing stochastic algorithms. However, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.
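Once an atom pairing is fixed, the remaining rotation subproblem has the classical Kabsch closed-form solution, which both permutation-aware methods build on. A numpy sketch of that building block (illustrative, not the Go-PERMDIST or FASTOVERLAP code):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimum RMSD between conformations P and Q (N x 3) over rigid motions."""
    P = P - P.mean(axis=0)                  # remove translations
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against improper rotations
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

rng = np.random.default_rng(1)
P = rng.standard_normal((10, 3))
c, s = np.cos(0.7), np.sin(0.7)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])   # rotated and shifted copy of P
print(round(kabsch_rmsd(P, Q), 8))          # → 0.0
```

The hard combinatorial part, searching over atom permutations, is precisely what makes the full alignment problem scale badly and motivates the branch-and-bound and FFT-based strategies above.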
Method for exponentiating in cryptographic systems
Brickell, Ernest F.; Gordon, Daniel M.; McCurley, Kevin S.
1994-01-01
An improved cryptographic method utilizing exponentiation is provided which has the advantage of reducing the number of multiplications required to determine the legitimacy of a message or user. The basic method comprises the steps of selecting a key from a preapproved group of integer keys g; exponentiating the key by an integer value e, where e represents a digital signature, to generate a value g^e; transmitting the value g^e to a remote facility by a communications network; receiving the value g^e at the remote facility; and verifying the digital signature as originating from the legitimate user. The exponentiating step comprises the steps of initializing a plurality of memory locations with a plurality of values g^xi; computing … The United States Government has rights in this invention pursuant to Contract No. DE-AC04-76DP00789 between the Department of Energy and AT&T Company.
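The precomputed-table idea can be illustrated with the textbook fixed-base exponentiation trick associated with the same authors (Brickell, Gordon, McCurley); the sketch below is a generic illustration of that trick, not the patented method itself.

```python
def precompute(g, base, ndigits, mod):
    table, p = [], g % mod
    for _ in range(ndigits):          # table[i] = g**(base**i) mod mod
        table.append(p)
        p = pow(p, base, mod)
    return table

def fixed_base_pow(table, e, base, mod):
    digits = []
    while e:
        digits.append(e % base)       # radix-`base` digits of the exponent
        e //= base
    acc, run = 1, 1
    for d in range(base - 1, 0, -1):  # accumulate products over digit values
        for i, ei in enumerate(digits):
            if ei == d:
                run = run * table[i] % mod
        acc = acc * run % mod         # acc picks up run once per digit value
    return acc

mod, g, e, base = 1000003, 5, 123456, 4
table = precompute(g, base, 12, mod)
print(fixed_base_pow(table, e, base, mod) == pow(g, e, mod))  # → True
```

With the table built once per base g, each subsequent exponentiation costs only the digit-bucket multiplications, far fewer than a fresh square-and-multiply.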
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoynov, Y.; Dineva, P.
The stress, magnetic and electric field analysis of multifunctional composites weakened by impermeable cracks is of fundamental importance for their structural integrity and reliable service performance. The aim is to study the dynamic behavior of a plane of functionally graded magnetoelectroelastic composite with more than one crack. The coupled material properties vary exponentially in an arbitrary direction. The plane is subjected to anti-plane mechanical and in-plane electric and magnetic loads. The boundary value problem, described by partial differential equations with variable coefficients, is reduced to a non-hypersingular traction boundary integral equation based on an appropriate functional transform and a frequency-dependent fundamental solution derived in closed form by the Radon transform. Software based on the boundary integral equation method (BIEM) is developed, validated and used in numerical simulations. The obtained results show the sensitivity of the dynamic stress, magnetic and electric field concentration in the cracked plane to the type and characteristics of the dynamic load, to the location and disposition of the cracks, to wave-crack-crack interactions and to the magnitude and direction of the material gradient.
Coutris, G
1981-12-01
Sixty-six patients with chronic myelogenous leukemia, all Philadelphia chromosome (Ph')-positive, were studied for chromosomal abnormalities associated with Ph' (CAA) and for the actuarial survival curve. Patients dying from another disease were excluded from this study. The frequency of cells with CAA was markedly higher after blastic transformation than during the myelocytic phase, and the probability of blastic transformation is closely correlated with this frequency. On the other hand, the actuarial survival curve is very well represented by an exponential, suggesting a constant death rate during disease evolution for these patients without intercurrent disease. Since mean survival after blastic transformation is much shorter than the duration of the myelocytic phase, a constant rate of blastic transformation can be proposed; it would explain the possible occurrence of transformation as early as the preclinical stage of chronic myelogenous leukemia. Even though CAA frequency increases after blastic transformation, CAA can occur long before it and do not explain it: a submicroscopic origin should be sought, as the constant rate of blastic transformation would express the risk of a genic transformation at a constant rate during the myelocytic phase.
NASA Astrophysics Data System (ADS)
Silva, Valdelírio da Silva e.; Régis, Cícero; Howard, Allen Q., Jr.
2014-02-01
This paper analyses the details of a procedure for the numerical integration of Hankel transforms in the calculation of the electromagnetic fields generated by a large horizontal loop over a 1D earth. The method performs the integration by deforming the integration path into the complex plane and applying Cauchy's theorem on a modified version of the integrand. The modification is the replacement of the Bessel functions J0 and J1 by the Hankel functions H_0^{(1)} and H_1^{(1)} respectively. The integration in the complex plane takes advantage of the exponentially decaying behaviour of the Hankel functions, allowing calculation on very small segments, instead of the infinite line of the original improper integrals. A crucial point in this problem is the location of the poles. The companion paper shows two methods to estimate the pole locations. We have used this method to calculate the fields of very large loops. Our results show that this method allows the estimation of the integrals with fewer evaluations of the integrand functions than other methods.
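The benefit of the path deformation is easy to verify numerically: along a path rising into the upper half-plane, H_0^{(1)} decays exponentially while J_0 = (H^{(1)} + H^{(2)})/2 grows, so the deformed integrand is tame. A small scipy check at assumed sample points:

```python
from scipy.special import hankel1, jv

# Two points on a deformed path: on the real axis and 5 units into the upper half-plane
z_low, z_high = 10.0 + 0.0j, 10.0 + 5.0j
# H_0^(1)(x + iy) ~ exp(i(x + iy)) decays like exp(-y) ...
print(bool(abs(hankel1(0, z_high)) < 1e-2 * abs(hankel1(0, z_low))))  # → True
# ... while J_0 contains the H^(2) part and blows up like exp(+y)
print(bool(abs(jv(0, z_high)) > abs(jv(0, z_low))))                   # → True
```

This decay is what lets the method replace the infinite oscillatory Hankel-transform integral with short segments in the complex plane, provided the poles of the reflectivity kernel are located first.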
Exponential stability of stochastic complex networks with multi-weights based on graph theory
NASA Astrophysics Data System (ADS)
Zhang, Chunmei; Chen, Tianrui
2018-04-01
In this paper, a novel approach to the exponential stability of stochastic complex networks with multi-weights is investigated by means of the graph-theoretical method. New sufficient conditions are provided to ascertain the moment exponential stability and almost sure exponential stability of stochastic complex networks with multiple weights. It is noted that our stability results are closely related to the multi-weights and the intensity of the stochastic disturbance. Numerical simulations are also presented to substantiate the theoretical results.
S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation
2014-01-01
Background Surface electromyographic (S-EMG) signal processing has been growing in recent years due to its non-invasive assessment of muscle function and structure, and because of the fast-growing rate of digital technology, which brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data, so efficient transmission and/or storage of S-EMG signals is an active research issue and the aim of this work. Methods This paper presents an algorithm for the data compression of surface electromyographic (S-EMG) signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform to perform spectral decomposition and decorrelation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and pack the data. The bit allocation scheme is based on mathematically decreasing spectral shape models, which assign shorter digital word lengths to high-frequency wavelet-transformed coefficients. Four bit-allocation spectral shapes were implemented and compared: decreasing exponential, decreasing linear, decreasing square-root and rotated hyperbolic tangent. Results The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance evaluation metrics are presented, together with comparisons with other encoders proposed in the scientific literature. Conclusions The decreasing bit allocation shape applied to the quantized wavelet coefficients, combined with arithmetic coding, results in an efficient procedure. 
The performance comparisons of the proposed S-EMG data compression algorithm with established techniques in the scientific literature show promising results. PMID:24571620
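The pipeline can be caricatured end-to-end in a few lines. The sketch below uses a Haar DWT and an assumed exponential bit-decay rate as stand-ins for the paper's wavelet and allocation shapes: transform, give fewer bits to higher-frequency subbands, quantize, reconstruct and report an SNR.

```python
import numpy as np

def haar_dwt(x, levels):
    bands, a = [], x.astype(float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        bands.append(d)
    return a, bands[::-1]              # coarse approximation + details, coarse to fine

def haar_idwt(a, bands):
    for d in bands:
        up = np.empty(2 * len(d))
        up[0::2], up[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = up
    return a

def quantize(c, bits):
    peak = float(np.abs(c).max()) or 1.0
    scale = max(2 ** (bits - 1) - 1, 1)    # keep at least one level per sign
    return np.round(c / peak * scale) / scale * peak

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 8 * np.arange(1024) / 1024) + 0.1 * rng.standard_normal(1024)
a, bands = haar_dwt(x, levels=5)
bits = [int(round(8 * np.exp(-0.5 * k))) for k in range(len(bands))]  # 8,5,3,2,1 bits
x_hat = haar_idwt(quantize(a, 10), [quantize(d, b) for d, b in zip(bands, bits)])
snr = 10 * np.log10(float(np.sum(x ** 2) / np.sum((x - x_hat) ** 2)))
print(round(snr, 1))                       # reconstruction SNR in dB
```

A real codec would follow the quantizer with entropy coding of the integer indices, which is where the bit-rate savings are actually realized.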
Ground State and Finite Temperature Lanczos Methods
NASA Astrophysics Data System (ADS)
Prelovšek, P.; Bonča, J.
The present review focuses on recent developments of exact-diagonalization (ED) methods that use the Lanczos algorithm to transform large sparse matrices into tridiagonal form. We begin with a review of the basic principles of the Lanczos method for computing ground-state static as well as dynamical properties. Next, the generalization to finite temperatures in the form of the well-established finite-temperature Lanczos method is described; it allows for the evaluation of static and dynamic quantities at temperatures T>0 within various correlated models. Several extensions and modifications of the latter method introduced more recently are analysed, in particular the low-temperature Lanczos method and the microcanonical Lanczos method, the latter especially applicable in the high-T regime. In order to overcome the problem of exponentially growing Hilbert spaces that prevents ED calculations on larger lattices, different approaches based on Lanczos diagonalization within a reduced basis have been developed; in this context, a recently developed method based on ED within a limited functional space is reviewed. Finally, we briefly discuss the real-time evolution of correlated systems far from equilibrium, which can be simulated using ED and Lanczos-based methods, as well as approaches based on diagonalization in a reduced basis.
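The core tridiagonalization is short to write down. The sketch below (numpy, with full reorthogonalization for clarity, applied to an assumed toy tight-binding Hamiltonian; not the reviewed production codes) extracts a ground-state energy from the small tridiagonal matrix.

```python
import numpy as np

def lanczos_ground_state(H, m=60, seed=0):
    """Lanczos tridiagonalization; returns the lowest eigenvalue of the m x m
    (or smaller) tridiagonal projection of the symmetric matrix H."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(H.shape[0])
    v /= np.linalg.norm(v)
    V, alpha, beta = [v], [], []
    for _ in range(m):
        w = H @ V[-1]
        alpha.append(V[-1] @ w)
        w = w - alpha[-1] * V[-1] - (beta[-1] * V[-2] if beta else 0.0)
        for u in V:                        # full reorthogonalization (fine for small m)
            w -= (u @ w) * u
        b = np.linalg.norm(w)
        if b < 1e-12:
            break                          # Krylov space exhausted
        beta.append(b)
        V.append(w / b)
    k = len(alpha)
    T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    return np.linalg.eigvalsh(T)[0]

n = 200                                    # toy model: tight-binding chain with an
H = -np.eye(n, k=1) - np.eye(n, k=-1)      # attractive impurity on site 0, giving
H[0, 0] = -5.0                             # a well-separated bound ground state
e0 = lanczos_ground_state(H)
print(bool(abs(e0 - np.linalg.eigvalsh(H)[0]) < 1e-8))  # → True
```

In practice H is never stored densely; only the matrix-vector product H @ v is needed, which is what makes the method viable for the exponentially large Hilbert spaces discussed above.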
NASA Astrophysics Data System (ADS)
de Andrea González, Ángel; González-Gutiérrez, Leo M.
2017-09-01
The Rayleigh-Taylor instability (RTI) in an infinite slab, where a constant-density lower fluid is initially separated from an upper stratified fluid, is discussed in the linear regime. The upper fluid has an exponentially increasing density, and surface tension is considered between the two. It was found useful to study stability using the initial value problem (IVP) approach, so as to ensure the inclusion of certain continuum modes that are otherwise neglected. This methodology includes the branch cut in the complex plane; consequently, in addition to discrete modes (surface RTI modes), a set of continuum modes (internal RTI modes) also appears. As a result, the usual information given by the normal mode method is now complete. Furthermore, a new role is found for surface tension: to transform surface RTI modes (discrete spectrum) into internal RTI modes belonging to a continuous spectrum at a critical wavenumber. As a consequence, the cut-off wavenumber disappears: i.e. the growth rate of the RTI surface mode does not decay to zero at the cut-off wavenumber, as previously believed. Finally, we found that, due to the continuum, the asymptotic behavior of the perturbation with respect to time is slower than exponential when only the continuous spectrum exists.
Estimating chlorophyll content of spartina alterniflora at leaf level using hyper-spectral data
NASA Astrophysics Data System (ADS)
Wang, Jiapeng; Shi, Runhe; Liu, Pudong; Zhang, Chao; Chen, Maosi
2017-09-01
Spartina alterniflora, one of the most successful invasive species in the world, was first introduced to China in 1979 to accelerate sedimentation and land formation via so-called "ecological engineering", and it is now widely distributed in coastal saltmarshes in China. A key question is how to retrieve chlorophyll content to reflect growth status, which has important implications for potential invasiveness. In this work, an estimation model of the chlorophyll content of S. alterniflora was developed based on hyper-spectral data in the Dongtan Wetland, Yangtze Estuary, China. The spectral reflectance of S. alterniflora leaves and their corresponding chlorophyll contents were measured, and correlation analysis and regression models (linear, logarithmic, quadratic, power and exponential) were established. The spectral reflectance was transformed and the feature parameters (i.e., "san bian" [the three-edge parameters], "lv feng" [green peak] and "hong gu" [red valley]) were extracted to retrieve the chlorophyll content of S. alterniflora. The results showed that these parameters had a large correlation coefficient with chlorophyll content. On the basis of the correlation coefficients, mathematical models were established, and the power and exponential models based on SDb had the smallest RMSE and largest R2, performing well in the inversion of the chlorophyll content of S. alterniflora.
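The exponential regression step reduces to a log-linear least-squares fit. The sketch below uses synthetic stand-in data with assumed parameter values, not the measured S. alterniflora spectra.

```python
import numpy as np

def fit_exponential(x, y):
    """Fit y = a * exp(b * x) by linear least squares on log(y)."""
    b, log_a = np.polyfit(x, np.log(y), 1)
    return np.exp(log_a), b

rng = np.random.default_rng(2)
sdb = np.linspace(0.2, 1.0, 40)                        # stand-in spectral index values
chl = 12.0 * np.exp(1.8 * sdb) * np.exp(0.02 * rng.standard_normal(40))
a, b = fit_exponential(sdb, chl)
print(round(a), round(b, 1))                           # recovers ≈ (12, 1.8)
```

Model selection then amounts to computing RMSE and R² for each candidate form (linear, logarithmic, quadratic, power, exponential) on held-out samples and keeping the best, as done above with the SDb-based models.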
Scott, Jill R.; Ham, Jason E.; Durham, Bill; ...
2004-01-01
Metal polypyridines are excellent candidates for gas-phase optical experiments where their intrinsic properties can be studied without complications due to the presence of solvent. The fluorescence lifetimes of [Ru(bpy)3]1+ trapped in an optical detection cell within a Fourier transform mass spectrometer were obtained using matrix-assisted laser desorption/ionization to generate the ions, with either 2,5-dihydroxybenzoic acid (DHB) or sinapinic acid (SA) as matrix. All transients acquired, whether using DHB or SA for ion generation, were best described as approximately exponential decays. The rate constant for transients derived using DHB as matrix was 4×10^7 s^-1, while the rate constant using SA was 1×10^7 s^-1. Some suggestions of multiple exponential decay were evident, although limited by the quality of the signals. Photodissociation experiments revealed that [Ru(bpy)3]1+ generated using DHB can decompose to [Ru(bpy)2]1+, whereas ions generated using SA showed no decomposition. Comparison of the mass spectra with the fluorescence lifetimes illustrates the promise of incorporating optical detection with trapped-ion mass spectrometry techniques.
Neoplastic transformation of hamster embryo cells by heavy ions
NASA Astrophysics Data System (ADS)
Han, Z.; Suzuki, H.; Suzuki, F.; Suzuki, M.; Furusawa, Y.; Kato, T.; Ikenaga, M.
1998-11-01
We have studied the induction of morphological transformation of Syrian hamster embryo cells by low doses of heavy ions with different linear energy transfer (LET), ranging from 13 to 400 keV/μm. Exponentially growing cells were irradiated with 12C or 28Si ion beams generated by the Heavy Ion Medical Accelerator in Chiba (HIMAC), inoculated to culture dishes, and transformed colonies were identified when the cells were densely stacked and showed a crisscross pattern. Over the LET range examined, the frequency of transformation induced by the heavy ions increased sharply at very low doses no greater than 5 cGy. The relative biological effectiveness (RBE) of the heavy ions relative to 250 kVp X-rays showed an initial increase with LET, reaching a maximum value of about 7 at 100 keV/μm, and then decreased with the further increase in LET. Thus, we confirmed that high LET heavy ions are significantly more effective than X-rays for the induction of in vitro cell transformation.
Han, Z B; Suzuki, H; Suzuki, F; Suzuki, M; Furusawa, Y; Kato, T; Ikenaga, M
1998-09-01
Syrian hamster embryo cells were used to study the morphological transformation induced by accelerated heavy ions with different linear energy transfer (LET) ranging from 13 to 400 keV/micron. Exponentially growing cells were irradiated with 12C or 28Si ion beams generated by the Heavy Ion Medical Accelerator in Chiba (HIMAC), then inoculated to culture dishes. Morphologically altered colonies were scored as transformants. Over the LET range examined, the frequency of transformation induced by the heavy ions increased sharply at very low doses no greater than 5 cGy. The relative biological effectiveness (RBE) of the heavy ions relative to X-rays first increased with LET, reached a maximum value of about 7 at 100 keV/micron, then decreased with the further increase of LET. Our findings confirmed that high LET heavy ions are much more effective than X-rays for the induction of in vitro cell transformation.
NASA Astrophysics Data System (ADS)
Gan, Wen-Cong; Shu, Fu-Wen
The quantum many-body problem, with its exponentially large number of degrees of freedom, can be reduced to a tractable computational form by the neural network method [G. Carleo and M. Troyer, Science 355 (2017) 602, arXiv:1606.02318]. The power of deep neural networks (DNNs) based on deep learning is clarified by mapping them to the renormalization group (RG), which may shed light on the holographic principle by identifying a sequence of RG transformations with the AdS geometry. In this paper, we show that any network which reflects the RG process has intrinsic hyperbolic geometry, and we discuss the structure of entanglement encoded in the graph of a DNN. We find that the entanglement structure of the DNN is of the Ryu-Takayanagi form. Based on these facts, we argue that the emergence of a holographic gravitational theory is related to the deep learning process of the quantum field theory.
Hamilton-Jacobi formalism for inflation with non-minimal derivative coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheikhahmadi, Haidar; Saridakis, Emmanuel N.; Aghamohammadi, Ali
2016-10-01
In inflation with non-minimal derivative coupling there is no conformal transformation to the Einstein frame where calculations are straightforward, and thus in order to extract inflationary observables one needs to perform a detailed and lengthy perturbation investigation. In this work we bypass this problem by performing a Hamilton-Jacobi analysis, namely rewriting the cosmological equations with the scalar field as the time variable. We apply the method to two specific models, namely the power-law and exponential cases, and for each model we calculate various observables such as the tensor-to-scalar ratio and the spectral index and its running. We compare them with the 2013 and 2015 Planck data, and we show that they are in very good agreement with observations.
Investigating the structure preserving encryption of high efficiency video coding (HEVC)
NASA Astrophysics Data System (ADS)
Shahid, Zafar; Puech, William
2013-02-01
This paper presents a novel method for the real-time protection of the new, emerging High Efficiency Video Coding (HEVC) standard. Structure-preserving selective encryption is performed in the CABAC entropy coding module of HEVC, which differs significantly from the CABAC entropy coding of H.264/AVC. In the CABAC of HEVC, exponential Golomb coding is replaced by truncated Rice (TR) coding up to a specific value for the binarization of transform coefficients. Selective encryption is performed using the AES cipher in cipher feedback mode on a plaintext of binstrings in a context-aware manner. The encrypted bitstream has exactly the same bit rate and is format compliant. Experimental evaluation and security analysis of the proposed algorithm are performed on several benchmark video sequences containing different combinations of motion, texture and objects.
Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay
NASA Astrophysics Data System (ADS)
Chunodkar, Apurva A.; Akella, Maruthi R.
2013-12-01
This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is (1) piecewise constant or (2) continuous with a bounded rate. We also consider the application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time delay in the feedback is modeled as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound, and the feedback is a linear function of the delayed states. For linear systems with switched delay feedback, a new sufficient condition on the average dwell time is presented using a complete-type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well-characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete-type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying delay. This extension ensures stability robustness to time delay in the control design for all values of time delay less than the known upper bound. A model transformation is used to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region-of-attraction estimate. A constructive method to evaluate the sufficient conditions is presented, together with a comparison with the corresponding constant and piecewise constant delay cases. Numerical simulations illustrate the theoretical results of this paper.
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Padé (rational-function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Padé approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
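The contrast between polynomial and exponential interpolants is easiest to see on the classic stiff test problem y' = a·y. The sketch below is a toy illustration (not the CREK1D code): explicit Euler, built on a polynomial interpolant, blows up for a step far beyond its stability limit, while a step built on the exponential interpolant reproduces the decay.

```python
import math

# Stiff linear test problem y' = a*y, a = -1000; exact solution decays.
a, h, steps = -1000.0, 0.01, 100   # h is far beyond Euler's stability limit

y_euler, y_exp = 1.0, 1.0
for _ in range(steps):
    y_euler += h * a * y_euler      # polynomial interpolant: unstable here
    y_exp *= math.exp(a * h)        # exponential interpolant: exact for y'=a*y

exact = math.exp(a * h * steps)
print(abs(y_exp - exact) < 1e-12)   # True: exponential step tracks the decay
print(abs(y_euler) > 1.0)           # True: Euler iterates grow without bound
```

For nonlinear kinetics the local coefficient a must be estimated from the Jacobian, which is where the three-parameter exponential interpolant of the paper comes in.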
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential approximation has been compared with the linear, reciprocal and quadratic fit methods on four test problems in structural analysis. The use of such approximations is attractive in structural optimization because it reduces the number of exact analyses, which involve computationally expensive finite element analysis.
The Analysis of Fluorescence Decay by a Method of Moments
Isenberg, Irvin; Dyson, Robert D.
1969-01-01
The fluorescence decay of the excited state of most biopolymers, and of biopolymer conjugates and complexes, is not, in general, a simple exponential. The method of moments is used to establish a means of analyzing such multi-exponential decays. The method is tested using computer-simulated data, assuming that the limiting error is determined by noise generated by a pseudorandom number generator. Multi-exponential systems with relatively closely spaced decay constants may be successfully analyzed. The analyses show the requirements, in terms of precision, that data must meet. The results may be used both as an aid in the design of equipment and in the analysis of data subsequently obtained. PMID:5353139
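For a noise-free two-component decay, the moment route can be sketched as follows. This is an illustrative Prony-type reduction in the spirit of the method of moments, not the authors' exact algorithm, and the amplitudes and lifetimes are invented for the example.

```python
import math
import numpy as np

# Two-component decay f(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
a1, tau1, a2, tau2 = 2.0, 1.0, 1.0, 3.0
t, dt = np.linspace(0.0, 60.0, 200001, retstep=True)
f = a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def integral(y):                       # simple trapezoidal quadrature
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))

# Reduced moments G_k = (1/k!) * int t^k f(t) dt = sum_i a_i * tau_i**(k+1)
G = [integral(t**k * f) / math.factorial(k) for k in range(4)]

# The lifetimes are the roots of x**2 - c1*x - c0 = 0, where
# G2 = c1*G1 + c0*G0 and G3 = c1*G2 + c0*G1 (Prony-type elimination).
c1, c0 = np.linalg.solve([[G[1], G[0]], [G[2], G[1]]], [G[2], G[3]])
taus = sorted(np.roots([1.0, -c1, -c0]).real)
print(np.allclose(taus, [tau1, tau2], atol=1e-3))   # both lifetimes recovered
```

With noisy data the higher moments become unreliable, which is exactly the precision requirement the abstract emphasizes.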
NASA Astrophysics Data System (ADS)
Mu, G. Y.; Mi, X. Z.; Wang, F.
2018-01-01
High-temperature low-cycle fatigue tests of TC4 and TC11 titanium alloys are carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. A high-temperature low-cycle fatigue life prediction model for the two titanium alloys is established using the Manson-Coffin method. In double-logarithmic coordinates, the relationship between the number of reversals to failure and the plastic strain range is nonlinear, whereas the Manson-Coffin method assumes it to be linear; a certain prediction error is therefore unavoidable with the Manson-Coffin method. To address this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of both titanium alloys can be predicted accurately and effectively by the two methods, with prediction accuracy within a ±1.83 scatter band. The new exponential-function method proves more effective and accurate than the Manson-Coffin method for both alloys, giving life predictions with a smaller standard deviation and a narrower scatter band. For both methods, the predictions for the TC4 alloy are better than those for the TC11 alloy.
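The classical Manson-Coffin relation, Δε_p/2 = ε_f'·(2N_f)^c, is a straight line in double-logarithmic coordinates, which is why its constants are conventionally obtained by linear regression on log-transformed data. A minimal sketch with assumed constants (synthetic points, not the paper's TC4/TC11 test results):

```python
import numpy as np

# Assumed fatigue ductility coefficient and exponent (illustrative values)
eps_f, c = 0.35, -0.6
N = np.array([1e2, 1e3, 1e4, 1e5, 1e6])     # cycles to failure
amp = eps_f * (2 * N) ** c                  # plastic strain amplitude

# Manson-Coffin is linear in log-log, so a degree-1 polyfit recovers it
slope, intercept = np.polyfit(np.log10(2 * N), np.log10(amp), 1)
print(abs(slope - c) < 1e-8)                # exponent c recovered
print(abs(10**intercept - eps_f) < 1e-8)    # coefficient eps_f' recovered
```

When real data curve away from this line in log-log coordinates, as the abstract reports, the linear fit is systematically biased, motivating the exponential-function alternative.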
Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.
2016-01-01
Background The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model characterizes the diffusion-weighted imaging (DWI) signal of malignant breast tumors better than the mono-exponential Gaussian diffusion model. Methods 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-squares mono-exponential fitting and segmented least-squares bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and the Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analyses. Results For the ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential-preferable and bi-exponential-preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay, while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analyses. Conclusions Although the presence of an IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining the characteristics of the breast cancer DWI signal in practice. PMID:27709078
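The AIC comparison used in such studies penalizes extra parameters: in the common residual-sum-of-squares form, AIC = n·ln(RSS/n) + 2k, so the bi-exponential fit must reduce the residual enough to pay for its additional parameters. A minimal sketch with hypothetical residuals (the numbers are made up, not the study's data):

```python
import math

def aic(n, rss, k):
    # Least-squares form of the Akaike Information Criterion:
    # n*ln(RSS/n) plus a 2k penalty for k free parameters; lower is better.
    return n * math.log(rss / n) + 2 * k

n = 100                                  # hypothetical number of data points
aic_mono = aic(n, rss=2.00, k=2)         # mono-exponential: 2 parameters
aic_bi = aic(n, rss=1.95, k=4)           # bi-exponential (IVIM): 4 parameters
print(aic_mono < aic_bi)   # True: tiny RSS gain does not justify 2 extra params
```

This is the mechanism by which voxels with near-Gaussian decay end up classified as mono-exponential even though the bi-exponential model, being nested, never fits worse.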
Shpotyuk, O; Brunner, M; Hadzaman, I; Balitska, V; Klym, H
2016-12-01
Mathematical models of degradation-relaxation kinetics are considered for jammed thick-film systems composed of screen-printed spinel Cu0.1Ni0.1Co1.6Mn1.2O4 and conductive Ag or Ag-Pd alloys. Structurally intrinsic nanoinhomogeneous ceramics, due to Ag and Ag-Pd diffusing agents embedded in a spinel phase environment, are shown to define the governing kinetics of thermally induced degradation at 170 °C, which obeys clearly non-exponential behavior in a negative relative resistance drift. A characteristic stretched-to-compressed exponential crossover is detected in the degradation-relaxation kinetics of thick-film systems with conductive contacts made of Ag-Pd and Ag alloys. Under substantial migration of the conductive phase, Ag penetrates the thick-film spinel ceramics via a considerable two-step diffusion process.
Cell Division and Evolution of Biological Tissues
NASA Astrophysics Data System (ADS)
Rivier, Nicolas; Arcenegui-Siemens, Xavier; Schliecker, Gudrun
A tissue is a geometrical, space-filling, random cellular network; it remains in this steady state while individual cells divide. Cell division (fragmentation) is a local, elementary topological transformation which establishes statistical equilibrium of the structure. Statistical equilibrium is characterized by observable relations (Lewis, Aboav) between cell shapes and sizes and those of their neighbours, obtained through maximum entropy and topological correlation extending to nearest neighbours only, i.e. maximal randomness. For a two-dimensional tissue (epithelium), the distribution of cell shapes and that of mother and daughter cells can be obtained from elementary geometrical and physical arguments, except for an exponential factor favouring division of larger cells, and exponential and combinatorial factors encouraging a most symmetric division. The resulting distributions are very narrow, and stationarity severely restricts the range of an adjustable structural parameter.
Ultrastable light sources in the crossover from superradiance to lasing
NASA Astrophysics Data System (ADS)
Xu, Minghui; Tieri, David; Holland, Murray
2013-05-01
We theoretically investigate the crossover from steady-state superradiance to optical lasing. An exact solution of the quantum master equation is difficult to obtain due to the exponential scaling of the Hilbert space dimension with system size. However, since Lindblad operators in the master equation are invariant under SU(4) transformations, we are able to reduce the exponential scaling of the problem to cubic by expanding the density matrix in terms of an SU(4) basis. In this way, we obtain exact quantum solutions of the superradiance-laser crossover. We use this theory to investigate the potential for ultrastable lasers in the millihertz linewidth regime, and find the behavior of important observables, such as intensity, linewidth, spin-correlation, and entanglement. This work was supported by the DARPA QUASAR program and NSF.
Tree tensor network approach to simulating Shor's algorithm
NASA Astrophysics Data System (ADS)
Dumitrescu, Eugene
2017-12-01
Constructively simulating quantum systems furthers our understanding of qualitative and quantitative features which may be analytically intractable. In this paper, we directly simulate and explore the entanglement structure present in the paradigmatic example for exponential quantum speedups: Shor's algorithm. To perform our simulation, we construct a dynamic tree tensor network which manifestly captures two salient circuit features for modular exponentiation. These are the natural two-register bipartition and the invariance of entanglement with respect to permutations of the top-register qubits. Our construction helps identify the entanglement entropy properties, which we summarize by a scaling relation. Further, the tree network is efficiently projected onto a matrix product state from which we efficiently execute the quantum Fourier transform. Future simulation of quantum information states with tensor networks exploiting circuit symmetries is discussed.
Dao, Hoang Lan; Aljunid, Syed Abdullah; Maslennikov, Gleb; Kurtsiefer, Christian
2012-08-01
We report on a simple method to prepare optical pulses with an exponentially rising envelope on the time scale of a few ns. The scheme is based on the exponential transfer function of a fast transistor, which generates an exponentially rising envelope that is transferred first onto a radio-frequency carrier and then onto a coherent cw laser beam with an electro-optical phase modulator. The temporally shaped sideband is then extracted with an optical resonator and can be used to efficiently excite a single 87Rb atom.
Compact exponential product formulas and operator functional derivative
NASA Astrophysics Data System (ADS)
Suzuki, Masuo
1997-02-01
A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied in DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rates in artery segmentation for 19 cases are 89.6%+/-15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
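The model-fitting stage of such a pipeline can be sketched with SciPy's Levenberg-Marquardt driver. The tri-exponential form below and all parameter values are assumptions chosen for illustration, not a patient-derived AIF or the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed tri-exponential AIF shape: sum of three decaying exponentials
def aif(t, a1, m1, a2, m2, a3, m3):
    return a1*np.exp(-m1*t) + a2*np.exp(-m2*t) + a3*np.exp(-m3*t)

t = np.linspace(0.0, 10.0, 200)
true = (1.0, 5.0, 0.5, 1.0, 0.25, 0.1)     # invented ground-truth parameters
y = aif(t, *true)                           # noise-free synthetic curve

p0 = tuple(p * 1.2 for p in true)           # rough initial guess, 20% off
popt, _ = curve_fit(aif, t, y, p0=p0, method="lm", maxfev=10000)
print(np.allclose(aif(t, *popt), y, atol=1e-6))   # fitted curve matches data
```

In practice each candidate AIF is fitted this way and the candidate with the best goodness of fit is retained; with noisy clinical data, the quality of the initial guess matters considerably for this ill-conditioned problem.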
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, by three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, so either a transformation procedure must be applied to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained; however, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers, and a two-stage transformation (modulus-exponential-normal) is used to obtain Gaussian distributions. Fractional polynomials are employed to model the mean and standard deviation as functions of a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter-estimation and sampling uncertainties. Finally, the elimination of outliers is made dependent on the covariates by reiteration. Though a good knowledge of statistical theory is needed to perform the analysis, the current method is rewarding because the results are of practical use in patient care.
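Tukey's fence itself is simple: values outside [Q1 − k·IQR, Q3 + k·IQR] are excluded before fitting. A minimal sketch on made-up data (k = 1.5 is the conventional choice, not necessarily the multiplier used in the study):

```python
import numpy as np

# Invented sample with one gross outlier (25.0)
values = np.array([4.1, 4.4, 4.7, 5.0, 5.2, 5.5, 5.8, 6.1, 6.3, 25.0])

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr      # Tukey's fence, k = 1.5
kept = values[(values >= lo) & (values <= hi)]
print(kept.max())   # 6.3 -- the outlier 25.0 is fenced out before fitting
```

Making the fence covariate-dependent, as the study does by reiteration, amounts to applying this filter to residuals about the fitted age trend rather than to the raw values.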
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves) from the knowledge of an approximation to the interpolation polynomial (or trigonometric polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
Scheerans, Christian; Derendorf, Hartmut; Kloft, Charlotte
2008-04-01
The area under the plasma concentration-time curve from time zero to infinity (AUC(0-inf)) is generally considered to be the most appropriate measure of total drug exposure for bioavailability/bioequivalence studies of orally administered drugs. However, the lack of a standardised method for identifying the mono-exponential terminal phase of the concentration-time curve causes variability in the estimated AUC(0-inf). The present investigation introduces a simple method, called the two times t(max) method (TTT method), to reliably identify the mono-exponential terminal phase in the case of oral administration. The new method was tested by Monte Carlo simulation in Excel and compared with the adjusted r-squared algorithm (ARS algorithm) frequently used in pharmacokinetic software programs. Statistical diagnostics of three different scenarios, each with 10,000 hypothetical patients, showed that the new method provided unbiased average AUC(0-inf) estimates for orally administered drugs with a monophasic concentration-time curve after the maximum concentration. In addition, the TTT method generally provided more precise estimates of AUC(0-inf) than the ARS algorithm. It was concluded that the TTT method is a reasonable tool for use as a standardised method in pharmacokinetic analyses, especially bioequivalence studies, to reliably identify the mono-exponential terminal phase for orally administered drugs showing a monophasic concentration-time profile.
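The TTT idea can be sketched on a simulated one-compartment oral profile: take the terminal phase to start at twice the observed t(max), fit the log-linear slope there, and extrapolate the AUC. This illustrates the rule as described in the abstract, not the authors' Excel implementation; the model parameters and sampling times are invented.

```python
import numpy as np

# One-compartment oral absorption: C(t) = A*(exp(-ke*t) - exp(-ka*t))
ka, ke, A = 2.0, 0.2, 10.0
t = np.array([0., .25, .5, 1., 1.5, 2., 3., 4., 6., 8., 12., 24.])
C = A * (np.exp(-ke*t) - np.exp(-ka*t))

tmax = t[np.argmax(C)]
terminal = t >= 2.0 * tmax                  # TTT rule: phase starts at 2*t_max
lam_z = -np.polyfit(t[terminal], np.log(C[terminal]), 1)[0]

auc_last = np.sum(np.diff(t) * (C[1:] + C[:-1]) / 2.0)   # trapezoidal rule
auc_inf = auc_last + C[-1] / lam_z          # log-linear tail extrapolation

true_auc = A * (1.0/ke - 1.0/ka)            # analytic AUC(0-inf) = 45
print(abs(lam_z - ke) < 0.005)              # terminal slope ~ ke
print(abs(auc_inf - true_auc) / true_auc < 0.1)
```

By twice t(max) the absorption exponential has largely died away, which is why the rule isolates a cleanly mono-exponential tail for monophasic profiles.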
Maji, Kaushik; Kouri, Donald J
2011-03-28
We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N^2 scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.
NASA Astrophysics Data System (ADS)
Koh, Yang Wei
2018-03-01
In current studies of mean-field quantum spin systems, much attention is placed on the calculation of the ground-state energy and the excitation gap, especially the latter, which plays an important role in quantum annealing. In pure systems, the finite gap can be obtained by various existing methods such as the Holstein-Primakoff transform, while the tunneling splitting at first-order phase transitions has also been studied in detail using instantons in many previous works. In disordered systems, however, it remains challenging to compute the gap of large-size systems with specific realization of disorder. Hitherto, only quantum Monte Carlo techniques are practical for such studies. Recently, Knysh [Nature Comm. 7, 12370 (2016), 10.1038/ncomms12370] proposed a method where the exponentially large dimensionality of such systems is condensed onto a random potential of much lower dimension, enabling efficient study of such systems. Here we propose a slightly different approach, building upon the method of static approximation of the partition function widely used for analyzing mean-field models. Quantum effects giving rise to the excitation gap and nonextensive corrections to the free energy are accounted for by incorporating dynamical paths into the path integral. The time-dependence of the trace of the time-ordered exponential of the effective Hamiltonian is calculated by solving a differential equation perturbatively, yielding a finite-size series expansion of the path integral. Formulae for the first excited-state energy are proposed to aid in computing the gap. We illustrate our approach using the infinite-range ferromagnetic Ising model and the Hopfield model, both in the presence of a transverse field.
Force on a storage ring vacuum chamber after sudden turn-off of a magnet power supply
NASA Astrophysics Data System (ADS)
Sinha, Gautam; Prabhu, S. S.
2011-10-01
We are commissioning a 2.5 GeV synchrotron radiation source (SRS) in which electrons travel in high vacuum inside vacuum chambers made of aluminum alloys. These chambers are kept between the pole gaps of magnets and are designed to guide the radiation coming out of the storage ring to the experimental stations; they are connected by metallic bellows. During the commissioning phase of the SRS, the metallic bellows ruptured due to frequent tripping of the dipole magnet power supply, and the machine was down for quite some time. When a power supply trips, the current in the magnets decays exponentially. It was observed experimentally that the fast decay of the B field generates a large eddy current in the chambers, and consequently the chambers are subjected to a huge Lorentz force. This motivated us to develop a theoretical model to study the force acting on a metallic plate exposed to an exponentially decaying field, and then to extend it to a rectangular vacuum chamber. The problem is formulated using Maxwell's equations and converted to the inhomogeneous Helmholtz equation. After taking the Laplace transform, the equation is solved with appropriate boundary conditions, and the final results are obtained by taking the inverse Laplace transform. Expressions for the eddy current contours and the magnetic field produced by the eddy current are also derived. Variations of the force on chambers of different wall thickness due to a spatially varying and exponentially time-decaying field are presented. The result is a general theory which can be applied to different geometries and to the calculation of power loss as well. Comparisons are made with results obtained by simulation using a finite-element-based code, for quick verification of the theoretical model.
ERIC Educational Resources Information Center
Vaurio, Rebecca G.; Simmonds, Daniel J.; Mostofsky, Stewart H.
2009-01-01
One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses, which divide variability into normal and exponential components, and fast Fourier transform (FFT) analyses, which allow for detailed examination of the…
Theoretical analysis of exponential transversal method of lines for the diffusion equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, A.; Raydan, M.; Campo, A.
1996-12-31
Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method presents a very reduced truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.
How bootstrap can help in forecasting time series with more than one seasonal pattern
NASA Astrophysics Data System (ADS)
Cordeiro, Clara; Neves, M. Manuela
2012-09-01
The search for the future is an appealing challenge in time series analysis. The diversity of forecasting methodologies is inevitable and still expanding. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, revealing good performance. The algorithm Boot.EXPOS, which combines exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For time series with more than one seasonal pattern, the double seasonal Holt-Winters methods and related exponential smoothing methods were developed. A new challenge is to combine these seasonal methods with the bootstrap and carry over a resampling scheme similar to that used in the Boot.EXPOS procedure. The performance of this partnership is illustrated on some well-known data sets available in software.
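The pairing of exponential smoothing with a residual bootstrap can be sketched as follows, in the spirit of Boot.EXPOS but not the authors' code: simple exponential smoothing on a toy non-seasonal series, with one-step-ahead residuals resampled to form an empirical forecast distribution (the published procedure handles seasonal patterns and differs in detail).

```python
import numpy as np

rng = np.random.default_rng(42)
y = 10.0 + rng.normal(0.0, 1.0, 200)        # assumed toy series, mean level 10

# Simple exponential smoothing (SES)
alpha, level = 0.3, y[0]
fitted = [level]
for obs in y[1:]:
    level = alpha * obs + (1.0 - alpha) * level
    fitted.append(level)
fitted = np.array(fitted)

resid = y[1:] - fitted[:-1]                 # one-step-ahead residuals
point = fitted[-1]                          # SES forecast of the next value

# Residual bootstrap: resample residuals around the point forecast
boots = point + rng.choice(resid, size=2000, replace=True)
lo, hi = np.percentile(boots, [2.5, 97.5])  # empirical 95% forecast interval
print(lo < point < hi)
```

The appeal of the bootstrap step is that the interval reflects the empirical residual distribution rather than an assumed Gaussian error.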
Kim, Seongho; Jang, Hyejeong; Koo, Imhoi; Lee, Joohyoung; Zhang, Xiang
2017-01-01
Compared to other analytical platforms, comprehensive two-dimensional gas chromatography coupled with mass spectrometry (GC×GC-MS) has much greater separation power for the analysis of complex samples and is thus increasingly used in metabolomics for biomarker discovery. However, accurate peak detection remains a bottleneck for wide application of GC×GC-MS. Therefore, the normal-exponential-Bernoulli (NEB) model is generalized using the gamma distribution, and a new peak detection algorithm based on the normal-gamma-Bernoulli (NGB) model is developed. Unlike the NEB model, the NGB model has no closed-form analytical solution, hampering its practical use in peak detection. To circumvent this difficulty, three numerical approaches are introduced: fast Fourier transform (FFT) and the first-order and second-order delta methods (D1 and D2). Applications to simulated data and two real GC×GC-MS data sets show that the NGB-D1 method performs best in terms of both computational expense and peak detection performance.
Exact solutions for sound radiation from a moving monopole above an impedance plane.
Ochmann, Martin
2013-04-01
The acoustic field of a monopole source moving with constant velocity at constant height above an infinite locally reacting plane can be expressed in analytical form by combining the Lorentz transformation with the method of superimposing complex or real point sources. For a plane with mass-like response, the solution in Lorentz space consists of a superposition of monopoles only and therefore does not differ in principle from the solution of the corresponding stationary boundary value problem. However, for a frequency-independent surface impedance, e.g. with purely absorbing behavior, the half-space Green's function comprises not only a line of monopoles but also dipoles. For certain field points on a special line g, this solution can be written explicitly using an exponential integral. For arbitrary field points, the method of stationary phase leads to an asymptotic solution for the reflection coefficient which agrees with prior results from the literature.
Exponential propagators for the Schrödinger equation with a time-dependent potential.
Bader, Philipp; Blanes, Sergio; Kopylov, Nikita
2018-06-28
We consider the numerical integration of the Schrödinger equation with a time-dependent Hamiltonian given as the sum of the kinetic energy and a time-dependent potential. Commutator-free (CF) propagators are exponential propagators that have been shown to be highly efficient for general time-dependent Hamiltonians. We propose new CF propagators that are tailored to Hamiltonians of the said structure, showing considerably improved performance. We obtain new fourth- and sixth-order CF propagators, as well as a novel sixth-order propagator that incorporates a double commutator depending only on coordinates, so this term can be considered cost-free. The algorithms require the computation of the action of exponentials on a vector, similar to the well-known exponential midpoint propagator, and this is carried out using the Lanczos method. We illustrate the performance of the new methods on several numerical examples.
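The baseline these schemes build on, the exponential midpoint propagator, can be illustrated on a toy driven two-level system, where the exponential of a frozen, traceless 2×2 Hamiltonian is available in closed form. This is the plain midpoint rule on an invented H(t), not the paper's CF schemes.

```python
import numpy as np

# Toy driven two-level system H(t) = sigma_z + cos(t)*sigma_x.
# For traceless 2x2 H with H^2 = E^2 * I,
#   exp(-i*h*H) = cos(h*E)*I - i*sin(h*E)*H/E.
sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def step(psi, tmid, h):
    H = sz + np.cos(tmid) * sx
    E = np.sqrt(1.0 + np.cos(tmid)**2)        # |eigenvalue| of H(tmid)
    U = np.cos(h*E) * I2 - 1j * np.sin(h*E) * H / E
    return U @ psi                             # exact unitary of the frozen H

h, psi = 0.01, np.array([1.0, 0.0], dtype=complex)
for n in range(500):
    psi = step(psi, (n + 0.5) * h, h)          # H evaluated at the midpoint

print(abs(np.vdot(psi, psi).real - 1.0) < 1e-10)   # unitarity: norm preserved
```

Each step is an exact exponential of a frozen Hamiltonian, so the propagation is unitary by construction; the CF propagators of the paper replace the single midpoint exponential with products of exponentials to reach higher order.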
Exponential convergence through linear finite element discretization of stratified subdomains
NASA Astrophysics Data System (ADS)
Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali
2016-10-01
Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.
Solution of the finite Milne problem in stochastic media with RVT Technique
NASA Astrophysics Data System (ADS)
Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.
2017-12-01
This paper presents the solution of the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To obtain explicit forms for the radiant energy density, the linear extrapolation distance, the reflectivity and the transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution depends on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity and transmissivity are calculated. For illustration, numerical results with conclusions are provided.
NASA Technical Reports Server (NTRS)
Balachandar, S.; Yuen, D. A.; Reuteler, D. M.
1995-01-01
We have applied spectral-transform methods to study three-dimensional thermal convection with temperature-dependent viscosity. The viscosity varies exponentially with the form exp(-BT), where B controls the viscosity contrast and T is temperature. Solutions for high Rayleigh numbers, up to an effective Ra of 6.25 x 10^6, have been obtained for an aspect ratio of 5x5x1 and a viscosity contrast of 25. Solutions show the localization of toroidal velocity fields with increasing vigor of convection to a coherent network of shear zones. Viscous dissipation increases with Rayleigh number and is particularly strong in regions of convergent flows and shear deformation. A time-varying depth-dependent mean flow is generated because of the correlation between laterally varying viscosity and velocity gradients.
Asymptotic decay and non-rupture of viscous sheets
NASA Astrophysics Data System (ADS)
Fontelos, Marco A.; Kitavtsev, Georgy; Taranets, Roman M.
2018-06-01
For a nonlinear system of coupled PDEs, that describes evolution of a viscous thin liquid sheet and takes account of surface tension at the free surface, we show exponential (H^1, L^2) asymptotic decay to the flat profile of its solutions considered with general initial data. Additionally, by transforming the system to Lagrangian coordinates we show that the minimal thickness of the sheet stays positive for all times. This result proves the conjecture formally accepted in the physical literature (cf. Eggers and Fontelos in Singularities: formation, structure, and propagation. Cambridge Texts in Applied Mathematics, Cambridge, 2015), that a viscous sheet cannot rupture in finite time in the absence of external forcing. Moreover, in the absence of surface tension we find a special class of initial data for which the Lagrangian solution exhibits L^2-exponential decay to the flat profile.
Thermal dynamics on the lattice with exponentially improved accuracy
NASA Astrophysics Data System (ADS)
Pawlowski, Jan M.; Rothkopf, Alexander
2018-03-01
We present a novel simulation prescription for thermal quantum fields on a lattice that operates directly in imaginary frequency space. By distinguishing initial conditions from quantum dynamics it provides access to correlation functions also outside of the conventional Matsubara frequencies ω_n = 2πnT. In particular it resolves their frequency dependence between ω = 0 and ω_1 = 2πT, where the thermal physics ω ∼ T of e.g. transport phenomena is dominantly encoded. Real-time spectral functions are related to these correlators via an integral transform with rational kernel, so that their unfolding from the novel simulation data is exponentially improved compared to standard Euclidean simulations. We demonstrate this improvement within a non-trivial 0+1-dimensional quantum mechanical toy model and show that spectral features inaccessible in standard Euclidean simulations are quantitatively captured.
Exponential instability in the fractional Calderón problem
NASA Astrophysics Data System (ADS)
Rüland, Angkana; Salo, Mikko
2018-04-01
In this paper we prove the exponential instability of the fractional Calderón problem and thus prove the optimality of the logarithmic stability estimate from Rüland and Salo (2017 arXiv:1708.06294). In order to infer this result, we follow the strategy introduced by Mandache in (2001 Inverse Problems 17 1435) for the standard Calderón problem. Here we exploit a close relation between the fractional Calderón problem and the classical Poisson operator. Moreover, using the construction of a suitable orthonormal basis, we also prove (almost) optimality of the Runge approximation result for the fractional Laplacian, which was derived in Rüland and Salo (2017 arXiv:1708.06294). Finally, in one dimension, we show a close relation between the fractional Calderón problem and the truncated Hilbert transform.
NASA Astrophysics Data System (ADS)
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts model calibration and prediction uncertainty estimation. This study compares three methods of dealing with heteroscedasticity: the explicit linear modeling (LM) method, the nonlinear modeling (NL) method using a hyperbolic tangent function, and the implicit Box-Cox transformation (BC). A combined approach (CA) that merges the advantages of the LM and BC methods is then proposed. In conjunction with a first-order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that LM-SEP yields the poorest streamflow predictions, with the widest uncertainty band and unrealistic negative flows. The NL and BC methods deal better with the heteroscedasticity and hence improve predictive performance, yet negative flows cannot be avoided. CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
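The Box-Cox transformation (BC) referred to above has a standard closed form; the sketch below is that textbook transform only, not the authors' residual-error model, with the exponent lambda left as a free parameter:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox power transform of positive data; lam = 0 reduces to the
    natural log, the usual variance-stabilizing limit."""
    y = np.asarray(y, dtype=float)
    if abs(lam) < 1e-12:
        return np.log(y)
    return (y**lam - 1.0) / lam

def boxcox_inverse(z, lam):
    """Back-transform, e.g. to return predictions to streamflow units."""
    z = np.asarray(z, dtype=float)
    if abs(lam) < 1e-12:
        return np.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)
```

In a heteroscedastic setting the model is calibrated on boxcox(y, lam) so that residuals are closer to homoscedastic, and predictions are mapped back to flow units with boxcox_inverse.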
Simulation of Vortex Structure in Supersonic Free Shear Layer Using Pse Method
NASA Astrophysics Data System (ADS)
Guo, Xin; Wang, Qiang
The method of parabolized stability equations (PSE) is applied to the analysis of nonlinear stability and the simulation of flow structure in a supersonic free shear layer. High-accuracy numerical techniques, including a self-similar basic flow, a high-order differential method, and appropriate transformation and decomposition of the nonlinear terms, are adopted and developed to solve the PSE effectively for the free shear layer. The spatially evolving unstable waves which dominate the flow structure are investigated through nonlinear coupled spatial marching methods. The nonlinear interactions between harmonic waves are further analyzed, and instantaneous flow fields are obtained by adding the harmonic waves to the basic flow. The results agree well with DNS data. They demonstrate that the T-S wave does not keep growing exponentially as in the linear evolution; energy is transferred to higher-order harmonic modes, and all harmonic modes eventually saturate due to the nonlinear interaction. Mean flow distortion, produced by the nonlinear interaction between a harmonic and its conjugate, greatly changes the average flow and increases the thickness of the shear layer. The PSE method can capture the large-scale nonlinear flow structures in the supersonic free shear layer, such as vortex roll-up, vortex pairing and nonlinear saturation.
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
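The conditional expectation mentioned above has a well-known closed form under the exponential-normal convolution model; the sketch below evaluates that formula only, taking the normal mean mu, standard deviation sigma and exponential rate alpha as given (their estimation is the subject of the paper and is not reproduced here):

```python
import math

def _phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def background_correct(x, mu, sigma, alpha):
    """E[signal | observed PM intensity x] when the observation is the
    sum of an Exponential(alpha) signal and Normal(mu, sigma^2) noise."""
    a = x - mu - sigma * sigma * alpha
    num = _phi(a / sigma) - _phi((x - a) / sigma)
    den = _Phi(a / sigma) + _Phi((x - a) / sigma) - 1.0
    return a + sigma * num / den
```

Because the corrected value is a conditional expectation rather than a naive subtraction, it stays strictly positive even when the observed intensity is close to the background level.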
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises drill core data collection, karst cave stochastic model generation, SLIDE simulation and bisection-method optimization. Borehole investigations are performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution model, but the length of the carbonatite does not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and the carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
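The two sampling schemes named above are standard Monte Carlo building blocks; a minimal sketch (illustrative only, not the KCSMG code, with the fitted densities left abstract) might be:

```python
import math
import random

def sample_exponential(lam, rng=random):
    """Inverse transform method: if U ~ Uniform(0,1) then
    X = -ln(1-U)/lam has the Exp(lam) distribution, e.g. for
    negative-exponentially distributed karst cave lengths."""
    return -math.log(1.0 - rng.random()) / lam

def sample_rejection(f, a, b, fmax, rng=random):
    """Acceptance-rejection method for any density f bounded by fmax
    on [a, b], e.g. an empirically fitted carbonatite-length density:
    propose uniformly on [a, b], accept with probability f(x)/fmax."""
    while True:
        x = a + (b - a) * rng.random()
        if rng.random() * fmax <= f(x):
            return x
```

Acceptance-rejection is the natural fallback when, as for the carbonatite lengths here, no standard distribution fits and only a tabulated or empirical density is available.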
Cole-Davidson dynamics of simple chain models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dotson, Taylor C.; McCoy, John Dwane; Adolf, Douglas Brian
2008-10-01
Rotational relaxation functions of the end-to-end vector of short, freely jointed and freely rotating chains were determined from molecular dynamics simulations. The associated response functions were obtained from the one-sided Fourier transform of the relaxation functions. The Cole-Davidson function was used to fit the response functions, with extensive use being made of Cole-Cole plots in the fitting procedure. For the systems studied, the Cole-Davidson function provided remarkably accurate fits [as compared to the transform of the Kohlrausch-Williams-Watts (KWW) function]. The only appreciable deviations from the simulation results were in the high frequency limit and were due to ballistic or free rotation effects. The accuracy of the Cole-Davidson function appears to be the result of the transition in the time domain from stretched exponential behavior at intermediate time to single exponential behavior at long time. Such a transition can be explained in terms of a distribution of relaxation times with a well-defined longest relaxation time. Since the Cole-Davidson distribution has a sharp cutoff in relaxation time (while the KWW function does not), it makes sense that the Cole-Davidson would provide a better frequency-domain description of the associated response function than the KWW function does.
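For reference, the Cole-Davidson response function has a simple closed form in the frequency domain; the sketch below evaluates it (tau and beta are free parameters here), and beta = 1 recovers the Debye response of a single exponential:

```python
import numpy as np

def cole_davidson(omega, tau, beta):
    """Cole-Davidson frequency-domain response 1/(1 + i*omega*tau)**beta,
    the one-sided Fourier transform of the corresponding relaxation
    function; beta = 1 is the Debye (single-exponential) limit."""
    return (1.0 + 1j * np.asarray(omega) * tau) ** (-beta)
```

A Cole-Cole plot is then simply the imaginary part of this response plotted against its real part over a frequency sweep.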
1996-09-16
approaches are:
• Adaptive filtering
• Single exponential smoothing (Brown, 1963)
• The Box-Jenkins methodology (ARIMA modeling) (Box and Jenkins, 1976)
• Linear exponential smoothing: Holt's two-parameter approach (Holt et al., 1960)
• Winters' three-parameter method (Winters, 1960)
However, ARIMA modeling has two very crucial disadvantages. The most important point in ARIMA modeling is model identification. As shown in
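Two of the smoothing approaches listed are simple enough to sketch directly; the snippet below shows Brown's single exponential smoothing and Holt's two-parameter method (illustrative textbook forms, with the smoothing constants alpha and beta left free):

```python
def single_exponential_smoothing(x, alpha):
    """Brown's method: s_t = alpha*x_t + (1 - alpha)*s_{t-1},
    initialized with the first observation."""
    s = [float(x[0])]
    for xt in x[1:]:
        s.append(alpha * xt + (1.0 - alpha) * s[-1])
    return s

def holt_linear(x, alpha, beta):
    """Holt's two-parameter method: separate exponential smoothing of
    the level and the trend, so linear trends are tracked exactly."""
    level, trend = float(x[0]), float(x[1] - x[0])
    out = [level]
    for xt in x[1:]:
        prev = level
        level = alpha * xt + (1.0 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1.0 - beta) * trend
        out.append(level)
    return out
```

Brown's method lags behind trending data, which is exactly the weakness Holt's trend term corrects; Winters' method adds a third equation for seasonality.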
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
Immittance Data Validation by Kramers‐Kronig Relations – Derivation and Implications
2017-01-01
Abstract Explicitly based on causality, linearity (superposition) and stability (time invariance), and implicitly on continuity (consistency), finiteness (convergence) and uniqueness (single-valuedness) in the time domain, Kramers-Kronig (KK) integral transform (KKT) relations for immittances are derived as pure mathematical constructs in the complex frequency domain using the two-sided (bilateral) Laplace integral transform (LT), reduced to the Fourier domain for sufficiently rapidly exponentially decaying, bounded immittances. Novel anti-KK relations are also derived to distinguish LTI (linear, time-invariant) systems from non-linear, unstable and acausal systems. All relations can be used to test KK transformability on the LTI principles of linearity, stability and causality of measured and model data by Fourier transform (FT) in immittance spectroscopy (IS). Integral transform relations are also provided to estimate (conjugate) immittances at zero and infinite frequency, particularly useful for normalising and comparing data. Finally, important implications for IS are presented and suggestions for consistent data analysis are made, which generally apply likewise to complex-valued quantities in many fields of engineering and the natural sciences. PMID:29577007
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
Chen, Weifeng; Wu, Weijing; Zhou, Lei; Xu, Miao; Wang, Lei; Peng, Junbiao
2018-01-01
A semi-analytical extraction method for the interface and bulk density of states (DOS) is proposed, using the low-frequency capacitance–voltage characteristics and current–voltage characteristics of indium zinc oxide thin-film transistors (IZO TFTs). In this work, an exponential potential distribution along the depth direction of the active layer is assumed and confirmed by numerical solution of Poisson's equation followed by device simulation. The interface DOS is obtained as a superposition of constant deep states and exponential tail states. Moreover, it is shown that the bulk DOS may be represented by the superposition of exponential deep states and exponential tail states. The extracted values of bulk DOS and interface DOS are further verified by comparing the measured transfer and output characteristics of IZO TFTs with simulation results from the 2D device simulator ATLAS (Silvaco). The proposed extraction method may thus be useful for diagnosing and characterising metal oxide TFTs, since it extracts the interface and bulk DOS simultaneously and quickly. PMID:29534492
Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W
1988-04-22
Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
Mortier, Eric; Simon, Yorick; Dahoun, Abdelsellam; Gerdolle, David
2009-01-01
The purpose of this study was to evaluate the influence of the photopolymerization mode of a light emitting diode (LED) lamp on the curing contraction kinetics and degree of conversion of 3 resin-based restorative materials. The curing contraction kinetics of Admira (ADM), Filtek P60 (P60), and Filtek Flow (FLO) were measured by the glass slide method. The materials were exposed to light from a 1,000 mW/cm^2 LED lamp (Elipar Freelight 2) in 3 modes: 2 continuous modes of 20 and 40 seconds (C20 and C40), and 1 exponential mode (E20; 5 seconds of exponential power increase followed by 15 seconds of maximum intensity). The degree of conversion (DC) was measured for each material and each mode by Fourier transform infrared spectrometry. P60 had the significantly lowest final contraction and FLO the highest among all light exposure modes. The C20 and C40 modes did not produce any difference in contraction or degree of conversion. The E20 mode led to a significant slowing of contraction speed combined with greater final contraction. Use of an LED lamp (1,000 mW/cm^2) in continuous mode halves the exposure time for identical curing shrinkage and degree of conversion.
Efficient and effective pruning strategies for health data de-identification.
Prasser, Fabian; Kohlmayer, Florian; Kuhn, Klaus A
2016-04-30
Privacy must be protected when sensitive biomedical data is shared, e.g. for research purposes. Data de-identification is an important safeguard, where datasets are transformed to meet two conflicting objectives: minimizing re-identification risks while maximizing data quality. Typically, de-identification methods search a solution space of possible data transformations to find a good solution to a given de-identification problem. In this process, parts of the search space must be excluded to maintain scalability. The set of transformations which are solution candidates is typically narrowed down by storing the results obtained during the search process and then using them to predict properties of the output of other transformations in terms of privacy (first objective) and data quality (second objective). However, due to the exponential growth of the size of the search space, previous implementations of this method are not well-suited when datasets contain many attributes which need to be protected. As this is often the case with biomedical research data, e.g. as a result of longitudinal collection, we have developed a novel method. Our approach combines the mathematical concept of antichains with a data structure inspired by prefix trees to represent properties of a large number of data transformations while requiring only a minimal amount of information to be stored. To analyze the improvements which can be achieved by adopting our method, we have integrated it into an existing algorithm and we have also implemented a simple best-first branch and bound search (BFS) algorithm as a first step towards methods which fully exploit our approach. We have evaluated these implementations with several real-world datasets and the k-anonymity privacy model. When integrated into existing de-identification algorithms for low-dimensional data, our approach reduced memory requirements by up to one order of magnitude and execution times by up to 25 %. 
This allowed us to increase the size of solution spaces which could be processed by almost a factor of 10. When using the simple BFS method, we were able to further increase the size of the solution space by a factor of three. When used as a heuristic strategy for high-dimensional data, the BFS approach outperformed a state-of-the-art algorithm by up to 12 % in terms of the quality of output data. This work shows that implementing methods of data de-identification for real-world applications is a challenging task. Our approach solves a problem often faced by data custodians: a lack of scalability of de-identification software when used with datasets having realistic schemas and volumes. The method described in this article has been implemented into ARX, an open source de-identification software for biomedical data.
NASA Astrophysics Data System (ADS)
Ebaid, Abdelhalim; Wazwaz, Abdul-Majid; Alali, Elham; Masaedeh, Basem S.
2017-03-01
Very recently, it was observed that the temperature of nanofluids is ultimately governed by second-order ordinary differential equations with variable coefficients of exponential order. Such coefficients were then transformed to polynomial type by using new independent variables. In this paper, a class of second-order ordinary differential equations with variable coefficients of polynomial type has been solved analytically. The analytical solution is expressed in terms of a hypergeometric function with generalized parameters. Moreover, the present results have been applied to some selected nanofluid problems in the literature, whose exact solutions are recovered as special cases of our generalized analytical solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan
2018-02-01
X-ray photon correlation spectroscopy (XPCS) and dynamic light scattering (DLS) reveal materials dynamics using coherent scattering, with XPCS permitting the investigation of dynamics in a more diverse array of materials than DLS. Heterogeneous dynamics occur in many material systems. The authors' recent work has shown how classic tools employed in the DLS analysis of heterogeneous dynamics can be extended to XPCS, revealing additional information that conventional Kohlrausch exponential fitting obscures. The present work describes the software implementation of inverse transform analysis of XPCS data. This software, called CONTIN XPCS, is an extension of traditional CONTIN analysis and accommodates the various dynamics encountered in equilibrium XPCS measurements.
One- and two-center ETF-integrals of first order in relativistic calculation of NMR parameters
NASA Astrophysics Data System (ADS)
Slevinsky, R. M.; Temga, T.; Mouattamid, M.; Safouhi, H.
2010-06-01
The present work focuses on the analytical and numerical developments of first-order integrals involved in the relativistic calculation of the shielding tensor using exponential-type functions as a basis set of atomic orbitals. For the analytical development, we use the Fourier integral transformation and practical properties of spherical harmonics and the Rayleigh expansion of the plane wavefunctions. The Fourier transforms of the operators were derived in previous work and they are used for analytical development. In both the one- and two-center integrals, Cauchy's residue theorem is used in the final developments of the analytical expressions, which are shown to be accurate to machine precision.
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. 
Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
Protein structural similarity search by Ramachandran codes
Lo, Wei-Cheng; Huang, Po-Jung; Chang, Chih-Hung; Lyu, Ping-Chiang
2007-01-01
Background Protein structural data has increased exponentially, such that fast and accurate tools are necessary for structure similarity searches. To improve search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, accuracy is usually sacrificed and the speed is still unable to match that of sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Classical sequence similarity search methods can then be applied to the structural similarity search. Its accuracy is similar to that of Combinatorial Extension (CE), and it runs over 243,000 times faster, searching 34,000 proteins in 0.34 s with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented as a web service and a stand-alone Java program able to run on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. Such search tools should be applicable to automated, high-throughput functional annotation and prediction for the ever-increasing number of published protein structures in this post-genomic era. PMID:17716377
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as the sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly assumed to be 1 for low-intensity exercise and 2 for high-intensity exercise. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials in the VO2 on-kinetics, especially in the case of high-intensity exercise. Our objectives are, first, to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we evaluate, on simulated data, the performance of simulated annealing for estimating the model parameters and of information criteria for selecting the order. The simulated data are generated with both single-exponential and double-exponential models and corrupted by additive white Gaussian noise. Performance is reported at various signal-to-noise ratios (SNR). Regarding parameter estimation, the results show that the confidence in the estimated parameters improves as the SNR of the fitted response increases. Regarding model selection, the results show that information criteria are suitable statistical criteria for selecting the number of exponentials.
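The order-selection step described above can be sketched numerically. Below is a minimal illustration (not the authors' code: the delay terms are omitted for brevity, the data are synthetic, and all names are ours) of fitting single- and double-exponential models and choosing the order with the Akaike information criterion:

```python
# Hedged sketch of exponential model-order selection by information criterion.
# Assumptions: simplified rise models without time delays; synthetic noisy data.
import numpy as np
from scipy.optimize import curve_fit

def mono(t, a0, a1, tau1):
    # single-exponential rise toward a0 + a1
    return a0 + a1 * (1.0 - np.exp(-t / tau1))

def bi(t, a0, a1, tau1, a2, tau2):
    # sum of two exponential rises with distinct time constants
    return a0 + a1 * (1.0 - np.exp(-t / tau1)) + a2 * (1.0 - np.exp(-t / tau2))

def aic(y, yhat, k):
    # Akaike information criterion for Gaussian residuals, k = number of parameters
    n = len(y)
    return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * k

rng = np.random.default_rng(0)
t = np.arange(0.0, 360.0, 1.0)
truth = bi(t, 0.5, 1.5, 20.0, 1.0, 120.0)    # double-exponential ground truth
y = truth + rng.normal(0.0, 0.02, t.size)    # additive white Gaussian noise

p1, _ = curve_fit(mono, t, y, p0=[0.5, 2.0, 40.0])
p2, _ = curve_fit(bi, t, y, p0=[0.4, 1.0, 15.0, 0.8, 100.0], maxfev=20000)
order = 1 if aic(y, mono(t, *p1), 3) < aic(y, bi(t, *p2), 5) else 2
```

On data generated by a genuinely double-exponential response, the AIC penalty for the two extra parameters is far outweighed by the fit improvement, so order 2 is selected.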
Zeng, Qiang; Shi, Feina; Zhang, Jianmin; Ling, Chenhan; Dong, Fei; Jiang, Biao
2018-01-01
Purpose: To present a new modified tri-exponential model for diffusion-weighted imaging (DWI) that detects the strictly diffusion-limited compartment, and to compare it with the conventional bi- and tri-exponential models. Methods: Multi-b-value DWI with 17 b-values up to 8,000 s/mm² was performed on six volunteers. The corrected Akaike information criterion (AICc) and squared prediction errors (SPE) were calculated to compare the three models. Results: The mean f0 values ranged from 11.9 to 18.7% in white matter ROIs and from 1.2 to 2.7% in gray matter ROIs. In all white matter ROIs, the AICc of the modified tri-exponential model was the lowest (p < 0.05 for five ROIs), indicating that the new model has the best fit among the three; the SPE of the bi-exponential model was the highest (p < 0.05), suggesting that the bi-exponential model cannot predict the signal intensity at ultra-high b-values. The mean ADCvery-slow values were extremely low in white matter (1–7 × 10⁻⁶ mm²/s) but not in gray matter (251–445 × 10⁻⁶ mm²/s), indicating that the conventional tri-exponential model fails to represent a distinct compartment. Conclusions: The strictly diffusion-limited compartment may be an important component of white matter. The new model fits better than the other two models and may provide additional information. PMID:29535599
Optimal mode transformations for linear-optical cluster-state generation
Uskov, Dmitry B.; Lougovski, Pavel; Alsing, Paul M.; ...
2015-06-15
In this paper, we analyze the generation of linear-optical cluster states (LOCSs) via sequential addition of one and two qubits. Existing approaches employ the stochastic linear-optical two-qubit controlled-Z (CZ) gate with a success rate of 1/9 per operation. The question of the optimality of the CZ gate with respect to LOCS generation has remained open. We report that there are alternative schemes to the CZ gate that are exponentially more efficient, and we show that sequential LOCS growth is indeed globally optimal. We find that the optimal cluster growth operation is a state transformation on a subspace of the full Hilbert space. Finally, we show that the maximal success rate of postselected entanglement of n photonic qubits or m Bell pairs into a cluster is (1/2)^(n-1) and (1/4)^(m-1), respectively, with no ancilla photons, and we give an explicit optical description of the optimal mode transformations.
Reductive dehalogenation of carbon tetrachloride by Escherichia coli K-12
DOE Office of Scientific and Technical Information (OSTI.GOV)
Criddle, C.S.; DeWitt, J.T.; McCarty, P.L.
1990-11-01
The formation of radicals from carbon tetrachloride (CT) is often invoked to explain the product distribution resulting from its transformation. Radicals formed by reduction of CT presumably react with constituents of the surrounding milieu to give the observed product distribution. The patterns of transformation observed in this work were consistent with such a hypothesis. In cultures of Escherichia coli K-12, the pathways and rates of CT transformation were dependent on the electron acceptor condition of the media. Use of oxygen and nitrate as electron acceptors generally prevented CT metabolism. At low oxygen levels (~1%), however, transformation of [¹⁴C]CT to ¹⁴CO₂ and attachment to cell material did occur, in accord with reports of CT fate in mammalian cell cultures. Under fumarate-respiring conditions, [¹⁴C]CT was recovered as ¹⁴CO₂, chloroform, and a nonvolatile fraction. In contrast, fermenting conditions resulted in more chloroform, more cell-bound ¹⁴C, and almost no ¹⁴CO₂. Rates of CT transformation were faster under fermenting conditions than under fumarate-respiring conditions. Transformation rates also decreased over time, suggesting the gradual exhaustion of transformation activity. This loss was modeled with a simple exponential decay term.
NASA Astrophysics Data System (ADS)
Tønning, Erik; Polders, Daniel; Callaghan, Paul T.; Engelsen, Søren B.
2007-09-01
This paper demonstrates how the multi-linear PARAFAC model can be used to advantage to decompose 2D diffusion-relaxation correlation NMR spectra prior to 2D Laplace inversion to the T₂-D domain. The decomposition aids the interpretation of the complex correlation maps as well as the quantification of the extracted T₂-D components. To demonstrate the new method, seventeen mixtures of wheat flour, starch, gluten, oil and water were prepared and measured with a 300 MHz nuclear magnetic resonance (NMR) spectrometer using a pulsed gradient stimulated echo (PGSTE) pulse sequence followed by a Carr-Purcell-Meiboom-Gill (CPMG) echo train. By varying the gradient strength, 2D diffusion-relaxation data were recorded for each sample. From these doubly exponentially decaying relaxation data, the PARAFAC algorithm extracted two unique diffusion-relaxation components, explaining 99.8% of the variation in the data set. These two components were subsequently transformed to the T₂-D domain using 2D inverse Laplace transformation and quantitatively assigned to the oil and water components of the samples. The oil component was one distinct distribution with peak intensity at D = 3 × 10⁻¹² m² s⁻¹ and T₂ = 180 ms. The water component consisted of two broad populations of water molecules with diffusion coefficients and relaxation times centered around the correlation pairs D = 10⁻⁹ m² s⁻¹, T₂ = 10 ms and D = 3 × 10⁻¹³ m² s⁻¹, T₂ = 13 ms. Small spurious peaks observed in the inverse Laplace transformation of the original complex data were effectively filtered by the PARAFAC decomposition and thus considered artefacts of the complex Laplace transformation. The oil-to-water ratio determined by PARAFAC followed by 2D Laplace inversion was perfectly correlated with the known oil-to-water ratio of the samples.
The new method of using PARAFAC prior to 2D Laplace inversion proved to have superior potential in the analysis of diffusion-relaxation spectra, as it improves not only the interpretation but also the quantification.
NASA Astrophysics Data System (ADS)
Mahanthesh, B.; Gireesha, B. J.; Shashikumar, N. S.; Hayat, T.; Alsaedi, A.
2018-06-01
The present work investigates the effects of an exponential space-dependent heat source (ESHS) and cross-diffusion on Marangoni convective heat and mass transfer flow due to an infinite disk. The flow analysis includes magnetohydrodynamic (MHD) effects, and Joule heating, viscous dissipation and solar radiation are also taken into account. The thermal and solute fields on the disk surface vary in a quadratic manner. Ordinary differential equations are obtained by applying the Von Kármán transformations, and the resulting problem is solved numerically via a Runge-Kutta-Fehlberg-based shooting scheme. The effects of the pertinent flow parameters are explored through graphical illustrations. The results show that the ESHS effect dominates the thermally dependent heat source effect on thermal boundary layer growth. The concentration and temperature distributions and their associated layer thicknesses are enhanced by the Marangoni effect.
Improving deep convolutional neural networks with mixed maxout units.
Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue
2017-01-01
Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
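The mixout construction described above can be sketched in a few lines. This is our reading of the idea, not the authors' implementation (all names are ours): the "exponential probabilities" are a softmax over k parallel feature maps, and a Bernoulli gate balances the maxout output against the softmax-weighted expectation.

```python
# Illustrative mixout-unit sketch (assumed details, not the paper's code):
# mix the elementwise max of k parallel feature maps with their
# softmax-weighted expectation, gated by a Bernoulli draw.
import numpy as np

def mixout(features, p=0.5, rng=None):
    """features: array of shape (k, H, W) -- k parallel feature maps."""
    rng = rng or np.random.default_rng()
    w = np.exp(features - features.max(axis=0))    # numerically stable softmax over k
    w /= w.sum(axis=0)
    expected = (w * features).sum(axis=0)          # expectation under softmax weights
    maximal = features.max(axis=0)                 # classical maxout output
    gate = rng.binomial(1, p, size=maximal.shape)  # Bernoulli balance, per position
    return gate * maximal + (1 - gate) * expected

maps = np.random.default_rng(1).normal(size=(4, 8, 8))
out = mixout(maps, p=0.5)
```

Because the expectation is a convex combination of the k maps, the mixout output never exceeds the maxout output at any position.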
New dimensions for wound strings: The modular transformation of geometry to topology
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGreevy, John; Silverstein, Eva; Starr, David
2007-02-15
We show, using a theorem of Milnor and Margulis, that string theory on compact negatively curved spaces grows new effective dimensions as the space shrinks, generalizing and contextualizing the results of E. Silverstein, Phys. Rev. D 73, 086004 (2006). Milnor's theorem relates negative sectional curvature on a compact Riemannian manifold to exponential growth of its fundamental group, which translates in string theory to a higher effective central charge arising from winding strings. This exponential density of winding modes is related by modular invariance to the infrared small perturbation spectrum. Using self-consistent approximations valid at large radius, we analyze this correspondence explicitly in a broad set of time-dependent solutions, finding precise agreement between the effective central charge and the corresponding infrared small perturbation spectrum. This indicates a basic relation between geometry, topology, and dimensionality in string theory.
NASA Astrophysics Data System (ADS)
Rana, B. M. Jewel; Ahmed, Rubel; Ahmmed, S. F.
2017-06-01
An analysis is carried out to investigate the effects of variable viscosity, thermal radiation, radiation absorption and cross diffusion past an inclined, exponentially accelerated plate under the influence of variable heat and mass transfer. A set of suitable transformations is used to obtain the non-dimensional coupled governing equations, which are solved by an explicit finite difference technique. Stability and convergence of the finite difference scheme are established, and Compaq Visual Fortran 6.6a is used to compute the numerical results. The effects of various physical parameters on the fluid velocity, temperature, concentration, skin friction coefficient, heat transfer rate, mass transfer rate, streamlines and isotherms are presented graphically and discussed in detail.
NASA Astrophysics Data System (ADS)
Golub, V. P.; Pavlyuk, Ya. V.; Fernati, P. V.
2013-03-01
The parameters of fractional-exponential hereditary kernels for nonlinear viscoelastic materials are determined. Methods for determining the parameters used in the third-order theory of viscoelasticity and in nonlinear theories based on the similarity of primary creep curves and of isochronous creep curves are analyzed. The parameters of fractional-exponential hereditary kernels are determined and tested against experimental data for microplastic and for TC-8/3-250 and SVAM glass-reinforced plastics. The results (tables and plots) are analyzed.
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method in which the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants that are, on average, more precise than least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters.
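The filtering aspect of the Legendre-domain idea can be sketched as follows. This is a minimal illustration under our own assumptions (basis order, noise level and variable names are ours, not the authors'): projecting a noisy exponential onto a low-order Legendre basis discards the high-order coefficients that carry mostly noise.

```python
# Hedged sketch: low-order Legendre projection as a phase-free noise filter
# for an exponential decay. Assumed details: degree-10 basis, synthetic data.
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 500)
x = 2.0 * t - 1.0                       # map time onto the Legendre interval [-1, 1]
clean = np.exp(-5.0 * t)
noisy = clean + rng.normal(0.0, 0.1, t.size)

coeffs = L.legfit(x, noisy, deg=10)     # least-squares projection onto P_0..P_10
filtered = L.legval(x, coeffs)          # reconstruction = low-pass filtered signal

rms_before = np.sqrt(np.mean((noisy - clean) ** 2))
rms_after = np.sqrt(np.mean((filtered - clean) ** 2))
```

The smooth exponential is captured almost entirely by the first few Legendre coefficients, so the reconstruction error drops well below the raw noise level, with no phase shift of the kind a causal lowpass filter would introduce.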
Krueger, W B; Kolodziej, B J
1976-01-01
Both atomic absorption spectrophotometry (AAS) and neutron activation analysis were utilized to determine cellular Cu levels in Bacillus megaterium ATCC 19213. Both methods were selected for their sensitivity in detecting nanogram quantities of Cu. Data from both methods demonstrated identical patterns of Cu uptake during exponential growth and sporulation. Late exponential phase cells contained less Cu than postexponential t2 cells, while t5 cells contained amounts equivalent to exponential cells. The t11 phase-bright forespore-containing cells had a higher Cu content than those of earlier time periods, and the free spores had the highest Cu content. Analysis of the culture medium by AAS corroborated these data by showing concomitant Cu uptake during exponential growth and into the t2 postexponential phase of sporulation. From t2 to t4, Cu egressed from the cells, followed by a secondary uptake during the maturation of phase-dark forespores into phase-bright forespores (t6–t9).
Integrability: mathematical methods for studying solitary waves theory
NASA Astrophysics Data System (ADS)
Wazwaz, Abdul-Majid
2014-03-01
In recent decades, substantial experimental research efforts have been devoted to linear and nonlinear physical phenomena. In particular, studies of integrable nonlinear equations in solitary waves theory have attracted intensive interest from mathematicians, with the principal goal of fostering the development of new methods, and physicists, who are seeking solutions that represent physical phenomena and to form a bridge between mathematical results and scientific structures. The aim for both groups is to build up our current understanding and facilitate future developments, develop more creative results and create new trends in the rapidly developing field of solitary waves. The notion of the integrability of certain partial differential equations occupies an important role in current and future trends, but a unified rigorous definition of the integrability of differential equations still does not exist. For example, an integrable model in the Painlevé sense may not be integrable in the Lax sense. The Painlevé sense indicates that the solution can be represented as a Laurent series in powers of some function that vanishes on an arbitrary surface with the possibility of truncating the Laurent series at finite powers of this function. The concept of Lax pairs introduces another meaning of the notion of integrability. The Lax pair formulates the integrability of nonlinear equation as the compatibility condition of two linear equations. However, it was shown by many researchers that the necessary integrability conditions are the existence of an infinite series of generalized symmetries or conservation laws for the given equation. The existence of multiple soliton solutions often indicates the integrability of the equation but other tests, such as the Painlevé test or the Lax pair, are necessary to confirm the integrability for any equation. 
In the context of completely integrable equations, studies are flourishing because these equations are able to describe the real features in a variety of vital areas in science, technology and engineering. In recognition of the importance of solitary waves theory and the underlying concept of integrable equations, a variety of powerful methods have been developed to carry out the required analysis. Examples of such methods which have been advanced are the inverse scattering method, the Hirota bilinear method, the simplified Hirota method, the Bäcklund transformation method, the Darboux transformation, the Pfaffian technique, the Painlevé analysis, the generalized symmetry method, the subsidiary ordinary differential equation method, the coupled amplitude-phase formulation, the sine-cosine method, the sech-tanh method, the mapping and deformation approach and many other new methods. The inverse scattering method, viewed as a nonlinear analogue of the Fourier transform method, is a powerful approach that demonstrates the existence of soliton solutions through intensive computations. At the center of the theory of integrable equations lie the bilinear forms and Hirota's direct method, which can be used to obtain soliton solutions by using exponentials. The Bäcklund transformation method is a useful invariant transformation that transforms one solution of a differential equation into another. The Darboux transformation method is a well-known tool in the theory of integrable systems. It is believed that there is a connection between the Bäcklund transformation and the Darboux transformation, but it has not yet been established. Archetypes of integrable equations are the Korteweg-de Vries (KdV) equation, the modified KdV equation, the sine-Gordon equation, the Schrödinger equation, the Vakhnenko equation, the KdV6 equation, the Burgers equation, the fifth-order Lax equation and many others.
These equations yield soliton solutions, multiple soliton solutions, breather solutions, quasi-periodic solutions, kink solutions, homoclinic solutions and other solutions as well. The couplings of linear and nonlinear equations were recently discovered and subsequently received considerable attention. The concept of couplings forms a new direction for developing innovative construction methods. The recently obtained results in solitary waves theory highlight new approaches for additional creative ideas, promising further achievements and increased progress in this field. We are grateful to all of the authors who accepted our invitation to contribute to this comment section.
NASA Astrophysics Data System (ADS)
Ghanbari, Behzad; Inc, Mustafa
2018-04-01
The present paper suggests a novel technique for acquiring exact solutions of nonlinear partial differential equations. The main idea of the method is to generalize the exponential rational function method. To examine the ability of the method, we consider the resonant nonlinear Schrödinger equation (R-NLSE). Many variants of exact soliton solutions for the equation are derived by the proposed method, and physical interpretations of some of the obtained solutions are included. One can readily conclude that the new method is very efficient and finds exact solutions of the equation in a relatively straightforward way.
Numerical modeling of time-dependent bio-convective stagnation flow of a nanofluid in slip regime
NASA Astrophysics Data System (ADS)
Kumar, Rakesh; Sood, Shilpa; Shehzad, Sabir Ali; Sheikholeslami, Mohsen
A numerical investigation of the unsteady stagnation point flow of a bioconvective nanofluid due to an exponentially deforming surface is presented. The effects of Brownian diffusion, thermophoresis, slip velocity and thermal jump are incorporated in the nanofluid model. By utilizing similarity transformations, the highly nonlinear partial differential equations governing the nano-bioconvective boundary layer are reduced to an ordinary differential system. The resulting equations are solved numerically by a well-known implicit finite difference approach, the Keller-box method (KBM). The influence of the parameters involved (unsteadiness, bioconvection Schmidt number, velocity slip, thermal jump, thermophoresis, Schmidt number, Brownian motion, bioconvection Peclet number) on the distributions of velocity, temperature, nanoparticle concentration and motile microorganism density, the local skin-friction coefficient, the heat transport rate, the Sherwood number and the local density of motile microorganisms is exhibited through graphs and tables.
NASA Astrophysics Data System (ADS)
Zhai, Ding; Lu, Anyang; Li, Jinghao; Zhang, Qingling
2016-10-01
This paper addresses fault detection (FD) for continuous-time singular switched linear systems with multiple time-varying delays. Actuator faults are considered, and the system faults and unknown disturbances are assumed to lie in known frequency domains. Finite frequency performance indices are introduced to design switched FD filters that ensure the filtering augmented systems, under switching signals with average dwell time, are exponentially admissible while guaranteeing fault sensitivity and disturbance robustness. By developing the generalized Kalman-Yakubovich-Popov lemma and using Parseval's theorem and the Fourier transform, finite frequency delay-dependent sufficient conditions for the existence of such a filter, guaranteeing the finite-frequency H- and H∞ performance, are derived and formulated in terms of linear matrix inequalities. Four examples illustrate the effectiveness of the proposed finite frequency method.
Exponential Family Functional data analysis via a low-rank model.
Li, Gen; Huang, Jianhua Z; Shen, Haipeng
2018-05-08
In many applications, non-Gaussian data such as binary or count data are observed over a continuous domain, and there exists a smooth underlying structure describing such data. We develop a new functional data method for this kind of data when the observations are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution and that the matrix of canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only standard one-way functional data but also two-way (or bivariate) functional data. In addition, we introduce a new cross-validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods in a comprehensive simulation study. The proposed method is also applied to a real application, a UK mortality study, where the data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Huang, Juntao; Shu, Chi-Wang
2018-05-01
In this paper, we develop bound-preserving modified exponential Runge-Kutta (RK) discontinuous Galerkin (DG) schemes to solve scalar hyperbolic equations with stiff source terms by extending the idea in Zhang and Shu [43]. Exponential strong stability preserving (SSP) high order time discretizations are constructed and then modified to overcome the stiffness and preserve the bound of the numerical solutions. It is also straightforward to extend the method to two dimensions on rectangular and triangular meshes. Even though we only discuss the bound-preserving limiter for DG schemes, it can also be applied to high order finite volume schemes, such as weighted essentially non-oscillatory (WENO) finite volume schemes as well.
NASA Astrophysics Data System (ADS)
Hashemi, R.; Dudaryonok, A. S.; Lavrentieva, N. N.; Vandaele, A. C.; Vander Auwera, J.; Nikitin, A. V.; Tyuterev, Vl. G.; Sung, K.; Smith, M. A. H.; Devi, V. M.; Predoi-Cross, A.
2017-02-01
Two atmospheric trace gases, methane and carbon monoxide, are considered in this study. Fourier transform absorption spectra of the 2-0 band of ¹²C¹⁶O mixed with CO₂ were recorded at total pressures from 156 to 1212 hPa and at four temperatures between 240 K and 283 K. CO₂ pressure-induced line broadening and line shift coefficients, and their temperature dependence, were measured in a multi-spectrum non-linear least squares analysis using Voigt profiles with an asymmetric component due to line mixing. The measured CO₂-broadening and CO₂-shift parameters were compared with theoretical values calculated by collaborators. In addition, the CO₂-broadening and shift coefficients were calculated for individual temperatures using the Exponential Power Gap (EPG) semi-empirical method. We also discuss the retrieved line shape parameters for methane transitions in the spectral range known as the methane octad. We used high-resolution spectra of pure methane and of dilute mixtures of methane in dry air, recorded with high signal-to-noise ratio at temperatures between 148 K and room temperature using the Bruker IFS 125 HR Fourier transform spectrometer (FTS) at the Jet Propulsion Laboratory, Pasadena, California. Theoretical calculations of the line parameters were performed, and the results are compared with previously published values and with the line parameters available in the GEISA2015 [1] and HITRAN2012 [2] databases.
NASA Astrophysics Data System (ADS)
Thomson, C. J.
2005-10-01
Several observations are made concerning the numerical implementation of wide-angle one-way wave equations, using for illustration scalar waves obeying the Helmholtz equation in two space dimensions. This simple case permits clear identification of a sequence of physically motivated approximations of use when the mathematically exact pseudo-differential operator (PSDO) one-way method is applied. As intuition suggests, these approximations largely depend on the medium gradients in the direction transverse to the main propagation direction. A key point is that narrow-angle approximations are to be avoided in the interests of accuracy. Another key consideration stems from the fact that the so-called `standard-ordering' PSDO indicates how lateral interpolation of the velocity structure can significantly reduce computational costs associated with the Fourier or plane-wave synthesis lying at the heart of the calculations. A third important point is that the PSDO theory shows what approximations are necessary in order to generate an exponential one-way propagator for the laterally varying case, representing the intuitive extension of classical integral-transform solutions for a laterally homogeneous medium. This exponential propagator permits larger forward stepsizes. Numerical comparisons with Helmholtz (i.e. full) wave-equation finite-difference solutions are presented for various canonical problems. These include propagation along an interfacial gradient, the effects of a compact inclusion and the formation of extended transmitted and backscattered wave trains by model roughness. The ideas extend to the 3-D, generally anisotropic case and to multiple scattering by invariant embedding. It is concluded that the method is very competitive, striking a new balance between simplifying approximations and computational labour. Complicated wave-scattering effects are retained without the need for expensive global solutions, providing a robust and flexible modelling tool.
Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.
2016-01-01
Background Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e., non-Gaussian diffusion) at stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method; however, their relative merits for detecting tumor therapeutic response are not fully clear. Methods Conventional ADC and three widely used non-Gaussian models (bi-exponential, stretched exponential, and the statistical model) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results Tumor volume, histology, conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For every treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model was strongly or very strongly preferred. Conclusion Non-Gaussian DWI model-derived biomarkers can detect chemotherapeutic response earlier than conventional ADC and tumor volume. The bi-exponential model provided a better fit than the statistical and stretched exponential models for the tumor and treatment models used in the current work. PMID:27919785
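The model-comparison step above can be sketched as follows. This is a hedged illustration, not the study's pipeline: the b-value range, noise level, ground-truth parameters and names are all our assumptions. It fits mono- and bi-exponential models to a synthetic DWI decay and selects between them with BIC.

```python
# Hedged sketch: mono- vs bi-exponential fits to a synthetic DWI decay,
# compared by the Bayesian Information Criterion (assumed synthetic setup).
import numpy as np
from scipy.optimize import curve_fit

def mono(b, s0, adc):
    # conventional mono-exponential (ADC) model
    return s0 * np.exp(-b * adc)

def biexp(b, s0, f, d_fast, d_slow):
    # bi-exponential model: fast and slow diffusing fractions
    return s0 * (f * np.exp(-b * d_fast) + (1.0 - f) * np.exp(-b * d_slow))

def bic(y, yhat, k):
    # BIC for Gaussian residuals, k = number of parameters
    n = len(y)
    return n * np.log(np.sum((y - yhat) ** 2) / n) + k * np.log(n)

b = np.linspace(0.0, 3000.0, 17)                 # b-values in s/mm^2 (assumed)
rng = np.random.default_rng(4)
signal = biexp(b, 1.0, 0.7, 2.0e-3, 2.0e-4)      # assumed bi-exponential ground truth
noisy = signal + rng.normal(0.0, 0.005, b.size)

pm, _ = curve_fit(mono, b, noisy, p0=[1.0, 1.0e-3])
pb, _ = curve_fit(biexp, b, noisy, p0=[1.0, 0.5, 1.5e-3, 3.0e-4], maxfev=10000)
bic_mono = bic(noisy, mono(b, *pm), 2)
bic_bi = bic(noisy, biexp(b, *pb), 4)
```

When the underlying decay really has two compartments, the fit improvement overwhelms the BIC penalty for the two extra parameters, so the bi-exponential model is preferred.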
Modeling of magnitude distributions by the generalized truncated exponential distribution
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-01-01
The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
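Sampling from a truncated exponential magnitude distribution is a direct application of the inverse transform method. The sketch below uses an assumed parameterization (rate beta with support [m_min, m_max]; all names are ours) and is illustrative only:

```python
# Hedged sketch: inverse-transform sampling from a truncated exponential
# distribution (TED) of magnitudes, Gutenberg-Richter style. Assumed
# parameterization: density proportional to exp(-beta*(m - m_min)) on [m_min, m_max].
import numpy as np

def sample_ted(beta, m_min, m_max, size, rng=None):
    """Invert F(m) = (1 - exp(-beta*(m - m_min))) / (1 - exp(-beta*(m_max - m_min)))."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=size)
    c = 1.0 - np.exp(-beta * (m_max - m_min))     # normalizing truncation constant
    return m_min - np.log(1.0 - c * u) / beta

m = sample_ted(beta=2.0, m_min=4.0, m_max=8.0, size=100_000,
               rng=np.random.default_rng(3))
```

Setting u = 0 and u = 1 in the inverse CDF recovers m_min and m_max exactly, so every sample falls inside the truncation bounds by construction.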
Exponential Methods for the Time Integration of Schroedinger Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time for integrating the cubic nonlinear Schroedinger equation, with the aim of exploiting the special structure of this equation. We therefore examine the symmetry, symplecticity, and invariant-approximation properties of the proposed methods, which allow integration to long times with reasonable accuracy. Computational efficiency is also our aim; numerical comparisons of the methods considered lead us to conclude that explicit Lawson schemes projected onto the norm of the solution are an efficient tool for integrating this equation.
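The Lawson (integrating-factor) idea behind the schemes in the abstract can be shown on a scalar model problem. This is a sketch, not the authors' scheme: a second-order Lawson method (explicit midpoint in the transformed variable) applied to u' = λu + i|u|²u, the single-mode reduction of the cubic NLS, whose plane-wave solution is known exactly.

```python
import numpy as np

def lawson_rk2(u0, lam, nonlin, T, n_steps):
    """Second-order Lawson scheme for u' = lam*u + N(u).
    The substitution v = exp(-lam*t) * u removes the stiff linear part;
    explicit midpoint RK2 on v, written back in u variables, gives
    u_{n+1} = e^{lam h} u_n + h e^{lam h/2} N(e^{lam h/2}(u_n + (h/2) N(u_n)))."""
    h = T / n_steps
    u = u0
    E = np.exp(lam * h / 2)          # half-step linear propagator
    for _ in range(n_steps):
        k1 = nonlin(u)
        u_mid = E * (u + 0.5 * h * k1)
        u = E * E * u + h * E * nonlin(u_mid)
    return u

# Model problem: u' = 2i*u + i*|u|^2*u, u(0) = 1, exact u(T) = exp(3i*T)
u_T = lawson_rk2(1.0 + 0j, 2j, lambda u: 1j * abs(u) ** 2 * u,
                 T=1.0, n_steps=400)
exact = np.exp(3j)
```

Note that |u| is conserved by the exact flow; the projection onto the norm of the solution mentioned in the abstract would rescale u after each step to enforce this exactly.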
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
Characterization of radiation belt electron energy spectra from CRRES observations
NASA Astrophysics Data System (ADS)
Johnston, W. R.; Lindstrom, C. D.; Ginet, G. P.
2010-12-01
Energetic electrons in the outer radiation belt and the slot region exhibit a wide variety of energy spectral forms, more so than radiation belt protons. We characterize the spatial and temporal dependence of these forms using observations from the CRRES satellite Medium Electron Sensor A (MEA) and High-Energy Electron Fluxmeter (HEEF) instruments, together covering an energy range 0.15-8 MeV. Spectra were classified with two independent methods, data clustering and curve-fitting analyses, in each case defining categories represented by power law, exponential, and bump-on-tail (BOT) or other complex shapes. Both methods yielded similar results, with BOT, exponential, and power law spectra respectively dominating in the slot region, outer belt, and regions just beyond the outer belt. The transition from exponential to power law spectra occurs at higher L for lower magnetic latitude. The location of the transition from exponential to BOT spectra is highly correlated with the location of the plasmapause. In the slot region during the days following storm events, electron spectra were observed to evolve from exponential to BOT yielding differential flux minima at 350-650 keV and maxima at 1.5-2 MeV; such evolution has been attributed to energy-dependent losses from scattering by whistler hiss.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the covariates x_i are equal is strong and may fail to account for overdispersion, i.e., variability of the rate parameter such that the variance exceeds the mean. Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed significant inherent overdispersion (p-value <0.001); however, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
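The equidispersion assumption the abstract questions can be checked with a Pearson dispersion statistic. A minimal sketch on simulated counts (intercept-only model; the gamma-heterogeneity construction is a standard way to generate overdispersed, i.e. negative binomial, counts and is not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n, mu = 5000, 4.0

# Equidispersed counts: plain Poisson with mean mu
y_pois = rng.poisson(mu, n)

# Overdispersed counts: Poisson rates with gamma heterogeneity
# (marginally negative binomial; variance = mu + mu^2/shape > mu)
rates = rng.gamma(shape=2.0, scale=mu / 2.0, size=n)
y_over = rng.poisson(rates)

def pearson_dispersion(y):
    """Pearson chi-square over residual degrees of freedom for an
    intercept-only Poisson model: ~1 if equidispersed, >1 if overdispersed."""
    mu_hat = y.mean()
    return np.sum((y - mu_hat) ** 2 / mu_hat) / (y.size - 1)

d_pois = pearson_dispersion(y_pois)   # close to 1
d_over = pearson_dispersion(y_over)   # well above 1
```

A dispersion estimate well above 1, as for `y_over` here, is the signal that quasi-likelihood, robust standard errors, or a negative binomial model should be considered, as the abstract recommends.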
NASA Technical Reports Server (NTRS)
Leybold, H. A.
1971-01-01
Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
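Transforming uniform random numbers into draws from a target distribution, as described above, is classically done with the inverse transform method. A minimal sketch for two of the named distributions (exponential and Weibull; rate/shape/scale values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def exponential_inverse_transform(n, rate, rng):
    """If U ~ Uniform(0,1), then X = -ln(U)/rate is exponential(rate)."""
    u = rng.uniform(size=n)
    return -np.log(u) / rate

def weibull_inverse_transform(n, shape, scale, rng):
    """F(x) = 1 - exp(-(x/scale)^shape)  =>  X = scale*(-ln(U))^(1/shape)."""
    u = rng.uniform(size=n)
    return scale * (-np.log(u)) ** (1.0 / shape)

x = exponential_inverse_transform(100_000, rate=2.0, rng=rng)   # mean 1/2
w = weibull_inverse_transform(100_000, shape=1.5, scale=1.0, rng=rng)
```

The sample means converge to the theoretical values 1/rate and scale*Γ(1 + 1/shape), a quick sanity check on the transformation.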
Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, L.L.; Hendricks, J.S.
1983-01-01
The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
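The variance reduction from biasing the distance-to-collision kernel can be seen in a toy deep-penetration problem. This is a sketch of the exponential transform on a purely absorbing slab (geometry, cross-section, and stretching parameter p are illustrative assumptions, far simpler than the transport problems in the report):

```python
import numpy as np

rng = np.random.default_rng(7)
sigma, d, n = 1.0, 10.0, 200_000       # optical thickness sigma*d = 10
true_T = np.exp(-sigma * d)            # ~4.5e-5 transmission: deep penetration

# Analog sampling: path length ~ Exp(sigma); almost no particle crosses
s = rng.exponential(1.0 / sigma, n)
analog = (s > d).astype(float)

# Exponential transform: sample from the stretched kernel with biased
# cross-section sigma* = (1-p)*sigma, and weight crossing particles by
# the ratio of true to biased survival probabilities, exp(-(sigma-sigma*)d)
p = 0.8
sigma_b = (1.0 - p) * sigma
s_b = rng.exponential(1.0 / sigma_b, n)
biased = np.where(s_b > d, np.exp(-(sigma - sigma_b) * d), 0.0)

est_analog, est_biased = analog.mean(), biased.mean()
var_analog, var_biased = analog.var(), biased.var()   # biased variance is far smaller
```

Both estimators are unbiased for the transmission probability, but the stretched-path estimator scores many weighted crossings instead of a handful of rare analog ones, which is the efficiency gain the abstract refers to.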
Effect of local minima on adiabatic quantum optimization.
Amin, M H S
2008-04-04
We present a perturbative method to estimate the spectral gap for adiabatic quantum optimization, based on the structure of the energy levels in the problem Hamiltonian. We show that, for problems that have an exponentially large number of local minima close to the global minimum, the gap becomes exponentially small making the computation time exponentially long. The quantum advantage of adiabatic quantum computation may then be accessed only via the local adiabatic evolution, which requires phase coherence throughout the evolution and knowledge of the spectrum. Such problems, therefore, are not suitable for adiabatic quantum computation.
Line transect estimation of population size: the exponential case with grouped data
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1979-01-01
Gates, Marshall, and Olson (1968) investigated the line transect method of estimating grouse population densities in the case where sighting probabilities are exponential. This work is followed by a simulation study in Gates (1969). A general overview of line transect analysis is presented by Burnham and Anderson (1976). These articles all deal with the ungrouped data case. In the present article, an analysis of line transect data is formulated under the Gates framework of exponential sighting probabilities and in the context of grouped data.
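The grouped-data formulation can be sketched as a multinomial likelihood over distance bands with exponential sighting probability. A minimal illustration (the bin edges and counts are invented for demonstration and are not data from the article):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical grouped sighting distances: counts per distance band
edges = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 60.0])
counts = np.array([95, 52, 30, 15, 8])

def neg_log_like(lam):
    """Multinomial log-likelihood with exponential detection truncated at
    the last edge W: cell prob_i = (e^{-lam a_i} - e^{-lam b_i}) / (1 - e^{-lam W})."""
    cdf = 1.0 - np.exp(-lam * edges)
    probs = np.diff(cdf) / cdf[-1]
    return -np.sum(counts * np.log(probs))

res = minimize_scalar(neg_log_like, bounds=(1e-4, 1.0), method="bounded")
lam_hat = res.x   # MLE of the exponential sighting rate from grouped data
```

In the line-transect setting, the fitted rate feeds the density estimator through f(0) = λ for the exponential sighting model; here only the grouped MLE step is shown.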
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least-squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve-fitting procedure for determining initial nominal estimates of the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and applied to the nominal estimates to produce an improved set of model parameters. The solution cycle is repeated until a predetermined convergence criterion is satisfied.
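The linearize-correct-repeat cycle described above is the Gauss-Newton method. A sketch for a single decay term y ≈ A·exp(-k·t), with the initial nominal estimates taken from a linear fit of ln(y) versus t as the abstract describes (the data and parameter values are synthetic assumptions):

```python
import numpy as np

def gauss_newton_exp(t, y, A, k, n_iter=20):
    """Gauss-Newton fit of y ≈ A*exp(-k*t): linearize the model in (A, k)
    via a first-order Taylor expansion, solve for the correction in the
    least-squares sense, apply it, and repeat until convergence."""
    for _ in range(n_iter):
        f = A * np.exp(-k * t)
        J = np.column_stack([f / A, -t * f])          # df/dA, df/dk
        delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        A, k = A + delta[0], k + delta[1]
        if np.linalg.norm(delta) < 1e-12:
            break
    return A, k

# Synthetic decay data (illustrative values)
rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 60)
y = 2.5 * np.exp(-0.9 * t) + rng.normal(0.0, 0.01, t.size)

# Initial nominal estimates from a linear fit of ln(y) vs t (positive data only)
pos = y > 0
slope, intercept = np.polyfit(t[pos], np.log(y[pos]), 1)
A_hat, k_hat = gauss_newton_exp(t, y, np.exp(intercept), -slope)
```

The log-linear fit is biased for noisy data (it weights small values heavily), which is exactly why it serves only as the nominal starting point for the iterated correction.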
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimation for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
NASA Astrophysics Data System (ADS)
Elshambaky, Hossam Talaat
2018-01-01
Owing to the appearance of many global geopotential models, it is necessary to determine the most appropriate model for use in Egyptian territory. In this study, we investigate three global models, namely EGM2008, EIGEN-6c4, and GECO. We use five mathematical transformation techniques, i.e., polynomial expression, exponential regression, least-squares collocation, multilayer feed-forward neural network, and radial basis neural networks, to convert between the regional geometrical geoid and the global geoid models and vice versa. From a statistical comparison based on quality indexes between these transformation techniques, we confirm that the multilayer feed-forward neural network with two neurons is the most accurate of the examined techniques and, based on the mean tide condition, that EGM2008 is the most suitable global geopotential model for use in Egyptian territory to date. The final product of this study is the corrector surface used to facilitate the transformation between the regional geometrical geoid model and the global geoid model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw
2014-03-14
The complex quantum Hamilton-Jacobi equation-Bohmian trajectories (CQHJE-BT) method is introduced as a synthetic trajectory method for integrating the complex quantum Hamilton-Jacobi equation for the complex action function by propagating an ensemble of real-valued correlated Bohmian trajectories. Substituting the wave function expressed in exponential form in terms of the complex action into the time-dependent Schrödinger equation yields the complex quantum Hamilton-Jacobi equation. We transform this equation into the arbitrary Lagrangian-Eulerian version with the grid velocity matching the flow velocity of the probability fluid. The resulting equation describing the rate of change in the complex action transported along Bohmian trajectories is simultaneously integrated with the guidance equation for Bohmian trajectories, and the time-dependent wave function is readily synthesized. The spatial derivatives of the complex action required for the integration scheme are obtained by solving one moving least squares matrix equation. In addition, the method is applied to the photodissociation of NOCl. The photodissociation dynamics of NOCl can be accurately described by propagating a small ensemble of trajectories. This study demonstrates that the CQHJE-BT method combines the considerable advantages of both the real and the complex quantum trajectory methods previously developed for wave packet dynamics.
NASA Astrophysics Data System (ADS)
Qian, Tingting; Wang, Lianlian; Lu, Guanghua
2017-07-01
Radar correlated imaging (RCI) introduces optical correlated imaging technology into traditional microwave imaging and has recently raised widespread concern. Conventional RCI methods neglect the structural information of a complex extended target, which limits the quality of the recovered image; thus, a novel combined negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. Sparsity is measured by a sequential order-one negative exponential function, and the 2D total variation technique is then introduced to design a novel optimization problem for extended target imaging. The alternating direction method of multipliers, with proven convergence, is applied to solve the new problem. Experimental results show that the proposed algorithm achieves efficient high-resolution imaging of extended targets.
Rouze, Ned C; Deng, Yufeng; Palmeri, Mark L; Nightingale, Kathryn R
2017-10-01
Recent measurements of shear wave propagation in viscoelastic materials have been analyzed by constructing the 2-D Fourier transform (2DFT) of the shear wave signal and measuring the phase velocity c(ω) and attenuation α(ω) from the peak location and full width at half-maximum (FWHM) of the 2DFT signal at discrete frequencies. However, when the shear wave is observed over a finite spatial range, the 2DFT signal is a convolution of the true signal and the observation window, and measurements using the FWHM can yield biased results. In this study, we describe a method to account for the size of the spatial observation window using a model of the 2DFT signal and a non-linear, least-squares fitting procedure to determine c(ω) and α(ω). Results from the analysis of finite-element simulation data agree with c(ω) and α(ω) calculated from the material parameters used in the simulation. Results obtained in a viscoelastic phantom indicate that the measured attenuation is independent of the observation window and agree with measurements of c(ω) and α(ω) obtained using the previously described progressive phase and exponential decay analysis. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Hierarchical Bayesian sparse image reconstruction with application to MRFM.
Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves
2009-09-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
A method to calculate synthetic waveforms in stratified VTI media
NASA Astrophysics Data System (ADS)
Wang, W.; Wen, L.
2012-12-01
Transverse isotropy with a vertical axis of symmetry (VTI) may be an important material property in the Earth's interior. In this presentation, we develop a method to calculate synthetic seismograms for wave propagation in stratified VTI media. Our method is based on the generalized reflection and transmission method (GRTM) (Luco & Apsel 1983), which we extend to VTI media. GRTM remains stable in high-frequency calculations because it explicitly excludes the exponentially growing terms in the propagation matrix, whereas the Haskell matrix method (Haskell 1964) is limited to low-frequency computation. In the implementation, we also improve GRTM in two aspects: 1) we apply the Shanks transformation (Bender & Orszag 1999) to improve the rate of convergence, an improvement that is especially important when the depths of source and receiver are close; and 2) we adopt a self-adaptive Simpson integration method (Chen & Zhang 2001) for the discrete wavenumber integration, so that the integration can still be carried out efficiently at large epicentral distances. Because the calculation at each frequency is independent, the program can also be implemented effectively in parallel. Our method provides a powerful tool for synthesizing broadband seismograms of VTI media over a large range of epicentral distances. We will present examples of using the method to study possible transverse isotropy in the upper mantle and the lowermost mantle.
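The Shanks transformation used above to accelerate slowly convergent wavenumber sums is easy to demonstrate on a standard test series. A sketch (the alternating harmonic series for ln 2 is a textbook example, not data from this work):

```python
import numpy as np

def shanks(seq):
    """One level of the Shanks transformation:
    S_n = (s_{n+1} s_{n-1} - s_n^2) / (s_{n+1} - 2 s_n + s_{n-1}).
    Removes the dominant geometric transient of a convergent sequence."""
    s = np.asarray(seq, dtype=float)
    num = s[2:] * s[:-2] - s[1:-1] ** 2
    den = s[2:] - 2 * s[1:-1] + s[:-2]
    return num / den

# Partial sums of ln(2) = 1 - 1/2 + 1/3 - 1/4 + ...
n = np.arange(1, 16)
partial = np.cumsum((-1.0) ** (n + 1) / n)
once = shanks(partial)     # one application
twice = shanks(once)       # iterated Shanks on the transformed sequence
```

Fifteen raw terms give only about two digits of ln 2, while two iterated Shanks levels recover several more, the same kind of gain exploited in the discrete wavenumber summation.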
NASA Astrophysics Data System (ADS)
Andrianov, A. A.; Cannata, F.; Kamenshchik, A. Yu.
2012-11-01
We show that the simple extension of the method of obtaining the general exact solution for the cosmological model with the exponential scalar-field potential to the case when the dust is present fails, and we discuss the reasons of this puzzling phenomenon.
Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904
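Representing a noisy exponential in a low-dimensional Legendre basis, as the abstract describes, can be sketched with NumPy's Legendre module. This is an illustration of the representation-and-truncation idea only, not the authors' fitting algorithm; the time window, decay constant, and polynomial degree are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(5)

# Noisy single-exponential decay sampled on a fixed time window
t = np.linspace(0.0, 4.0, 400)
tau_true, amp_true = 1.2, 3.0
clean = amp_true * np.exp(-t / tau_true)
y = clean + rng.normal(0.0, 0.05, t.size)

# Map the window onto [-1, 1], the natural domain of Legendre polynomials
x = 2 * (t - t[0]) / (t[-1] - t[0]) - 1

# Low-order projection: a smooth exponential is captured by a few Legendre
# coefficients, while broadband noise spreads over all orders, so
# truncation filters the noise without the phase shift of a lowpass filter
coef = L.legfit(x, y, deg=8)
y_filtered = L.legval(x, coef)

rms_noisy = float(np.sqrt(np.mean((y - clean) ** 2)))
rms_filtered = float(np.sqrt(np.mean((y_filtered - clean) ** 2)))
```

Keeping 9 of 400 effective dimensions here reduces the noise RMS by roughly the square root of the dimension ratio, which is the mechanism behind the speed and precision gains reported above.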
Hesitant Fuzzy Thermodynamic Method for Emergency Decision Making Based on Prospect Theory.
Ren, Peijia; Xu, Zeshui; Hao, Zhinan
2017-09-01
Due to the timeliness required of emergency response and the large amount of unknown information in emergency situations, this paper proposes a method for emergency decision making that comprehensively reflects the emergency decision making process. Hesitant fuzzy elements are used to represent the fuzziness of the objects and the hesitant thought of the experts, and the negative exponential function is introduced into prospect theory to portray the psychological behaviors of the experts, transforming the hesitant fuzzy decision matrix into the hesitant fuzzy prospect decision matrix (HFPDM) according to the expectation levels. The paper then applies energy and entropy from thermodynamics to take both the quantity and the quality of the decision values into account, and defines thermodynamic decision-making parameters based on the HFPDM. Accordingly, a whole procedure for emergency decision making is constructed, and experiments are designed to demonstrate and validate the procedure. Finally, a case study of emergency decision making for the fire and explosion at the Port Group in Tianjin Binhai New Area demonstrates the effectiveness and practicability of the proposed method.
NASA Astrophysics Data System (ADS)
Nerantzaki, Sofia; Papalexiou, Simon Michael
2017-04-01
Identifying precisely the distribution tail of a geophysical variable is difficult, or even impossible. First, the tail is the part of the distribution for which the least empirical information is available; second, a universally accepted definition of the tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution, as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be broadly categorized into two major families: sub-exponential (heavy-tailed) and hyper-exponential (light-tailed). This study aims to extend the Mean Excess Function (MEF), providing a useful tool for assessing which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and yields a zero-slope regression line when applied to the exponential distribution. Here, we construct slope confidence intervals for the exponential distribution as functions of sample size. Validation of the method using Monte Carlo techniques on four theoretical distributions covering the major tail cases (Pareto type II, Log-normal, Weibull, and Gamma) reveals that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records from all over the world with sample sizes over 100 years were examined, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
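The empirical MEF at the core of the method above is straightforward to compute. A sketch contrasting the exponential (flat MEF) with a Pareto type II / Lomax tail (rising MEF); the sample sizes and thresholds are illustrative assumptions:

```python
import numpy as np

def mean_excess(x, thresholds):
    """Empirical mean excess function e(u) = mean(x - u | x > u).
    Approximately constant in u for exponential tails, increasing for
    heavy (sub-exponential) tails, decreasing for light tails."""
    x = np.asarray(x)
    return np.array([np.mean(x[x > u] - u) for u in thresholds])

rng = np.random.default_rng(11)
n = 200_000
expo = rng.exponential(1.0, n)     # exponential: e(u) = 1 for all u
pareto = rng.pareto(3.0, n)        # Lomax(a=3): e(u) = (1 + u)/2, rising

mef_expo = mean_excess(expo, np.quantile(expo, [0.5, 0.7, 0.9]))
mef_pareto = mean_excess(pareto, np.quantile(pareto, [0.5, 0.7, 0.9]))
```

Regressing the sample mean excesses on the thresholds and comparing the slope against the confidence band for the exponential case, as the abstract proposes, then classifies the tail as heavy or light.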
Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin
2018-07-01
To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a bi-exponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), Dt (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and Dp showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than other parameters. However, Dp showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than Dp from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
A Fourier method for the analysis of exponential decay curves.
Provencher, S W
1976-01-01
A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.
Exponential gain of randomness certified by quantum contextuality
NASA Astrophysics Data System (ADS)
Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan
2017-04-01
We demonstrate a protocol for exponential gain of randomness certified by quantum contextuality in a trapped-ion system. Genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on Bell-type inequalities and the Kochen-Specker (KS) theorem have been demonstrated, and schemes have been proposed to exponentially expand randomness and to amplify randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion: a ground state and two quadrupole states. The 138Ba+ ion system is free of the detection loophole, and we apply a method to rule out certain hidden-variable models that obey a form of extended noncontextuality.
NASA Technical Reports Server (NTRS)
Berkin, Andrew L.; Maeda, Kei-Ichi; Yokoyama, Junichi
1990-01-01
The cosmology resulting from two coupled scalar fields was studied, one which is either a new inflation or chaotic type inflation, and the other which has an exponentially decaying potential. Such a potential may appear in the conformally transformed frame of generalized Einstein theories like the Jordan-Brans-Dicke theory. The constraints necessary for successful inflation are examined. Conventional GUT models such as SU(5) were found to be compatible with new inflation, while restrictions on the self-coupling constant are significantly loosened for chaotic inflation.
Exceptional point in a simple textbook example
NASA Astrophysics Data System (ADS)
Fernández, Francisco M.
2018-07-01
We propose to introduce the concept of exceptional points in intermediate courses on mathematics and classical mechanics by means of simple textbook examples. The first one is an ordinary second-order differential equation with constant coefficients. The second one is the well-known damped harmonic oscillator. From a strict mathematical viewpoint both are the same problem that enables one to connect the occurrence of linearly dependent exponential solutions with a defective matrix which cannot be diagonalized but can be transformed into a Jordan canonical form.
Changing Mindsets to Transform Security: Leader Development for an Unpredictable and Complex World
2013-01-01
fields of physical science, the amount of information is doubling every one to two years, meaning that more than half of what a college student has...beyond a review of current events or it being at an “informational” level. Naval War College Professor Mackubin Owens stated in 2006 that, The new... information technology in education and training underpinned by a stable and experienced academic community that can support the exponential growth
On the parallel solution of parabolic equations
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Youcef
1989-01-01
Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Padé and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time-dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
A method for exponential propagation of large systems of stiff nonlinear differential equations
NASA Technical Reports Server (NTRS)
Friesner, Richard A.; Tuckerman, Laurette S.; Dornblaser, Bright C.; Russo, Thomas V.
1989-01-01
A new time integrator for large, stiff systems of linear and nonlinear coupled differential equations is described. For linear systems, the method consists of forming a small (5-15-term) Krylov space using the Jacobian of the system and carrying out exact exponential propagation within this space. Nonlinear corrections are incorporated via a convolution integral formalism; the integral is evaluated via approximate Krylov methods as well. Gains in efficiency ranging from factors of 2 to 30 are demonstrated for several test problems as compared to a forward Euler scheme and to the integration package LSODE.
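The core step described above, exact exponential propagation within a small Krylov space, can be sketched with an Arnoldi iteration. This is a generic illustration of the technique (the diffusion test matrix, Krylov dimension, and step size are assumptions, not the authors' test problems):

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_apply(A, v, m, dt):
    """Approximate exp(dt*A) @ v by Arnoldi projection onto an m-dimensional
    Krylov space: exponentiate the small Hessenberg matrix exactly and lift
    the result back, the key step of exponential propagation schemes."""
    V = np.zeros((v.size, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: subspace is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    return beta * (V[:, :m] @ expm(dt * H[:m, :m])[:, 0])

# Stiff test system: scaled 1-D diffusion operator (tridiagonal Laplacian)
n = 100
A = 2.0 * (np.diag(-2.0 * np.ones(n))
           + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
rng = np.random.default_rng(9)
v = rng.standard_normal(n)
approx = krylov_expm_apply(A, v, m=25, dt=0.5)
exact = expm(0.5 * A) @ v
```

A Krylov dimension of a few dozen reproduces the full 100-dimensional matrix exponential action to near machine precision here, which is why the abstract's 5-15-term spaces suffice for much larger stiff systems.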
Deductibles in health insurance
NASA Astrophysics Data System (ADS)
Dimitriyadis, I.; Öney, Ü. N.
2009-11-01
This study is an extension to a simulation study that has been developed to determine ruin probabilities in health insurance. The study concentrates on inpatient and outpatient benefits for customers of varying age bands. Loss distributions are modelled through the Allianz tool pack for different classes of insureds. Premiums at different levels of deductibles are derived in the simulation and ruin probabilities are computed assuming a linear loading on the premium. The increase in the probability of ruin at high levels of the deductible clearly shows the insufficiency of proportional loading in deductible premiums. The PH-transform pricing rule developed by Wang is analyzed as an alternative pricing rule. A simple case, where an insured is assumed to be an exponential utility decision maker while the insurer's pricing rule is a PH-transform is also treated.
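Wang's PH-transform pricing rule analyzed above has a simple quadrature form, and a closed form for exponential losses that makes a handy check. A sketch (the mean loss and risk-aversion index are illustrative assumptions, not the study's calibrated values):

```python
import numpy as np

def ph_transform_premium(survival, r, grid):
    """Wang's proportional-hazards (PH) transform premium:
    integral of S(x)^(1/r) dx, with risk-aversion index r >= 1
    (r = 1 recovers the expected loss)."""
    s = survival(grid) ** (1.0 / r)
    return np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(grid))  # trapezoid rule

# Exponential loss with mean 1000: S(x) = exp(-x/1000).
# The PH premium then has the closed form r * 1000.
mean_loss, r = 1000.0, 1.5
grid = np.linspace(0.0, 60_000.0, 400_001)
premium = ph_transform_premium(lambda x: np.exp(-x / mean_loss), r, grid)
```

The distortion S^(1/r) inflates tail probabilities, so the premium exceeds the expected loss by a tail-sensitive (rather than proportional) loading, which is the property contrasted with linear loading in the abstract.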
Observing in space and time the ephemeral nucleation of liquid-to-crystal phase transitions.
Yoo, Byung-Kuk; Kwon, Oh-Hoon; Liu, Haihua; Tang, Jau; Zewail, Ahmed H
2015-10-19
The phase transition of crystalline ordering is a general phenomenon, but its evolution in space and time requires microscopic probes for visualization. Here we report direct imaging of the transformation of amorphous titanium dioxide nanofilm, from the liquid state, passing through the nucleation step and finally to the ordered crystal phase. Single-pulse transient diffraction profiles at different times provide the structural transformation and the specific degree of crystallinity (η) in the evolution process. It is found that the temporal behaviour of η exhibits unique 'two-step' dynamics, with a robust 'plateau' that extends over a microsecond; the rate constants vary by two orders of magnitude. Such behaviour reflects the presence of intermediate structure(s) that are the precursor of the ordered crystal state. Theoretically, we extend the well-known Johnson-Mehl-Avrami-Kolmogorov equation, which describes the isothermal process with a stretched-exponential function, but here over the range of times covering the melt-to-crystal transformation.
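The Johnson-Mehl-Avrami-Kolmogorov kinetics referenced above are commonly written (in their standard textbook form; symbols here are generic, not the paper's notation) as

```latex
\eta(t) = 1 - \exp\!\left[-\left(k\,t\right)^{n}\right],
```

where $\eta$ is the degree of crystallinity, $k$ an effective rate constant, and $n$ the Avrami exponent set by the nucleation and growth dimensionality. The stretched-exponential extension mentioned in the abstract generalizes the argument of the exponential, e.g. to $-\left(k\,t\right)^{\beta}$ with a non-integer $\beta$, to capture the observed two-step evolution through intermediate structures.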
NASA Astrophysics Data System (ADS)
Golub, V. P.; Pavlyuk, Ya. V.; Fernati, P. V.
2017-07-01
The problem of determining the parameters of fractional-exponential heredity kernels of nonlinear viscoelastic materials is solved. The methods for determining the parameters that are used in the cubic theory of viscoelasticity and the nonlinear theories based on the conditions of similarity of primary creep curves and isochronous creep diagrams are analyzed. The parameters of fractional-exponential heredity kernels are determined and experimentally validated for the oriented polypropylene, FM3001 and FM10001 nylon fibers, microplastics, TC 8/3-250 glass-reinforced plastic, SWAM glass-reinforced plastic, and contact molding glass-reinforced plastic.
Exponential Speedup of Quantum Annealing by Inhomogeneous Driving of the Transverse Field
NASA Astrophysics Data System (ADS)
Susa, Yuki; Yamashiro, Yu; Yamamoto, Masayuki; Nishimori, Hidetoshi
2018-02-01
We show, for quantum annealing, that a certain type of inhomogeneous driving of the transverse field erases first-order quantum phase transitions in the p-body interacting mean-field-type model with and without longitudinal random field. Since a first-order phase transition poses a serious difficulty for quantum annealing (adiabatic quantum computing) due to the exponentially small energy gap, the removal of first-order transitions means an exponential speedup of the annealing process. The present method may serve as a simple protocol for the performance enhancement of quantum annealing, complementary to non-stoquastic Hamiltonians.
Zhao, Shouwei
2011-06-01
A Lie algebraic condition for global exponential stability of linear discrete switched impulsive systems is presented in this paper. By considering a Lie algebra generated by all subsystem matrices and impulsive matrices, when not all of these matrices are Schur stable, we derive new criteria for global exponential stability of linear discrete switched impulsive systems. Moreover, simple sufficient conditions in terms of Lie algebra are established for the synchronization of nonlinear discrete systems using a hybrid switching and impulsive control. As an application, the synchronization of discrete chaotic systems is investigated by the proposed method.
Hosseinzadeh, M; Ghoreishi, M; Narooei, K
2016-06-01
In this study, the hyperelastic models of demineralized and deproteinized bovine cortical femur bone were investigated and appropriate models were developed. Using uniaxial compression test data, the strain energy versus stretch was calculated and appropriate hyperelastic strain energy functions were fitted to the data in order to calculate the material parameters. To obtain the mechanical behavior under other loading conditions, the hyperelastic strain energy equations were investigated for pure shear and equi-biaxial tension loadings. The results showed that the Mooney-Rivlin and Ogden models cannot predict the mechanical response of demineralized and deproteinized bovine cortical femur bone accurately, while the general exponential-exponential and general exponential-power law models are in good agreement with the experimental results. To investigate the sensitivity of the hyperelastic models, a variation of 10% in the material parameters was performed and the results indicated acceptable stability for the general exponential-exponential and general exponential-power law models. Finally, the uniaxial tension and compression of cortical femur bone were studied using the finite element method in a VUMAT user subroutine of ABAQUS software, and the computed stress-stretch curves showed good agreement with the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Exact simulation of integrate-and-fire models with exponential currents.
Brette, Romain
2007-10-01
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. This approach applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
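A minimal sketch of the polynomial-root idea (with hypothetical coefficients, not Brette's full state-variable test): if the membrane potential between spikes is a sum of exponentials whose time constants are integer submultiples of a common τ₀, the substitution x = exp(−t/τ₀) turns the threshold condition into a polynomial, and the earliest spike time comes from its roots in (0, 1).

```python
import numpy as np

def next_spike_time(coeffs, powers, theta, tau0):
    """Earliest t > 0 with sum_i coeffs[i]*exp(-powers[i]*t/tau0) == theta.

    Substituting x = exp(-t/tau0) turns the threshold condition into a
    polynomial in x; real roots in (0, 1) map back to candidate spike times.
    """
    deg = max(powers)
    p = np.zeros(deg + 1)
    for a, n in zip(coeffs, powers):
        p[deg - n] += a              # numpy.roots wants highest degree first
    p[deg] -= theta
    roots = np.roots(p)
    xs = [r.real for r in roots
          if abs(r.imag) < 1e-9 and 0.0 < r.real < 1.0]
    if not xs:
        return None                  # threshold never reached
    # earliest time corresponds to the largest x, since t = -tau0*ln(x)
    return -tau0 * np.log(max(xs))
```

For example, V(t) = 2e^(−t) − e^(−2t) with threshold 0.5 reduces to the quadratic 2x − x² = 0.5 in x = e^(−t).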
NASA Astrophysics Data System (ADS)
Shiau, Lie-Ding
2016-09-01
The pre-exponential factor and interfacial energy obtained from the metastable zone width (MSZW) data using the integral method proposed by Shiau and Lu [1] are compared in this study with those obtained from the induction time data using the conventional method (tᵢ ∝ J⁻¹) for three crystallization systems, including potassium sulfate in water in a 200 mL vessel, borax decahydrate in water in a 100 mL vessel and butyl paraben in ethanol in a 5 mL tube. The results indicate that the pre-exponential factor and interfacial energy calculated from the induction time data based on classical nucleation theory are consistent with those calculated from the MSZW data using the same detection technique for the studied systems.
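The conventional route mentioned above rests on the classical-nucleation-theory relation tᵢ ∝ 1/J with J = A·exp(−B/ln²S), so ln tᵢ is linear in 1/ln²S. A hedged sketch (synthetic numbers and helper names, not the paper's data) of extracting the pre-exponential factor and barrier parameter:

```python
import numpy as np

def fit_cnt(S, t_ind):
    """Fit ln(t_i) = -ln(A) + B/(ln S)^2, the form implied by classical
    nucleation theory with t_i ∝ 1/J and J = A*exp(-B/(ln S)^2).
    At fixed temperature, B is proportional to the cube of the
    interfacial energy."""
    x = 1.0 / np.log(S) ** 2
    B, c = np.polyfit(x, np.log(t_ind), 1)
    return np.exp(-c), B

# synthetic induction times with A = 1e6 and B = 4 (illustrative values)
S = np.array([1.2, 1.3, 1.4, 1.5, 1.7, 2.0])
t_ind = np.exp(-np.log(1e6) + 4.0 / np.log(S) ** 2)
A_fit, B_fit = fit_cnt(S, t_ind)
```
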
Zhang, Wei; Huang, Tingwen; He, Xing; Li, Chuandong
2017-11-01
In this study, we investigate the global exponential stability of inertial memristor-based neural networks with impulses and time-varying delays. We construct inertial memristor-based neural networks based on the characteristics of the inertial neural networks and memristor. Impulses with and without delays are considered when modeling the inertial neural networks simultaneously, which are of great practical significance in the current study. Some sufficient conditions are derived under the framework of the Lyapunov stability method, as well as an extended Halanay differential inequality and a new delay impulsive differential inequality, which depend on impulses with and without delays, in order to guarantee the global exponential stability of the inertial memristor-based neural networks. Finally, two numerical examples are provided to illustrate the efficiency of the proposed methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation of Prony's method for curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities because of the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least squares solution can be applied to obtain the final form of the equation.
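The core of Prony's method sketched above — linear prediction on equally spaced samples, a root find for the exponential constants, then least squares for the amplitudes — can be written compactly (a generic sketch, not the I.G.D.S. implementation):

```python
import numpy as np

def prony(y, dt, m):
    """Classical Prony fit of y_k ≈ sum_j c_j * exp(s_j * k * dt)
    for m exponential terms, from equally spaced samples y."""
    n = len(y)
    # 1) linear prediction: y[k] = -a1*y[k-1] - ... - am*y[k-m]
    A = np.column_stack([y[m - j - 1:n - j - 1] for j in range(m)])
    a, *_ = np.linalg.lstsq(A, -y[m:], rcond=None)
    # 2) roots of the characteristic polynomial give z_j = exp(s_j*dt)
    z = np.roots(np.concatenate(([1.0], a)))
    s = np.log(z.astype(complex)) / dt
    # 3) amplitudes by least squares on the Vandermonde system
    V = np.power.outer(z, np.arange(n)).T
    c, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return c, s

# demo: two decaying exponentials sampled at dt = 0.1
t = np.arange(20) * 0.1
y = 3.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t)
c, s = prony(y, 0.1, 2)
```
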
Study on probability distributions for evolution in modified extremal optimization
NASA Astrophysics Data System (ADS)
Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian
2010-05-01
It is widely believed that the power law is a proper probability distribution for driving evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems, e.g., graph partitioning, graph coloring, spin glasses, etc. In this study, we find that the exponential distributions or hybrid ones (e.g., power laws with exponential cutoff) popular in network science can replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO and SOA, in experiments on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.
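The selection step the abstract refers to can be made concrete: EO-style methods rank components by fitness and pick a rank from a probability distribution. A hedged sketch of the power-law and exponential alternatives (parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_probs(n, kind="power", param=1.4):
    """Selection probabilities over fitness ranks 1..n (1 = worst component).
    'power' is the classic tau-EO choice P(k) ∝ k**(-tau); 'exponential'
    is the alternative P(k) ∝ exp(-mu*k) discussed for modified EO."""
    k = np.arange(1, n + 1, dtype=float)
    w = k ** -param if kind == "power" else np.exp(-param * k)
    return w / w.sum()

def pick_rank(n, kind="power", param=1.4):
    """Draw the rank of the component to be updated next."""
    return rng.choice(np.arange(1, n + 1), p=rank_probs(n, kind, param))
```

Both choices bias updates toward the worst-ranked components; they differ in how heavily the tail of better-ranked components is sampled.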
Global exponential stability of BAM neural networks with time-varying delays and diffusion terms
NASA Astrophysics Data System (ADS)
Wan, Li; Zhou, Qinghua
2007-11-01
The stability of bidirectional associative memory (BAM) neural networks with time-varying delays and diffusion terms is considered. By using the method of variation of parameters and inequality techniques, delay-independent sufficient conditions are established that guarantee the uniqueness and global exponential stability of the equilibrium solution of such networks.
Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation
1990-05-01
process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NEM). The NEM flowchart is seen in Figure 6. [Figure captions: statistical histograms and phase; truth object speckled via the NEM; histogram of speckle.]
Dendrimers and methods of preparing same through proportionate branching
Yu, Yihua; Yue, Xuyi
2015-09-15
The present invention provides for monodispersed dendrimers having a core, branches and periphery ends, wherein the number of branches increases exponentially from the core to the periphery end and the length of the branches increases exponentially from the periphery end to the core, thereby providing for attachment of chemical species at the periphery ends without exhibiting steric hindrance.
Conditional optimal spacing in exponential distribution.
Park, Sangun
2006-12-01
In this paper, we propose the conditional optimal spacing, defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take the exponential distribution as an example and provide a simple method for finding the conditional optimal spacing.
NASA Astrophysics Data System (ADS)
Santa Vélez, Camilo; Enea Romano, Antonio
2018-05-01
Static coordinates can be convenient to solve the vacuum Einstein's equations in presence of spherical symmetry, but for cosmological applications comoving coordinates are more suitable to describe an expanding Universe, especially in the framework of cosmological perturbation theory (CPT). Using CPT we develop a method to transform static spherically symmetric (SSS) modifications of the de Sitter solution from static coordinates to the Newton gauge. We test the method with the Schwarzschild de Sitter (SDS) metric and then derive general expressions for the Bardeen's potentials for a class of SSS metrics obtained by adding to the de Sitter metric a term linear in the mass and proportional to a general function of the radius. Using the gauge invariance of the Bardeen's potentials we then obtain a gauge invariant definition of the turn around radius. We apply the method to an SSS solution of the Brans-Dicke theory, confirming the results obtained independently by solving the perturbation equations in the Newton gauge. The Bardeen's potentials are then derived for new SSS metrics involving logarithmic, power law and exponential modifications of the de Sitter metric. We also apply the method to SSS metrics which give flat rotation curves, computing the radial energy density profile in comoving coordinates in presence of a cosmological constant.
NASA Astrophysics Data System (ADS)
Abro, Kashif Ali; Memon, Anwar Ahmed; Uqaili, Muhammad Aslam
2018-03-01
This article presents a comparative study of RL and RC electrical circuits analyzed with the newly introduced Atangana-Baleanu and Caputo-Fabrizio fractional derivatives. The governing ordinary differential equations of the RL and RC circuits are fractionalized in terms of fractional operators in the ranges 0 ≤ ξ ≤ 1 and 0 ≤ η ≤ 1. The analytic solutions of the fractional differential equations are obtained using the Laplace transform and its inverse. General solutions are investigated for periodic and exponential sources by implementing the Atangana-Baleanu and Caputo-Fabrizio fractional operators separately. The solutions are expressed in terms of simple elementary functions with a convolution product. On the basis of the new fractional derivatives with and without singular kernel, the voltage and current exhibit interesting behavior, with several similarities and differences between the periodic and exponential sources.
NASA Astrophysics Data System (ADS)
Jia, Heping; Yang, Rongcao; Tian, Jinping; Zhang, Wenmei
2018-05-01
The nonautonomous nonlinear Schrödinger (NLS) equation with both varying linear and harmonic external potentials is investigated and the semirational rogue wave (RW) solution is presented via similarity transformation. Based on this solution, the interactions between the Peregrine soliton and breathers, and the controllability of the semirational RWs in periodically distributed and exponentially decreasing nonautonomous systems with both linear and harmonic potentials, are studied. It is found that the harmonic potential only influences the constraint condition of the semirational solution, the linear potential is related to the trajectory of the semirational RWs, while dispersion and nonlinearity determine the excitation position of the higher-order RWs. The higher-order RWs can be partly, completely and biperiodically excited in the periodically distributed system, and diverse excitation patterns can be generated for different parameter relations in the exponentially decreasing system. The results reveal that the excitation of the higher-order RWs can be controlled in the nonautonomous system by choosing the dispersion, nonlinearity and external potentials.
NASA Astrophysics Data System (ADS)
Du, Zhong; Tian, Bo; Wu, Xiao-Yu; Yuan, Yu-Qiang
2018-05-01
Studied in this paper is a (2+1)-dimensional coupled nonlinear Schrödinger system with variable coefficients, which describes the propagation of an optical beam inside the two-dimensional graded-index waveguide amplifier with the polarization effects. According to the similarity transformation, we derive the type-I and type-II rogue-wave solutions. We graphically present two types of the rouge wave and discuss the influence of the diffraction parameter on the rogue waves. When the diffraction parameters are exponentially-growing-periodic, exponential, linear and quadratic parameters, we obtain the periodic rogue wave and composite rogue waves respectively. Supported by the National Natural Science Foundation of China under Grant Nos. 11772017, 11272023, and 11471050, by the Fund of State Key Laboratory of Information Photonics and Optical Communications (Beijing University of Posts and Telecommunications), China (IPOC: 2017ZZ05) and by the Fundamental Research Funds for the Central Universities of China under Grant No. 2011BUPTYB02.
Takatsu, Yasuo; Ueyama, Tsuyoshi; Miyati, Tosiaki; Yamamura, Kenichirou
2016-12-01
The image characteristics in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) depend on the partial Fourier fraction and contrast medium concentration. These characteristics were assessed and the modulation transfer function (MTF) was calculated by computer simulation. A digital phantom was created from signal intensity data acquired at different contrast medium concentrations on a breast model. The frequency images [created by fast Fourier transform (FFT)] were divided into 512 parts and rearranged to form a new image. The inverse FFT of this image yielded the MTF. From the reference data, three linear models (low, medium, and high) and three exponential models (slow, medium, and rapid) of the signal intensity were created. Smaller partial Fourier fractions, and higher gradients in the linear models, corresponded to faster MTF decline. The MTF more gradually decreased in the exponential models than in the linear models. The MTF, which reflects the image characteristics in DCE-MRI, was more degraded as the partial Fourier fraction decreased.
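A minimal 1-D sketch of the partial-Fourier mechanism behind the results above (illustrative helper, not the paper's 512-segment procedure): acquiring only a fraction of k-space and zero-filling the rest acts as a frequency cutoff on the point-spread function. With plain zero-filling the MTF is an ideal cutoff; the graded decline reported in the paper comes from the contrast-dependent signal weighting during acquisition, which this sketch omits.

```python
import numpy as np

def partial_fourier_mtf(n=256, fraction=0.625):
    """Acquire only the first `fraction` of k-space lines of a point object,
    zero-fill the rest, and read the MTF off the magnitude response of the
    resulting point-spread function."""
    # point object -> flat k-space magnitude
    kspace = np.fft.fft(np.eye(1, n, n // 2).ravel())
    kept = np.zeros(n, dtype=complex)
    m = int(fraction * n)
    kept[:m] = np.fft.fftshift(kspace)[:m]   # asymmetric partial acquisition
    psf = np.fft.ifft(np.fft.ifftshift(kept))
    mtf = np.abs(np.fft.fftshift(np.fft.fft(psf)))
    return mtf / mtf.max()

mtf = partial_fourier_mtf()
```
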
Improving deep convolutional neural networks with mixed maxout units
Liu, Fu-xian; Li, Long-yue
2017-01-01
Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that “non-maximal features are unable to deliver” and “feature mapping subspace pooling is insufficient,” we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance. PMID:28727737
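A minimal numerical sketch of the mixout idea described above (the paper's units sit inside a CNN; the probability `p` of taking the maximum is an assumed hyperparameter): softmax-style exponential probabilities give the expected value over the k feature mappings, and a Bernoulli draw balances it against the plain maxout maximum.

```python
import numpy as np

rng = np.random.default_rng(1)

def mixout(features, train=True, p=0.5):
    """Sketch of a mixout unit over k candidate feature maps (axis 0).

    Exponential (softmax) probabilities weight the feature values into an
    expected value; a per-element Bernoulli gate balances that expectation
    against the plain maxout maximum."""
    f = np.asarray(features, dtype=float)
    e = np.exp(f - f.max(axis=0))            # numerically stable weights
    expected = (f * e / e.sum(axis=0)).sum(axis=0)
    maximum = f.max(axis=0)
    if not train:
        return p * maximum + (1.0 - p) * expected
    gate = rng.random(f.shape[1:]) < p       # Bernoulli draw per element
    return np.where(gate, maximum, expected)
```
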
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
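The fitting-and-mimicry exercise above can be sketched as follows (a simplified illustration, not the paper's full protocol): draw bout durations from a two-exponential mixture, fit a continuous power law by maximum likelihood, and measure the Kolmogorov-Smirnov distance of the fit.

```python
import numpy as np

rng = np.random.default_rng(2)

def powerlaw_mle(x, xmin):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x/xmin))."""
    x = x[x >= xmin]
    return 1.0 + len(x) / np.log(x / xmin).sum()

def ks_distance_powerlaw(x, xmin, alpha):
    """KS distance between the empirical tail CDF and the fitted power law."""
    x = np.sort(x[x >= xmin])
    emp = np.arange(1, len(x) + 1) / len(x)
    model = 1.0 - (x / xmin) ** (1.0 - alpha)
    return np.abs(emp - model).max()

# a two-exponential ("multi-exponential") bout-duration mixture
bouts = np.concatenate([rng.exponential(1.0, 5000),
                        rng.exponential(20.0, 5000)])
alpha = powerlaw_mle(bouts, xmin=1.0)
ks = ks_distance_powerlaw(bouts, 1.0, alpha)
```

The KS distance quantifies how well (or how deceptively well) the wrong model fits; small values on multi-exponential data are exactly the "zone of mimicry" risk the abstract describes.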
Attractors of three-dimensional fast-rotating Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Trahe, Markus
The three-dimensional (3-D) rotating Navier-Stokes equations describe the dynamics of rotating, incompressible, viscous fluids. In this work, they are considered with smooth, time-independent forces, and the original statements implied by the classical "Taylor-Proudman Theorem" of geophysics are rigorously proved. It is shown that fully developed turbulence of 3-D fast-rotating fluids is essentially characterized by turbulence of two-dimensional (2-D) fluids in terms of numbers of degrees of freedom. In this context, the 3-D nonlinear "resonant limit equations", which arise in a nonlinear averaging process as the rotation frequency Ω → ∞, are studied and optimal (2-D-type) upper bounds for fractal box and Hausdorff dimensions of the global attractor as well as upper bounds for box dimensions of exponential attractors are determined. Then, the convergence of exponential attractors for the full 3-D rotating Navier-Stokes equations to exponential attractors for the resonant limit equations as Ω → ∞ in the sense of full Hausdorff-metric distances is established. This provides upper and lower semi-continuity of exponential attractors with respect to the rotation frequency and implies that the number of degrees of freedom (attractor dimension) of 3-D fast-rotating fluids is close to that of 2-D fluids. Finally, the algebraic-geometric structure of the Poincaré curves, which control the resonances and small-divisor estimates for partial differential equations, is further investigated; the 3-D nonlinear limit resonant operators are characterized by three-wave interactions governed by these curves. A new canonical transformation between these curves is constructed, with far-reaching consequences for the density of the latter.
NASA Astrophysics Data System (ADS)
Valkunde, Amol T.; Vhanmore, Bandopant D.; Urunkar, Trupti U.; Gavade, Kusum M.; Patil, Sandip D.; Takale, Mansing V.
2018-05-01
In this work, nonlinear aspects of a high-intensity q-Gaussian laser beam propagating in a collisionless plasma with an upward exponential density ramp are studied. The nonlinearity in the dielectric function of the plasma is modelled through ponderomotive nonlinearity. The differential equation governing the dimensionless beam width parameter is obtained using the Wentzel-Kramers-Brillouin (WKB) and paraxial approximations and solved numerically by the fourth-order Runge-Kutta method. The effect of the exponential density ramp on the self-focusing of the q-Gaussian laser beam for various values of q is systematically investigated and compared with results for a Gaussian laser beam propagating in a collisionless plasma of uniform density. It is found that the exponential plasma density ramp causes the laser beam to become more focused and gives reasonably interesting results.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
A note on the accuracy of spectral method applied to nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang; Wong, Peter S.
1994-01-01
The Fourier spectral method can achieve exponential accuracy both at the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by post-processing based on Gegenbauer polynomials.
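The approximation-level half of this claim is easy to reproduce (a minimal sketch, not the paper's conservation-law case study): a truncated Fourier sum of a step function has O(1) point-wise error from the Gibbs phenomenon, yet its moment against an analytic function remains essentially exact.

```python
import numpy as np

def fourier_partial_sum(f_vals, N):
    """Degree-N Fourier partial sum of a 2π-periodic function sampled on a
    uniform grid, computed via the FFT."""
    n = len(f_vals)
    c = np.fft.fft(f_vals) / n
    k = np.fft.fftfreq(n, d=1.0 / n)
    c[np.abs(k) > N] = 0.0                  # truncate to modes |k| <= N
    return np.real(np.fft.ifft(c) * n)

n = 1024
x = -np.pi + 2 * np.pi * np.arange(n) / n
f = np.sign(x)                              # discontinuous step
fN = fourier_partial_sum(f, 32)

pointwise_err = np.abs(fN - f).max()        # O(1): Gibbs oscillations
# moment against the analytic function sin(x); exact value is 4
moment = (2 * np.pi / n) * np.sum(fN * np.sin(x))
```
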
Deformed exponentials and portfolio selection
NASA Astrophysics Data System (ADS)
Rodrigues, Ana Flávia P.; Guerreiro, Igor M.; Cavalcante, Charles Casimiro
In this paper, we present a method for portfolio selection based on deformed exponentials, generalizing methods that rely on the Gaussianity of the portfolio returns, such as the Markowitz model. The proposed method generalizes the idea of optimizing mean-variance and mean-divergence models and allows a more accurate description of situations where heavy-tailed distributions are necessary to describe the returns at a given time instant, such as those observed in economic crises. Numerical results show the proposed method outperforms the Markowitz portfolio for the cumulated returns, with a good convergence rate of the weights for the assets, which are found by means of a natural gradient algorithm.
Diffusion orientation transform revisited.
Canales-Rodríguez, Erick Jorge; Lin, Ching-Po; Iturria-Medina, Yasser; Yeh, Chun-Hung; Cho, Kuan-Hung; Melie-García, Lester
2010-01-15
Diffusion orientation transform (DOT) is a powerful imaging technique that allows the reconstruction of the microgeometry of fibrous tissues based on diffusion MRI data. The three main error sources involving this methodology are the finite sampling of the q-space, the practical truncation of the series of spherical harmonics and the use of a mono-exponential model for the attenuation of the measured signal. In this work, a detailed mathematical description that provides an extension to the DOT methodology is presented. In particular, the limitations implied by the use of measurements with a finite support in q-space are investigated and clarified, as well as the impact of the harmonic series truncation. Near- and far-field analytical patterns for the diffusion propagator are examined. The near-field pattern makes available the direct computation of the probability of return to the origin. The far-field pattern allows probing the limitations of the mono-exponential model, which suggests the existence of a limit of validity for DOT. In the regime from moderate to large displacement lengths, the isosurfaces of the diffusion propagator reveal aberrations in the form of artifactual peaks. Finally, the major contribution of this work is the derivation of analytical equations that facilitate the accurate reconstruction of some orientational distribution functions (ODFs) and skewness ODFs that are relatively immune to these artifacts. The new formalism was tested using synthetic and real data from a phantom of intersecting capillaries. The results support the hypothesis that the revisited DOT methodology could enhance the estimation of the microgeometry of fiber tissues.
Applications of an exponential finite difference technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Handschuh, R.F.; Keith, T.G. Jr.
1988-07-01
An exponential finite difference scheme first presented by Bhattacharya for one dimensional unsteady heat conduction problems in Cartesian coordinates was extended. The finite difference algorithm developed was used to solve the unsteady diffusion equation in one dimensional cylindrical coordinates and was applied to two and three dimensional conduction problems in Cartesian coordinates. Heat conduction involving variable thermal conductivity was also investigated. The method was used to solve nonlinear partial differential equations in one and two dimensional Cartesian coordinates. Predicted results are compared to exact solutions where available or to results obtained by other numerical methods.
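A minimal sketch of the exponential scheme in one common form attributed to Bhattacharya (the exact form used in the report is an assumption here): the update multiplies by the exponential of the discrete Laplacian term, which requires a positive solution and reduces to the standard explicit scheme to first order.

```python
import numpy as np

def step_ftcs(u, r):
    """Standard explicit (FTCS) step for u_t = alpha*u_xx, r = alpha*dt/dx^2."""
    lap = u[:-2] - 2 * u[1:-1] + u[2:]
    un = u.copy()                 # boundary values held fixed (Dirichlet)
    un[1:-1] = u[1:-1] + r * lap
    return un

def step_exponential(u, r):
    """Exponential finite-difference step (assumed Bhattacharya-type form,
    requires u > 0): u_new = u * exp(r * lap / u)."""
    lap = u[:-2] - 2 * u[1:-1] + u[2:]
    un = u.copy()
    un[1:-1] = u[1:-1] * np.exp(r * lap / u[1:-1])
    return un

nx, r, steps = 51, 0.25, 200
x = np.linspace(0.0, 1.0, nx)
u0 = 1.0 + np.sin(np.pi * x)      # strictly positive initial profile
u_exp, u_std = u0.copy(), u0.copy()
for _ in range(steps):
    u_exp = step_exponential(u_exp, r)
    u_std = step_ftcs(u_std, r)
```

Since exp(ε) ≈ 1 + ε for small ε, the two schemes should track each other closely at this resolution while the sine profile decays.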
Salje, Ekhard K H; Planes, Antoni; Vives, Eduard
2017-10-01
Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
A new approach to the extraction of single exponential diode model parameters
NASA Astrophysics Data System (ADS)
Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.
2018-06-01
A new integration method is presented for extracting the parameters of a single-exponential diode model with series resistance from the measured forward I-V characteristics. The extraction is performed using auxiliary functions based on integration of the data, which allow the effects of each model parameter to be isolated. A differentiation method is also presented for data with a low level of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained through the proposed graphical determination of the parameters.
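For context, the model being extracted is the implicit single-exponential diode equation I = I_s·(exp((V − I·R_s)/(nV_T)) − 1). The sketch below only evaluates this model by Newton iteration (it is not the paper's integration-based extraction), which is useful for generating the simulated data such extraction methods are verified against:

```python
import numpy as np

def diode_current(v, i_s, n, r_s, vt=0.02585, iters=60):
    """Solve the implicit single-exponential diode model
        I = i_s*(exp((V - I*r_s)/(n*vt)) - 1)
    for I at each bias point V by Newton iteration starting from I = 0
    (monotone convergence for forward bias)."""
    v = np.asarray(v, dtype=float)
    i = np.zeros_like(v)
    for _ in range(iters):
        e = np.exp((v - i * r_s) / (n * vt))
        f = i_s * (e - 1.0) - i                 # residual of the model
        df = -i_s * e * r_s / (n * vt) - 1.0    # its derivative in I
        i = i - f / df
    return i
```
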
An efficient quantum algorithm for spectral estimation
NASA Astrophysics Data System (ADS)
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
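The classical (non-quantum) matrix pencil method that the algorithm accelerates can be sketched in a few lines: build shifted Hankel matrices from the samples, truncate to the model order with an SVD, and read the damping factors and frequencies off the pencil's eigenvalues.

```python
import numpy as np

def matrix_pencil(y, m, dt):
    """Classical matrix pencil: recover poles z_i = exp(s_i*dt) of
    y_k = sum_i c_i * z_i**k from the shifted Hankel pencil (Y1, Y0)."""
    n = len(y)
    L = n // 2                                  # pencil parameter
    Y = np.array([y[i:i + L + 1] for i in range(n - L)])   # Hankel matrix
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    # rank-m truncation via the SVD of Y0
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    Ut, st, Vt = U[:, :m], s[:m], Vh[:m]
    A = (Ut.conj().T @ Y1 @ Vt.conj().T) / st[:, None]
    z = np.linalg.eigvals(A)
    return np.log(z.astype(complex)) / dt       # s_i: damping + i*frequency

# demo: one damped sinusoid = two complex exponentials, s = -0.3 ± 2πi
t = np.arange(40) * 0.1
y = np.exp(-0.3 * t) * np.cos(2 * np.pi * t)
s_est = matrix_pencil(y, 2, 0.1)
```

This is the O(poly(n)) baseline; the quantum algorithm's claimed advantage is obtaining the same frequencies and damping factors in time logarithmic in the number of samples.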
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degroote, M.; Henderson, T. M.; Zhao, J.
We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-Hermitian, and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained by minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that in its best variant are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.
The mechanism of double-exponential growth in hyper-inflation
NASA Astrophysics Data System (ADS)
Mizuno, T.; Takayasu, M.; Takayasu, H.
2002-05-01
Analyzing historical data of price indices, we find an extraordinary growth phenomenon in several examples of hyper-inflation in which price changes are approximated nicely by double-exponential functions of time. In order to explain this behavior we apply a general coarse-graining technique from physics, the Monte Carlo renormalization group method, to the price dynamics. Starting from a microscopic stochastic equation describing dealers' actions in open markets, we obtain a macroscopic noiseless equation for the price consistent with the observation. The effect of auto-catalytic shortening of the characteristic time caused by mob psychology is shown to be responsible for the double-exponential behavior.
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
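A non-iterative fit of y = A·e^(Bt) + C can be illustrated with a related integration idea: cumulative integrals of the data turn the exponential parameter B into a linear regression coefficient. This is a generic integral-regression sketch in the same spirit, not the paper's discrete-calculus algorithm:

```python
import numpy as np

def fit_exponential(t, y):
    """Non-iterative fit of y = A*exp(B*t) + C by cumulative integration
    (a generic integral-regression sketch, not the paper's
    discrete-calculus algorithm)."""
    # S(t): running trapezoidal integral of y from t[0]
    S = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))
    # Identity: S = (y - y[0])/B + C*(t - t[0]), hence
    # y = y[0] + B*S - B*C*(t - t[0]); regressing y on [S, t - t[0], 1]
    # puts B in the first coefficient.
    M = np.column_stack([S, t - t[0], np.ones_like(t)])
    B = np.linalg.lstsq(M, y, rcond=None)[0][0]
    # With B fixed, A and C follow from an ordinary linear least squares.
    E = np.column_stack([np.exp(B * t), np.ones_like(t)])
    A, C = np.linalg.lstsq(E, y, rcond=None)[0]
    return A, B, C

t = np.linspace(0.0, 2.0, 400)
A, B, C = fit_exponential(t, 3.0 * np.exp(-1.5 * t) + 0.7)
```

The accuracy is limited only by the trapezoidal quadrature, so it improves with denser sampling.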
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
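The overdispersion property is easy to check by simulation: when renewal durations follow a two-component exponential mixture (hyperexponential), the count's variance exceeds its mean, whereas a Poisson count has variance equal to the mean. A sketch with illustrative rates:

```python
import numpy as np

rng = np.random.default_rng(0)

def renewal_count(T, p, lam1, lam2, n_sims=10000):
    """Number of renewals in [0, T] when each duration is drawn from the
    mixture p*Exp(lam1) + (1-p)*Exp(lam2) (a hyperexponential)."""
    counts = np.empty(n_sims, dtype=int)
    for i in range(n_sims):
        elapsed, n = 0.0, 0
        while True:
            lam = lam1 if rng.random() < p else lam2
            elapsed += rng.exponential(1.0 / lam)
            if elapsed > T:
                break
            n += 1
        counts[i] = n
    return counts

# Strongly unequal rates make the duration CV^2 > 1, hence overdispersion
c = renewal_count(T=10.0, p=0.5, lam1=10.0, lam2=0.5)
```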
Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.
2015-01-01
This work deals with the three-dimensional flow of nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as the Keller-box method. The results are compared with existing studies in some limiting cases and found to be in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills in the temperature distribution for some range of parameter values. PMID:25785857
Déjardin, P
2013-08-30
The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.
Existence and exponential stability of traveling waves for delayed reaction-diffusion systems
NASA Astrophysics Data System (ADS)
Hsu, Cheng-Hsiung; Yang, Tzi-Sheng; Yu, Zhixian
2018-03-01
The purpose of this work is to investigate the existence and exponential stability of traveling wave solutions for general delayed multi-component reaction-diffusion systems. Following the monotone iteration scheme via an explicit construction of a pair of upper and lower solutions, we first obtain the existence of monostable traveling wave solutions connecting two different equilibria. Then, applying the techniques of weighted energy method and comparison principle, we show that all solutions of the Cauchy problem for the considered systems converge exponentially to traveling wave solutions provided that the initial perturbations around the traveling wave fronts belong to a suitable weighted Sobolev space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjeev, E-mail: sanjeevsharma145@gmail.com; Kumar, Rajendra, E-mail: khundrakpam-ss@yahoo.com; Singh, Kh. S., E-mail: khundrakpam-ss@yahoo.com
A simple design of a broadband one-dimensional dielectric/semiconductor multilayer structure having the refractive index profile of an exponentially graded material has been proposed. The theoretical analysis shows that the proposed structure works as a perfect mirror within a certain wavelength range (1550 nm). In order to calculate the reflection properties, a transfer matrix method (TMM) has been used. This property shows that binary graded photonic crystal structures have a widened omnidirectional reflector (ODR) bandgap. Hence an exponentially graded photonic crystal structure can be used as a broadband optical reflector, and the range of reflection can be tuned to any wavelength region by varying the refractive index profile of the exponentially graded photonic crystal structure.
2013-01-01
Background An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, the Weibull, and two two-piece exponential models (one hypothesis-driven and one data-driven), to formally test the null hypothesis that experience does not impact the hazard of injury. Results We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
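A two-piece exponential survival model of this kind has closed-form maximum likelihood estimates: events divided by person-time within each piece. A sketch on synthetic, fully observed survival data (the one-year cut point and hazard values are illustrative, not the cohort's estimates):

```python
import numpy as np

def twopiece_loglik(times, events, cut, lam1, lam2):
    """Log-likelihood of a two-piece exponential model: hazard lam1 before
    `cut` and lam2 after, with right censoring via the `events` indicator."""
    H = lam1 * np.minimum(times, cut) + lam2 * np.maximum(times - cut, 0.0)
    h = np.where(times < cut, lam1, lam2)
    return np.sum(events * np.log(h) - H)

# Synthetic cohort: elevated hazard in the first "year", all events observed
rng = np.random.default_rng(1)
n, cut = 5000, 1.0
lam_true = (0.30, 0.10)
u = -np.log(rng.random(n))                      # cumulative hazard draws
t = np.where(u < lam_true[0] * cut,
             u / lam_true[0],
             cut + (u - lam_true[0] * cut) / lam_true[1])
events = np.ones(n)
# Closed-form MLE: events / person-time accrued within each piece
lam1_hat = (t < cut).sum() / np.minimum(t, cut).sum()
lam2_hat = (t >= cut).sum() / np.maximum(t - cut, 0.0).sum()
```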
NASA Astrophysics Data System (ADS)
Gonizzi Barsanti, S.; Guidi, G.
2017-02-01
Conservation of Cultural Heritage is a key issue, and structural changes and damages can influence the mechanical behaviour of artefacts and buildings. The use of Finite Element Methods (FEM) for mechanical analysis is widespread in modelling stress behaviour. The typical workflow involves the use of CAD 3D models made of Non-Uniform Rational B-Splines (NURBS) surfaces, representing the ideal shape of the object to be simulated. Nowadays, 3D documentation of CH has been widely developed through reality-based approaches, but the resulting models are not suitable for direct use in FEA: the mesh must first be converted to a volumetric one, and its density must be reduced, since the computational complexity of a FEA grows exponentially with the number of nodes. The focus of this paper is to present a new method that aims to generate the most accurate 3D representation of a real artefact from highly accurate 3D digital models derived from reality-based techniques, maintaining the accuracy of the high-resolution polygonal models in the solid ones. The approach proposed is based on a wise use of retopology procedures and a transformation of this model into a mathematical one made of NURBS surfaces suitable for being processed by the volumetric meshers typically embedded in standard FEM packages. The strong simplification with little loss of consistency made possible by the retopology step is used to maintain as much coherence as possible between the original acquired mesh and the simplified model, creating in the meantime a topology that is more favourable for the automatic NURBS conversion.
On detection and visualization techniques for cyber security situation awareness
NASA Astrophysics Data System (ADS)
Yu, Wei; Wei, Shixiao; Shen, Dan; Blowers, Misty; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe; Zhang, Hanlin; Lu, Chao
2013-05-01
Networking technologies are increasing exponentially to meet worldwide communication requirements. The rapid growth and pervasiveness of network communications pose serious security issues. In this paper, we aim to develop an integrated network defense system with situation awareness capabilities to present useful information to human analysts. In particular, we implement a prototypical system that includes both distributed passive and active network sensors and traffic visualization features, such as 1D, 2D and 3D network traffic displays. To effectively detect attacks, we also implement algorithms to transform real-world IP address data into images, study the patterns of attacks, and use both a discrete wavelet transform (DWT) based scheme and a statistical scheme to detect attacks. Through an extensive simulation study, our data validate the effectiveness of the implemented defense system.
LORETA EEG phase reset of the default mode network
Thatcher, Robert W.; North, Duane M.; Biver, Carl J.
2014-01-01
Objectives: The purpose of this study was to explore phase reset of 3-dimensional current sources in Brodmann areas located in the human default mode network (DMN) using Low Resolution Electromagnetic Tomography (LORETA) of the human electroencephalogram (EEG). Methods: The EEG was recorded from 19 scalp locations from 70 healthy normal subjects ranging in age from 13 to 20 years. LORETA current sources were computed, time point by time point, for 14 Brodmann areas comprising the DMN in the delta frequency band. The Hilbert transform of the LORETA time series was used to compute the instantaneous phase differences between all pairs of Brodmann areas. Phase shift and lock durations were calculated based on the 1st and 2nd derivatives of the time series of phase differences. Results: Phase shift duration exhibited three discrete modes at approximately: (1) 25 ms, (2) 50 ms, and (3) 65 ms. Phase lock durations were present primarily at: (1) 300–350 ms and (2) 350–450 ms. Phase shift and lock durations were inversely related and exhibited an exponential change with distance between Brodmann areas. Conclusions: The results are explained by local neural packing density of network hubs and an exponential decrease in connections with distance from a hub. The results are consistent with a discrete temporal model of brain function where anatomical hubs behave like a “shutter” that opens and closes at specific durations as nodes of a network, giving rise to temporarily phase locked clusters of neurons for specific durations. PMID:25100976
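The Hilbert-transform step, computing instantaneous phase differences between two band-limited signals, can be sketched as follows, with synthetic 2 Hz (delta-band) sinusoids standing in for the LORETA time series:

```python
import numpy as np
from scipy.signal import hilbert

def phase_difference(x, y):
    """Instantaneous phase difference between two signals, wrapped to
    (-pi, pi], via the analytic signal from the Hilbert transform."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.angle(np.exp(1j * dphi))

fs = 200.0                                  # Hz (illustrative sampling rate)
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 2 * t)               # 2 Hz, delta-band surrogate
y = np.sin(2 * np.pi * 2 * t - np.pi / 2)   # same rhythm, 90 degrees behind
d = phase_difference(x, y)
mid = d[len(d) // 4 : -len(d) // 4]         # discard transform edge effects
```

Thresholding the derivative of such a phase-difference series is what separates "shift" from "lock" segments in the abstract's analysis.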
Synthesis, characterization and optical properties of NH4Dy(PO3)4
NASA Astrophysics Data System (ADS)
Chemingui, S.; Ferhi, M.; Horchani-Naifer, K.; Férid, M.
2014-09-01
Polycrystalline powders of NH4Dy(PO3)4 polyphosphate have been grown by the flux method. This compound was found to be isotypic with NH4Ce(PO3)4 and RbHo(PO3)4. It crystallizes in the monoclinic space group P21/n with unit cell parameters a=10.474(6) Å, b=9.011(4) Å, c=10.947(7) Å and β=106.64(3)°. The title compound transforms to the triphosphate Dy(PO3)3 after calcination at 800 °C. Powder X-ray diffraction, infrared and Raman spectroscopies and differential thermal analysis have been used to identify these materials. The spectroscopic properties have been investigated through absorption, excitation and emission spectra and decay curves of the Dy3+ ion in both compounds at room temperature. The emission spectra show the characteristic emission bands of Dy3+ in the two compounds, before and after calcination. The integrated emission intensity ratios of the yellow to blue (IY/IB) transitions and the chromaticity properties have been determined from the emission spectra. The decay curves are found to be double-exponential. The non-exponential behavior of the decay rates is related to resonant energy transfer as well as cross-relaxation between donor and acceptor Dy3+ ions. The determined properties have been discussed as a function of the crystal structure of both compounds. They reveal that NH4Dy(PO3)4 is promising for white light generation while Dy(PO3)3 is a potential candidate for field emission display (FED) and plasma display panel (PDP) devices.
Cai, Zuowei; Huang, Lihong; Zhang, Lingling
2015-05-01
This paper investigates the problem of exponential synchronization of time-varying delayed neural networks with discontinuous neuron activations. Under the extended Filippov differential inclusion framework, by designing discontinuous state-feedback controller and using some analytic techniques, new testable algebraic criteria are obtained to realize two different kinds of global exponential synchronization of the drive-response system. Moreover, we give the estimated rate of exponential synchronization which depends on the delays and system parameters. The obtained results extend some previous works on synchronization of delayed neural networks not only with continuous activations but also with discontinuous activations. Finally, numerical examples are provided to show the correctness of our analysis via computer simulations. Our method and theoretical results have a leading significance in the design of synchronized neural network circuits involving discontinuous factors and time-varying delays. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, Y.; Horwitz, L. P.; Eisenberg, E.
We discuss the quantum Lax-Phillips theory of scattering and unstable systems. In this framework, the decay of an unstable system is described by a semigroup. The spectrum of the generator of the semigroup corresponds to the singularities of the Lax-Phillips S-matrix. In the case of discrete (complex) spectrum of the generator of the semigroup, associated with resonances, the decay law is exactly exponential. The states corresponding to these resonances (eigenfunctions of the generator of the semigroup) lie in the Lax-Phillips Hilbert space, and therefore all physical properties of the resonant states can be computed. We show that the Lax-Phillips S-matrix is unitarily related to the S-matrix of standard scattering theory by a unitary transformation parametrized by the spectral variable σ of the Lax-Phillips theory. Analytic continuation in σ has some of the properties of a method developed some time ago for application to dilation analytic potentials. We work out an illustrative example using a Lee-Friedrichs model for the underlying dynamical system.
Visually Lossless JPEG 2000 for Remote Image Browsing
Oh, Han; Bilgin, Ali; Marcellin, Michael
2017-01-01
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
The influence of Ce doping of titania on the photodegradation of phenol.
Martin, Marcela V; Villabrille, Paula I; Rosso, Janina A
2015-09-01
Pure and cerium-doped [0.05, 0.1, 0.3, 0.5, and 1.0 Ce nominal atomic % (at.%)] TiO2 was synthesized by the sol-gel method. The obtained catalysts were characterized by X-ray diffraction (XRD), UV-visible diffuse reflectance spectroscopy (DRS), Raman, and BET surface area measurement. The photocatalytic activity of the synthesized samples for the oxidative degradation of phenol in aqueous suspension was investigated. The content of Ce in the catalysts increases both the transition temperature for the anatase-to-rutile phase transformation and the specific surface area, and decreases the crystallite size of the anatase phase, the crystallinity, and the band gap energy. The material with the highest efficiency corresponds to 0.1 Ce nominal at.%. Under irradiation with 350 nm lamps, the degradation of phenol could be described as an exponential trend, with an apparent rate constant of (9.1 ± 0.6) × 10⁻³ s⁻¹ (r² = 0.98). Hydroquinone was identified as the main intermediate.
Detection and measurement of the intracellular calcium variation in follicular cells.
Herrera-Navarro, Ana M; Terol-Villalobos, Iván R; Jiménez-Hernández, Hugo; Peregrina-Barreto, Hayde; Gonzalez-Barboza, José-Joel
2014-01-01
This work presents a new method for measuring the variation of intracellular calcium in follicular cells. The proposal consists of two stages: (i) detection of the cells' nuclei and (ii) analysis of the fluorescence variations. The first stage is performed via a modified watershed transformation, where the labeling process is controlled. The detection process uses the contours of the cells as descriptors, which are enhanced with a morphological filter that homogenizes the luminance variation of the image. In the second stage, the fluorescence variations are modeled as an exponentially decreasing function, where the fluorescence variations are highly correlated with the changes of intracellular free Ca(2+). Additionally, a new morphological filter, called the medium reconstruction process, is introduced, which helps to enhance the data for the modeling process. This filter exploits the undermodeling and overmodeling properties of reconstruction operators, such that it preserves the structure of the original signal. Finally, an experimental process shows evidence of the capabilities of the proposal.
NASA Astrophysics Data System (ADS)
Gutiérrez, J. M.; Primo, C.; Rodríguez, M. A.; Fernández, J.
2008-02-01
We present a novel approach to characterize and graphically represent the spatiotemporal evolution of ensembles using a simple diagram. To this aim we analyze the fluctuations obtained as differences between each member of the ensemble and the control. The lognormal character of these fluctuations suggests a characterization in terms of the first two moments of the logarithmic transformed values. On one hand, the mean is associated with the exponential growth in time. On the other hand, the variance accounts for the spatial correlation and localization of fluctuations. In this paper we introduce the MVL (Mean-Variance of Logarithms) diagram to intuitively represent the interplay and evolution of these two quantities. We show that this diagram uncovers useful information about the spatiotemporal dynamics of the ensemble. Some universal features of the diagram are also described, associated either with the nonlinear system or with the ensemble method and illustrated using both toy models and numerical weather prediction systems.
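A single point of the MVL diagram is just the mean and variance, taken over grid points, of the log-transformed fluctuations. A toy sketch with lognormal surrogate fluctuations whose amplitude grows exponentially in time, so the log-mean grows linearly:

```python
import numpy as np

rng = np.random.default_rng(2)

def mvl_point(ensemble, control):
    """One MVL-diagram point: mean and variance, over grid points, of the
    log of the absolute fluctuation |member - control|."""
    logs = np.log(np.abs(ensemble - control))
    return logs.mean(), logs.var()

# Lognormal surrogate fluctuations: the amplitude grows exponentially with
# the time step, so the mean of the logs grows linearly while the variance
# (a proxy for spatial correlation/localization) stays fixed.
n_grid, growth = 10000, 0.5
pts = [mvl_point(rng.lognormal(mean=growth * step, sigma=1.0, size=n_grid), 0.0)
       for step in (1, 2, 3)]
```

Plotting variance against mean for successive times traces the trajectory that the paper reads off the diagram.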
Molecular dynamics study of strain-induced diffusivity of nitrogen in pure iron nanocrystalline
NASA Astrophysics Data System (ADS)
Mohammadzadeh, Roghayeh; Razmara, Naiyer; Razmara, Fereshteh
2016-12-01
In the present study, the diffusion of nitrogen in nanocrystalline pure iron under strain has been investigated by Molecular Dynamics (MD). The interactions between particles are modeled using the Modified Embedded Atom Method (MEAM). The Mean Square Displacement (MSD) of nitrogen in the iron structure under strain is calculated. Strain is applied along the [112̄0] and [0001] directions in both tension and compression. The activation energy and pre-exponential diffusion factor for nitrogen diffusion are comparatively high along the [0001] direction of the compressed iron structure. The strain-induced diffusion coefficient at 973 K under a compression rate of 0.001 Å/ps along the [0001] direction is about 6.72 × 10⁻¹⁴ m²/s. The estimated activation energy of nitrogen under compression along the [0001] direction is 12.39 kcal/mol. The higher activation energy might be due to the fact that the system transforms into a denser state when compressive stress is applied.
Calculation of effective penetration depth in X-ray diffraction for pharmaceutical solids.
Liu, Jodi; Saw, Robert E; Kiang, Y-H
2010-09-01
The use of the glancing incidence X-ray diffraction configuration to depth profile surface phase transformations is of interest to pharmaceutical scientists. The Parratt equation has been used to depth profile phase changes in pharmaceutical compacts. However, it was derived to calculate 1/e penetration at glancing incident angles slightly below the critical angle of condensed matter and is, therefore, applicable to surface studies of materials such as single crystalline nanorods and metal thin films. When the depth of interest is 50–200 μm into the surface, which is typical for pharmaceutical solids, the 1/e penetration depth, or skin depth, can be directly calculated from an exponential absorption law without utilizing the Parratt equation. In this work, we developed a more relevant method to define X-ray penetration depth based on the signal detection limits of the X-ray diffractometer. Our definition of effective penetration depth was empirically verified using bilayer compacts of varying known thicknesses of mannitol and lactose.
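For a symmetric Bragg reflection, the exponential absorption law gives the depth above which a chosen fraction of the diffracted intensity originates, which turns a detection limit into an effective depth in one line. A sketch (the absorption coefficient and the 95% fraction are illustrative choices, not the paper's calibration):

```python
import numpy as np

def penetration_depth(mu, theta_deg, fraction=0.95):
    """Depth from which `fraction` of the diffracted intensity originates
    for a symmetric Bragg reflection: the beam travels t/sin(theta) in and
    out, so G(t) = 1 - exp(-2*mu*t/sin(theta)); invert for t."""
    theta = np.radians(theta_deg)
    return -np.log(1.0 - fraction) * np.sin(theta) / (2.0 * mu)

# Illustrative linear absorption coefficient for a low-Z organic solid
mu = 10.0                                            # cm^-1
t_um = penetration_depth(mu, theta_deg=10.0) * 1e4   # cm -> micrometres
```

At these values the depth lands in the hundreds of micrometres, the regime the abstract argues is typical for pharmaceutical compacts.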
NASA Astrophysics Data System (ADS)
Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric
2014-08-01
In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
Exponential Nutrient Loading as a Means to Optimize Bareroot Nursery Fertility of Oak Species
Zonda K. D. Birge; Douglass F. Jacobs; Francis K. Salifu
2006-01-01
Conventional fertilization in nursery culture of hardwoods may involve supply of equal fertilizer doses at regularly spaced intervals during the growing season, which may create a surplus of available nutrients in the beginning and a deficiency in nutrient availability by the end of the growing season. A method of fertilization termed “exponential nutrient loading” has...
A Spectral Lyapunov Function for Exponentially Stable LTV Systems
NASA Technical Reports Server (NTRS)
Zhu, J. Jim; Liu, Yong; Hang, Rui
2010-01-01
This paper presents the formulation of a Lyapunov function for an exponentially stable linear time-varying (LTV) system using a well-defined PD-spectrum and the associated PD-eigenvectors. It provides a bridge between the first and second methods of Lyapunov for stability assessment, and will find significant applications in the analysis and control law design for LTV systems and linearizable nonlinear time-varying systems.
Redshift data and statistical inference
NASA Technical Reports Server (NTRS)
Newman, William I.; Haynes, Martha P.; Terzian, Yervant
1994-01-01
Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals, or bins, than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimates of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.
O (a) improvement of 2D N = (2 , 2) lattice SYM theory
NASA Astrophysics Data System (ADS)
Hanada, Masanori; Kadoh, Daisuke; Matsuura, So; Sugino, Fumihiko
2018-04-01
We perform a tree-level O (a) improvement of two-dimensional N = (2 , 2) supersymmetric Yang-Mills theory on the lattice, motivated by the fast convergence in numerical simulations. The improvement respects an exact supersymmetry Q which is needed for obtaining the correct continuum limit without a parameter fine tuning. The improved lattice action is given within a milder locality condition in which the interactions are decaying as the exponential of the distance on the lattice. We also prove that the path-integral measure is invariant under the improved Q-transformation.
More on rotations as spin matrix polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtright, Thomas L.
2015-09-15
Any nonsingular function of spin j matrices always reduces to a matrix polynomial of order 2j. The challenge is to find a convenient form for the coefficients of the matrix polynomial. The theory of biorthogonal systems is a useful framework to meet this challenge. Central factorial numbers play a key role in the theoretical development. Explicit polynomial coefficients for rotations expressed either as exponentials or as rational Cayley transforms are considered here. Structural features of the results are discussed and compared, and large j limits of the coefficients are examined.
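The order-2j reduction is quick to verify numerically for spin j = 1: the eigenvalues of J_y are {−1, 0, 1}, so by Cayley-Hamilton the rotation exponential collapses to a quadratic polynomial in J_y with trigonometric coefficients:

```python
import numpy as np
from scipy.linalg import expm

# Spin-1 angular momentum matrix J_y (standard ladder-operator construction)
Jy = np.array([[0, -1j,  0],
               [1j,  0, -1j],
               [0,  1j,  0]]) / np.sqrt(2)

theta = 0.7
# Cayley-Hamilton with eigenvalues {-1, 0, 1}: the rotation reduces to a
# matrix polynomial of order 2j = 2 with trigonometric coefficients.
R_poly = np.eye(3) + 1j * np.sin(theta) * Jy + (np.cos(theta) - 1.0) * (Jy @ Jy)
R_exp = expm(1j * theta * Jy)   # same rotation computed as an exponential
```

The article's contribution is the general closed form of such coefficients for arbitrary j, where biorthogonal systems and central factorial numbers enter.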
NASA Technical Reports Server (NTRS)
Rasquin, J. R.; Estes, M. F. (Inventor)
1973-01-01
A description is given of a device and process for making industrial diamonds. The device is composed of an exponential horn tapering from a large end to a small end, with a copper plate against the large end. A magnetic hammer abuts the copper plate. The copper plate and magnetic hammer function together to create a shock wave at the large end of the horn. As the wave propagates to the small end, the extreme pressure and temperature caused by the wave transforms the graphite, present in an anvil pocket at the small end, into diamonds.
Small-Maturity Asymptotics for the At-The-Money Implied Volatility Slope in Lévy Models
Gerhold, Stefan; Gülüm, I. Cetin; Pinter, Arpad
2016-01-01
We consider the at-the-money (ATM) strike derivative of implied volatility as the maturity tends to zero. Our main results quantify the behaviour of the slope for infinite activity exponential Lévy models including a Brownian component. As auxiliary results, we obtain asymptotic expansions of short maturity ATM digital call options, using Mellin transform asymptotics. Finally, we discuss when the ATM slope is consistent with the steepness of the smile wings, as given by Lee’s moment formula. PMID:27660537
Gradient-based adaptation of general gaussian kernels.
Glasmachers, Tobias; Igel, Christian
2005-10-01
Gradient-based optimization of gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.
Juras, Vladimir; Apprich, Sebastian; Szomolanyi, Pavol; Bieri, Oliver; Deligianni, Xeni; Trattnig, Siegfried
2013-10-01
To compare mono- and bi-exponential T2 analysis in healthy and degenerated Achilles tendons using a recently introduced magnetic resonance variable-echo-time sequence (vTE) for T2 mapping. Ten volunteers and ten patients were included in the study. A variable-echo-time sequence was used with 20 echo times. Images were post-processed with both techniques, mono- and bi-exponential [T2 m, short T2 component (T2 s) and long T2 component (T2 l)]. The number of mono- and bi-exponentially decaying pixels in each region of interest was expressed as a ratio (B/M). Patients were clinically assessed with the Achilles Tendon Rupture Score (ATRS), and these values were correlated with the T2 values. The means for both T2 m and T2 s were statistically significantly different between patients and volunteers; however, for T2 s, the P value was lower. In patients, the Pearson correlation coefficient between ATRS and T2 s was -0.816 (P = 0.007). The proposed variable-echo-time sequence can be successfully used as an alternative method to UTE sequences with some added benefits, such as a short imaging time along with relatively high resolution and minimised blurring artefacts, and minimised susceptibility artefacts and chemical shift artefacts. Bi-exponential T2 calculation is superior to mono-exponential in terms of statistical significance for the diagnosis of Achilles tendinopathy. • Magnetic resonance imaging offers new insight into healthy and diseased Achilles tendons • Bi-exponential T2 calculation in Achilles tendons is more beneficial than mono-exponential • A short T2 component correlates strongly with clinical score • Variable echo time sequences successfully used instead of ultrashort echo time sequences.
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients towards probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are quite established probability models in this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for some alternative sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the aforementioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To contemplate the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data events more closely compared to the conventional models and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
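The closed-form hazard and conditional recurrence probability of the generalized (exponentiated) exponential distribution can be sketched directly from its CDF F(x) = (1 - e^(-λ(x-μ)))^α; the parameter values below are hypothetical illustrations, not the fitted Himalayan values:

```python
import numpy as np

def ge_cdf(x, alpha, lam, mu=0.0):
    """CDF of the exponentiated (generalized) exponential distribution."""
    z = np.clip(x - mu, 0.0, None)
    return (1.0 - np.exp(-lam * z)) ** alpha

def ge_hazard(x, alpha, lam, mu=0.0):
    """Hazard h(x) = f(x) / (1 - F(x)); closed form, no special functions,
    for any (non-integer) shape alpha -- the advantage noted over gamma."""
    z = x - mu
    e = np.exp(-lam * z)
    f = alpha * lam * e * (1.0 - e) ** (alpha - 1.0)
    return f / (1.0 - ge_cdf(x, alpha, lam, mu))

def conditional_prob(t_elapsed, dt, alpha, lam, mu=0.0):
    """P(event within (t, t+dt] | no event up to t)."""
    F0 = ge_cdf(t_elapsed, alpha, lam, mu)
    F1 = ge_cdf(t_elapsed + dt, alpha, lam, mu)
    return (F1 - F0) / (1.0 - F0)

# Hypothetical shape/scale, 17-year elapsed time as in the abstract:
p = conditional_prob(t_elapsed=17.0, dt=10.0, alpha=1.8, lam=0.12)
```

For alpha = 1 the model reduces to the ordinary exponential distribution, which is a useful sanity check on any implementation.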
Phytoplankton productivity in relation to light intensity: A simple equation
Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.
1987-01-01
A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 - e^(-αI)). The parameter α (= I_k^(-1)) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with photosynthetic parameters are calculated. A simplified statistical model (Poisson) of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective simultaneous curve-fitting estimate for photosynthetic efficiency (a), which is less ambiguous than subjective methods: subjective methods assume that a linear region of the P vs. I curve is readily identifiable. Photosynthetic parameters α and a are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered.
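A simultaneous fit of the saturation model P = P_max(1 - e^(-αI)) can be sketched with a simple least-squares scan; the data below are synthetic and noise-free, and the grid-search fit stands in for whatever optimizer the paper actually used:

```python
import numpy as np

def fit_pi(I, P, alphas):
    """Fit P = P_max*(1 - exp(-alpha*I)) by scanning alpha; for each
    alpha the optimal P_max has a closed-form least-squares solution."""
    best = (np.inf, None, None)
    for a in alphas:
        b = 1.0 - np.exp(-a * I)          # model basis for this alpha
        p_max = (b @ P) / (b @ b)          # linear LSQ for the amplitude
        rss = np.sum((P - p_max * b) ** 2)
        if rss < best[0]:
            best = (rss, p_max, a)
    return best[1], best[2]

I = np.linspace(1.0, 400.0, 30)                  # quantum-flux density
P = 10.0 * (1.0 - np.exp(-0.02 * I))             # synthetic P-I data
p_max, alpha = fit_pi(I, P, np.linspace(0.001, 0.1, 1000))
# photosynthetic efficiency a = p_max * alpha; saturation onset I_k = 1/alpha
```

Fitting both parameters simultaneously avoids the subjective step of deciding by eye where the "linear region" of the P vs. I curve ends.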
A fast and accurate method for perturbative resummation of transverse momentum-dependent observables
NASA Astrophysics Data System (ADS)
Kang, Daekyoung; Lee, Christopher; Vaidya, Varun
2018-04-01
We propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q T spectra of gauge bosons ( γ ∗, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.
NASA Astrophysics Data System (ADS)
Hardiyanti, Y.; Haekal, M.; Waris, A.; Haryanto, F.
2016-08-01
This research compares the quadratic optimization program for Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) with the Computational Environment for Radiotherapy Research (CERR) software. The treatment plans used 9 and 13 beams. The case used an energy of 6 MV with a Source Skin Distance (SSD) of 100 cm from the target volume. Dose calculation used the Quadratic Infinite Beam (QIB) method from CERR. CERR was used in a comparison study between the Gauss primary threshold method and the Gauss primary exponential method. In the lung cancer case, threshold values of 0.01 and 0.004 were used. The dose distributions were analysed in the form of DVHs from CERR. When the dose calculation used the exponential method with 9 beams, the maximum dose distributions were obtained on the Planning Target Volume (PTV), Clinical Target Volume (CTV), Gross Tumor Volume (GTV), liver, and skin. When the dose calculation used the threshold method with 13 beams, the maximum dose distributions were obtained on the PTV, GTV, heart, and skin.
Umari, A.M.; Gorelick, S.M.
1986-01-01
It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without 'marching' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time-marching method increase with the number of nodes in the discretized problem. The potentially greater accuracy of the META method, and the associated greater reliability through use of the matrix condition number, have to be weighed against the increased computational and storage requirements of this approach as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of time steps used in the latter. A numerical example illustrates application of the META method to a sample groundwater flow problem.
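The META idea can be sketched on a space-discretized 1D diffusion problem du/dt = A u: leave time continuous and jump directly to any t via the matrix exponential. The grid, coefficients and times below are arbitrary illustrative choices:

```python
import numpy as np

# Symmetric tridiagonal diffusion operator (interior nodes, zero boundaries)
n, D, dx = 20, 1.0, 0.1
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * D / dx**2

u0 = np.zeros(n)
u0[n // 2] = 1.0                       # initial concentration spike

def advance(A, u0, t):
    """u(t) = exp(A t) u0, via eigendecomposition (valid here: A symmetric,
    matching the paper's symmetric-matrix algorithm)."""
    lam, V = np.linalg.eigh(A)
    return V @ (np.exp(lam * t) * (V.T @ u0))

u_one_jump = advance(A, u0, 0.05)                       # no time marching
u_two_steps = advance(A, advance(A, u0, 0.025), 0.025)  # semigroup check
```

The semigroup property exp(A(t1+t2)) = exp(A t2) exp(A t1) is why a single jump to the target time reproduces what time marching would give, without intermediate steps or step-size error.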
Olsson, Martin A; Söderhjelm, Pär; Ryde, Ulf
2016-06-30
In this article, the convergence of quantum mechanical (QM) free-energy simulations based on molecular dynamics simulations at the molecular mechanics (MM) level has been investigated. We have estimated relative free energies for the binding of nine cyclic carboxylate ligands to the octa-acid deep-cavity host, including the host, the ligand, and all water molecules within 4.5 Å of the ligand in the QM calculations (158-224 atoms). We use single-step exponential averaging (ssEA) and the non-Boltzmann Bennett acceptance ratio (NBB) methods to estimate QM/MM free energy with the semi-empirical PM6-DH2X method, both based on interaction energies. We show that ssEA with cumulant expansion gives a better convergence and uses half as many QM calculations as NBB, although the two methods give consistent results. With 720,000 QM calculations per transformation, QM/MM free-energy estimates with a precision of 1 kJ/mol can be obtained for all eight relative energies with ssEA, showing that this approach can be used to calculate converged QM/MM binding free energies for realistic systems and large QM partitions. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
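The ssEA estimator at the core of such QM/MM corrections is the Zwanzig exponential-averaging formula, with the second-order cumulant expansion as its Gaussian approximation. The sketch below uses synthetic Gaussian energy differences standing in for the PM6-DH2X interaction energies; all numbers are illustrative:

```python
import numpy as np

kT = 2.494           # kJ/mol at ~300 K
beta = 1.0 / kT

rng = np.random.default_rng(1)
# Hypothetical MM -> QM/MM interaction-energy differences sampled along
# an MM trajectory (Gaussian here purely for illustration):
dE = rng.normal(5.0, 1.5, 50_000)    # kJ/mol

# Single-step exponential averaging (Zwanzig):
dA_ea = -kT * np.log(np.mean(np.exp(-beta * dE)))

# Second-order cumulant expansion (exact when dE is Gaussian):
dA_cum = dE.mean() - 0.5 * beta * dE.var()
```

For Gaussian-distributed energy differences the two estimators agree; in practice the cumulant form converges with fewer samples because it avoids the heavy-tailed exponential average, which matches the convergence advantage reported for ssEA with cumulant expansion.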
One-dimensional backreacting holographic superconductors with exponential nonlinear electrodynamics
NASA Astrophysics Data System (ADS)
Ghotbabadi, B. Binaei; Zangeneh, M. Kord; Sheykhi, A.
2018-05-01
In this paper, we investigate the effects of nonlinear exponential electrodynamics as well as backreaction on the properties of one-dimensional s-wave holographic superconductors. We carry out our study both analytically and numerically: in the analytical study we employ the Sturm-Liouville method, while in the numerical approach we use the shooting method. We obtain a relation between the critical temperature and chemical potential analytically. Our results show good agreement between the analytical and numerical methods. We observe that increasing the strength of both the nonlinearity and backreaction parameters makes the formation of condensation in the black hole background harder and lowers the critical temperature. These results are consistent with those obtained for two-dimensional s-wave holographic superconductors.
A spatial scan statistic for survival data based on Weibull distribution.
Bhatt, Vijaya; Tiwari, Neeraj
2014-05-20
The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on Weibull distribution. It may also be used for other survival distributions, such as exponential, gamma, and log normal. The proposed method is applied on the survival data of tuberculosis patients for the years 2004-2005 in Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Cao, Jinde; Song, Qiankun
2006-07-01
In this paper, the exponential stability problem is investigated for a class of Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. By using analysis methods, inequality techniques and the properties of an M-matrix, several novel sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point are derived. Moreover, the exponential convergence rate is estimated. The obtained results are less restrictive than those given in the earlier literature, and the assumptions of boundedness and differentiability of the activation functions and differentiability of the time-varying delays are removed. Two examples with their simulations are given to show the effectiveness of the obtained results.
Exponentially Stabilizing Robot Control Laws
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1990-01-01
A new class of exponentially stabilizing laws for joint-level control of robotic manipulators is introduced. In the case of set-point control, the approach offers the simplicity of a proportional/derivative control architecture. In the case of tracking control, the approach provides several important alternatives to the computed-torque method with respect to computational requirements and convergence. The new control laws can be modified in a simple fashion to obtain asymptotically stable adaptive control when the robot model and/or payload mass properties are unknown.
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
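The removable-singularity issue the abstract describes can be shown on the simplest divided difference of the exponential, (e^x - 1)/x: near x = 0 the direct formula loses significance in floating point, so a truncated series is used instead. This sketch illustrates the general technique only, not the paper's specific X-IVAS formulas:

```python
import numpy as np

def phi1(x, tol=1e-2):
    """First divided difference of exp at {x, 0}, i.e. (e^x - 1)/x.
    Inside |x| < tol, where the direct formula suffers catastrophic
    cancellation, a truncated Taylor series is evaluated instead."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < tol
    out = np.empty_like(x)
    xs = x[small]
    # series: 1 + x/2 + x^2/6 + x^3/24 + x^4/120  (next term ~ x^5/720)
    out[small] = 1 + xs/2 + xs**2/6 + xs**3/24 + xs**4/120
    xl = x[~small]
    out[~small] = np.expm1(xl) / xl
    return out
```

Choosing the switch-over radius so that the truncated series error stays below machine precision is exactly the kind of "optimal series approximation of divided differences" the paper formalizes for higher-order differences.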
Bravyi-Kitaev Superfast simulation of electronic structure on a quantum computer.
Setia, Kanav; Whitfield, James D
2018-04-28
Present quantum computers often work with distinguishable qubits as their computational units. In order to simulate indistinguishable fermionic particles, it is first required to map the fermionic state to the state of the qubits. The Bravyi-Kitaev Superfast (BKSF) algorithm can be used to accomplish this mapping. The BKSF mapping has connections to quantum error correction and opens the door to new ways of understanding fermionic simulation in a topological context. Here, we present the first detailed exposition of the BKSF algorithm for molecular simulation. We provide the BKSF-transformed qubit operators and report on our implementation of the BKSF fermion-to-qubit transform in OpenFermion. In this initial study of the hydrogen molecule, we have compared the BKSF, Jordan-Wigner and Bravyi-Kitaev transforms under the Trotter approximation. The gate count to implement BKSF is lower than for Jordan-Wigner but higher than for Bravyi-Kitaev. We considered different orderings of the exponentiated terms and found lower Trotter errors than those previously reported for the Jordan-Wigner and Bravyi-Kitaev algorithms. These results open the door to the further study of the BKSF algorithm for quantum simulation.
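The Trotter approximation underlying that comparison can be sketched on the smallest non-commuting example, two Pauli terms; this is a generic illustration with arbitrary parameters, not the molecular Hamiltonian of the paper:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def exp_pauli(P, c):
    """exp(-i c P) for a Pauli matrix P (uses P^2 = I)."""
    return np.cos(c) * I2 - 1j * np.sin(c) * P

def exp_h(H, t):
    """Exact exp(-i H t) for Hermitian H via eigendecomposition."""
    lam, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * lam * t)) @ V.conj().T

def trotter(t, n):
    """First-order Trotter product (e^{-iXt/n} e^{-iZt/n})^n."""
    step = exp_pauli(X, t / n) @ exp_pauli(Z, t / n)
    U = np.eye(2, dtype=complex)
    for _ in range(n):
        U = step @ U
    return U

t = 1.0
exact = exp_h(X + Z, t)
err = lambda n: np.linalg.norm(trotter(t, n) - exact)
```

The error comes from the commutator [X, Z] and shrinks as the number of steps grows; reordering the exponentiated terms changes the effective commutator structure, which is why term ordering affects the Trotter error in the molecular setting.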
Weigert, Claudia; Steffler, Fabian; Kurz, Tomas; Shellhammer, Thomas H.; Methner, Frank-Jürgen
2009-01-01
The measurement of yeast's intracellular pH (ICP) is a proven method for determining yeast vitality. Vitality describes the condition or health of viable cells as opposed to viability, which defines living versus dead cells. In contrast to fluorescence photometric measurements, which show only average ICP values of a population, flow cytometry allows the presentation of an ICP distribution. By examining six repeated propagations with three separate growth phases (lag, exponential, and stationary), the ICP method previously established for photometry was transferred successfully to flow cytometry by using the pH-dependent fluorescent probe 5,6-carboxyfluorescein. The correlation between the two methods was good (r2 = 0.898, n = 18). With both methods it is possible to track the course of growth phases. Although photometry did not yield significant differences between exponentially and stationary phases (P = 0.433), ICP via flow cytometry did (P = 0.012). Yeast in an exponential phase has a unimodal ICP distribution, reflective of a homogeneous population; however, yeast in a stationary phase displays a broader ICP distribution, and subpopulations could be defined by using the flow cytometry method. In conclusion, flow cytometry yielded specific evidence of the heterogeneity in vitality of a yeast population as measured via ICP. In contrast to photometry, flow cytometry increases information about the yeast population's vitality via a short measurement, which is suitable for routine analysis. PMID:19581482
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Eric L.; Orsat, Valerie; Shah, Manesh B
2012-01-01
Systems biology and bioprocess technology can be better understood using shotgun proteomics as a monitoring system during fermentation. We demonstrated a shotgun proteomic method to monitor the temporal yeast proteome in early, middle and late exponential phases. Our study identified a total of 1389 proteins combining all 2D-LC-MS/MS runs. The temporal Saccharomyces cerevisiae proteome was enriched with proteolysis, radical detoxification, translation, one-carbon metabolism, glycolysis and TCA cycle. Heat shock proteins and proteins associated with oxidative stress response were found throughout the exponential phase. The most abundant proteins observed were translation elongation factors, ribosomal proteins, chaperones and glycolytic enzymes. The high abundance of the H-protein of the glycine decarboxylase complex (Gcv3p) indicated the availability of glycine in the environment. We observed differentially expressed proteins, and the proteins induced at mid-exponential phase were involved in ribosome biogenesis, mitochondrial DNA binding/replication and transcriptional activation. Induction of tryptophan synthase (Trp5p) indicated the abundance of tryptophan during the fermentation. As fermentation progressed toward late exponential phase, a decrease in cell proliferation was implied from the repression of ribosomal proteins, transcription coactivators, methionine aminopeptidase and translation-associated proteins.
Rapid cell death in Xanthomonas campestris pv. glycines.
Gautam, Satyendra; Sharma, Arun
2002-04-01
Xanthomonas campestris pv. glycines strain AM2 (XcgAM2), the etiological agent of bacterial pustule disease of soybean, exhibited post-exponential rapid cell death (RCD) in LB medium. X. campestris pv. malvacearum NCIM 2310 and X. campestris NCIM 2961 also displayed RCD, though less pronouncedly than XcgAM2. RCD was not observed in Pseudomonas syringae pv. glycines, or Escherichia coli DH5alpha. Incubation of the post-exponential LB-grown XcgAM2 cultures at 4 degrees C arrested the RCD. RCD was also inhibited by the addition of starch during the exponential phase of LB-growing XcgAM2. Protease negative mutants of XcgAM2 were found to be devoid of RCD behavior observed in the wild type XcgAM2. While undergoing RCD, the organism was found to transform to spherical membrane bodies. The presence of membrane bodies was confirmed by using a membrane specific fluorescent label, 1,6-diphenyl 1,3,5-hexatriene (DPH), and also by visualizing these structures under microscope. The membrane bodies of XcgAM2 were found to contain DNA, which was devoid of the indigenous plasmids of the organism. The membrane bodies were found to bind annexin V indicative of the externalization of membrane phosphatidyl serine. Nicking of DNA in XcgAM2 cultures undergoing RCD in LB medium was also detected using a TUNEL assay. The RCD in XcgAM2 appeared to have features similar to the programmed cell death in eukaryotes.
Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.
2016-01-01
We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM) and particular attention has been paid to model the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
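The "evidence carried by every detected decay event" idea can be sketched in its simplest form: for an ideal mono-exponential decay with no truncation or instrument response, the per-photon likelihood gives a closed-form maximum-likelihood lifetime. This is a toy illustration, not the paper's full Bayesian time-domain model:

```python
import numpy as np

rng = np.random.default_rng(2)
tau_true = 2.5                              # ns, hypothetical lifetime
t = rng.exponential(tau_true, 100_000)      # photon arrival times

# ML estimate for an untruncated mono-exponential decay: the mean
# arrival time (each detected photon contributes to the likelihood).
tau_mle = t.mean()

def loglik_mono(tau, t):
    """Per-dataset log-likelihood, the starting point for Bayesian
    posteriors and for model selection (mono- vs bi-exponential)."""
    return -t.size * np.log(tau) - t.sum() / tau
```

Real TCSPC data add a finite measurement window and an instrument response function, which is precisely where the paper's more careful system model pays off over this idealized estimator.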
NASA Astrophysics Data System (ADS)
Wei, Xixiong; Deng, Wanling; Fang, Jielin; Ma, Xiaoyu; Huang, Junkai
2017-10-01
A physics-based, straightforward extraction technique for interface and bulk density of states in metal oxide semiconductor thin film transistors (TFTs) is proposed by using the capacitance-voltage (C-V) characteristics. The interface trap density distribution with energy has been extracted from the analysis of the C-V characteristics. Using the obtained interface state distribution, the bulk trap density has been determined. With this method, it is found that for the interface trap density, the deep state density near the mid-gap is approximately constant and the tail-state density increases exponentially with energy; the bulk trap density is a superposition of exponential deep states and exponential tail states. The validity of the extraction is verified by comparisons with the measured current-voltage (I-V) characteristics and the simulation results from the technology computer-aided design (TCAD) model. This extraction method requires no numerical iteration and is simple, fast and accurate. Therefore, it is very useful for TFT device characterization.
The digital transformation of health care.
Coile, R C
2000-01-01
The arrival of the Internet offers the opportunity to fundamentally reinvent medicine and health care delivery. The "e-health" era is nothing less than the digital transformation of the practice of medicine, as well as the business side of the health industry. Health care is only now arriving in the "Information Economy." The Internet is the next frontier of health care. Health care consumers are flooding into cyberspace, and an Internet-based industry of health information providers is springing up to serve them. Internet technology may rank with antibiotics, genetics, and computers as among the most important changes for medical care delivery. Utilizing e-health strategies will expand exponentially in the next five years, as America's health care executives shift to applying IS/IT (information systems/information technology) to the fundamental business and clinical processes of the health care enterprise. Internet-savvy physician executives will provide a bridge between medicine and management in the adoption of e-health technology.
A generalized non-Gaussian consistency relation for single field inflation
NASA Astrophysics Data System (ADS)
Bravo, Rafael; Mooij, Sander; Palma, Gonzalo A.; Pradenas, Bastián
2018-05-01
We show that a perturbed inflationary spacetime, driven by a canonical single scalar field, is invariant under a special class of coordinate transformations together with a field reparametrization of the curvature perturbation in co-moving gauge. This transformation may be used to derive the squeezed limit of the 3-point correlation function of the co-moving curvature perturbations valid in the case that these do not freeze after horizon crossing. This leads to a generalized version of Maldacena's non-Gaussian consistency relation in the sense that the bispectrum squeezed limit is completely determined by spacetime diffeomorphisms. Just as in the case of the standard consistency relation, this result may be understood as the consequence of how long-wavelength modes modulate those of shorter wavelengths. This relation allows one to derive the well known violation to the consistency relation encountered in ultra slow-roll, where curvature perturbations grow exponentially after horizon crossing.
NASA Astrophysics Data System (ADS)
Imran, M. A.; Riaz, M. B.; Shah, N. A.; Zafar, A. A.
2018-03-01
The aim of this article is to investigate the unsteady natural convection flow of a Maxwell fluid with fractional derivative over an exponentially accelerated infinite vertical plate. Moreover, slip condition, radiation, MHD and Newtonian heating effects are also considered. A modern definition of the fractional derivative operator recently introduced by Caputo and Fabrizio has been used to formulate the fractional model. Semi-analytical solutions of the dimensionless problem are obtained by employing Stehfest's and Tzou's algorithms to find the inverse Laplace transforms for the temperature and velocity fields. Temperature and rate of heat transfer for non-integer and integer order derivatives are computed and reduced to some known solutions from the literature. Finally, in order to gain insight into the physical significance of the considered problem with regard to the velocity and Nusselt number, some graphical illustrations are made using Mathcad software. As a result, in a comparison between Maxwell and viscous fluids (fractional and ordinary), we found that viscous fluids (fractional and ordinary) are swifter than Maxwell fluids (fractional and ordinary).
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
Steyer, Andrew J.; Van Vleck, Erik S.
2018-04-13
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.
2018-04-01
The aims of this research are to model hotspots and to forecast the 2017 hotspots in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt's exponential smoothing, Holt's additive damped trend method, the Holt-Winters additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performs better than Holt's exponential smoothing. The hotspot models obtained with the Box-Jenkins method were the Autoregressive Integrated Moving Average models ARIMA(1,1,0), ARIMA(0,2,1) and ARIMA(0,1,0). Comparing the results from all methods on the basis of the Root Mean Squared Error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE; it was therefore used to forecast the number of hotspots. The forecasts indicate that the hotspot pattern tends to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remains stationary in East Kutai.
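As a minimal sketch of the RMSE-based model selection described above (not the authors' Kutai hotspot data or their exact implementations), simple exponential smoothing and Holt's linear trend method can be compared on a hypothetical trending series; the series values and smoothing constants below are illustrative assumptions:

```python
import math

def ses_forecasts(y, alpha):
    """One-step-ahead forecasts from simple exponential smoothing."""
    level = y[0]
    f = []
    for obs in y[1:]:
        f.append(level)                          # forecast made before seeing obs
        level = alpha * obs + (1 - alpha) * level
    return f

def holt_forecasts(y, alpha, beta):
    """One-step-ahead forecasts from Holt's linear trend method."""
    level, trend = y[0], y[1] - y[0]
    f = []
    for obs in y[1:]:
        f.append(level + trend)
        new_level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return f

def rmse(actual, pred):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(pred))

y = [12.0, 14.0, 15.0, 17.0, 20.0, 21.0, 24.0, 26.0, 27.0, 30.0]  # hypothetical counts
scores = {"SES": rmse(y[1:], ses_forecasts(y, 0.5)),
          "Holt": rmse(y[1:], holt_forecasts(y, 0.5, 0.3))}
best = min(scores, key=scores.get)   # smallest RMSE wins, as in the paper
```

On a trending series the trend-aware method is expected to achieve the smaller RMSE and be selected.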
Improved result on stability analysis of discrete stochastic neural networks with time delay
NASA Astrophysics Data System (ADS)
Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng
2009-04-01
This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established via a linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and the benefits of the proposed method.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and has a simple statistical form; its characteristic feature is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution, obtained when the shape parameter equals one. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach, and present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The model describes the likelihood function, followed by a description of the posterior function, point and interval estimates, and estimates of the hazard function and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
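The constant hazard rate and the conjugate Bayesian update for the exponential rate can be sketched as follows. This is an illustration only: the failure rate, prior parameters and data are hypothetical, and the paper's specific non-informative prior and competing-risks structure are not reproduced here.

```python
import math

LAM = 0.5   # hypothetical constant failure rate

def pdf(t):
    return LAM * math.exp(-LAM * t)

def reliability(t):
    return math.exp(-LAM * t)          # R(t) = P(T > t)

def hazard(t):
    return pdf(t) / reliability(t)     # h(t) = f(t) / R(t)

# the defining property: the hazard rate does not depend on t
rates = [hazard(t) for t in (0.1, 1.0, 5.0, 10.0)]

# conjugate Bayesian update for the rate: with a Gamma(a0, b0) prior,
# n failures and total time-on-test T give a Gamma(a0 + n, b0 + T) posterior
a0, b0 = 0.5, 0.0          # illustrative diffuse prior, not the paper's choice
n_fail, total_time = 12, 30.0
posterior_mean = (a0 + n_fail) / (b0 + total_time)
```

The constancy of `rates` is exactly the memoryless property that makes the exponential model tractable in competing-risks calculations.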
Partitioning of monomethylmercury between freshwater algae and water.
Miles, C J; Moye, H A; Phlips, E J; Sargent, B
2001-11-01
Phytoplankton-water monomethylmercury (MeHg) partition constants (KpI) have been determined in the laboratory for two green algae Selenastrum capricornutum and Cosmarium botrytis, the blue-green alga Schizothrix calcicola, and the diatom Thallasiosira spp., algal species that are commonly found in natural surface waters. Two methods were used to determine KpI, the Freundlich isotherm method and the flow-through/dialysis bag method. Both methods yielded KpI values of about 10^6.6 for S. capricornutum and were not significantly different. The KpI for the four algae studied were similar except for Schizothrix, which was significantly lower than S. capricornutum. The KpI for MeHg and S. capricornutum (exponential growth) was not significantly different in systems with predominantly MeHgOH or MeHgCl species. This is consistent with other studies that show metal speciation controls uptake kinetics, but the reactivity with intracellular components controls steady-state concentrations. Partitioning constants determined with exponential and stationary phase S. capricornutum cells at the same conditions were not significantly different, while the partitioning constant for exponential phase, phosphorus-limited cells was significantly lower, suggesting that P-limitation alters the ecophysiology of S. capricornutum sufficiently to impact partitioning, which may then ultimately affect mercury levels in higher trophic species.
ARIMA model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to produce forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three time series are used: the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce better long-term forecasts with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to the next, as in the Exchange Rate series. Conversely, the Exponential Smoothing Method produces better forecasts for the Exchange Rate series, which has a narrow range from one point to the next, but cannot produce a better prediction for a longer forecasting period.
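The three accuracy measures used in the comparison can be sketched directly; the observed and forecast values below are hypothetical placeholders, not the paper's commodity or exchange-rate data:

```python
def mse(actual, forecast):
    """Mean Squared Error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error (requires nonzero actuals)."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mad(actual, forecast):
    """Mean Absolute Deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

actual = [100.0, 102.0, 105.0, 103.0]    # hypothetical observed series
forecast = [98.0, 103.0, 104.0, 106.0]   # hypothetical model forecasts
errors = (mse(actual, forecast), mape(actual, forecast), mad(actual, forecast))
```

Whichever candidate model minimizes these measures on held-out data is preferred, which is the decision rule the study applies per series and horizon.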
Exponential blocking-temperature distribution in ferritin extracted from magnetization measurements
NASA Astrophysics Data System (ADS)
Lee, T. H.; Choi, K.-Y.; Kim, G.-H.; Suh, B. J.; Jang, Z. H.
2014-11-01
We developed a direct method to extract the zero-field zero-temperature anisotropy energy barrier distribution of magnetic particles in the form of a blocking-temperature distribution. The key idea is to modify measurement procedures slightly to make nonequilibrium magnetization calculations (including the time evolution of magnetization) easier. We applied this method to the biomagnetic molecule ferritin and successfully reproduced field-cool magnetization by using the extracted distribution. We find that the resulting distribution is more like an exponential type and that the distribution cannot be correlated simply to the widely known log-normal particle-size distribution. The method also allows us to determine the values of the zero-temperature coercivity and Bloch coefficient, which are in good agreement with those determined from other techniques.
New exponential stability criteria for stochastic BAM neural networks with impulses
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Samidurai, R.; Anthoni, S. M.
2010-10-01
In this paper, we study the global exponential stability of time-delayed stochastic bidirectional associative memory (BAM) neural networks with impulses and Markovian jumping parameters. A generalized activation function is considered, and traditional assumptions on the boundedness, monotonicity and differentiability of activation functions are removed. We obtain a new set of sufficient conditions in terms of linear matrix inequalities which ensures the global exponential stability of the unique equilibrium point for stochastic BAM neural networks with impulses. The Lyapunov function method with the Itô differential rule is employed to achieve the required result. Moreover, a numerical example is provided to show that the proposed result improves the allowable upper bound of delays over some existing results in the literature.
Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.
2001-01-01
The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low-speed longitudinal oscillatory wind tunnel test data of the 0.1-scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and a parameter identification method, the unknown parameters in the exponential functions are estimated. A genetic algorithm is used for the least-squares minimization. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.
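The paper estimates the exponential-function parameters from oscillatory data with a genetic algorithm; as a much simpler illustration of the underlying idea (fitting an exponential model by least squares), a single decay a*exp(-b*t) can be recovered from noise-free synthetic samples by log-linear regression. The data, and the use of a closed-form fit rather than a genetic algorithm, are assumptions for illustration:

```python
import math

# noise-free synthetic samples of a hypothetical decay 2.0 * exp(-0.7 * t)
ts = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
ys = [2.0 * math.exp(-0.7 * t) for t in ts]

# log-linear least squares: ln y = ln a - b t is linear in t
n = len(ts)
logs = [math.log(y) for y in ys]
sx, sy = sum(ts), sum(logs)
sxx = sum(t * t for t in ts)
sxy = sum(t * ly for t, ly in zip(ts, logs))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
a_hat, b_hat = math.exp(intercept), -slope   # recovered amplitude and decay rate
```

With noisy multi-term exponential models (as in the deficiency function), the problem becomes nonconvex, which is why the authors resort to a stochastic search such as a genetic algorithm.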
Using neural networks to represent potential surfaces as sums of products.
Manzhos, Sergei; Carrington, Tucker
2006-11-21
By using exponential activation functions with a neural network (NN) method we show that it is possible to fit potentials to a sum-of-products form. The sum-of-products form is desirable because it reduces the cost of doing the quadratures required for quantum dynamics calculations. It also greatly facilitates the use of the multiconfiguration time-dependent Hartree method. Unlike the potfit product representation algorithm, the new NN approach does not require using a grid of points. It also produces sum-of-products potentials with fewer terms. As the number of dimensions is increased, we expect the advantages of the exponential NN idea to become more significant.
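The reason exponential activations yield a sum-of-products form is the elementary identity exp(w·x) = Π_i exp(w_i x_i): each hidden unit factorizes into one-dimensional functions, so the network output is a sum of products. A numerical check with hypothetical weights and inputs:

```python
import math

# one hidden unit with an exponential activation: exp(w . x)
w = [0.3, -1.2, 0.5]    # hypothetical weights
x = [1.0, 2.0, -0.5]    # hypothetical input coordinates

joint = math.exp(sum(wi * xi for wi, xi in zip(w, x)))

# the same value as a product of one-dimensional factors, one per coordinate
product = 1.0
for wi, xi in zip(w, x):
    product *= math.exp(wi * xi)
```

A network output c_1 exp(w_1·x) + ... + c_K exp(w_K·x) is therefore automatically a K-term sum-of-products, which is the form quadrature-based dynamics methods exploit.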
A nonperturbative light-front coupled-cluster method
NASA Astrophysics Data System (ADS)
Hiller, J. R.
2012-10-01
The nonperturbative Hamiltonian eigenvalue problem for bound states of a quantum field theory is formulated in terms of Dirac's light-front coordinates and then approximated by the exponential-operator technique of the many-body coupled-cluster method. This approximation eliminates any need for the usual approximation of Fock-space truncation. Instead, the exponentiated operator is truncated, and the terms retained are determined by a set of nonlinear integral equations. These equations are solved simultaneously with an effective eigenvalue problem in the valence sector, where the number of constituents is small. Matrix elements can be calculated, with extensions of techniques from standard coupled-cluster theory, to obtain form factors and other observables.
Holder, J P; Benedetti, L R; Bradley, D K
2016-11-01
Single-hit pulse-height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel-plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.
Characterization of aqueous interactions of copper-doped phosphate-based glasses by vapour sorption.
Stähli, Christoph; Shah Mohammadi, Maziar; Waters, Kristian E; Nazhat, Showan N
2014-07-01
Owing to their adjustable dissolution properties, phosphate-based glasses (PGs) are promising materials for the controlled release of bioinorganics, such as copper ions. This study describes a vapour sorption method that allowed for the investigation of the kinetics and mechanisms of aqueous interactions of PGs of the formulation 50P₂O₅-30CaO-(20-x)Na₂O-xCuO (x = 0, 1, 5 and 10 mol.%). Initial characterization was performed using ³¹P magic angle spinning nuclear magnetic resonance (NMR) and attenuated total reflectance-Fourier transform infrared (FTIR) spectroscopy. Increasing CuO content resulted in chemical shifts of the predominant Q² NMR peak and of the (P-O-P)as and (P-O⁻) FTIR absorptions, owing to the higher strength of the P-O-Cu bond compared to P-O-Na. Vapour sorption and desorption were gravimetrically measured in PG powders exposed to variable relative humidity (RH). Sorption was negligible below 70% RH and increased exponentially with RH from 70 to 90%, where it exhibited a negative correlation with CuO content. Vapour sorption in 0% and 1% CuO glasses resulted in phosphate chain hydration and hydrolysis, as evidenced by protonated Q⁰(¹H) and Q¹(¹H) species. Dissolution rates in deionized water showed a linear correlation (R² > 0.99) with vapour sorption. Furthermore, cation release rates could be predicted based on dissolution rates and PG composition. The release of orthophosphate and short polyphosphate species corroborates the action of hydrolysis and was correlated with pH changes. In conclusion, the agreement between vapour sorption and routine characterization techniques in water demonstrates the potential of this method for the study of PG aqueous reactions. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Local yield stress statistics in model amorphous solids
NASA Astrophysics Data System (ADS)
Barbot, Armand; Lerbinger, Matthias; Hernandez-Garcia, Anier; García-García, Reinaldo; Falk, Michael L.; Vandembroucq, Damien; Patinet, Sylvain
2018-03-01
We develop and extend a method presented by Patinet, Vandembroucq, and Falk [Phys. Rev. Lett. 117, 045501 (2016), 10.1103/PhysRevLett.117.045501] to compute the local yield stresses at the atomic scale in model two-dimensional Lennard-Jones glasses produced via differing quench protocols. This technique allows us to sample the plastic rearrangements in a nonperturbative manner for different loading directions on a well-controlled length scale. Plastic activity upon shearing correlates strongly with the locations of low yield stresses in the quenched states. This correlation is higher in more structurally relaxed systems. The distribution of local yield stresses is also shown to strongly depend on the quench protocol: the more relaxed the glass, the higher the local plastic thresholds. Analysis of the magnitude of local plastic relaxations reveals that stress drops follow exponential distributions, justifying the hypothesis of an average characteristic amplitude often conjectured in mesoscopic or continuum models. The amplitude of the local plastic rearrangements increases on average with the yield stress, regardless of the system preparation. The local yield stress varies with the shear orientation tested and strongly correlates with the plastic rearrangement locations when the system is sheared correspondingly. It is thus argued that plastic rearrangements are the consequence of shear transformation zones encoded in the glass structure that possess weak slip planes along different orientations. Finally, we justify the length scale employed in this work and extract the yield threshold statistics as a function of the size of the probing zones. This method makes it possible to derive physically grounded models of plasticity for amorphous materials by directly revealing the relevant details of the shear transformation zones that mediate this process.
Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S
2008-04-11
A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes.
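The exponential mapping from a Lie algebra to its group, which the authors exploit for SO(3) and SE(2), has a well-known closed form for so(3) → SO(3): the Rodrigues formula. The stdlib-only sketch below illustrates only that exp map, not the paper's IUR-matrix or Fokker-Planck machinery:

```python
import math

def hat(w):
    """3x3 skew-symmetric matrix (so(3) element) from a 3-vector."""
    x, y, z = w
    return [[0.0, -z, y],
            [z, 0.0, -x],
            [-y, x, 0.0]]

def matmul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def expm_so3(w):
    """Exponential map so(3) -> SO(3) via the Rodrigues formula:
    exp(K) = I + (sin t / t) K + ((1 - cos t) / t^2) K^2, t = |w|."""
    theta = math.sqrt(sum(c * c for c in w))
    if theta < 1e-12:
        return [[float(i == j) for j in range(3)] for i in range(3)]
    K = hat(w)
    K2 = matmul3(K, K)
    a = math.sin(theta) / theta
    b = (1.0 - math.cos(theta)) / theta ** 2
    return [[float(i == j) + a * K[i][j] + b * K2[i][j] for j in range(3)]
            for i in range(3)]

R = expm_so3([0.0, 0.0, math.pi / 2])   # quarter turn about the z-axis
```

The result is the expected rotation matrix for a 90° turn about z, confirming that the map lands in SO(3).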
Le, Chi Chip; Wismer, Michael K; Shi, Zhi-Cai; Zhang, Rui; Conway, Donald V; Li, Guoqing; Vachal, Petr; Davies, Ian W; MacMillan, David W C
2017-06-28
Photocatalysis for organic synthesis has experienced exponential growth in the past 10 years. However, the variety of experimental procedures that have been reported to perform photon-based catalyst excitation has hampered the establishment of general protocols to convert visible light into chemical energy. To address this issue, we have designed an integrated photoreactor for enhanced photon capture and catalyst excitation. Moreover, the evaluation of this new reactor in eight photocatalytic transformations that are widely employed in medicinal chemistry settings has confirmed significant performance advantages of this optimized design while enabling a standardized protocol.
Demonstration of a compiled version of Shor's quantum factoring algorithm using photonic qubits.
Lu, Chao-Yang; Browne, Daniel E; Yang, Tao; Pan, Jian-Wei
2007-12-21
We report an experimental demonstration of a compiled version of Shor's algorithm using four photonic qubits. We choose the simplest instance of this algorithm, that is, factorization of N=15 in the case that the period r=2, and exploit a simplified linear optical network to coherently implement the quantum circuits of the modular exponential execution and the semiclassical quantum Fourier transformation. During this computation, genuine multiparticle entanglement is observed, which well supports its quantum nature. This experiment represents an essential step toward full realization of Shor's algorithm and scalable linear optics quantum computation.
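The classical skeleton of this instance can be sketched as follows. The quantum circuit's job (modular exponentiation plus semiclassical QFT) is to find the order r of the chosen base; here that order is computed classically instead, and the base a = 4 is an assumption for illustration (one of the bases with order 2 modulo 15, which is the r = 2 case the experiment compiles):

```python
from math import gcd

N, a = 15, 4   # hypothetical base with order r = 2 modulo 15

# classically compute the order r of a modulo N; in Shor's algorithm this
# is the step performed by the quantum period-finding circuit
r = 1
while pow(a, r, N) != 1:
    r += 1

# classical post-processing: for even r, gcd(a^(r/2) +/- 1, N) yields factors
factor1 = gcd(pow(a, r // 2) - 1, N)
factor2 = gcd(pow(a, r // 2) + 1, N)
```

For N = 15 this recovers the factors 3 and 5, which is exactly what the photonic experiment verifies end to end.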
Improving Quantum Gate Simulation using a GPU
NASA Astrophysics Data System (ADS)
Gutierrez, Eladio; Romero, Sergio; Trenas, Maria A.; Zapata, Emilio L.
2008-11-01
Due to the increasing computing power of graphics processing units (GPUs), they are becoming more and more popular for solving general purpose algorithms. As the simulation of quantum computers results in a problem of exponential complexity, it is advisable to perform the computation in parallel, such as on the SIMD multiprocessors present in recent GPUs. In this paper, we focus on an important quantum algorithm, the quantum Fourier transform (QFT), in order to evaluate different parallelization strategies on a novel GPU architecture. Our implementation makes use of the new CUDA software/hardware architecture developed recently by NVIDIA.
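The object being parallelized can be made concrete with a tiny dense construction of the QFT matrix (this is the textbook definition, not the paper's CUDA kernels; a real simulator applies the transform without materializing the matrix):

```python
import cmath

def qft_matrix(n_qubits):
    """Dense matrix of the quantum Fourier transform on n_qubits qubits:
    F[j][k] = omega^(j*k) / sqrt(N), omega a primitive N-th root of unity."""
    N = 2 ** n_qubits
    omega = cmath.exp(2j * cmath.pi / N)
    norm = 1.0 / (N ** 0.5)
    return [[norm * omega ** (j * k) for k in range(N)] for j in range(N)]

F = qft_matrix(2)                 # 4 x 4 QFT on two qubits
dim = len(F)
# unitarity check: F F^dagger should be the identity
FFdag = [[sum(F[i][k] * F[j][k].conjugate() for k in range(dim))
          for j in range(dim)] for i in range(dim)]
```

The dimension N = 2^n is what makes naive simulation exponentially expensive and motivates the GPU parallelization studied in the paper.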
Kothari, Shweta; Singh, Archana; Das, Utpalendu; Sarkar, Diptendra K; Datta, Chhanda; Hazra, Avijit
2017-01-01
Objective: To evaluate the role of the exponential apparent diffusion coefficient (ADC) as a tool for differentiating benign and malignant breast lesions. Patients and Methods: This prospective observational study included 88 breast lesions in 77 patients (between 18 and 85 years of age) who underwent 3T breast magnetic resonance imaging (MRI) including diffusion-weighted imaging (DWI) using b-values of 0 and 800 s/mm² before biopsy. Mean exponential ADC and ADC of benign and malignant lesions obtained from DWI were compared. Receiver operating characteristic (ROC) curve analysis was undertaken to identify any cut-off for exponential ADC and ADC to predict malignancy. A P value of <0.05 was considered statistically significant. Histopathology was taken as the gold standard. Results: According to histopathology, 65 lesions were malignant and 23 were benign. The mean ADC and exponential ADC values of malignant lesions were 0.9526 ± 0.203 × 10⁻³ mm²/s and 0.4774 ± 0.071, respectively, and for benign lesions were 1.48 ± 0.4903 × 10⁻³ mm²/s and 0.317 ± 0.1152, respectively. For both parameters, the differences were highly significant (P < 0.001). A cut-off value of ≤0.0011 mm²/s (P < 0.0001) for ADC provided 92.3% sensitivity and 73.9% specificity, whereas an exponential ADC cut-off value of >0.4 (P < 0.0001) for malignant lesions gave 93.9% sensitivity and 82.6% specificity. The performance of ADC and exponential ADC in distinguishing benign and malignant breast lesions based on the respective cut-offs was comparable (P = 0.109). Conclusion: Exponential ADC can be used as a quantitative adjunct tool for characterizing breast lesions, with sensitivity and specificity comparable to that of ADC. PMID:28744085
Exponential asymptotics of homoclinic snaking
NASA Astrophysics Data System (ADS)
Dean, A. D.; Matthews, P. C.; Cox, S. M.; King, J. R.
2011-12-01
We study homoclinic snaking in the cubic-quintic Swift-Hohenberg equation (SHE) close to the onset of a subcritical pattern-forming instability. Application of the usual multiple-scales method produces a leading-order stationary front solution, connecting the trivial solution to the patterned state. A localized pattern may therefore be constructed by matching between two distant fronts placed back-to-back. However, the asymptotic expansion of the front is divergent, and hence should be truncated. By truncating optimally, such that the resultant remainder is exponentially small, an exponentially small parameter range is derived within which stationary fronts exist. This is shown to be a direct result of the 'locking' between the phase of the underlying pattern and its slowly varying envelope. The locking mechanism remains unobservable at any algebraic order, and can only be derived by explicitly considering beyond-all-orders effects in the tail of the asymptotic expansion, following the method of Kozyreff and Chapman as applied to the quadratic-cubic SHE (Chapman and Kozyreff 2009 Physica D 238 319-54, Kozyreff and Chapman 2006 Phys. Rev. Lett. 97 44502). Exponentially small, but exponentially growing, contributions appear in the tail of the expansion, which must be included when constructing localized patterns in order to reproduce the full snaking diagram. Implicit within the bifurcation equations is an analytical formula for the width of the snaking region. Due to the linear nature of the beyond-all-orders calculation, the bifurcation equations contain an analytically indeterminable constant, estimated in the previous work by Chapman and Kozyreff using a best fit approximation. A more accurate estimate of the equivalent constant in the cubic-quintic case is calculated from the iteration of a recurrence relation, and the subsequent analytical bifurcation diagram compared with numerical simulations, with good agreement.
Auxiliary Parameter MCMC for Exponential Random Graph Models
NASA Astrophysics Data System (ADS)
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
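To make the sampling problem concrete, the simplest possible ERGM (a single edge-count statistic, P(G) ∝ exp(θ · edges(G))) can be sampled with plain single-dyad Metropolis toggles. This is a toy baseline, not the auxiliary parameter algorithm of the paper; θ, the network size and the step counts are illustrative assumptions. For this edge-only model each dyad is independently Bernoulli(e^θ / (1 + e^θ)), which gives an exact target to check against:

```python
import math
import random

random.seed(0)

theta = 1.0                          # hypothetical edge parameter
n_nodes = 20
dyads = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
edges = set()

def metropolis_step():
    """Toggle one uniformly chosen dyad with the Metropolis acceptance rule."""
    d = random.choice(dyads)
    delta = -1 if d in edges else 1              # change in the edge-count statistic
    if random.random() < math.exp(min(0.0, theta * delta)):
        if delta == 1:
            edges.add(d)
        else:
            edges.discard(d)

burn_in, n_steps = 10000, 60000
density_sum, samples = 0.0, 0
for step in range(n_steps):
    metropolis_step()
    if step >= burn_in:
        density_sum += len(edges) / len(dyads)
        samples += 1
avg_density = density_sum / samples   # should approach e / (1 + e) ~ 0.731
```

Models with dependence statistics (triangles, stars) lose this independence and mix far more slowly, which is the regime the auxiliary-parameter MCMC of the paper targets.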
Ulrich, Alexander; Andersen, Kasper R.; Schwartz, Thomas U.
2012-01-01
We present a fast, reliable and inexpensive restriction-free cloning method for seamless DNA insertion into any plasmid without sequence limitation. Exponential megapriming PCR (EMP) cloning requires two consecutive PCR steps and can be carried out in one day. We show that EMP cloning has a higher efficiency than restriction-free (RF) cloning, especially for long inserts above 2.5 kb. EMP further enables simultaneous cloning of multiple inserts. PMID:23300917
Zheng, Lai; Ismail, Karim
2017-05-01
Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map the conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced into a parametric safety continuum model which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate the model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. Pearson's correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, H.; Gao, M.; Wei, R.P.
1995-01-01
To better understand environmentally assisted crack growth in yttria-stabilized zirconia, experimental studies were undertaken to characterize the kinetics of crack growth and the associated stress/moisture-induced phase transformation in ZrO₂ + 3 mol% Y₂O₃ (3Y-TZP) in water, dry nitrogen and toluene from 3 to 70 °C. The results showed that crack growth in water depended strongly on the stress intensity factor (K_I) and temperature (T) and involved the transformation of a thin layer of material near the crack tip from the tetragonal (t) to the monoclinic (m) phase. These results, combined with literature data on moisture-induced phase transformation, suggested that crack growth enhancement by water is controlled by the rate of this transformation and reflects the environmental cracking susceptibility of the transformed m-phase. A model was developed to link the subcritical crack growth (SCG) rate to the kinetics of the t → m phase transformation. The SCG rate is expressed as an exponential function of the stress-free activation energy, a stress-dependent contribution in terms of the mode I stress intensity factor K_I and the activation volume, and temperature. The stress-free activation energies for water and the inert environments were determined to be 82 ± 3 and 169 ± 4 kJ/mol, respectively, at the 95% confidence level, and the corresponding activation volumes were 14 and 35 unit cells. The decreases in activation energy and activation volume may be attributed to a change in surface energy by water.
NASA Astrophysics Data System (ADS)
Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.
2013-10-01
In this work we derive symplectic EF/TF RKN methods from symplectic EF/TF PRK methods. EF/TF symplectic RKN methods are also constructed directly from classical symplectic RKN methods. Several numerical examples are given in order to decide which is the most favourable implementation.
The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.
Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C
2017-06-01
The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution were illustrated with an uncensored data set, and its fit was compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions.
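The information criteria used for the comparison above have simple closed forms in terms of the maximized log-likelihood, the number of parameters, and the sample size; a quick sketch (the numeric inputs are illustrative only):

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2*loglik,
    where k is the number of fitted parameters."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k*ln(n) - 2*loglik,
    where n is the sample size."""
    return k * math.log(n) - 2 * loglik
```

Lower values indicate a more parsimonious fit, which is how the EETE distribution is ranked against its competitors; BIC penalizes extra parameters more heavily than AIC once n exceeds about 8.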
Scalar-fluid interacting dark energy: Cosmological dynamics beyond the exponential potential
NASA Astrophysics Data System (ADS)
Dutta, Jibitesh; Khyllep, Wompherdeiki; Tamanini, Nicola
2017-01-01
We extend the dynamical systems analysis of scalar-fluid interacting dark energy models performed in C. G. Boehmer et al., Phys. Rev. D 91, 123002 (2015), 10.1103/PhysRevD.91.123002 by considering scalar field potentials beyond the exponential type. The properties and stability of critical points are examined using a combination of linear analysis, computational methods and advanced mathematical techniques, such as center manifold theory. We show that the interesting results obtained with an exponential potential can generally be recovered also for more complicated scalar field potentials. In particular, employing power law and hyperbolic potentials as examples, we find late time accelerated attractors, transitions from dark matter to dark energy domination with specific distinguishing features, and accelerated scaling solutions capable of solving the cosmic coincidence problem.
Evaluating an alternative method for rapid urinary creatinine determination
Creatinine (CR) is an endogenously produced chemical routinely assayed in urine specimens to assess kidney function and sample dilution. The industry-standard method for CR determination, known as the kinetic Jaffe (KJ) method, relies on an exponential rate of a colorimetric change,...
Ziatdinov, Maxim; Dyck, Ondrej; Maksov, Artem; ...
2017-12-07
Recent advances in scanning transmission electron and scanning probe microscopies have opened unprecedented opportunities in probing the materials structural parameters and various functional properties in real space with angstrom-level precision. This progress has been accompanied by an exponential increase in the size and quality of datasets produced by microscopic and spectroscopic experimental techniques. These developments necessitate adequate methods for extracting relevant physical and chemical information from the large datasets, for which a priori information on the structures of various atomic configurations and lattice defects is limited or absent. Here we demonstrate an application of deep neural networks to extracting information from atomically resolved images, including the location of the atomic species and the type of defects. We develop a “weakly supervised” approach that uses information on the coordinates of all atomic species in the image, extracted via a deep neural network, to identify a rich variety of defects that are not part of an initial training set. We further apply our approach to interpret complex atomic and defect transformations, including switching between different coordinations of silicon dopants in graphene as a function of time, the formation of a peculiar silicon dimer with mixed 3-fold and 4-fold coordination, and the motion of a molecular “rotor”. In conclusion, this deep learning-based approach resembles the logic of a human operator, but can be scaled, leading to a significant shift in the way information is extracted and analyzed from raw experimental data.
NASA Astrophysics Data System (ADS)
Sergeenko, N. P.
2017-11-01
An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. The time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the model of the normal process. Therefore, we attempt to describe the statistical samples {δfoF2} based on the Poisson model. For the studied samples, the exponential characteristic function is selected under the assumption that the time series are a superposition of some deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessive-asymmetric probability-density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to Kolmogorov's criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. This analysis makes it possible to conclude that a model based on the Poisson random process is applicable for the statistical description of the variations {δfoF2} and for probabilistic estimates of them during heliogeophysical disturbances.
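The Kolmogorov criterion invoked above compares an empirical distribution with a theoretical one through the maximum gap between the two cumulative distribution functions. A minimal stdlib sketch of that one-sample statistic (the model CDF passed in is a placeholder, not the paper's Poisson-based distribution):

```python
def ks_statistic(sample, model_cdf):
    """One-sample Kolmogorov-Smirnov statistic:
    D = sup_x |F_n(x) - F(x)|, evaluated over the sorted sample."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = model_cdf(x)
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

# Example: nine evenly spaced points against a uniform model CDF
d = ks_statistic([0.1 * i for i in range(1, 10)], lambda x: x)
```

Small values of D (equivalently, large acceptance probabilities such as the P ≈ 0.7-0.9 reported above) indicate that the empirical and theoretical distributions are statistically compatible.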
Ziatdinov, Maxim; Dyck, Ondrej; Maksov, Artem; Li, Xufan; Sang, Xiahan; Xiao, Kai; Unocic, Raymond R; Vasudevan, Rama; Jesse, Stephen; Kalinin, Sergei V
2017-12-26
Recent advances in scanning transmission electron and scanning probe microscopies have opened exciting opportunities in probing the materials structural parameters and various functional properties in real space with angstrom-level precision. This progress has been accompanied by an exponential increase in the size and quality of data sets produced by microscopic and spectroscopic experimental techniques. These developments necessitate adequate methods for extracting relevant physical and chemical information from the large data sets, for which a priori information on the structures of various atomic configurations and lattice defects is limited or absent. Here we demonstrate an application of deep neural networks to extract information from atomically resolved images, including the location of the atomic species and the type of defects. We develop a "weakly supervised" approach that uses information on the coordinates of all atomic species in the image, extracted via a deep neural network, to identify a rich variety of defects that are not part of an initial training set. We further apply our approach to interpret complex atomic and defect transformations, including switching between different coordinations of silicon dopants in graphene as a function of time, the formation of a peculiar silicon dimer with mixed 3-fold and 4-fold coordination, and the motion of a molecular "rotor". This deep learning-based approach resembles the logic of a human operator, but can be scaled, leading to a significant shift in the way information is extracted and analyzed from raw experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garderen, Noemie van; Clemens, Frank J.; Scharf, Dagobert
2010-05-30
Highly porous diatomite-based granulates with a diameter of 500 μm have been produced by an extrusion method. In order to investigate the relation between microstructure, phase composition and attrition resistance of the final product, the granulates were sintered between 800 and 1300 °C. The mean pore size of the granulates was evaluated by Hg porosimetry. An increase of the pore size in the range of 3.6 nm to 40 μm is observed with increasing sintering temperature; mean pore radii of 1.6 μm and 5.7 μm were obtained by sintering at 800 and 1300 °C, respectively. X-ray diffraction shows that a mullite phase appears at 1100 °C due to the presence of clay. At 1100 °C, diatomite (amorphous silicate) started to transform into alpha-cristobalite. Attrition resistance was determined by evaluating the amount of ground material passed through a sieve with a predefined mesh size. It was observed that material sintered at high temperature shows an increased attrition resistance due to the decrease of total porosity and the phase transformation. Because the attrition resistance increased significantly when sintering the granulates at higher temperature, a so-called attrition resistance index was introduced in order to compare the different attrition resistance values. This attrition resistance index was determined by using the exponential component of the equation obtained from the attrition resistance curves. It permits comparison of the attrition behaviour without a time influence.
Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A
2018-03-01
Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or on indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.
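For contrast with the continued-fraction approach, the matrix-exponentiation baseline mentioned above computes the finite-time transition matrix P(t) = exp(tQ) from the generator Q of a (truncated) birth-death chain. A self-contained sketch using a plain Taylor series on a tiny state space — adequate only for small, well-scaled generators, which is exactly the limitation the paper addresses:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_expm(Q, t, terms=40):
    """P(t) = exp(t*Q) via a truncated Taylor series.
    Fine for tiny matrices with moderate t*Q; not a production method."""
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        # term <- term * (t*Q) / k, accumulating (tQ)^k / k!
        term = mat_mul(term, [[t * q / k for q in row] for row in Q])
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

# Birth-death generator on states {0, 1, 2}: birth rate 1, death rate 2
Q = [[-1.0, 1.0, 0.0],
     [2.0, -3.0, 1.0],
     [0.0, 2.0, -2.0]]
P = mat_expm(Q, 0.5)
```

Because each row of Q sums to zero, each row of P(t) sums to one, giving a quick sanity check on the computation.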
Periodic bidirectional associative memory neural networks with distributed delays
NASA Astrophysics Data System (ADS)
Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde
2006-05-01
Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to general bidirectional associative memory (BAM) neural networks with distributed delays by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method and Young's inequality. These results are helpful for designing a globally exponentially stable, periodically oscillatory BAM neural network, and the conditions can be easily verified and applied in practice. An example is also given to illustrate our results.
Global exponential stability of BAM neural networks with time-varying delays: The discrete-time case
NASA Astrophysics Data System (ADS)
Raja, R.; Marshal Anthoni, S.
2011-02-01
This paper deals with the problem of stability analysis for a class of discrete-time bidirectional associative memory (BAM) neural networks with time-varying delays. By employing the Lyapunov functional and linear matrix inequality (LMI) approach, new sufficient conditions are proposed for the global exponential stability of discrete-time BAM neural networks. The proposed LMI-based results can be easily checked with the LMI control toolbox. Moreover, an example is also provided to demonstrate the effectiveness of the proposed method.
s-Ordered Exponential of Quadratic Forms Gained via IWSOP Technique
NASA Astrophysics Data System (ADS)
Bazrafkan, M. R.; Shähandeh, F.; Nahvifard, E.
2012-11-01
Using the generalized $\bar{s}$-ordered Wigner operator, in which $\bar{s}$ is a vector over the field of complex numbers, the technique of integration within an s-ordered product of operators (IWSOP) has been extended to the multimode case. Applying this method, we derive the $\bar{s}$-ordered form of the widely applicable multimode exponential of a quadratic form, $\exp\{\sum_{i,j=1}^{n} a_i^{\dagger}\Lambda_{ij} a_j\}$, each mode being in some particular order $s_i$.
NASA Astrophysics Data System (ADS)
Qin, Wei; Miranowicz, Adam; Li, Peng-Bo; Lü, Xin-You; You, J. Q.; Nori, Franco
2018-03-01
We propose an experimentally feasible method for enhancing the atom-field coupling as well as the ratio between this coupling and dissipation (i.e., the cooperativity) in an optical cavity. It exploits optical parametric amplification to exponentially enhance the atom-cavity interaction and, hence, the cooperativity of the system, with the squeezing-induced noise being completely eliminated. Consequently, the atom-cavity system can be driven from the weak-coupling regime to the strong-coupling regime for modest squeezing parameters, and can even achieve an effective cooperativity much larger than 100. Based on this, we further demonstrate the generation of steady-state, nearly maximal quantum entanglement. The resulting entanglement infidelity (which quantifies the deviation of the actual state from a maximally entangled state) is exponentially smaller than the lower bound on the infidelities obtained in other dissipative entanglement preparations without applying squeezing. In principle, the infidelity can be made arbitrarily small. Our generic method for enhancing atom-cavity interactions and cooperativities can be implemented in a wide range of physical systems, and it can provide diverse applications for quantum information processing.
a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation
NASA Astrophysics Data System (ADS)
Hu, J.; Lu, L.; Xu, J.; Zhang, J.
2017-09-01
For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" when extracting the coastline are solved through small-scale region shrinkage, low-pass filtering and area sorting; 2) the initial values of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflectivity between land and sea, which places them finely close to the coastline; 3) the computational complexity of the continuous transition between different scales is successfully reduced by SDF and level set inheritance. Experimental results show that the method accelerates the formation of the initial level set and shortens the time needed for coastline extraction, while at the same time removing non-coastline parts and improving the identification precision of the main coastline, which automates the process of coastline segmentation.
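The Otsu step used above to seed the level set picks the threshold that maximizes the between-class variance of the two resulting intensity classes. A compact stdlib sketch operating on raw intensity values (a simplification — practical SAR pipelines work on histograms of a full image):

```python
def otsu_threshold(values):
    """Return the threshold t (taken from the data) maximizing the
    between-class variance w0*w1*(mu0 - mu1)**2 for the split
    {x <= t} versus {x > t}."""
    xs = sorted(values)
    n = len(xs)
    best_t, best_var = xs[0], -1.0
    for t in sorted(set(xs))[:-1]:  # both classes must be non-empty
        lo = [x for x in xs if x <= t]
        hi = [x for x in xs if x > t]
        w0, w1 = len(lo) / n, len(hi) / n
        mu0 = sum(lo) / len(lo)
        mu1 = sum(hi) / len(hi)
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On strongly bimodal data — such as the dark sea and bright land returns exploited here — the maximizer falls between the two clusters, which is what makes it a good initializer for the level set.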
Rodríguez, M Carmen; Alegre, M Teresa; Martín, M Cruz; Mesas, Juan M
2015-01-01
A chimeric plasmid, pRS7Rep (6.1 kb), was constructed using the replication region of pRS7, a large plasmid from Oenococcus oeni, and pEM64, a plasmid derived from pIJ2925 and containing a gene for resistance to chloramphenicol. pRS7Rep is a shuttle vector that replicates in Escherichia coli using its pIJ2925 component and in lactic acid bacteria (LAB) using the replication region of pRS7. High levels of transformants per µg of DNA were obtained by electroporation of pRS7Rep into Pediococcus acidilactici (1.5 × 10^7), Lactobacillus plantarum (5.7 × 10^5), Lactobacillus casei (2.3 × 10^5), Leuconostoc citreum (2.7 × 10^5), and Enterococcus faecalis (2.4 × 10^5). A preliminary optimisation of the technical conditions of electrotransformation showed that P. acidilactici and L. plantarum are better transformed in the late exponential phase of growth, whereas L. casei requires the early exponential phase for better electrotransformation efficiency. pRS7Rep contains single restriction sites useful for cloning purposes (BamHI, XbaI, SalI, HincII, SphI and PstI) and was maintained at an acceptable rate (>50%) over 100 generations without selective pressure in L. plantarum, but was less stable in L. casei and P. acidilactici. The ability of pRS7Rep to accept and express other genes was assessed. To the best of our knowledge, this is the first time that the replication region of a plasmid from O. oeni has been used to generate a cloning vector. Copyright © 2014 Elsevier Inc. All rights reserved.
Real-time modeling of primitive environments through wavelet sensors and Hebbian learning
NASA Astrophysics Data System (ADS)
Vaccaro, James M.; Yaworsky, Paul S.
1999-06-01
Modeling the world through sensory input necessarily provides a unique perspective for the observer. Given a limited perspective, objects and events cannot always be encoded precisely but must involve crude, quick approximations to deal with sensory information in a real-time manner. As an example, when avoiding an oncoming car, a pedestrian needs to identify the fact that a car is approaching before ascertaining the model or color of the vehicle. In our methodology, we use wavelet-based sensors with self-organized learning to encode basic sensory information in real time. The wavelet-based sensors provide the necessary transformations, while a rank-based Hebbian learning scheme encodes a self-organized environment through translation-, scale- and orientation-invariant sensors. Such a self-organized environment is made possible by combining wavelet sets which are orthonormal, log-scale with linear orientation, and have automatically generated membership functions. In earlier work we used Gabor wavelet filters, rank-based Hebbian learning and an exponential modulation function to encode textural information from images. Many different types of modulation are possible, but based on biological findings the exponential modulation function provided a good approximation of first-spike coding of `integrate and fire' neurons. These types of Hebbian encoding schemes (e.g., exponential modulation, etc.) are useful for quick response and learning, provide several advantages over contemporary neural network learning approaches, and have been found to quantize data nonlinearly. By combining wavelets with Hebbian learning we can provide a real-time front end for modeling an intelligent process, such as the autonomous control of agents in a simulated environment.
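One way to picture the rank-based Hebbian rule with exponential modulation described above: the weight increment for each sensor decays exponentially with its firing rank, so earlier-responding sensors dominate the update, approximating first-spike coding. A minimal sketch (the update form, learning rate, and decay constant are illustrative assumptions, not the paper's exact rule):

```python
import math

def hebbian_rank_update(weights, ranks, eta=0.1, tau=2.0):
    """Increment each weight by eta * exp(-rank / tau):
    lower-rank (earlier-firing) sensors get exponentially
    larger updates, quantizing responses nonlinearly."""
    return [w + eta * math.exp(-r / tau) for w, r in zip(weights, ranks)]

w = hebbian_rank_update([0.0, 0.0, 0.0], [0, 1, 2])
```

The exponential fall-off across ranks is what produces the nonlinear quantization of data noted in the abstract.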
On parametric Gevrey asymptotics for some nonlinear initial value Cauchy problems
NASA Astrophysics Data System (ADS)
Lastra, A.; Malek, S.
2015-11-01
We study a nonlinear initial value Cauchy problem depending upon a complex perturbation parameter ɛ with vanishing initial data at complex time t = 0 and whose coefficients depend analytically on (ɛ, t) near the origin in C^2 and are bounded holomorphic on some horizontal strip in C w.r.t. the space variable. This problem is assumed to be non-Kowalevskian in time t, therefore analytic solutions at t = 0 cannot be expected in general. Nevertheless, we are able to construct a family of actual holomorphic solutions defined on a common bounded open sector with vertex at 0 in time and on the given strip above in space, when the complex parameter ɛ belongs to a suitably chosen set of open bounded sectors whose union forms a covering of some neighborhood Ω of 0 in C*. These solutions are achieved by means of Laplace and Fourier inverse transforms of some common ɛ-depending function on C × R, analytic near the origin and with exponential growth on some unbounded sectors with appropriate bisecting directions in the first variable and exponential decay in the second, when the perturbation parameter belongs to Ω. Moreover, these solutions satisfy the remarkable property that the difference between any two of them is exponentially flat for some integer order w.r.t. ɛ. With the help of the classical Ramis-Sibuya theorem, we obtain the existence of a formal series (generally divergent) in ɛ which is the common Gevrey asymptotic expansion of the actual solutions built up above.
NASA Astrophysics Data System (ADS)
Zaraska, Leszek; Stępniowski, Wojciech J.; Jaskuła, Marian; Sulka, Grzegorz D.
2014-06-01
Anodic aluminum oxide (AAO) layers were formed by a simple two-step anodization in 0.3 M oxalic acid at relatively high temperatures (20-30 °C) and various anodizing potentials (30-65 V). The effect of anodizing conditions on the structural features of the as-obtained oxides was carefully investigated. A linear relationship between cell diameter and anodizing potential and an exponential relationship between pore density and anodizing potential were confirmed. On the other hand, no effect of the temperature and duration of anodization on pore spacing and pore density was found. Detailed quantitative and qualitative analyses of the hexagonal arrangement of the nanopore arrays were performed for all studied samples. The nanopore arrangement was evaluated using various methods based on fast Fourier transform (FFT) images, Delaunay triangulations (defect maps), pair distribution functions (PDF), and angular distribution functions (ADF). It was found that for short anodizations performed at relatively high temperatures, the optimal anodizing potential resulting in the formation of nanostructures with the highest degree of pore order is 45 V. No direct effect of the temperature and time of anodization on the nanopore arrangement was observed.
A singularity free approach to post glacial rebound calculations
NASA Technical Reports Server (NTRS)
Fang, Ming; Hager, Bradford H.
1994-01-01
Calculating the post glacial response of a viscoelastic Earth model using the exponential decay normal mode technique leads to intrinsic singularities if viscosity varies continuously as a function of radius. We develop a numerical implementation of the Complex Real Fourier transform (CRFT) method as an accurate and stable procedure to avoid these singularities. Using CRFT, we investigate the response of a set of Maxwell Earth models to surface loading. We find that the effect of expanding a layered viscosity structure into a continuously varying structure is to destroy the modes associated with the boundary between layers. Horizontal motion is more sensitive than vertical motion to the viscosity structure just below the lithosphere. Horizontal motion is less sensitive to the viscosity of the lower mantle than the vertical motion is. When the viscosity increases at 670 km depth by a factor of about 60, the response of the lower mantle is close to its elastic limit. Any further increase of the viscosity contrast at 670 km depth or further increase of viscosity as a continuous function of depth starting from 670 km depth is unlikely to be resolved.
Wilson loops on Riemann surfaces, Liouville theory and covariantization of the conformal group
NASA Astrophysics Data System (ADS)
Matone, Marco; Pasti, Paolo
2015-06-01
The covariantization procedure is usually applied to the translation operator, that is, the derivative. Here we introduce a general method to covariantize arbitrary differential operators, such as the ones defining the fundamental group of a given manifold. We focus on the differential operators representing the sl2(ℝ) generators, which in turn generate, by exponentiation, the two-dimensional conformal transformations. A key point of our construction is the recent result on the closed forms of the Baker-Campbell-Hausdorff formula. Our covariantization recipe is quite general and applies in general situations, including AdS/CFT. This has a deep consequence, since it means that the covariantization of the conformal group is always well defined. Here we focus on the projective unitary representations of the fundamental group of a Riemann surface, which may include elliptic points and punctures, introduced in the framework of noncommutative Riemann surfaces. It turns out that the covariantized conformal operators are built in terms of Wilson loops around Poincaré geodesics, implying a deep relationship between gauge theories on Riemann surfaces and Liouville theory.
NASA Astrophysics Data System (ADS)
Sanskrityayn, Abhishek; Suk, Heejun; Kumar, Naveen
2017-04-01
In this study, analytical solutions of one-dimensional pollutant transport originating from instantaneous and continuous point sources were developed for groundwater and riverine flow using both the Green's Function Method (GFM) and a pertinent coordinate transformation method. The dispersion coefficient and flow velocity are considered spatially and temporally dependent. The spatial dependence of the velocity is linear and non-homogeneous, and that of the dispersion coefficient is the square of that of the velocity, while the temporal dependence is considered linear, exponentially and asymptotically decelerating, and accelerating. Our proposed analytical solutions are derived for three different situations depending on the variations of the dispersion coefficient and velocity, which can represent real physical processes occurring in groundwater and riverine systems. The first case refers to steady solute transport in steady flow, in which the dispersion coefficient and velocity are only spatially dependent. The second case represents transient solute transport in steady flow, in which the dispersion coefficient is spatially and temporally dependent while the velocity is spatially dependent. Finally, the third case concerns transient solute transport in unsteady flow, in which both the dispersion coefficient and velocity are spatially and temporally dependent. The present paper demonstrates the concentration distribution behavior from a point source in realistically occurring flow domains of hydrological systems, including groundwater and riverine water, in which the dispersivity of the pollutant's mass is affected by the heterogeneity of the medium as well as by other factors such as velocity fluctuations, while the velocity is influenced by the water table slope and recharge rate. Such capabilities make the proposed method applicable to a wider variety of hydrological problems than previously existing analytical solutions.
In particular, to the authors' knowledge, no other solution exists for simultaneously spatially and temporally varying dispersion coefficient and velocity. In this study, existing analytical solutions from previous widely known studies are used as validation tools to verify the proposed analytical solutions, as well as the numerical Two-Dimensional Subsurface Flow, Fate and Transport of Microbes and Chemicals (2DFATMIC) code and the 1D finite difference code (FDM) developed here. All such solutions show a perfect match with the respective proposed solutions.
Learning from ISS-modular adaptive NN control of nonlinear strict-feedback systems.
Wang, Cong; Wang, Min; Liu, Tengfei; Hill, David J
2012-10-01
This paper studies learning from adaptive neural control (ANC) for a class of nonlinear strict-feedback systems with unknown affine terms. To achieve the purpose of learning, a simple input-to-state stability (ISS) modular ANC method is first presented to ensure the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in finite time. Subsequently, it is proven that learning with the proposed stable ISS-modular ANC can be achieved. The cascade structure and unknown affine terms of the considered systems make it very difficult to achieve learning using existing methods. To overcome these difficulties, the stable closed-loop system in the control process is decomposed into a series of linear time-varying (LTV) perturbed subsystems with the appropriate state transformation. Using a recursive design, the partial persistent excitation condition for the radial basis function neural network (NN) is established, which guarantees exponential stability of LTV perturbed subsystems. Consequently, accurate approximation of the closed-loop system dynamics is achieved in a local region along recurrent orbits of closed-loop signals, and learning is implemented during a closed-loop feedback control process. The learned knowledge is reused to achieve stability and an improved performance, thereby avoiding the tremendous repeated training process of NNs. Simulation studies are given to demonstrate the effectiveness of the proposed method.
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
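The idea behind the linear-prediction step above is that a noiseless sum of p decaying exponentials sampled uniformly satisfies an order-p linear recurrence whose characteristic roots are the per-sample decay factors. A minimal stdlib sketch for p = 2 — a toy version of the principle, whereas the paper's LP/SVD technique additionally handles noise and an unknown number of components:

```python
import math

def decay_factors_p2(y):
    """Given samples y[k] = c1*r1**k + c2*r2**k (real, distinct r1, r2),
    solve y[k+2] = a*y[k+1] + b*y[k] for (a, b) from the first four
    samples, then return the roots of z**2 - a*z - b = 0."""
    # Two linear equations in (a, b):
    #   y[2] = a*y[1] + b*y[0]
    #   y[3] = a*y[2] + b*y[1]
    det = y[1] * y[1] - y[2] * y[0]
    a = (y[2] * y[1] - y[3] * y[0]) / det
    b = (y[3] * y[1] - y[2] * y[2]) / det
    disc = math.sqrt(a * a + 4 * b)
    return sorted(((a - disc) / 2, (a + disc) / 2))

# Two-component dwell-time-like decay: rates 0.9 and 0.5 per sample
y = [2 * 0.9 ** k + 1 * 0.5 ** k for k in range(6)]
r_small, r_large = decay_factors_p2(y)
```

Recovering r1 and r2 from the recurrence coefficients is the noiseless core of the method; the SVD enters when noisy data make the recurrence only approximately satisfied.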
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Accordingly, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
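A minimal sketch of such a dual double-exponential source is easy to write down. The parameter values below are hypothetical, chosen only to illustrate a fast prompt component plus a slower diffusion-like tail; they are not taken from the paper:

```python
import numpy as np

def double_exp(t, i_peak, tau_rise, tau_fall):
    """Classic double-exponential current pulse: fast rise, slower fall."""
    return i_peak * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

def dual_double_exp(t, p_a, p_b):
    """Two double-exponential sources in parallel: their currents add."""
    return double_exp(t, *p_a) + double_exp(t, *p_b)

# Hypothetical parameters (amperes, seconds): a fast prompt component
# plus a slower diffusion-like tail.
p_fast = (1.0e-3, 5e-12, 50e-12)
p_slow = (0.2e-3, 50e-12, 500e-12)
t = np.linspace(0.0, 5e-9, 20001)
i_t = dual_double_exp(t, p_fast, p_slow)

# Collected charge is the time integral of the current; analytically
# each source contributes i_peak * (tau_fall - tau_rise).
q_numeric = float(np.sum(0.5 * (i_t[1:] + i_t[:-1]) * np.diff(t)))
q_analytic = (p_fast[0] * (p_fast[2] - p_fast[1])
              + p_slow[0] * (p_slow[2] - p_slow[1]))
```

In a circuit simulation these two sources would be attached to the struck node; the closed-form charge makes it easy to match a model to a target collected charge.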
Logarithmic Compression of Sensory Signals within the Dendritic Tree of a Collision-Sensitive Neuron
2012-01-01
Neurons in a variety of species, both vertebrate and invertebrate, encode the kinematics of objects approaching on a collision course through a time-varying firing rate profile that initially increases, then peaks, and eventually decays as collision becomes imminent. In this temporal profile, the peak firing rate signals when the approaching object's subtended size reaches an angular threshold, an event which has been related to the timing of escape behaviors. In a locust neuron called the lobula giant motion detector (LGMD), the biophysical basis of this angular threshold computation relies on a multiplicative combination of the object's angular size and speed, achieved through a logarithmic-exponential transform. To understand how this transform is implemented, we modeled the encoding of angular velocity along the pathway leading to the LGMD based on the experimentally determined activation pattern of its presynaptic neurons. These simulations show that the logarithmic transform of angular speed occurs between the synaptic conductances activated by the approaching object onto the LGMD's dendritic tree and its membrane potential at the spike initiation zone. Thus, we demonstrate an example of how a single neuron's dendritic tree implements a mathematical step in a neural computation important for natural behavior. PMID:22492048
Operational modal analysis applied to the concert harp
NASA Astrophysics Data System (ADS)
Chomette, B.; Le Carrou, J.-L.
2015-05-01
Operational modal analysis (OMA) methods are useful to extract modal parameters of operating systems. These methods seem particularly interesting for investigating the modal basis of string instruments during operation, avoiding certain disadvantages of conventional methods. However, the excitation in the case of string instruments is not optimal for OMA due to the presence of damped harmonic components and low noise in the disturbance signal. Therefore, the present study investigates the least-square complex exponential (LSCE) and the modified least-square complex exponential methods in the case of a string instrument to identify modal parameters of the instrument when it is played. The efficiency of the approach is experimentally demonstrated on a concert harp excited by some of its strings, and the two methods are compared to a conventional modal analysis. The results show that OMA allows us to identify modes particularly present in the instrument's response with a good estimation, especially if they are close to the excitation frequency, with the modified LSCE method.
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first thing is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical change of the business environment after beginning the zero accident campaign through quantitative time series analysis methods. These methods include sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). The program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop a zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
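Of the listed techniques, double exponential smoothing (DESM) is compact enough to sketch. Brown's variant is shown below on a hypothetical, linearly declining accident-rate series (illustrative only, not the study's data or program):

```python
def double_exp_smoothing(series, alpha):
    """Brown's double exponential smoothing: smooth the series twice,
    then recover level and trend for extrapolation."""
    s1 = s2 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return lambda m: level + m * trend  # forecast m steps ahead

# Hypothetical yearly accident rates with a steady downward trend.
rates = [4.0 - 0.2 * k for k in range(20)]
forecast = double_exp_smoothing(rates, alpha=0.5)
# The "zero accident time" would be the smallest horizon m
# for which forecast(m) <= 0.
```

On trending data like this, DESM tracks both level and slope, which is why it outperforms single exponential smoothing for estimating a zero-accident horizon.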
A statistical study of decaying kink oscillations detected using SDO/AIA
NASA Astrophysics Data System (ADS)
Goddard, C. R.; Nisticò, G.; Nakariakov, V. M.; Zimovets, I. V.
2016-01-01
Context. Despite intensive studies of kink oscillations of coronal loops in the last decade, a large-scale statistically significant investigation of the oscillation parameters has not been made using data from the Solar Dynamics Observatory (SDO). Aims: We carry out a statistical study of kink oscillations using extreme ultraviolet imaging data from a previously compiled catalogue. Methods: We analysed 58 kink oscillation events observed by the Atmospheric Imaging Assembly (AIA) on board SDO during its first four years of operation (2010-2014). Parameters of the oscillations, including the initial apparent amplitude, period, length of the oscillating loop, and damping are studied for 120 individual loop oscillations. Results: Analysis of the initial loop displacement and oscillation amplitude leads to the conclusion that the initial loop displacement prescribes the initial amplitude of oscillation in general. The period is found to scale with the loop length, and a linear fit of the data cloud gives a kink speed of Ck = (1330 ± 50) km s^-1. The main body of the data corresponds to kink speeds in the range Ck = (800-3300) km s^-1. Measurements of 52 exponential damping times were made, and it was noted that at least 21 of the damping profiles may be better approximated by a combination of non-exponential and exponential profiles rather than a purely exponential damping envelope. There are nine additional cases where the profile appears to be purely non-exponential and no damping time was measured. A scaling of the exponential damping time with the period is found, following the previously established linear scaling between these two parameters.
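The period-length scaling admits a one-line check: for the fundamental kink mode P = 2L/Ck, so a least-squares slope of period against loop length recovers the kink speed. The synthetic, noiseless data below use the reported Ck only as an input, not as a re-derivation of the paper's fit:

```python
import numpy as np

# Fundamental kink mode: period P = 2L / Ck, so the slope of the
# period-versus-length line gives the kink speed.
ck_true = 1330.0                          # km/s (value reported above)
lengths = np.linspace(1.5e5, 4.5e5, 30)   # loop lengths, km
periods = 2.0 * lengths / ck_true         # periods, s (noiseless)

# Least-squares slope through the origin: slope = sum(L*P) / sum(L*L).
slope = float(np.sum(lengths * periods) / np.sum(lengths ** 2))
ck_est = 2.0 / slope
```

With real measurements the data cloud is scattered, and the fitted slope carries the quoted ±50 km s^-1 uncertainty.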
A method for the estimation of dual transmissivities from slug tests
NASA Astrophysics Data System (ADS)
Wolny, Filip; Marciniak, Marek; Kaczmarek, Mariusz
2018-03-01
Aquifer homogeneity is usually assumed when interpreting the results of pumping and slug tests, although aquifers are essentially heterogeneous. The aim of this study is to present a method of determining the transmissivities of dual-permeability water-bearing formations based on slug tests such as the pressure-induced permeability test. A bi-exponential rate-of-rise curve is typically observed during many of these tests conducted in heterogeneous formations. The work involved analyzing curves deviating from the exponential rise recorded at the Belchatow Lignite Mine in central Poland, where a significant number of permeability tests have been conducted. In most cases, bi-exponential movement was observed in piezometers with a screen installed in layered sediments, each with a different hydraulic conductivity, or in fissured rock. The possibility to identify the flow properties of these geological formations was analyzed. For each piezometer installed in such formations, a set of two transmissivity values was calculated piecewise based on the interpretation algorithm of the pressure-induced permeability test—one value for the first (steeper) part of the obtained rate-of-rise curve, and a second value for the latter part of the curve. The results of transmissivity estimation for each piezometer are shown. The discussion presents the limitations of the interpretational method and suggests future modeling plans.
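The piecewise interpretation can be sketched as two log-linear fits on a normalized bi-exponential recovery curve. The data, split time, and rates below are hypothetical, and the actual interpretation algorithm of the pressure-induced permeability test is more involved:

```python
import numpy as np

def piecewise_exponential_rates(t, h, t_split):
    """Fit log-linear slopes to the early and late parts of a normalized
    rate-of-rise curve; returns the two decay rates."""
    rates = []
    for mask in (t <= t_split, t > t_split):
        slope, _ = np.polyfit(t[mask], np.log(h[mask]), 1)
        rates.append(-slope)
    return tuple(rates)

# Synthetic bi-exponential response: a steep early segment (the more
# transmissive layer) followed by a gentler tail.
t = np.linspace(0.0, 600.0, 601)
h = np.where(t <= 120.0,
             np.exp(-0.030 * t),
             np.exp(-0.030 * 120.0) * np.exp(-0.004 * (t - 120.0)))
k_fast, k_slow = piecewise_exponential_rates(t, h, t_split=120.0)
```

Each recovered rate would then be converted to a transmissivity for the corresponding layer by the test's interpretation formula.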
Point-based warping with optimized weighting factors of displacement vectors
NASA Astrophysics Data System (ADS)
Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas
2000-06-01
The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark-generator computes corresponding reference points simultaneously within a given number of datasets by Monte-Carlo-techniques. The warping function is a distance weighted exponential function with a landmark- specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap-index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
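The distance-weighted exponential warping function can be sketched as follows. This is a toy 2D version with hypothetical landmarks; in the study the per-landmark weighting factors are optimized by an evolution strategy, whereas here they are simply fixed:

```python
import numpy as np

def warp_displacement(points, landmarks, disp, weights):
    """Displacement field as a normalized, distance-weighted sum of
    landmark displacement vectors with an exponential kernel; `weights`
    plays the role of the landmark-specific weighting factors."""
    out = np.zeros((len(points), landmarks.shape[1]))
    for j, p in enumerate(points):
        d = np.linalg.norm(landmarks - p, axis=1)
        w = np.exp(-d / weights)
        w = w / w.sum()
        out[j] = w @ disp
    return out

# Three hypothetical 2D landmarks with known displacement vectors.
landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
disp = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
weights = np.array([1.0, 1.0, 1.0])
field = warp_displacement(landmarks, landmarks, disp, weights)
```

Evaluated at a landmark, the field essentially reproduces that landmark's own displacement, which is the interpolation property the warping relies on.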
NASA Astrophysics Data System (ADS)
Suhardiman, A.; Tampubolon, B. A.; Sumaryono, M.
2018-04-01
Many studies revealed significant correlation between satellite image properties and forest data attributes such as stand volume, biomass or carbon stock. However, further study is still relevant due to advancement of remote sensing technology as well as improvement on methods of data analysis. In this study, the properties of three vegetation indices derived from Landsat 8 OLI were tested upon above-ground carbon stock data from 50 circular sample plots (30-meter radius) from a ground survey in the PT. Inhutani I forest concession in Labanan, Berau, East Kalimantan. Correlation analysis using the Pearson method exhibited promising results when the coefficient of correlation (r-value) was higher than 0.5. Further regression analysis was carried out to develop mathematical models describing the correlation between sample plot data and each vegetation index image. Power and exponential models demonstrated good results for all vegetation indices. In order to choose the most adequate mathematical model for predicting Above-ground Carbon (AGC), the Bayesian Information Criterion (BIC) was applied. The lowest BIC value (i.e., -376.41), shown by the Transformed Vegetation Index (TVI), indicates that this formula, AGC = 9.608 × TVI^21.54, is the best predictor of AGC in the study area.
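The BIC-based model-selection step can be illustrated with synthetic data. All numbers below are hypothetical (not the study's plots or indices), and BIC is taken in its usual least-squares form n·ln(RSS/n) + k·ln(n):

```python
import numpy as np

def bic(y, y_pred, n_params):
    """Least-squares BIC: n*ln(RSS/n) + k*ln(n); lower is better."""
    n = len(y)
    rss = float(np.sum((y - y_pred) ** 2))
    return n * np.log(rss / n) + n_params * np.log(n)

rng = np.random.default_rng(0)
tvi = np.linspace(0.6, 1.2, 50)
agc = 9.0 * tvi ** 4 + rng.normal(0.0, 0.05, tvi.size)  # power-law data

# Power model (fitted log-log) versus a straight-line model.
b1, b0 = np.polyfit(np.log(tvi), np.log(agc), 1)
power_pred = np.exp(b0) * tvi ** b1
l1, l0 = np.polyfit(tvi, agc, 1)
linear_pred = l1 * tvi + l0
bic_power = bic(agc, power_pred, 2)
bic_linear = bic(agc, linear_pred, 2)
```

Because the synthetic relationship really is a power law, the power model attains the lower BIC, mirroring how TVI's power model was selected in the study.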
Design of a Two-Step Calibration Method of Kinematic Parameters for Serial Robots
NASA Astrophysics Data System (ADS)
WANG, Wei; WANG, Lei; YUN, Chao
2017-03-01
Serial robots are used to handle workpieces with large dimensions, and calibrating kinematic parameters is one of the most efficient ways to upgrade their accuracy. Many models are set up to investigate how many kinematic parameters can be identified to meet the minimal principle, but the base frame and the kinematic parameters are calibrated indistinctly in a one-step way. A two-step method of calibrating kinematic parameters is proposed to improve the accuracy of the robot's base frame and kinematic parameters. The forward kinematics described with respect to the measuring coordinate frame are established based on the product-of-exponentials (POE) formula. In the first step, the robot's base coordinate frame is calibrated by the unit quaternion form. The errors of both the robot's reference configuration and the base coordinate frame's pose are equivalently transformed to the zero-position errors of the robot's joints. The simplified model of the robot's positioning error is established in second-power explicit expressions. Then the identification model is completed by the least squares method, requiring measured position coordinates only. The complete subtasks of calibrating the robot's 39 kinematic parameters are finished in the second step. A group of calibration experiments proves that the proposed two-step calibration method improves the average absolute accuracy of industrial robots to 0.23 mm. This paper shows that the robot's base frame should be calibrated before its kinematic parameters in order to upgrade its absolute positioning accuracy.
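The POE formula itself is easy to demonstrate on a toy planar two-link chain. The link lengths and joint angles below are hypothetical, a truncated power series stands in for the matrix exponential, and the study's 39-parameter serial-robot model is of course far richer:

```python
import numpy as np

def expm_series(a, terms=40):
    """Matrix exponential by truncated power series (adequate for the
    small twist matrices used here)."""
    out = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for k in range(1, terms):
        term = term @ a / k
        out = out + term
    return out

def planar_twist(axis_point):
    """se(2) generator of a revolute joint at `axis_point`:
    unit angular rate w = +1 and linear part v = -w x q = (qy, -qx)."""
    px, py = axis_point
    return np.array([[0.0, -1.0, py],
                     [1.0,  0.0, -px],
                     [0.0,  0.0, 0.0]])

def poe_forward(thetas, joints, g0):
    """Product-of-exponentials forward kinematics for a planar chain:
    g = exp(xi1*th1) @ exp(xi2*th2) @ ... @ g0."""
    g = np.eye(3)
    for th, q in zip(thetas, joints):
        g = g @ expm_series(th * planar_twist(q))
    return g @ g0

# Two-link planar arm, hypothetical link lengths 1.0 and 0.8.
l1, l2 = 1.0, 0.8
joints = [(0.0, 0.0), (l1, 0.0)]
g0 = np.array([[1.0, 0.0, l1 + l2],   # tip pose at the zero configuration
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
th1, th2 = 0.4, -0.7
tip = poe_forward([th1, th2], joints, g0)[:2, 2]

# Analytic check: x = l1*cos(th1) + l2*cos(th1+th2), likewise for y.
x_ref = l1 * np.cos(th1) + l2 * np.cos(th1 + th2)
y_ref = l1 * np.sin(th1) + l2 * np.sin(th1 + th2)
```

The same product structure extends to spatial chains with 4x4 twists, which is the form used in POE-based calibration.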
In vivo chlorine and sodium MRI of rat brain at 21.1 T
Elumalai, Malathy; Kitchen, Jason A.; Qian, Chunqi; Gor’kov, Peter L.; Brey, William W.
2017-01-01
Object MR imaging of low-gamma nuclei at the ultrahigh magnetic field of 21.1 T provides a new opportunity for understanding a variety of biological processes. Among these, chlorine and sodium are attracting attention for their involvement in brain function and cancer development. Materials and methods MRI of 35Cl and 23Na were performed and relaxation times were measured in vivo in normal rats (n = 3) and in rats with glioma (n = 3) at 21.1 T. The concentrations of both nuclei were evaluated using the center-out back-projection method. Results The T1 relaxation curve of chlorine in the normal rat head was fitted by a bi-exponential function (T1a = 4.8 ms, fraction 0.7; T1b = 24.4 ± 7 ms, fraction 0.3) and compared with sodium (T1 = 41.4 ms). Free induction decays (FIDs) of chlorine and sodium in vivo were bi-exponential, with similar rapidly decaying components of T2a* = 0.4 ms and T2a* = 0.53 ms, respectively. Effects of a small acquisition matrix and bi-exponential FIDs were assessed for quantification of chlorine (33.2 mM) and sodium (44.4 mM) in rat brain. Conclusion The study modeled a dramatic effect of the bi-exponential decay on MRI results. The revealed increased chlorine concentration in glioma (~1.5 times relative to normal brain) correlates with the hypothesis asserting the importance of chlorine for tumor progression. PMID:23748497
Obuchowska, Agnes
2008-03-01
A new electrochemical method for the quantitation of bacteria that is rapid, inexpensive, and amenable to miniaturization is reported. Cyclic voltammetry was used to quantitate M. luteus, C. sporogenes, and E. coli JM105 in exponential and stationary phases, following exposure of screen-printed carbon working electrodes (SPCEs) to lysed culture samples. Ferricyanide was used as a probe. The detection limits (3s) were calculated, and the dynamic ranges for E. coli (exponential and stationary phases), M. luteus (exponential and stationary phases), and C. sporogenes (exponential phase) lysed by lysozyme were 3 × 10^4 to 5 × 10^6 colony-forming units (CFU) mL^-1, 5 × 10^6 to 2 × 10^8 CFU mL^-1, and 3 × 10^3 to 3 × 10^5 CFU mL^-1, respectively. Good overlap was obtained between the calibration curves when the electrochemical signal was plotted against the dry bacterial weight or against the protein concentration in the bacterial lysate. In contrast, unlysed bacteria did not change the electrochemical signal of ferricyanide. The results indicate that the reduction of the electrochemical signal in the presence of the lysate is mainly due to the fouling of the electrode by proteins. Similar results were obtained with carbon-paste electrodes, although detection limits were better with SPCEs. The method described herein was applied to the quantitation of bacteria in a cooling tower water sample.
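The 3s detection-limit computation follows the usual calibration-curve recipe. The synthetic, noiseless calibration data below use hypothetical numbers, not the study's measurements:

```python
import numpy as np

def detection_limit(conc, signal, blank_sd):
    """3s detection limit from a linear calibration: 3*s_blank/|slope|."""
    slope, _ = np.polyfit(conc, signal, 1)
    return 3.0 * blank_sd / abs(slope)

# Hypothetical calibration: electrode fouling lowers the ferricyanide
# signal linearly with cell density.
conc = np.array([0.0, 1e5, 2e5, 3e5, 4e5])   # CFU/mL
signal = 100.0 - 2.0e-4 * conc               # arbitrary units
lod = detection_limit(conc, signal, blank_sd=0.5)
```

The same recipe applies whether the calibration slope is positive or negative, which is why the absolute value of the slope is taken.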
Popa, Radu; Cimpoiasu, Vily M
2013-05-01
Properties of avenues of transformation and their mutualism with forms of organization in dynamic systems are essential for understanding the evolution of prebiotic order. We have analyzed competition between two avenues of transformation in an A↔B system, using the simulation approach called BiADA (Biotic Abstract Dual Automata). We discuss means of avoiding common pitfalls of abstract system modeling and benefits of BiADA-based simulations. We describe the effect of the availability of free energy, energy sink magnitude, and autocatalysis on the evolution of energy flux and order in the system. Results indicate that prebiotic competition between avenues of transformation was more stringent in energy-limited environments. We predict that in such conditions the efficiency of autocatalysis during competition between alternative system states will increase for systems with forms of organization having short half-lives and thus information that is time-sensitive to energy starvation. Our results also offer a potential solution to Manfred Eigen's error catastrophe dilemma. In the conditions discussed above, the exponential growth of quasi species is curbed through the removal of less competitive "genetic" variants via energy starvation. We propose that one of the most important achievements (and selective edges) of a dynamic network during competition in energy-limited or energy-variable environments was the capacity to correlate the internal energy flux and the need for free energy with the availability of free energy in the environment.
Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong
2018-08-01
This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) by the aid of systems with interval parameters which are established by using the concept of Filippov solution. New intermittent controller and adaptive controller with logarithmic quantization are structured to deal with the difficulties induced by time-varying delays, interval parameters as well as stochastic perturbations, simultaneously. Moreover, not only control cost can be reduced but also communication channels and bandwidth are saved by using these controllers. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
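A logarithmic quantizer of the kind used in such controllers maps each input to the nearest level ρ^k, which bounds the relative quantization error by a sector that depends only on ρ. A minimal sketch follows; ρ and the dead zone `u_min` are hypothetical choices, not parameters from the paper:

```python
import numpy as np

def log_quantize(x, rho=0.8, u_min=1e-12):
    """Map x to the nearest quantization level sign(x) * rho**k.
    The relative error is bounded by a sector depending only on rho,
    which is what makes such quantizers tractable in stability proofs."""
    if abs(x) < u_min:
        return 0.0
    k = np.round(np.log(abs(x)) / np.log(rho))
    return float(np.sign(x) * rho ** k)

levels_hit = [log_quantize(x) for x in (0.013, 0.2, 0.7, 3.7)]
```

Because the levels are geometrically spaced, only the integer index k needs to be transmitted, which is how such quantizers save communication channels and bandwidth.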
MIP models for connected facility location: A theoretical and computational study☆
Gollowitzer, Stefan; Ljubić, Ivana
2011-01-01
This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%. PMID:25009366
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasileiadis, Thomas; Department of Materials Science, University of Patras, GR-26504 Rio-Patras; Yannopoulos, Spyros N., E-mail: sny@iceht.forth.gr
Controlled photo-induced oxidation and amorphization of elemental trigonal tellurium are achieved by laser irradiation at optical wavelengths. These processes are monitored in situ by time-resolved Raman scattering and ex situ by electron microscopies. Ultrathin TeO₂ films form on Te surfaces as a result of irradiation, with an interface layer of amorphous Te intervening between them. It is shown that irradiation, apart from enabling the controllable transformation of bulk Te to one-dimensional nanostructures, such as Te nanotubes and hybrid core-Te/sheath-TeO₂ nanowires, also causes a series of light-driven (athermal) phase transitions involving the crystallization of the amorphous TeO₂ layers and its transformation to a multiplicity of crystalline phases including the γ-, β-, and α-TeO₂ crystalline phases. The kinetics of the above photo-induced processes is investigated by Raman scattering at various laser fluences, revealing exponential and non-exponential kinetics at low and high fluence, respectively. In addition, the formation of ultrathin (less than 10 nm) layers of amorphous TeO₂ offers the possibility to explore structural transitions in 2D glasses by observing changes in the short- and medium-range structural order induced by spatial confinement.
Baccus-Taylor, G S H; Falloon, O C; Henry, N
2015-06-01
(i) To study the effects of cold shock on Escherichia coli O157:H7 cells. (ii) To determine if cold-shocked E. coli O157:H7 cells at stationary and exponential phases are more pressure-resistant than their non-cold-shocked counterparts. (iii) To investigate the baro-protective role of growth media (0·1% peptone water, beef gravy and ground beef). Quantitative estimates of lethality and sublethal injury were made using the differential plating method. There were no significant differences (P > 0·05) in the number of cells killed; cold-shocked or non-cold-shocked. Cells grown in ground beef (stationary and exponential phases) experienced lowest death compared with peptone water and beef gravy. Cold-shock treatment increased the sublethal injury to cells cultured in peptone water (stationary and exponential phases) and ground beef (exponential phase), but decreased the sublethal injury to cells in beef gravy (stationary phase). Cold shock did not confer greater resistance to stationary or exponential phase cells pressurized in peptone water, beef gravy or ground beef. Ground beef had the greatest baro-protective effect. Real food systems should be used in establishing food safety parameters for high-pressure treatments; micro-organisms are less resistant in model food systems, the use of which may underestimate the organisms' resistance. © 2015 The Society for Applied Microbiology.
Scaling Laws for the Multidimensional Burgers Equation with Quadratic External Potential
NASA Astrophysics Data System (ADS)
Leonenko, N. N.; Ruiz-Medina, M. D.
2006-07-01
The reordering of the multidimensional exponential quadratic operator in coordinate-momentum space (see X. Wang, C.H. Oh and L.C. Kwek (1998). J. Phys. A.: Math. Gen. 31:4329-4336) is applied to derive an explicit formulation of the solution to the multidimensional heat equation with quadratic external potential and random initial conditions. The solution to the multidimensional Burgers equation with quadratic external potential under Gaussian strongly dependent scenarios is also obtained via the Hopf-Cole transformation. The limiting distributions of scaling solutions to the multidimensional heat and Burgers equations with quadratic external potential are then obtained under such scenarios.
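For reference, the Hopf-Cole transformation mentioned above linearizes the Burgers equation. In its standard potential-free form it reads as follows (with a quadratic external potential, as in the abstract, the heat equation on the right acquires a corresponding potential term):

```latex
% Multidimensional Burgers equation with viscosity \mu:
\partial_t u + (u \cdot \nabla) u \;=\; \mu \, \Delta u ,
\qquad u = u(x,t), \; x \in \mathbb{R}^n .
% Hopf--Cole substitution:
u(x,t) \;=\; -2\mu \, \nabla \log h(x,t)
\quad\Longrightarrow\quad
\partial_t h \;=\; \mu \, \Delta h .
```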
Site‐Selective Disulfide Modification of Proteins: Expanding Diversity beyond the Proteome
Kuan, Seah Ling; Wang, Tao
2016-01-01
The synthetic transformation of polypeptides with molecular accuracy holds great promise for providing functional and structural diversity beyond the proteome. Consequently, the last decade has seen an exponential growth of site‐directed chemistry to install additional features into peptides and proteins even inside living cells. The disulfide rebridging strategy has emerged as a powerful tool for site‐selective modifications since most proteins contain disulfide bonds. In this Review, we present the chemical design, advantages and limitations of the disulfide rebridging reagents, while summarizing their relevance for synthetic customization of functional protein bioconjugates, as well as the resultant impact and advancement for biomedical applications. PMID:27778400
Control of the electromagnetic drag using fluctuating light fields
NASA Astrophysics Data System (ADS)
Pastor, Víctor J. López; Marqués, Manuel I.
2018-05-01
An expression for the electromagnetic drag force experienced by an electric dipole in a light field consisting of a monochromatic plane wave with polarization and phase randomly fluctuating is obtained. The expression explicitly considers the transformations of the field and frequency due to the Doppler shift and the change of the polarizability response of the electric dipole. The conditions to be fulfilled by the polarizability of the dipole in order to obtain a positive, a null, and a negative drag coefficient are analytically determined and checked against numerical simulations for the dynamics of a silver nanoparticle. The theoretically predicted diffusive, superdiffusive, and exponentially accelerated dynamical regimes are numerically confirmed.
A scaling theory for linear systems
NASA Technical Reports Server (NTRS)
Brockett, R. W.; Krishnaprasad, P. S.
1980-01-01
A theory of scaling for rational (transfer) functions in terms of transformation groups is developed. Two different four-parameter scaling groups which play natural roles in studying linear systems are identified and the effect of scaling on Fisher information and related statistical measures in system identification are studied. The scalings considered include change of time scale, feedback, exponential scaling, magnitude scaling, etc. The scaling action of the groups studied is tied to the geometry of transfer functions in a rather strong way as becomes apparent in the examination of the invariants of scaling. As a result, the scaling process also provides new insight into the parameterization question for rational functions.
Deformation processed Al/Ca nano-filamentary composite conductors for HVDC applications
NASA Astrophysics Data System (ADS)
Czahor, C. F.; Anderson, I. E.; Riedemann, T. M.; Russell, A. M.
2017-07-01
Efficient long-distance power transmission is necessary as the world continues to implement renewable energy sources, often sited in remote areas. Light, strong, high-conductivity materials are desirable for this application to reduce both construction and operational costs. In this study an Al/Ca (11.5% vol.) composite with nano-filamentary reinforcement was produced by powder metallurgy then extruded, swaged, and wire drawn to a maximum true strain of 12.7. The tensile strength increased exponentially as the filament size was reduced to the sub-micron level. In an effort to improve the conductor’s ability to operate at elevated temperatures, the deformation-processed wires were heat-treated at 260°C to transform the Ca-reinforcing filaments to Al2Ca. Such a transformation raised the tensile strength by as much as 28%, and caused little change in ductility, while the electrical conductivity was reduced by only 1% to 3%. Al/Al2Ca composites are compared to existing conductor materials to show how implementation could affect installation and performance.
Integral representations of solutions of the wave equation based on relativistic wavelets
NASA Astrophysics Data System (ADS)
Perel, Maria; Gorodnitskiy, Evgeny
2012-09-01
A representation of solutions of the wave equation with two spatial coordinates in terms of localized elementary ones is presented. Elementary solutions are constructed from four solutions with the help of transformations of the affine Poincaré group, i.e. with the help of translations, dilations in space and time and Lorentz transformations. The representation can be interpreted in terms of the initial-boundary value problem for the wave equation in a half-plane. It gives the solution as an integral representation of two types of solutions: propagating localized solutions running away from the boundary under different angles and packet-like surface waves running along the boundary and exponentially decreasing away from the boundary. Properties of elementary solutions are discussed. A numerical investigation of coefficients of the decomposition is carried out. An example of the decomposition of the field created by sources moving along a line with different speeds is considered, and the dependence of coefficients on speeds of sources is discussed.
Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements
NASA Astrophysics Data System (ADS)
Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego
2018-07-01
In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.
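The PCA acceleration step can be sketched independently of the radiative transfer machinery: project the high-dimensional optical-state vectors onto a few principal components and run the expensive model only in the reduced space. The toy data below are hypothetical and exactly rank-2 by construction:

```python
import numpy as np

def pca_reduce(data, n_components):
    """Project mean-centered data onto its leading principal components."""
    mean = data.mean(axis=0)
    centered = data - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    scores = centered @ basis.T
    return scores, basis, mean

rng = np.random.default_rng(1)
# Hypothetical optical-state vectors lying (by construction) in a
# 2-dimensional affine subspace of a 30-dimensional space.
data = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 30)) + 5.0
scores, basis, mean = pca_reduce(data, n_components=2)
recon = scores @ basis + mean   # lossless here, since the rank is 2
```

With real spectra the subspace is only approximate, and the retained number of components trades accuracy against the reported speed-ups.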
Exponential error reduction in pretransfusion testing with automation.
South, Susan F; Casina, Tony S; Li, Lily
2012-08-01
Protecting the safety of blood transfusion is the top priority of transfusion service laboratories. Pretransfusion testing is a critical element of the entire transfusion process to enhance vein-to-vein safety. Human error associated with manual pretransfusion testing is a cause of transfusion-related mortality and morbidity and most human errors can be eliminated by automated systems. However, the uptake of automation in transfusion services has been slow and many transfusion service laboratories around the world still use manual blood group and antibody screen (G&S) methods. The goal of this study was to compare error potentials of commonly used manual (e.g., tiles and tubes) versus automated (e.g., ID-GelStation and AutoVue Innova) G&S methods. Routine G&S processes in seven transfusion service laboratories (four with manual and three with automated G&S methods) were analyzed using failure modes and effects analysis to evaluate the corresponding error potentials of each method. Manual methods contained a higher number of process steps ranging from 22 to 39, while automated G&S methods only contained six to eight steps. Corresponding to the number of the process steps that required human interactions, the risk priority number (RPN) of the manual methods ranged from 5304 to 10,976. In contrast, the RPN of the automated methods was between 129 and 436 and also demonstrated a 90% to 98% reduction of the defect opportunities in routine G&S testing. This study provided quantitative evidence on how automation could transform pretransfusion testing processes by dramatically reducing error potentials and thus would improve the safety of blood transfusion. © 2012 American Association of Blood Banks.
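The failure modes and effects analysis arithmetic behind an RPN comparison is straightforward. The severity/occurrence/detection scores below are hypothetical, not the study's; the study's RPNs of 5304 to 10,976 versus 129 to 436 came from analyses of real process maps:

```python
def process_rpn(failure_modes):
    """Total risk priority number: sum of severity * occurrence *
    detection over all failure modes (each scored on a 1-10 scale)."""
    return sum(s * o * d for s, o, d in failure_modes)

# Hypothetical scorings: a manual workflow with many hands-on steps
# versus an automated one with few.
manual = [(8, 7, 5)] * 30      # 30 error opportunities
automated = [(8, 2, 3)] * 4    # 4 error opportunities
rpn_manual = process_rpn(manual)
rpn_auto = process_rpn(automated)
reduction = 1.0 - rpn_auto / rpn_manual
```

The key driver is the step count: automation removes whole error opportunities, not just individual scores, which is why the reduction is so large.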
Wang, Dongshu; Huang, Lihong; Tang, Longkun
2015-08-01
This paper is concerned with the synchronization dynamics of a class of delayed neural networks with discontinuous neuron activations. Continuous and discontinuous state feedback controllers are designed such that the neural network model can realize exponential complete synchronization, in view of functional differential inclusion theory, the Lyapunov functional method and inequality techniques. The new results proposed here are easy to verify and are also applicable to neural networks with continuous activations. Finally, some numerical examples show the applicability and effectiveness of our main results.
Bernard, Olivier; Alata, Olivier; Francaux, Marc
2006-03-01
Modeling the non-steady-state O2 uptake (VO2) on-kinetics of high-intensity exercises in the time domain with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated VO2 on-responses were generated to mimic the real time course for transitions from light- to high-intensity exercises, with a signal-to-noise ratio of 20 dB. They were each modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant tau1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and tau1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant tau2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the whole second-component parameters limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer, and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1, although better with SA for A2 and tau2.
Our results demonstrate that the implementation of SA significantly improves the estimation of most of these kinetic parameters, but a large inaccuracy remains in estimating the parameter values of the second exponential.
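The discontinuous double-exponential function discussed above is commonly written as a baseline plus two delayed exponential components. A sketch under that assumption follows; the parameter names match the abstract, but the authors' exact expression may differ.

```python
import math

def vo2_double_exp(t, a0, a1, td1, tau1, a2, td2, tau2):
    # Baseline a0, then each exponential component switches on only after
    # its own time delay, which makes the model non-differentiable at
    # td1 and td2 (one source of trouble for gradient-descent fitting).
    v = a0
    if t >= td1:
        v += a1 * (1.0 - math.exp(-(t - td1) / tau1))
    if t >= td2:
        v += a2 * (1.0 - math.exp(-(t - td2) / tau2))
    return v
```

Before td1 the response equals the baseline a0, and for large t it approaches a0 + a1 + a2.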
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and it has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. The method is verified against analytical models of the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies, which identified the critical mutation rate as independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and the error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of the critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, which could affect extinction, recovery and population management strategy.
The effect of population size is particularly strong in small populations of 100 individuals or fewer; the exponential model has significant potential to aid population management in preventing local (and global) extinction events. PMID:24386200
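To illustrate the kind of exponential relationship described above, here is a sketch in which the critical mutation rate rises steeply with population size when the population is small and saturates for large populations. Both the functional form and the parameter values are assumptions for illustration only, not the study's fitted model.

```python
import math

# Hypothetical saturating-exponential relationship between population
# size n and critical mutation rate: small populations tolerate much
# lower mutation rates. u_max and n_scale are invented parameters.

def critical_mutation_rate(n, u_max=0.3, n_scale=50.0):
    return u_max * (1.0 - math.exp(-n / n_scale))
```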
Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Yutaka, Ono; Furukawa, Toshiaki A.
2017-01-01
Background Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except at the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present in the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of these findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: “none of the time,” “a little of the time,” “some of the time,” “most of the time,” and “all of the time.” The patterns of the total score distribution and item responses were analyzed using graphical analysis and an exponential regression model. Results The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from “a little of the time” to “all of the time” on log-normal scales, while the “none of the time” response was not related to this exponential pattern. Discussion The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales. PMID:28289560
NASA Astrophysics Data System (ADS)
Vincenzo, F.; Matteucci, F.; Spitoni, E.
2017-04-01
We present a theoretical method for solving the chemical evolution of galaxies by assuming an instantaneous recycling approximation for chemical elements restored by massive stars, and the delay time distribution formalism for the delayed chemical enrichment by Type Ia Supernovae. The galaxy gas mass assembly history, together with the assumed stellar yields and initial mass function, represents the starting point of this method. We derive a simple and general equation that closely relates the Laplace transforms of the galaxy gas accretion history and star formation history; this relation can be used to simplify the problem of retrieving these quantities in galaxy evolution models assuming a linear Schmidt-Kennicutt law. We find that - once the galaxy star formation history has been reconstructed from our assumptions - the differential equation for the evolution of a chemical element X can be suitably solved with classical methods. We apply our model to reproduce the [O/Fe] and [Si/Fe] versus [Fe/H] chemical abundance patterns observed in the solar neighbourhood, by assuming a decaying exponential infall rate of gas and different delay time distributions for Type Ia Supernovae; we also explore the effect of assuming a non-linear Schmidt-Kennicutt law with power-law index k = 1.4. Although approximate, we conclude that our model with the single-degenerate scenario for Type Ia Supernovae provides the best agreement with the observed data. Our method can also be used by other complementary galaxy stellar population synthesis models to predict the chemical evolution of galaxies.
Nesi, Jacqueline; Choukas-Bradley, Sophia; Prinstein, Mitchell J
2018-04-07
Investigators have long recognized that adolescents' peer experiences provide a crucial context for the acquisition of developmental competencies, as well as potential risks for a range of adjustment difficulties. However, recent years have seen an exponential increase in adolescents' adoption of social media tools, fundamentally reshaping the landscape of adolescent peer interactions. Although research has begun to examine social media use among adolescents, researchers have lacked a unifying framework for understanding the impact of social media on adolescents' peer experiences. This paper represents Part 1 of a two-part theoretical review, in which we offer a transformation framework to integrate interdisciplinary social media scholarship and guide future work on social media use and peer relations from a theory-driven perspective. We draw on prior conceptualizations of social media as a distinct interpersonal context and apply this understanding to adolescents' peer experiences, outlining features of social media with particular relevance to adolescent peer relations. We argue that social media transforms adolescent peer relationships in five key ways: by changing the frequency or immediacy of experiences, amplifying experiences and demands, altering the qualitative nature of interactions, facilitating new opportunities for compensatory behaviors, and creating entirely novel behaviors. We offer an illustration of the transformation framework applied to adolescents' dyadic friendship processes (i.e., experiences typically occurring between two individuals), reviewing existing evidence and offering theoretical implications. Overall, the transformation framework represents a departure from the prevailing approaches of prior peer relations work and a new model for understanding peer relations in the social media context.
Henry, S M; Grbić-Galić, D
1991-01-01
Trichloroethylene (TCE)-transforming aquifer methanotrophs were evaluated for the influence of TCE oxidation toxicity and the effect of reductant availability on TCE transformation rates during methane starvation. TCE oxidation at relatively low (6 mg liter-1) TCE concentrations significantly reduced subsequent methane utilization in the mixed and pure cultures tested, and reduced the number of viable cells in the pure culture Methylomonas sp. strain MM2 by an order of magnitude. Perchloroethylene, tested at the same concentration, had no effect on the cultures. Neither TCE itself nor the aqueous intermediates were responsible for the toxic effect, and it is suggested that TCE oxidation toxicity may have resulted from reactive intermediates that attacked cellular macromolecules. During starvation, all methanotrophs tested exhibited a decline in TCE transformation rates, and this decline followed exponential decay. Formate, provided as an exogenous electron donor, increased TCE transformation rates in Methylomonas sp. strain MM2, but not in mixed culture MM1 or the unidentified isolate CSC-1. Mixed culture MM2 did not transform TCE after 15 h of starvation, but mixed cultures MM1 and MM3 did. The methanotrophs in mixed cultures MM1 and MM3, and the unidentified isolate CSC-1 that was isolated from mixed culture MM1, contained lipid inclusions, whereas the methanotrophs of mixed culture MM2 and Methylomonas sp. strain MM2 did not. It is proposed that lipid storage granules serve as an endogenous source of electrons for TCE oxidation during methane starvation. PMID:2036010
Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users
Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.
2016-01-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values impact demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347
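For context, the exponentiated demand equation (introduced by Koffarnus and colleagues; this is our transcription, so treat the exact form as an assumption) models consumption Q directly rather than log Q, which is why prices with zero consumption need no replacement value:

```python
import math

def exponentiated_demand(price, q0, alpha, k):
    # Q(C) = Q0 * 10**(k * (exp(-alpha * Q0 * C) - 1)); Q0 is demand
    # intensity (consumption at zero price), alpha indexes elasticity,
    # and k spans the log-range of consumption. Q can reach values near
    # zero without taking the logarithm of zero.
    return q0 * 10.0 ** (k * (math.exp(-alpha * q0 * price) - 1.0))
```

At price 0 the equation returns Q0 exactly, and as price grows it falls toward Q0/10**k.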
Decay rates of Gaussian-type I-balls and Bose-enhancement effects in 3+1 dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawasaki, Masahiro; Yamada, Masaki; ICRR, University of Tokyo, Kashiwa, 277-8582
2014-02-03
I-balls/oscillons are long-lived, spatially localized lumps of a scalar field which may be formed after inflation. In a scalar field theory with a monomial potential that is nearly, but shallower than, quadratic, which is motivated by chaotic inflationary models and supersymmetric theories, the scalar field configuration of I-balls is approximately Gaussian. If the I-ball interacts with another scalar field, it eventually decays into radiation. Recently, it was pointed out that the decay rate of I-balls increases exponentially through Bose-enhancement effects under some conditions, and a non-perturbative method to compute the exponential growth rate has been derived. In this paper, we apply the method to the Gaussian-type I-ball in 3+1 dimensions assuming spherical symmetry, and calculate the partial decay rates into partial waves, labelled by the angular momentum of the daughter particles. We identify the conditions under which the I-ball decays exponentially, which are found to depend on the mass and angular momentum of the daughter particles and also to be affected by the quantum uncertainty in their momentum.
NASA Astrophysics Data System (ADS)
Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen
2017-05-01
Actual field pumping tests often involve variable pumping rates which cannot be handled by the classical constant-rate or constant-head test models, and often require a convolution process to interpret the test data. In this study, we propose a semi-analytical model considering an exponentially decreasing pumping rate that starts at a certain (higher) rate and eventually stabilizes at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdown decreases over a certain period during the intermediate pumping stage, which has never been seen in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function is bounded by the two asymptotic curves of constant-rate tests with rates equal to the starting and stabilizing rates, respectively. Wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on these characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters using a genetic algorithm.
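The exponentially decayed rate schedule described above can be written, under our reading, as a rate that relaxes from a starting value toward a stabilized value; the decay constant lam below is a hypothetical illustration parameter, not a value from the paper.

```python
import math

def pumping_rate(t, q_start, q_stab, lam):
    # Q(t) = Q_stab + (Q_start - Q_stab) * exp(-lam * t): begins at the
    # (higher) starting rate and monotonically approaches the (lower)
    # stabilized rate, matching the two bounding constant-rate cases.
    return q_stab + (q_start - q_stab) * math.exp(-lam * t)
```

At t = 0 the schedule returns the starting rate, and for large t it tends to the stabilized rate, which is why the drawdown curve is bounded by the two constant-rate asymptotes.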
NASA Astrophysics Data System (ADS)
Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang
2016-07-01
Previous studies have shown that, in the three main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be treated as an exponential function, i.e., ρAGB(τ) = (C/τ0)exp(-τ/τ0), over an effective range of neutron exposure values. However, the specific expressions for the proportionality factor C and the mean neutron exposure τ0 in the exponential distribution function for different models are not completely determined in the related literature. By dissecting the basic method of obtaining the exponential DNE, and systematically analyzing the solution procedures for neutron exposure distribution functions in different stellar models, we derive general formulae, together with their auxiliary equations, for calculating C and τ0. Given the discrete neutron exposure distribution Pk, the relationships of C and τ0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model of 13C-pocket radiative burning.
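One quick numerical check on the quoted exponential DNE: it integrates to the proportionality factor C over the effective exposure range. A small sketch (the C and τ0 values are arbitrary illustrations, not model values):

```python
import math

def rho_agb(tau, c, tau0):
    # Exponential neutron exposure distribution, (C / tau0) * exp(-tau / tau0).
    return (c / tau0) * math.exp(-tau / tau0)

def trapezoid(f, a, b, n=10000):
    # Simple composite trapezoid rule for the check.
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

# Integrating over many mean exposures recovers C almost exactly.
total = trapezoid(lambda t: rho_agb(t, c=0.05, tau0=0.3), 0.0, 10.0)
```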
Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang
NASA Astrophysics Data System (ADS)
Ikasari, D. M.; Lestari, E. R.; Prastya, E.
2018-03-01
The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that of the Silver Meal Heuristic (SMH) method. The study started by forecasting the cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was used because it has the lowest values of Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) compared to other methods such as multiplicative decomposition, moving average, single exponential smoothing and double exponential smoothing. The forecasting results were then converted into raw material needs, and the inventory cost was calculated using the SMH method. As expected, the results show that the order frequency when using the SMH method was smaller than when using the method applied by PR. Trubus Alami, which affected the total inventory cost. The results suggest that using the SMH method gave a 29.41% lower inventory cost, a cost difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
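For reference, a minimal textbook sketch of the Silver Meal heuristic itself (generic form with hypothetical demands and costs, not the company's data): each replenishment lot is extended period by period while the average cost per period keeps falling.

```python
def silver_meal(demands, order_cost, holding_cost):
    """Return (start_period, lot_size) pairs; holding_cost is per unit
    per period held. Generic lot-sizing sketch, illustrative only."""
    orders = []
    i = 0
    while i < len(demands):
        best_avg = None
        lot = 0
        holding = 0.0
        t = 0
        while i + t < len(demands):
            # Demand t periods ahead of the order is held for t periods.
            holding += holding_cost * t * demands[i + t]
            avg = (order_cost + holding) / (t + 1)
            if best_avg is not None and avg > best_avg:
                break  # average cost per period started rising: close the lot
            best_avg = avg
            lot += demands[i + t]
            t += 1
        orders.append((i, lot))
        i += t
    return orders
```

Fewer, larger lots come out whenever the ordering cost dominates holding cost, which is the mechanism behind the lower order frequency reported above.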
NASA Astrophysics Data System (ADS)
Bakhshi Khaniki, Hossein; Rajasekaran, Sundaramoorthy
2018-05-01
This study develops a comprehensive investigation of the mechanical behavior of non-uniform bi-directional functionally graded beam sensors in the framework of modified couple stress theory. Material variation is modelled through both the length and thickness directions using power-law, sigmoid and exponential functions. Moreover, the beam is assumed to have linear, exponential or parabolic cross-section variation through the length, using power-law and sigmoid varying functions. Under these assumptions, a general model for microbeams is presented and formulated by employing Hamilton’s principle. The governing equations are solved using a mixed finite element method with the Lagrangian interpolation technique, the Gaussian quadrature method and Wilson’s Lagrangian multiplier method. It is shown that by using bi-directional functionally graded materials in non-uniform microbeams, the mechanical behavior of such structures can be affected noticeably, and the scale parameter has a significant effect in changing the rigidity of non-uniform bi-directional functionally graded beams.
Software reliability: Additional investigations into modeling with replicated experiments
NASA Technical Reports Server (NTRS)
Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.
1984-01-01
The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.
A Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval
NASA Technical Reports Server (NTRS)
Solakiewicz, Richard; Attele, Rohan; Koshak, William
2011-01-01
A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function was minimized by a numerical method. In order to improve this optimization, we introduce a Grobner basis solution to obtain analytic representations of the model parameters that serve as a refined initialization scheme for the numerical optimization. Using the Grobner basis, we show that there are exactly two solutions involving the first three moments of the (exponentially distributed) data. When the mean of the ground flash optical characteristic (e.g., the Maximum Group Area, MGA) is larger than that for cloud flashes, a unique solution can be obtained.
Shields, Beverley M; McDonald, Timothy J; Oram, Richard; Hill, Anita; Hudson, Michelle; Leete, Pia; Pearson, Ewan R; Richardson, Sarah J; Morgan, Noel G; Hattersley, Andrew T
2018-06-07
The decline in C-peptide in the 5 years after diagnosis of type 1 diabetes has been well studied, but little is known about the longer-term trajectory. We aimed to examine the association between log-transformed C-peptide levels and the duration of diabetes up to 40 years after diagnosis. We assessed the pattern of association between urinary C-peptide/creatinine ratio (UCPCR) and duration of diabetes in cross-sectional data from 1,549 individuals with type 1 diabetes using nonlinear regression approaches. Findings were replicated in longitudinal follow-up data for both UCPCR (n = 161 individuals, 326 observations) and plasma C-peptide (n = 93 individuals, 473 observations). We identified two clear phases of C-peptide decline: an initial exponential fall over 7 years (47% decrease/year [95% CI -51%, -43%]) followed by a stable period thereafter (+0.07%/year [-1.3, +1.5]). The two phases had similar durations and slopes in patients above and below the median age at diagnosis (10.8 years), although levels were lower in the younger patients irrespective of duration. Patterns were consistent in both longitudinal UCPCR (n = 162; ≤7 years duration: -48%/year [-55%, -38%]; >7 years duration: -0.1% [-4.1%, +3.9%]) and plasma C-peptide (n = 93; >7 years duration only: -2.6% [-6.7%, +1.5%]). These data support two clear phases of C-peptide decline: an initial exponential fall over a 7-year period, followed by a prolonged stabilization where C-peptide levels no longer decline. Understanding the pathophysiological and immunological differences between these two phases will give crucial insights into understanding β-cell survival. © 2018 by the American Diabetes Association.
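The two-phase pattern reported above amounts to a piecewise model: exponential decline for the first seven years, then a plateau. A sketch using the reported ~47%/year fall; the starting level c0 and the exact breakpoint are illustrative assumptions.

```python
def c_peptide(years, c0, annual_drop=0.47, breakpoint=7.0):
    # Phase 1: exponential fall of `annual_drop` per year up to the
    # breakpoint; phase 2: stable thereafter (slope taken as zero,
    # consistent with the +0.07%/year estimate in the abstract).
    years_falling = min(years, breakpoint)
    return c0 * (1.0 - annual_drop) ** years_falling
```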
Optical and luminescence properties of Dy3+ ions in phosphate based glasses
NASA Astrophysics Data System (ADS)
Rasool, Sk. Nayab; Rama Moorthy, L.; Jayasankar, C. K.
2013-08-01
Phosphate glasses with compositions of 44P2O5 + 17K2O + 9Al2O3 + (30 - x)CaF2 + xDy2O3 (x = 0.05, 0.1, 0.5, 1.0, 2.0, 3.0 and 4.0 mol %) were prepared and characterized by X-ray diffraction (XRD), differential thermal analysis (DTA), Fourier transform infrared (FTIR) spectroscopy, optical absorption, emission and decay measurements. The observed absorption bands were analyzed using the free-ion Hamiltonian (HFI) model. A Judd-Ofelt (JO) analysis was performed and the intensity parameters (Ωλ, λ = 2, 4, 6) were evaluated in order to predict the radiative properties of the excited states. From the emission spectra, the effective band widths (Δλeff), stimulated emission cross-sections (σ(λp)), yellow-to-blue (Y/B) intensity ratios and chromaticity color coordinates (x, y) have been determined. The fluorescence decays from the 4F9/2 level of Dy3+ ions were measured by monitoring the intense 4F9/2 → 6H15/2 transition (486 nm). The experimental lifetimes (τexp) are found to decrease with increasing Dy3+ ion concentration due to the quenching process. The decay curves are perfectly single exponential at lower concentrations and gradually become non-exponential at higher concentrations. The non-exponential decay curves are well fitted by the Inokuti-Hirayama (IH) model for S = 6, which indicates that the energy transfer between donor and acceptor is of dipole-dipole type. The systematic analysis revealed that the energy transfer mechanism strongly depends on the Dy3+ ion concentration and the host glass composition.
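The Inokuti-Hirayama fit mentioned above has a standard closed form; as commonly written (conventions vary between papers), the decay is a single exponential multiplied by a stretched term whose exponent 3/S equals 1/2 for dipole-dipole transfer (S = 6).

```python
import math

def inokuti_hirayama(t, i0, tau0, q, s=6):
    # I(t) = I0 * exp(-t/tau0 - q * (t/tau0)**(3/s)); q measures the
    # donor-acceptor energy-transfer strength. With q = 0 the curve
    # reduces to a single exponential with intrinsic lifetime tau0,
    # matching the single-exponential decays seen at low concentration.
    return i0 * math.exp(-t / tau0 - q * (t / tau0) ** (3.0 / s))
```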
Modeling the degradation kinetics of ascorbic acid.
Peleg, Micha; Normand, Mark D; Dixon, William R; Goulette, Timothy R
2018-06-13
Most published reports on ascorbic acid (AA) degradation during food storage and heat preservation suggest that it follows first-order kinetics. Deviations from this pattern include Weibullian decay and an exponential drop approaching a finite nonzero retention. Almost invariably, the degradation rate constant's temperature dependence followed the Arrhenius equation, and hence the simpler exponential model too. A formula and a freely downloadable interactive Wolfram Demonstration to convert the Arrhenius model's energy of activation, Ea, to the exponential model's c parameter, or vice versa, are provided. The AA's isothermal and non-isothermal degradation can be simulated with freely downloadable interactive Wolfram Demonstrations in which the model's parameters can be entered and modified by moving sliders on the screen. Where the degradation is known a priori to follow first- or other fixed-order kinetics, one can use the endpoints method, and in principle the successive-points method too, to estimate the reaction's kinetic parameters from considerably fewer AA concentration determinations than in the traditional manner. Freeware to do the calculations by either method has recently been made available on the Internet. Once obtained in this way, the kinetic parameters can be used to reconstruct the entire degradation curves and predict those at different temperature profiles, isothermal or dynamic. Comparison of the predicted concentration ratios with experimental ones offers a way to validate or refute the kinetic model and the assumptions on which it is based.
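The Ea-to-c conversion mentioned above can be sketched by matching the slopes of the Arrhenius and exponential models at a reference temperature, which gives c = Ea/(R·Tref²); this is our reading of the relationship, not a quote of the authors' formula.

```python
R = 8.314  # gas constant, J mol^-1 K^-1

def ea_to_c(ea, tref):
    # c parameter of the exponential model k(T) = k_ref * exp(c * (T - Tref)),
    # obtained by matching d(ln k)/dT of the Arrhenius model at T = Tref.
    # ea in J/mol, tref in kelvin.
    return ea / (R * tref ** 2)

def c_to_ea(c, tref):
    # Inverse conversion.
    return c * R * tref ** 2
```

For example, Ea = 100 kJ/mol near 100 °C (Tref = 373.15 K) corresponds to c of roughly 0.086 per kelvin, and the two conversions round-trip exactly.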
NASA Astrophysics Data System (ADS)
Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun
2016-05-01
The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technique for rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics, aircraft engine noise and other human-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window-width fitting. First, the white noise in the measured data is filtered using the wavelet threshold method. The data are then segmented using data windows whose step lengths follow even logarithmic intervals. Within each window, data polluted by electromagnetic noise are identified based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitted values, effectively removing the non-stationary electromagnetic noise. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window-width fitting algorithm, which enhances the imaging quality.
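The per-window exponential fitting step can be sketched as a least-squares line on the logarithm of the window samples; the window selection and the energy-detection test from the paper are not reproduced here, and this generic form assumes strictly positive samples.

```python
import math

def fit_exponential(ts, ys):
    # Fit y(t) = a * exp(-b * t) by linear least squares on log(y).
    n = len(ts)
    logs = [math.log(y) for y in ys]
    st, sl = sum(ts), sum(logs)
    stt = sum(t * t for t in ts)
    stl = sum(t * l for t, l in zip(ts, logs))
    slope = (n * stl - st * sl) / (n * stt - st * st)
    intercept = (sl - slope * st) / n
    return math.exp(intercept), -slope  # amplitude a, decay rate b
```

On noise-free exponential samples the fit recovers the amplitude and decay rate exactly; in practice the fitted curve replaces the noise-polluted samples in the window.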
Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas
Philibert, Aurore; Loyce, Chantal; Makowski, David
2012-01-01
Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty in this estimated value by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear), (iii) fixed or random background N2O emission (i.e. emission in the absence of N application) and (iv) a fixed or random applied-N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha⁻¹. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
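The practical consequence noted above (a non-constant emission factor) can be made concrete with a toy calculation. The coefficients here are invented for illustration, not taken from the paper; the only assumption carried over is an exponential emission model of the form E(N) = exp(α + β·N) with background emission E(0).

```python
import math

def emission(n_applied, alpha, beta):
    """Hypothetical exponential model E(N) = exp(alpha + beta * N)."""
    return math.exp(alpha + beta * n_applied)

def emission_factor(n_applied, alpha, beta):
    """Fraction of applied N emitted above background: (E(N) - E(0)) / N."""
    return (emission(n_applied, alpha, beta)
            - emission(0.0, alpha, beta)) / n_applied

alpha, beta = 0.0, 0.005          # made-up coefficients
for n in (80, 160, 240):          # applied N, kg N/ha
    print(n, emission_factor(n, alpha, beta))
```

Unlike a linear model, whose emission factor is a constant slope, the exponential model's factor grows with applied N, which is why the IPCC's single 1% value over- or under-states emissions depending on the application rate.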
Jang, Hae-Won; Ih, Jeong-Guon
2013-03-01
The time domain boundary element method (TBEM) for calculating the exterior sound field using the Kirchhoff integral suffers from non-uniqueness and exponential divergence. In this work, a method to stabilize the TBEM calculation for the exterior problem is suggested. The time domain CHIEF (Combined Helmholtz Integral Equation Formulation) method is newly formulated to suppress low-order fictitious internal modes. This method constrains the surface Kirchhoff integral by forcing the pressures at additional interior points to be zero when the shortest retarded time between boundary nodes and an interior point elapses. However, even after using the CHIEF method, the TBEM calculation suffers from exponential divergence due to the remaining unstable high-order fictitious modes at frequencies higher than the frequency limit of the boundary element model. For complete stabilization, such troublesome modes are selectively adjusted by projecting the time response onto the eigenspace. In a test example for a transiently pulsating sphere, the final average error norm of the stabilized response compared to the analytic solution is 2.5%.
Kouass Sahbani, Saloua; Sanche, Leon; Cloutier, Pierre; Bass, Andrew D; Hunting, Darel J
2014-11-20
Low energy electrons (LEEs) of energies less than 20 eV are generated in large quantities by ionizing radiation in biological matter. While LEEs are known to induce single (SSBs) and double strand breaks (DSBs) in DNA, their ability to inactivate cells by inducing nonreparable lethal damage has not yet been demonstrated. Here we observe the effect of LEEs on the functionality of DNA, by measuring the efficiency of transforming Escherichia coli with a [pGEM-3Zf (-)] plasmid irradiated with 10 eV electrons. Highly ordered DNA films were prepared on pyrolytic graphite by molecular self-assembly using 1,3-diaminopropane ions (Dap(2+)). The uniformity of these films permits the inactivation of approximately 50% of the plasmids compared to <10% using previous methods, which is sufficient for the subsequent determination of their functionality. Upon LEE irradiation, the fraction of functional plasmids decreased exponentially with increasing electron fluence, while LEE-induced isolated base damage, frank DSB, and non DSB-cluster damage increased linearly with fluence. While DSBs can be toxic, their levels were too low to explain the loss of plasmid functionality observed upon LEE irradiation. Similarly, non-DSB cluster damage, revealed by transforming cluster damage into DSBs by digestion with repair enzymes, also occurred relatively infrequently. The exact nature of the lethal damage remains unknown, but it is probably a form of compact cluster damage in which the lesions are too close to be revealed by purified repair enzymes. In addition, this damage is either not repaired or is misrepaired by E. coli, since it results in plasmid inactivation when plasmids contain an average of three lesions. Comparison with previous results from a similar experiment performed with γ-irradiated plasmids indicates that the type of clustered DNA lesions, created directly on cellular DNA by LEEs, may be more difficult to repair than those produced by other species from radiolysis.
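The exponential decrease of functional plasmids with fluence is the classic single-hit survival law. The sketch below, with entirely synthetic numbers, shows how an inactivation cross-section would be extracted from such data under the assumed model S(Φ) = exp(−σ·Φ).

```python
import math

def cross_section(fluences, survivals):
    """Least-squares slope of -ln(S) versus fluence, forced through the origin."""
    num = sum(f * (-math.log(s)) for f, s in zip(fluences, survivals))
    den = sum(f * f for f in fluences)
    return num / den

# Synthetic data generated with a made-up sigma = 2e-15 cm^2
sigma_true = 2e-15
phis = [1e14, 2e14, 4e14]                       # electrons/cm^2
surv = [math.exp(-sigma_true * p) for p in phis]
print(cross_section(phis, surv))                # recovers ~2e-15
```

A single fitted parameter then summarizes the whole fluence-response curve, which is what makes the exponential decay observation quantitatively useful.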
NASA Technical Reports Server (NTRS)
Block, Eli; Byemerwa, Jovita; Dispenza, Ross; Doughty, Benjamin; Gillyard, KaNesha; Godbole, Poorwa; Gonzales-Wright, Jeanette; Hull, Ian; Kannappan, Jotthe; Levine, Alexander;
2015-01-01
With the exponential growth of interest in unmanned aerial vehicles (UAVs) and their vast array of applications in both space exploration and terrestrial uses such as the delivery of medicine and monitoring the environment, the 2014 Stanford-Brown-Spelman iGEM team is pioneering the development of a fully biological UAV for scientific and humanitarian missions. The prospect of a biologically-produced UAV presents numerous advantages over the current manufacturing paradigm. First, a foundational architecture built by cells allows for construction or repair in locations where it would be difficult to bring traditional tools of production. Second, a major limitation of current research with UAVs is the size and high power consumption of analytical instruments, which require bulky electrical components and large fuselages to support their weight. By moving these functions into cells with biosensing capabilities – for example, a series of cells engineered to report GFP, green fluorescent protein, when conditions exceed a certain threshold concentration of a compound of interest, enabling their detection post-flight – these problems of scale can be avoided. To this end, we are working to engineer cells to synthesize cellulose acetate as a novel bioplastic, characterize biological methods of waterproofing the material, and program this material’s systemic biodegradation. In addition, we aim to use an “amberless” system to prevent horizontal gene transfer from live cells on the material to microorganisms in the flight environment. So far, we have: successfully transformed Gluconacetobacter hansenii, a cellulose-producing bacterium, with a series of promoters to test transformation efficiency before adding the acetylation genes; isolated protein bands present in the wasp nest material; transformed the cellulose-degrading genes into Escherichia coli; and we have confirmed that the amberless construct prevents protein expression in wild-type cells. 
In addition, as part of our human outreach project, we have been in touch with leaders in the fields of UAVs, synthetic biology, and earth sciences, and it is clear that biodegradable UAVs could have a significant impact on the industry.
Discrete Fourier transform (DFT) analysis for applications using iterative transform methods
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2012-01-01
According to various embodiments, a method is provided for determining aberration data for an optical system. The method comprises collecting a data signal, and generating a pre-transformation algorithm. The data is pre-transformed by multiplying the data with the pre-transformation algorithm. A discrete Fourier transform of the pre-transformed data is performed in an iterative loop. The method further comprises back-transforming the data to generate aberration data.
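The flow of the patent abstract can be sketched generically. The actual patented pre-transformation algorithm is not specified here, so a simple complex weight array stands in for it, and the constraint step inside the loop is left as a placeholder comment; everything else (a naive DFT and its inverse) is standard.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform; inverse divides by n."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(v * cmath.exp(s * 2j * cmath.pi * k * m / n)
               for m, v in enumerate(x)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def iterate(data, pre_weights, n_iters=3):
    """Pre-transform, DFT in an iterative loop, then back-transform."""
    x = [d * w for d, w in zip(data, pre_weights)]   # pre-transformation
    for _ in range(n_iters):
        spectrum = dft(x)
        # ... aberration-retrieval constraints would be applied here ...
        x = dft(spectrum, inverse=True)              # back-transform
    return [xi / w for xi, w in zip(x, pre_weights)]
```

With the constraint step empty the loop is a pure round trip, so the input is recovered; in a real phase-retrieval use the constraints would modify the spectrum at each pass.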
NASA Technical Reports Server (NTRS)
Diosady, Laslo; Murman, Scott; Blonigan, Patrick; Garai, Anirban
2017-01-01
A space-time adjoint solver for turbulent compressible flows is presented. The failure of traditional sensitivity methods for chaotic flows is confirmed, the rate of exponential growth of the adjoint is assessed for a practical 3D turbulent simulation, and the failure of short-window sensitivity approximations is demonstrated.
Recombination-assisted megaprimer (RAM) cloning
Mathieu, Jacques; Alvarez, Emilia; Alvarez, Pedro J.J.
2014-01-01
No molecular cloning technique is considered universally reliable, and many suffer from being too laborious, complex, or expensive. Restriction-free cloning is among the simplest, most rapid, and cost-effective methods, but does not always provide successful results. We modified this method to enhance its success rate through the use of exponential amplification coupled with homologous end-joining. This new method, recombination-assisted megaprimer (RAM) cloning, significantly extends the application of restriction-free cloning, and allows efficient vector construction with much less time and effort when restriction-free cloning fails to provide satisfactory results. The following modifications were made to the protocol:
• Limited number of PCR cycles for both megaprimer synthesis and the cloning reaction to reduce error propagation.
• Elimination of phosphorylation and ligation steps previously reported for cloning methods that used exponential amplification, through the inclusion of a reverse primer in the cloning reaction with a 20 base pair region of homology to the forward primer.
• The inclusion of 1 M betaine to enhance both reaction specificity and yield. PMID:26150930
Two-dimensional frequency-domain acoustic full-waveform inversion with rugged topography
NASA Astrophysics Data System (ADS)
Zhang, Qian-Jiang; Dai, Shi-Kun; Chen, Long-Wei; Li, Kun; Zhao, Dong-Dong; Huang, Xing-Xing
2015-09-01
We studied finite-element-method-based two-dimensional frequency-domain acoustic full-waveform inversion (FWI) under rugged topography conditions. An exponential attenuation boundary condition suited to rugged topography is proposed to solve the cutoff boundary problem while also meeting the requirement of using the same subdivision grid in joint multifrequency inversion. The proposed method introduces an attenuation factor; by adjusting it, acoustic waves are sufficiently attenuated in the attenuation layer to minimize the cutoff boundary effect. Based on the law of exponential attenuation, expressions for computing the attenuation factor and the thickness of the attenuation layers are derived for different frequencies. In multifrequency-domain FWI, the conjugate gradient method is used to solve the equations in the Gauss-Newton algorithm and thus minimize the computational cost of calculating the Hessian matrix. In addition, the effect of initial model selection and frequency combination on FWI is analyzed. Numerical simulation and FWI examples are used to verify the efficiency of the proposed method.
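The relation between attenuation factor and layer thickness can be illustrated with a back-of-envelope sketch. These are not the paper's derived expressions: the sketch only assumes amplitude decaying as exp(−α·d) in the layer and a two-way traversal reduced to a target fraction ε.

```python
import math

def attenuation_factor(layer_thickness, eps=1e-3):
    """alpha such that exp(-alpha * 2 * L) = eps (two-way traversal)."""
    return -math.log(eps) / (2.0 * layer_thickness)

def layer_thickness(alpha, eps=1e-3):
    """Inverse: thickness needed to reach eps for a given alpha."""
    return -math.log(eps) / (2.0 * alpha)

alpha = attenuation_factor(500.0)        # hypothetical 500 m thick layer
print(alpha, layer_thickness(alpha))     # thickness round-trips to 500 m
```

The two quantities trade off against each other, which is why the paper derives frequency-dependent expressions: lower frequencies need either a thicker layer or a larger attenuation factor to suppress boundary reflections equally well.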
A multigrid solver for the semiconductor equations
NASA Technical Reports Server (NTRS)
Bachmann, Bernhard
1993-01-01
We present a multigrid solver for the exponential fitting method. The solver is applied to the current continuity equations of semiconductor device simulation in two dimensions. The exponential fitting method is based on a mixed finite element discretization using the lowest-order Raviart-Thomas triangular element. This discretization method yields a good approximation of front layers and guarantees current conservation. The corresponding stiffness matrix is an M-matrix. 'Standard' multigrid solvers, however, cannot be applied to the resulting system, as this is dominated by an unsymmetric part, which is due to the presence of strong convection in part of the domain. To overcome this difficulty, we explore the connection between Raviart-Thomas mixed methods and the nonconforming Crouzeix-Raviart finite element discretization. In this way we can construct nonstandard prolongation and restriction operators using easily computable weighted L^2-projections based on suitable quadrature rules and the upwind effects of the discretization. The resulting multigrid algorithm shows very good results, even for real-world problems and for locally refined grids.
Deng, Jie; Fishbein, Mark H; Rigsby, Cynthia K; Zhang, Gang; Schoeneman, Samantha E; Donaldson, James S
2014-11-01
Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in children. The gold standard for diagnosis is liver biopsy. MRI is a non-invasive imaging method to provide quantitative measurement of hepatic fat content. The methodology is particularly appealing for the pediatric population because of its rapidity and radiation-free imaging techniques. The aim of this work was to develop a multi-point Dixon MRI method with multi-interference models (multi-fat-peak modeling and bi-exponential T2* correction) for accurate hepatic fat fraction (FF) and T2* measurements in pediatric patients with NAFLD. A phantom study was first performed to validate the accuracy of the MRI fat fraction measurement by comparing it with the chemical fat composition of the ex-vivo pork liver-fat homogenate. The most accurate model determined from the phantom study was used for fat fraction and T2* measurements in 52 children and young adults referred from the pediatric hepatology clinic with suspected or identified NAFLD. Separate T2* values of water (T2*W) and fat (T2*F) components derived from the bi-exponential fitting were evaluated and plotted as a function of fat fraction. In ten patients undergoing liver biopsy, we compared histological analysis of liver fat fraction with MRI fat fraction. In the phantom study the 6-point Dixon with 5-fat-peak, bi-exponential T2* modeling demonstrated the best precision and accuracy in fat fraction measurements compared with other methods. This model was further calibrated with chemical fat fraction and applied in patients, where, as in the phantom study, conventional 2-point and 3-point Dixon methods underestimated fat fraction compared to the calibrated 6-point 5-fat-peak bi-exponential model (P < 0.0001). With increasing fat fraction, T2*W (27.9 ± 3.5 ms) decreased, whereas T2*F (20.3 ± 5.5 ms) increased; and T2*W and T2*F became increasingly more similar when fat fraction was higher than 15-20%. 
Histological fat fraction measurements in ten patients were highly correlated with calibrated MRI fat fraction measurements (Pearson correlation coefficient r = 0.90 with P = 0.0004). Liver MRI using multi-point Dixon with multi-fat-peak and bi-exponential T2* modeling provided accurate fat quantification in children and young adults with non-alcoholic fatty liver disease and may be used to screen at-risk or affected individuals and to monitor disease progress noninvasively.
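The bi-exponential signal model underlying the fitting above can be sketched directly. This shows only the assumed two-compartment decay, not the paper's full 6-point, 5-fat-peak reconstruction; the tissue values below are hypothetical apart from the T2* means quoted in the abstract.

```python
import math

def signal(te_ms, w, f, t2w_ms, t2f_ms):
    """Bi-exponential model: S(TE) = W*exp(-TE/T2w) + F*exp(-TE/T2f)."""
    return w * math.exp(-te_ms / t2w_ms) + f * math.exp(-te_ms / t2f_ms)

def fat_fraction(w, f):
    """FF from the TE = 0 intercepts of the water and fat components."""
    return f / (w + f)

# Hypothetical tissue: 20% fat fraction, T2* values near the quoted means
w, f = 80.0, 20.0
print(fat_fraction(w, f))                 # 0.2
print(signal(0.0, w, f, 27.9, 20.3))      # 100.0 at TE = 0
```

In practice W, F and the two T2* values are estimated by nonlinear least squares from signals acquired at the six echo times, and the fat fraction is then read off the intercepts as above.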
Piecewise exponential survival times and analysis of case-cohort data.
Li, Yan; Gail, Mitchell H; Preston, Dale L; Graubard, Barry I; Lubin, Jay H
2012-06-15
Case-cohort designs select a random sample of a cohort to be used as controls, with cases arising from the follow-up of the cohort. Analyses of case-cohort studies with time-varying exposures that use Cox partial likelihood methods can be computer intensive. We propose a piecewise-exponential approach in which Poisson regression model parameters are estimated from a pseudolikelihood and the corresponding variances are derived by applying Taylor linearization methods that are used in survey research. The proposed approach is evaluated using Monte Carlo simulations. An illustration is provided using data from the Alpha-Tocopherol, Beta-Carotene Cancer Prevention Study of male smokers in Finland, in which a case-cohort study of serum glucose level and pancreatic cancer was analyzed. Copyright © 2012 John Wiley & Sons, Ltd.
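The piecewise-exponential idea itself reduces to a very simple estimator, sketched below with made-up numbers. This is only the core of the model, not the authors' pseudolikelihood or Taylor-linearization variance machinery: within each time interval the hazard is assumed constant, so its maximum-likelihood estimate is events divided by person-time.

```python
def piecewise_hazards(events, person_time):
    """Per-interval constant hazard estimates: d_k / PT_k."""
    return [d / t for d, t in zip(events, person_time)]

# Hypothetical cohort split into 3 follow-up intervals
deaths = [4, 9, 12]
pt = [1000.0, 900.0, 600.0]            # person-years per interval
print(piecewise_hazards(deaths, pt))   # hazards rise with time
```

Covariate effects enter by replacing these raw rates with a Poisson regression whose offset is the log person-time, which is exactly why the piecewise-exponential model can be fitted with standard Poisson software.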
Web-based application on employee performance assessment using exponential comparison method
NASA Astrophysics Data System (ADS)
Maryana, S.; Kurnia, E.; Ruyani, A.
2017-02-01
Employee performance assessment, also called a performance review, performance evaluation, or employee appraisal, is an effort to assess staff performance with the aim of increasing the productivity of employees and companies. This application supports the assessment of employee performance using five criteria: presence, quality of work, quantity of work, discipline, and teamwork. The system uses the Exponential Comparison Method and Eckenrode weighting. Calculation results are presented as graphs showing the assessment of each employee. The system was developed in Notepad++ with a MySQL database. Testing shows that the application corresponds with the design and runs properly; the tests conducted include structural and functional tests, validation, sensitivity analysis, and SUMI testing.
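The scoring step can be sketched as follows. This is a hedged rendering of the Exponential Comparison Method as it is commonly described (each criterion rating raised to the power of the criterion weight, then summed); the weights and ratings below are invented stand-ins, not the paper's Eckenrode-derived values.

```python
CRITERIA = ["presence", "work quality", "work quantity", "discipline", "teamwork"]

def mpe_score(ratings, weights):
    """Exponential Comparison Method total: sum_j rating_j ** weight_j."""
    return sum(r ** w for r, w in zip(ratings, weights))

weights = [3, 2, 2, 1, 1]              # hypothetical Eckenrode weights
alice = [4, 3, 3, 4, 5]                # ratings per criterion, 1-5 scale
bob = [3, 4, 3, 3, 4]
print(mpe_score(alice, weights))       # higher total ranks first
print(mpe_score(bob, weights))
```

Because ratings are exponentiated, the method amplifies differences on heavily weighted criteria far more than a plain weighted sum would, which is its usual selling point for ranking alternatives.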
Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian
2012-09-01
This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristics of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, so that the master systems synchronize with the slave systems. The desired sampled-data controller can be obtained by solving a set of linear matrix inequalities, which depend upon the maximum sampling interval and the decay rate. The obtained conditions not only are less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.
Sriraam, N.
2012-01-01
Developments of new classes of efficient compression algorithms, software systems, and hardware for data intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media and hence there is a need for an efficient compression system. This paper presents a new and efficient high performance lossless EEG compression using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, Lempel-Ziv-arithmetic encoder. Also a new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with compression efficiency of 67%) is achieved with the proposed scheme with less encoding time thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications. PMID:22489238
NASA Astrophysics Data System (ADS)
Lino-Zapata, F. M.; Yan, H. L.; Ríos-Jara, D.; Sánchez Llamazares, J. L.; Zhang, Y. D.; Zhao, X.; Zuo, L.
2018-01-01
The kinetic arrest (KA) of martensitic transformation (MT) observed in Ni45Co5Mn36.8In13.2 melt-spun ribbons has been studied. These alloy ribbons show an ordered columnar-like grain microstructure with the longer grain axis growing perpendicular to the ribbon plane, and transform martensitically from a single austenitic (AST) parent phase with the L21-type crystal structure to a monoclinic incommensurate 6 M modulated martensite (MST). Results show that the volume fraction of austenite frozen into the martensitic matrix is proportional to the applied magnetic field. Full arrest of the structural transition is found for a magnetic field of 7 T. The metastable character of the non-equilibrium field-cooled glassy state was characterized by introducing thermal and magnetic field fluctuations or measuring the relaxation of magnetization. The relaxation of magnetization from a field-cooled kinetically arrested state at 5 and 7 T follows the Kohlrausch-Williams-Watts (KWW) stretched exponential function with a β exponent around 0.95, indicating the weakly metastable nature of the system under strong magnetic fields. The relationship between the occurrence of exchange bias and the frozen fraction of AST into the MST matrix was studied.
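The KWW relaxation law used above is easy to state in code. The sketch below only evaluates the stretched exponential M(t) = M0·exp(−(t/τ)^β); all parameter values are illustrative except β ≈ 0.95 taken from the abstract (β = 1 would recover a simple exponential decay).

```python
import math

def kww(t, m0, tau, beta):
    """Kohlrausch-Williams-Watts stretched exponential relaxation."""
    return m0 * math.exp(-((t / tau) ** beta))

m0, tau, beta = 1.0, 1000.0, 0.95     # hypothetical M0 and tau (seconds)
print(kww(0.0, m0, tau, beta))        # 1.0 at t = 0
print(kww(tau, m0, tau, beta))        # M0/e at t = tau, for any beta
```

In a fit to measured relaxation data, β near 1 (as reported) signals a nearly single-exponential process, whereas small β would indicate a broad distribution of relaxation times and a strongly glassy state.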
Wavelet processing techniques for digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu
1992-09-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Similar to traditional coarse-to-fine matching strategies, the radiologist may first choose to look for coarse features (e.g., dominant mass) within low frequency levels of a wavelet transform and later examine finer features (e.g., microcalcifications) at higher frequency levels. In addition, features may be extracted by applying geometric constraints within each level of the transform. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet representations, enhanced by linear, exponential and constant weight functions through scale space. By improving the visualization of breast pathology, we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
NASA Astrophysics Data System (ADS)
Afonso, J. C.; Zlotnik, S.; Diez, P.
2015-12-01
We present a flexible, general and efficient approach for implementing thermodynamic phase equilibria information (in the form of sets of physical parameters) into geophysical and geodynamic studies. The approach is based on multi-dimensional decomposition methods, which transform the original multi-dimensional discrete information into a dimensional-separated representation. This representation has the property that the number of coefficients to be stored grows linearly with the number of dimensions (in contrast to a full multi-dimensional cube, whose storage requirement grows exponentially with the number of dimensions). Thus, the amount of information to be stored in memory during a numerical simulation or geophysical inversion is drastically reduced. Accordingly, the amount and resolution of the thermodynamic information that can be used in a simulation or inversion increases substantially. In addition, the method is independent of the actual software used to obtain the primary thermodynamic information, and therefore it can be used in conjunction with any thermodynamic modeling program and/or database. Also, the errors associated with the decomposition procedure are readily controlled by the user, depending on her/his actual needs (e.g. preliminary runs vs full resolution runs). We illustrate the benefits, generality and applicability of our approach with several examples of practical interest for both geodynamic modeling and geophysical inversion/modeling. Our results demonstrate that the proposed method is a competitive and attractive candidate for implementing thermodynamic constraints into a broad range of geophysical and geodynamic studies.
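The storage argument can be checked with a back-of-envelope calculation. The numbers are hypothetical: a full d-dimensional lookup table with n points per axis needs n^d entries, while a rank-r separated (sum-of-products) representation needs roughly r·d·n coefficients.

```python
def full_storage(n, d):
    """Entries in a full d-dimensional table with n points per axis."""
    return n ** d

def separated_storage(n, d, rank):
    """Coefficients in a rank-r separated representation: r 1-D factors per axis."""
    return rank * d * n

n, d, r = 100, 4, 20                  # grid points, dimensions, rank
print(full_storage(n, d))             # 100000000
print(separated_storage(n, d, r))     # 8000
```

Even a modest four-dimensional table (say pressure, temperature and two compositional axes) shrinks by four orders of magnitude here, which is the practical point the abstract makes about memory during simulations and inversions.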
Exponential Formulae and Effective Operations
NASA Technical Reports Server (NTRS)
Mielnik, Bogdan; Fernandez, David J. C.
1996-01-01
One of the standard methods to predict the phenomena of squeezing consists in splitting the unitary evolution operator into a product of simpler operations. The technique, while mathematically general, is not so simple in applications and leaves some pragmatic problems open. We report an extended class of exponential formulae, which yield a quicker insight into the laboratory details for a class of squeezing operations and, moreover, can alternatively be used to program different types of operations, such as: (1) the free evolution inversion; and (2) soft simulations of sharp kicks (so that all abstract results involving kicks of the oscillator potential become realistic laboratory prescriptions).
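A classic instance of such a splitting (the textbook special case, not one of the paper's extended formulae) arises when the commutator [A, B] commutes with both A and B; the Baker-Campbell-Hausdorff series then truncates:

```latex
e^{A+B} \;=\; e^{A}\, e^{B}\, e^{-\tfrac{1}{2}[A,B]},
\qquad \text{valid when } [A,[A,B]] = [B,[A,B]] = 0 .
```

Factorizations of this kind underlie the standard decompositions of displacement and squeezing operators into products of experimentally realizable steps, which is the setting the abstract's extended formulae generalize.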