Equivalent linearization for fatigue life estimates of a nonlinear structure
NASA Technical Reports Server (NTRS)
Miles, R. N.
1989-01-01
An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
NASA Technical Reports Server (NTRS)
Mickens, R. E.
1985-01-01
The classical method of equivalent linearization is extended to a particular class of nonlinear difference equations. It is shown that the method can be used to obtain an approximation of the periodic solutions of these equations. In particular, the parameters of the limit cycle and the limit points can be determined. Three examples illustrating the method are presented.
ERIC Educational Resources Information Center
Flowers, Claudia P.; Raju, Nambury S.; Oshima, T. C.
Current interest in the assessment of measurement equivalence emphasizes two methods of analysis: linear and nonlinear procedures. This study simulated data using the graded response model to examine the performance of linear (confirmatory factor analysis or CFA) and nonlinear (item-response-theory-based differential item functioning or IRT-based…
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applied to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Indeed, we showed experimentally that the linear PH matrix method with random variables can enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in an illustrative manner and not for practical use in enhancing the security of a given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms for doing so. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.
1999-01-01
In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact Fokker-Planck-Kolmogorov (FPK) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method in combination with the equivalent linearization technique is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
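While the paper treats MDOF finite element models, the fixed-point logic at the heart of equivalent (stochastic) linearization can be shown on a single-DOF Duffing oscillator. A minimal Python sketch, with all parameters assumed for illustration:

```python
import numpy as np

# Equivalent linearization of an SDOF Duffing oscillator
#   x'' + c*x' + k*x + eps*x^3 = w(t),  w = Gaussian white noise, two-sided PSD S0.
# For a Gaussian response, the equivalent linear stiffness is
#   k_eq = k + 3*eps*sigma^2, and the linear system gives sigma^2 = pi*S0/(c*k_eq),
# so the response variance follows from a fixed-point iteration.
c, k, eps, S0 = 0.05, 1.0, 0.5, 1e-3

sigma2 = np.pi * S0 / (c * k)          # linear solution as starting guess
for _ in range(100):
    k_eq = k + 3.0 * eps * sigma2      # update equivalent stiffness
    sigma2_new = np.pi * S0 / (c * k_eq)
    if abs(sigma2_new - sigma2) < 1e-14:
        break
    sigma2 = sigma2_new

print(f"RMS displacement from equivalent linearization: {np.sqrt(sigma2):.6f}")
```

The same error-minimization idea, with matrices in place of scalars, is what generalizes to the MDOF finite element setting described above.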
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper-bounded linear programming code SUBLP.
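To illustrate the "equivalent standard linear program" route that SLP is designed to avoid, here is a hedged sketch of L1 estimation recast as an LP with split residual variables. The data and solver choice (scipy's HiGHS backend) are assumptions for illustration, not part of the thesis:

```python
import numpy as np
from scipy.optimize import linprog

# L1 estimation  min_beta sum_i |y_i - X_i.beta|  as an equivalent standard LP:
# split the residuals r = r_plus - r_minus with r_plus, r_minus >= 0, so the
# objective becomes sum(r_plus + r_minus). Data below are synthetic.
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=n)

# variable vector: [beta (free), r_plus (>=0), r_minus (>=0)]
cost = np.concatenate([np.zeros(p), np.ones(n), np.ones(n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])   # X.beta + r_plus - r_minus = y
bounds = [(None, None)] * p + [(0, None)] * (2 * n)

res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
print("L1 estimate of beta:", res.x[:p])
```

Note the size penalty: the split doubles the residual variables, which is exactly the overhead a semilinear simplex variant sidesteps by handling sign-dependent costs directly.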
Gauge invariance of excitonic linear and nonlinear optical response
NASA Astrophysics Data System (ADS)
Taghizadeh, Alireza; Pedersen, T. G.
2018-05-01
We study the equivalence of four different approaches to calculate the excitonic linear and nonlinear optical response of multiband semiconductors. These four methods derive from two choices of gauge, i.e., length and velocity gauges, and two ways of computing the current density, i.e., direct evaluation and evaluation via the time-derivative of the polarization density. The linear and quadratic response functions are obtained for all methods by employing a perturbative density-matrix approach within the mean-field approximation. The equivalence of all four methods is shown rigorously, when a correct interaction Hamiltonian is employed for the velocity gauge approaches. The correct interaction is written as a series of commutators containing the unperturbed Hamiltonian and position operators, which becomes equivalent to the conventional velocity gauge interaction in the limit of infinite Coulomb screening and infinitely many bands. As a case study, the theory is applied to hexagonal boron nitride monolayers, and the linear and nonlinear optical responses found with the different approaches are compared.
A single-degree-of-freedom model for non-linear soil amplification
Erdik, Mustafa Ozder
1979-01-01
For proper understanding of soil behavior during earthquakes and assessment of a realistic surface motion, studies of the large-strain dynamic response of non-linear hysteretic soil systems are indispensable. Most of the presently available studies are based on the assumption that the response of a soil deposit is mainly due to the upward propagation of horizontally polarized shear waves from the underlying bedrock. Equivalent-linear procedures, currently in common use in non-linear soil response analysis, provide a simple approach and have been favorably compared with the actual recorded motions in some particular cases. Strain compatibility in these equivalent-linear approaches is maintained by selecting values of shear moduli and damping ratios in accordance with the average soil strains, in an iterative manner. Truly non-linear constitutive models with complete strain compatibility have also been employed. The equivalent-linear approaches often raise some doubt as to the reliability of their results concerning the system response in high frequency regions. In these frequency regions the equivalent-linear methods may underestimate the surface motion by as much as a factor of two or more. Although such studies are complete in their methods of analysis, they inevitably provide applications pertaining only to a few specific soil systems and do not lead to general conclusions about soil behavior. This report attempts to provide a general picture of the soil response through the use of a single-degree-of-freedom non-linear-hysteretic model. Although the investigation is based on a specific type of nonlinearity and a set of dynamic soil properties, the method described does not limit itself to these assumptions and is equally applicable to other types of nonlinearity and soil parameters.
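The iterative strain-compatible procedure described above can be sketched in a few lines. The modulus-reduction and damping curves below are hypothetical hyperbolic forms, chosen only to make the loop concrete, and the layer is idealized as an SDOF system:

```python
import numpy as np

# Equivalent-linear iteration for a single soil layer idealized as an SDOF
# system. All curves and parameters are hypothetical illustrations.
G0, gamma_ref = 60e6, 1e-3          # small-strain modulus [Pa], reference strain
rho, H = 1800.0, 20.0               # density [kg/m^3], layer thickness [m]
a_base, f = 2.0, 2.0                # base acceleration amplitude [m/s^2], frequency [Hz]

gamma_eff = 1e-4                    # initial guess of effective shear strain
for _ in range(50):
    G = G0 / (1.0 + gamma_eff / gamma_ref)          # strain-compatible modulus
    xi = 0.02 + 0.2 * (1.0 - G / G0)                # strain-compatible damping ratio
    wn = (np.pi / (2.0 * H)) * np.sqrt(G / rho)     # fundamental circular frequency
    w = 2.0 * np.pi * f
    # steady-state SDOF displacement amplitude, then strain ~ u / H
    u = a_base / np.sqrt((wn**2 - w**2) ** 2 + (2.0 * xi * wn * w) ** 2)
    gamma_new = 0.65 * u / H                        # effective strain = 65% of peak
    if abs(gamma_new - gamma_eff) < 1e-12:
        break
    gamma_eff = gamma_new

print(f"converged strain {gamma_eff:.2e}, G/G0 = {G/G0:.3f}, damping = {xi:.3f}")
```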
Code of Federal Regulations, 2011 CFR
2011-07-01
... followed by a gravimetric mass determination, but which is not a Class I equivalent method because of... MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.1 Definitions. Terms used but not defined... slope of a linear plot fitted to corresponding candidate and reference method mean measurement data...
New Results on the Linear Equating Methods for the Non-Equivalent-Groups Design
ERIC Educational Resources Information Center
von Davier, Alina A.
2008-01-01
The two most common observed-score equating functions are the linear and equipercentile functions. These are often seen as different methods, but von Davier, Holland, and Thayer showed that any equipercentile equating function can be decomposed into linear and nonlinear parts. They emphasized the dominant role of the linear part of the nonlinear…
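A minimal sketch of the two equating functions on synthetic score data; the difference between the two outputs at a score point is the nonlinear part in the decomposition of von Davier, Holland, and Thayer:

```python
import numpy as np

# Linear vs. equipercentile observed-score equating on synthetic data.
rng = np.random.default_rng(1)
x_scores = rng.normal(50, 10, size=2000)   # new-form scores
y_scores = rng.normal(52, 12, size=2000)   # old-form scores

def linear_equate(x):
    """Linear equating: match mean and SD of the two score distributions."""
    return y_scores.mean() + y_scores.std() / x_scores.std() * (x - x_scores.mean())

def equipercentile_equate(x):
    """Equipercentile equating: y = F_Y^{-1}(F_X(x)), via empirical quantiles."""
    pct = (x_scores < x).mean() * 100.0
    return np.percentile(y_scores, pct)

x = 65.0
print("linear:", linear_equate(x), "equipercentile:", equipercentile_equate(x))
```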
Baldwin, Alex S.; Baker, Daniel H.; Hess, Robert F.
2016-01-01
The internal noise present in a linear system can be quantified by the equivalent noise method. By measuring the effect that applying external noise to the system’s input has on its output one can estimate the variance of this internal noise. By applying this simple “linear amplifier” model to the human visual system, one can entirely explain an observer’s detection performance by a combination of the internal noise variance and their efficiency relative to an ideal observer. Studies using this method rely on two crucial factors: firstly that the external noise in their stimuli behaves like the visual system’s internal noise in the dimension of interest, and secondly that the assumptions underlying their model are correct (e.g. linearity). Here we explore the effects of these two factors while applying the equivalent noise method to investigate the contrast sensitivity function (CSF). We compare the results at 0.5 and 6 c/deg from the equivalent noise method against those we would expect based on pedestal masking data collected from the same observers. We find that the loss of sensitivity with increasing spatial frequency results from changes in the saturation constant of the gain control nonlinearity, and that this only masquerades as a change in internal noise under the equivalent noise method. Part of the effect we find can be attributed to the optical transfer function of the eye. The remainder can be explained by either changes in effective input gain, divisive suppression, or a combination of the two. Given these effects the efficiency of our observers approaches the ideal level. We show the importance of considering these factors in equivalent noise studies.
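A minimal sketch of the equivalent noise fit under the linear amplifier model discussed above, using the standard parameterization in which squared contrast threshold grows linearly with external noise variance; the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Linear amplifier model: c_t^2 = (sigma_ext^2 + sigma_int^2) / eta,
# where sigma_int^2 is the equivalent internal noise and eta the efficiency.
def lam(sigma_ext2, sigma_int2, eta):
    return (sigma_ext2 + sigma_int2) / eta

sigma_ext2 = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])   # external noise variance
true_int2, true_eta = 1.2, 0.4
noise = 1 + 0.05 * np.random.default_rng(2).normal(size=6)
c_t2 = lam(sigma_ext2, true_int2, true_eta) * noise      # synthetic thresholds

popt, _ = curve_fit(lam, sigma_ext2, c_t2, p0=[1.0, 0.5])
print(f"estimated internal noise variance {popt[0]:.2f}, efficiency {popt[1]:.2f}")
```

The paper's point is precisely that when the true system contains a gain-control nonlinearity, the sigma_int2 recovered by such a fit can change even though the internal noise itself has not.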
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
Mesh Deformation Based on Fully Stressed Design: The Method and Two-Dimensional Examples
NASA Technical Reports Server (NTRS)
Hsu, Su-Yuen; Chang, Chau-Lyan
2007-01-01
Mesh deformation in response to redefined boundary geometry is a frequently encountered task in shape optimization and analysis of fluid-structure interaction. We propose a simple and concise method for deforming meshes defined with three-node triangular or four-node tetrahedral elements. The mesh deformation method is suitable for large boundary movement. The approach requires two consecutive linear elastic finite-element analyses of an isotropic continuum using a prescribed displacement at the mesh boundaries. The first analysis is performed with homogeneous elastic property and the second with inhomogeneous elastic property. The fully stressed design is employed with a vanishing Poisson's ratio and a proposed form of equivalent strain (modified Tresca equivalent strain) to calculate, from the strain result of the first analysis, the element-specific Young's modulus for the second analysis. The theoretical aspect of the proposed method, its convenient numerical implementation using a typical linear elastic finite-element code in conjunction with very minor extra coding for data processing, and results for examples of large deformation of two-dimensional meshes are presented in this paper. Keywords: mesh deformation, shape optimization, fluid-structure interaction, fully stressed design, finite-element analysis, linear elasticity, strain failure, equivalent strain, Tresca failure criterion
Estimation of hysteretic damping of structures by stochastic subspace identification
NASA Astrophysics Data System (ADS)
Bajrić, Anela; Høgsberg, Jan
2018-05-01
Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems. However, the extension of the techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices, such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, validate the estimated system parameters by the presented identification method at low and high levels of excitation amplitude.
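The synthetic hysteretic response data mentioned above can be generated from the Bouc-Wen model directly. A minimal sketch with assumed parameters and a plain explicit-Euler integration (the identification step itself is not reproduced here):

```python
import numpy as np

# Synthetic random response of an SDOF system with Bouc-Wen hysteresis:
#   m*x'' + c*x' + alpha*k*x + (1-alpha)*k*z = f(t)
#   z'   = A*x' - beta*|x'|*|z|^(n-1)*z - gamma*x'*|z|^n
m, c, k, alpha = 1.0, 0.05, 1.0, 0.5        # alpha: post-to-pre yield stiffness ratio
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0      # Bouc-Wen shape parameters (assumed)
dt, steps = 1e-3, 100_000
rng = np.random.default_rng(3)

x = v = z = 0.0
record = np.empty(steps)
for i in range(steps):
    f = rng.normal() / np.sqrt(dt)           # discretized white-noise force
    acc = (f - c * v - alpha * k * x - (1.0 - alpha) * k * z) / m
    zdot = A * v - beta * abs(v) * abs(z) ** (n - 1) * z - gamma * v * abs(z) ** n
    x, v, z = x + v * dt, v + acc * dt, z + zdot * dt   # explicit Euler step
    record[i] = x

print("RMS displacement of hysteretic system:", record.std())
```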
Andreasen, Nancy C; Pressler, Marcus; Nopoulos, Peg; Miller, Del; Ho, Beng-Choon
2010-02-01
A standardized quantitative method for comparing dosages of different drugs is a useful tool for designing clinical trials and for examining the effects of long-term medication side effects such as tardive dyskinesia. Such a method requires establishing dose equivalents. An expert consensus group has published charts of equivalent doses for various antipsychotic medications for first- and second-generation medications. These charts were used in this study. Regression was used to compare each drug in the experts' charts to chlorpromazine and haloperidol and to create formulas for each relationship. The formulas were solved for chlorpromazine 100 mg and haloperidol 2 mg to derive new chlorpromazine and haloperidol equivalents. The formulas were incorporated into our definition of dose-years such that 100 mg/day of chlorpromazine equivalent or 2 mg/day of haloperidol equivalent taken for 1 year is equal to one dose-year. All comparisons to chlorpromazine and haloperidol were highly linear, with R² values greater than 0.9. A power transformation further improved linearity. By deriving a unique formula that converts doses to chlorpromazine or haloperidol equivalents, we can compare otherwise dissimilar drugs. These equivalents can be multiplied by the time an individual has been on a given dose to derive a cumulative value measured in dose-years in the form of (chlorpromazine equivalent in mg) × (time on dose measured in years). After each dose has been converted to dose-years, the results can be summed to provide a cumulative quantitative measure of lifetime exposure.
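A minimal sketch of the dose-year bookkeeping defined above; the drug-to-chlorpromazine conversion slope is a hypothetical stand-in for the study's regression-derived formulas:

```python
# Dose-year accounting per the definition above:
# 1 dose-year = 100 mg/day chlorpromazine equivalent taken for 1 year.
def dose_years(cpz_equivalent_mg_per_day: float, years_on_dose: float) -> float:
    return cpz_equivalent_mg_per_day / 100.0 * years_on_dose

# Example with an assumed linear conversion cpz_equiv = slope * dose;
# in the study the slope would come from the regression on the consensus charts.
slope = 20.0                      # hypothetical mg CPZ-equivalent per mg of drug
exposure = dose_years(slope * 5.0, 2.0) + dose_years(slope * 10.0, 0.5)
print(f"cumulative exposure: {exposure:.1f} dose-years")
```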
Biological effects and equivalent doses in radiotherapy: A software solution
Voyant, Cyril; Julian, Daniel; Roustit, Rudy; Biffi, Katia; Lantieri, Céline
2013-01-01
Background: The limits of TDF (time, dose, and fractionation) and linear quadratic models have been known for a long time. Medical physicists and physicians are required to provide fast and reliable interpretations regarding delivered doses or any future prescriptions relating to treatment changes. Aim: We therefore propose a calculation interface under the GNU license to be used for equivalent doses, biological doses, and normal tissue complication probability (Lyman model). Materials and methods: The methodology used draws from several sources: the linear-quadratic-linear model of Astrahan, the repopulation effects of Dale, and the prediction of multi-fractionated treatments of Thames. Results and conclusions: The results are obtained from an algorithm that minimizes an ad hoc cost function, and then compared to equivalent doses computed using standard calculators in seven French radiotherapy centers.
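For orientation, a minimal sketch of the basic linear-quadratic equivalent-dose formulas that such calculators build on (the repopulation and multi-fractionation extensions cited above are omitted):

```python
# Standard linear-quadratic equivalent-dose relations.
def bed(n_fractions: float, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose: BED = n*d*(1 + d/(alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(n_fractions: float, dose_per_fraction: float, alpha_beta: float) -> float:
    """Equivalent total dose in 2-Gy fractions: EQD2 = BED / (1 + 2/(alpha/beta))."""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# Example: 20 fractions of 2.75 Gy for a tissue with alpha/beta = 10 Gy
print(f"BED = {bed(20, 2.75, 10):.1f} Gy, EQD2 = {eqd2(20, 2.75, 10):.1f} Gy")
```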
On the equivalence of case-crossover and time series methods in environmental epidemiology.
Lu, Yun; Zeger, Scott L
2007-04-01
The case-crossover design was introduced in epidemiology 15 years ago as a method for studying the effects of a risk factor on a health event using only cases. The idea is to compare a case's exposure immediately prior to or during the case-defining event with that same person's exposure at otherwise similar "reference" times. An alternative approach to the analysis of daily exposure and case-only data is time series analysis. Here, log-linear regression models express the expected total number of events on each day as a function of the exposure level and potential confounding variables. In time series analyses of air pollution, smooth functions of time and weather are the main confounders. Time series and case-crossover methods are often viewed as competing methods. In this paper, we show that case-crossover using conditional logistic regression is a special case of time series analysis when there is a common exposure such as in air pollution studies. This equivalence provides computational convenience for case-crossover analyses and a better understanding of time series models. Time series log-linear regression accounts for overdispersion of the Poisson variance, while case-crossover analyses typically do not. This equivalence also permits model checking for case-crossover data using standard log-linear model diagnostics.
Slope stability analysis using limit equilibrium method in nonlinear criterion.
Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu
2014-01-01
In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of slope based on Mohr-Coulomb criterion. However, Mohr-Coulomb criterion is restricted to the description of rock mass. To overcome its shortcomings, this paper combined Hoek-Brown criterion and limit equilibrium method and proposed an equation for calculating the safety factor of slope with limit equilibrium method in Hoek-Brown criterion through equivalent cohesive strength and the friction angle. Moreover, this paper investigates the impact of Hoek-Brown parameters on the safety factor of slope, which reveals that there is a linear relation between equivalent cohesive strength and weakening factor D. However, there are nonlinear relations between equivalent cohesive strength and Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σci, and the intact rock parameter mi. There is a nonlinear relation between the friction angle and all Hoek-Brown parameters. With the increase of D, the safety factor of slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σci is relatively small, the relation between F and σci is nonlinear, but when σci is relatively large, the relation is linear; with the increase of mi, F decreases first and then increases.
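A minimal sketch of the equivalent-parameter step: generate the Hoek-Brown envelope from the standard 2002 parameter relations, then fit a linear Mohr-Coulomb envelope to obtain equivalent cohesion and friction angle. The input values and the fitted stress range are assumed for illustration:

```python
import numpy as np

# Hoek-Brown to equivalent Mohr-Coulomb parameters by linear fitting.
GSI, sigma_ci, m_i, D = 50.0, 30.0, 10.0, 0.0   # sigma_ci in MPa (assumed)

# Hoek-Brown (2002) parameter relations
m_b = m_i * np.exp((GSI - 100.0) / (28.0 - 14.0 * D))
s = np.exp((GSI - 100.0) / (9.0 - 3.0 * D))
a = 0.5 + (np.exp(-GSI / 15.0) - np.exp(-20.0 / 3.0)) / 6.0

# Hoek-Brown envelope sigma_1(sigma_3), sampled over an assumed stress range
sigma_3 = np.linspace(0.0, 0.25 * sigma_ci, 100)
sigma_1 = sigma_3 + sigma_ci * (m_b * sigma_3 / sigma_ci + s) ** a

k_fit, intercept = np.polyfit(sigma_3, sigma_1, 1)   # sigma_1 ~ k*sigma_3 + b
sin_phi = (k_fit - 1.0) / (k_fit + 1.0)              # Mohr-Coulomb identities
phi = np.degrees(np.arcsin(sin_phi))
c_eq = intercept * (1.0 - sin_phi) / (2.0 * np.sqrt(1.0 - sin_phi**2))
print(f"equivalent friction angle = {phi:.1f} deg, cohesion = {c_eq:.2f} MPa")
```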
Method for extracting long-equivalent wavelength interferometric information
NASA Technical Reports Server (NTRS)
Hochberg, Eric B. (Inventor)
1991-01-01
A process for extracting long-equivalent wavelength interferometric information from a two-wavelength polychromatic or achromatic interferometer. The process comprises the steps of simultaneously recording a non-linear sum of two different frequency visible light interferograms on a high resolution film and then placing the developed film in an optical train for Fourier transformation, low pass spatial filtering and inverse transformation of the film image to produce low spatial frequency fringes corresponding to a long-equivalent wavelength interferogram. The recorded non-linear sum irradiance derived from the two-wavelength interferometer is obtained by controlling the exposure so that the average interferogram irradiance is set at either the noise level threshold or the saturation level threshold of the film.
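The "long-equivalent wavelength" extracted by this process is the synthetic wavelength of the two visible sources. A one-line computation, with wavelengths chosen for illustration:

```python
# Equivalent (synthetic) wavelength of a two-wavelength interferogram.
# Example wavelengths: He-Ne red and green lines (illustrative choice).
lam1, lam2 = 632.8e-9, 543.5e-9                 # meters
lam_eq = lam1 * lam2 / abs(lam1 - lam2)         # equivalent wavelength
print(f"equivalent wavelength = {lam_eq * 1e6:.2f} micrometers")
```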
Section Preequating under the Equivalent Groups Design without IRT
ERIC Educational Resources Information Center
Guo, Hongwen; Puhan, Gautam
2014-01-01
In this article, we introduce a section preequating (SPE) method (linear and nonlinear) under the randomly equivalent groups design. In this equating design, sections of Test X (a future new form) and another existing Test Y (an old form already on scale) are administered. The sections of Test X are equated to Test Y, after adjusting for the…
NASA Technical Reports Server (NTRS)
Tyson, R. W.; Muraca, R. J.
1975-01-01
The local linearization method for axisymmetric flow is combined with the transonic equivalence rule to calculate pressure distribution on slender bodies at free-stream Mach numbers from .8 to 1.2. This is an approximate solution to the transonic flow problem which yields results applicable during the preliminary design stages of a configuration development. The method can be used to determine the aerodynamic loads on parabolic arc bodies having either circular or elliptical cross sections. It is particularly useful in predicting pressure distributions and normal force distributions along the body at small angles of attack. The equations discussed may be extended to include wing-body combinations.
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.
2003-01-01
The use of stress predictions from equivalent linearization analyses in the computation of high-cycle fatigue life is examined. Stresses so obtained differ in behavior from the fully nonlinear analysis in both spectral shape and amplitude. Consequently, fatigue life predictions made using this data will be affected. Comparisons of fatigue life predictions based upon the stress response obtained from equivalent linear and numerical simulation analyses are made to determine the range over which the equivalent linear analysis is applicable.
Zhou, Gaochao; Tao, Xudong; Shen, Ze; Zhu, Guanghao; Jin, Biaobing; Kang, Lin; Xu, Weiwei; Chen, Jian; Wu, Peiheng
2016-01-01
We propose a general framework for the design of a perfect linear polarization converter that works in the transmission mode. Using an intuitive picture that is based on the method of bi-directional polarization mode decomposition, it is shown that when the device under consideration simultaneously possesses two complementary symmetry planes, with one being equivalent to a perfect electric conducting surface and the other being equivalent to a perfect magnetic conducting surface, linear polarization conversion can occur with an efficiency of 100% in the absence of absorptive losses. The proposed framework is validated by two design examples that operate near 10 GHz, where the numerical, experimental and analytic results are in good agreement.
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
Bernard, A M; Burgot, J L
1981-12-01
The reversibility of the determination reaction is the most frequent cause of deviations from linearity of thermometric titration curves. Because of this, determination of the equivalence point by the tangent method is associated with a systematic error. The authors propose a relationship which connects this error quantitatively with the equilibrium constant. The relation, verified experimentally, is deduced from a mathematical study of the thermograms and could probably be generalized to apply to other linear methods of determination.
Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima
2014-01-01
Equivalent field is frequently used for central axis depth-dose calculations of rectangular- and irregular-shaped photon beams. As most of the proposed models to calculate the equivalent square field are dosimetry based, a simple physical-based method to calculate the equivalent square field size was used as the basis of this study. The table of the sides of the equivalent square or rectangular fields was constructed and then compared with the well-known tables of BJR and Venselaar et al., with average relative error percentages of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, the percentage depth doses (PDDs) were measured for some special irregular symmetric and asymmetric treatment fields and their equivalent squares on a Siemens Primus Plus linear accelerator at both energies, 6 and 18 MV. The mean relative difference of the PDD measurements for these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field.
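For comparison with such tables, the classic area-to-perimeter rule (the basis of BJR-style tables, not the physical method of this study) is a one-line computation:

```python
# Area-to-perimeter ("4A/P") equivalent square of a rectangular photon field.
def equivalent_square(a_cm: float, b_cm: float) -> float:
    """Side of the equivalent square of an a x b field: 4A/P = 2ab/(a+b)."""
    return 2.0 * a_cm * b_cm / (a_cm + b_cm)

print(f"10 x 20 cm field ~ {equivalent_square(10, 20):.1f} cm equivalent square")
```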
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
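The "obvious approach" mentioned above, solving an equivalent real system for the real and imaginary parts of x, can be sketched directly. The test matrix below is an assumed complex symmetric tridiagonal one, loosely Helmholtz-like:

```python
import numpy as np

# Complex symmetric test system with an absorbing (complex) diagonal shift.
n = 200
A = (np.diag(np.full(n, 1.5 + 0.1j))
     + np.diag(np.full(n - 1, -1.0 + 0.0j), 1)
     + np.diag(np.full(n - 1, -1.0 + 0.0j), -1))
b = np.ones(n, dtype=complex)

# Equivalent real system: [[Re A, -Im A], [Im A, Re A]] [Re x; Im x] = [Re b; Im b]
A_real = np.block([[A.real, -A.imag], [A.imag, A.real]])
b_real = np.concatenate([b.real, b.imag])
x_real = np.linalg.solve(A_real, b_real)
x = x_real[:n] + 1j * x_real[n:]

print("residual of recovered complex solution:", np.linalg.norm(A @ x - b))
```

The real formulation doubles the dimension, and, as the thesis notes, Krylov methods that exploit the complex symmetric or shifted Hermitian structure directly are generally preferable.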
Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
Advanced analysis technique for the evaluation of linear alternators and linear motors
NASA Technical Reports Server (NTRS)
Holliday, Jeffrey C.
1995-01-01
A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.
Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Boskovic, Jovan D.
2008-01-01
This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.
Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.
1981-01-01
To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1-deg-averaged surface free-air gravity anomalies and POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
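A minimal flat-earth sketch of the equivalent point source idea: fit source strengths to observed anomalies by least squares, then evaluate the fitted field anywhere. The 1/r² kernel and geometry are schematic assumptions, not the paper's spherical formulation:

```python
import numpy as np

# Equivalent point source inversion (schematic, flat-earth, synthetic data).
rng = np.random.default_rng(4)
obs_xy = rng.uniform(0, 100, size=(60, 2))              # observation points [km]
src_xy = np.array([[x, y] for x in range(10, 100, 20)
                           for y in range(10, 100, 20)], dtype=float)
obs_h, src_d = 5.0, 10.0                                # obs. height, source depth [km]

def greens(points, sources):
    """Anomaly of a unit point source ~ 1/r^2 (schematic kernel)."""
    dx = points[:, None, :] - sources[None, :, :]
    r2 = (dx ** 2).sum(-1) + (obs_h + src_d) ** 2
    return 1.0 / r2

true_m = rng.normal(size=len(src_xy))
data = greens(obs_xy, src_xy) @ true_m + 0.01 * rng.normal(size=len(obs_xy))

m, *_ = np.linalg.lstsq(greens(obs_xy, src_xy), data, rcond=None)  # inversion
grid = np.array([[50.0, 50.0]])
print("predicted anomaly at grid point:", greens(grid, src_xy) @ m)
```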
Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2001-01-01
This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.
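The nonlinear stiffness evaluation idea can be shown on a single mode: prescribe displacements, read the restoring forces (here from an analytically known model standing in for the finite element code), and solve for the polynomial stiffness coefficients:

```python
import numpy as np

# Toy stiffness evaluation: recover polynomial stiffness coefficients from
# restoring forces at prescribed displacements. In ELSTEP the forces would
# come from nonlinear static solutions of a finite element program.
k1_true, k2_true, k3_true = 1.0, 0.3, 2.0

def restoring(q):
    return k1_true * q + k2_true * q**2 + k3_true * q**3

qs = np.array([0.1, -0.1, 0.2])                  # prescribed modal displacements
V = np.column_stack([qs, qs**2, qs**3])          # polynomial basis
coeffs = np.linalg.solve(V, restoring(qs))       # recover k1, k2, k3
print("identified stiffness coefficients:", coeffs)
```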
Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads
NASA Astrophysics Data System (ADS)
Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank
2017-09-01
In this paper the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling, the general strategy of characterizing the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads, which are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. The main result is that full-wave simulation results can be reproduced in this way. Furthermore, it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insight regarding the coupling between the HPEM field excitation and the nonlinearly loaded loop antenna.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results for several sample problems and a small randomized experiment show that the proposed algorithm is feasible and efficient.
NASA Astrophysics Data System (ADS)
Schwegler, Eric; Challacombe, Matt; Head-Gordon, Martin
1997-06-01
A new linear scaling method for computation of the Cartesian Gaussian-based Hartree-Fock exchange matrix is described, which employs a method numerically equivalent to standard direct SCF, and which does not enforce locality of the density matrix. With a previously described method for computing the Coulomb matrix [J. Chem. Phys. 106, 5526 (1997)], linear scaling incremental Fock builds are demonstrated for the first time. Microhartree accuracy and linear scaling are achieved for restricted Hartree-Fock calculations on sequences of water clusters and polyglycine α-helices with the 3-21G and 6-31G basis sets. Eightfold speedups are found relative to our previous method. For systems with a small ionization potential, such as graphitic sheets, the method naturally reverts to the expected quadratic behavior. Also, benchmark 3-21G calculations attaining microhartree accuracy are reported for the P53 tetramerization monomer involving 698 atoms and 3836 basis functions.
Digital photography and transparency-based methods for measuring wound surface area.
Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh
2013-04-01
To compare and determine a credible method of measurement of wound surface area by linear, transparency, and photographic methods for monitoring progress of wound healing accurately, and to ascertain whether these methods are significantly different. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sores, venous ulcers, and incision and drainage. Wound surface areas were measured by these three methods (linear, transparency, and photographic) simultaneously on alternate days. The linear method is statistically and significantly different from the transparency and photographic methods (P value <0.05), but there is no significant difference between the transparency and photographic methods (P value >0.05). The photographic and transparency methods provided measurements of wound surface area with equivalent results, and there was no statistically significant difference between these two methods.
NASA Technical Reports Server (NTRS)
Haynes, Davy A.; Miller, David S.; Klein, John R.; Louie, Check M.
1988-01-01
A method by which a simple equivalent faired body can be designed to replace a more complex body with flowing inlets has been demonstrated for supersonic flow. An analytically defined, geometrically simple faired inlet forebody has been designed using a linear potential code to generate flow perturbations equivalent to those produced by a much more complex forebody with inlets. An equivalent forebody wind-tunnel model was fabricated and a test was conducted in NASA Langley Research Center's Unitary Plan Wind Tunnel. The test Mach number range was 1.60 to 2.16 for angles of attack of -4 to 16 deg. Test results indicate that, for the purposes considered here, the equivalent forebody simulates the original flowfield disturbances to an acceptable degree of accuracy.
Equivalence of quantum Boltzmann equation and Kubo formula for dc conductivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Z.B.; Chen, L.Y.
1990-02-01
This paper presents a derivation of the quantum Boltzmann equation for linear dc transport with a correction term to Mahan-Hansch's equations and derives a formal solution to it. Based on this formal solution, the authors find that the electric conductivity can be expressed as the retarded current-current correlation. Therefore, the authors explicitly demonstrate the equivalence of the two most important theoretical methods: the quantum Boltzmann equation and the Kubo formula.
NASA Technical Reports Server (NTRS)
Ottander, John A.; Hall, Robert A.; Powers, J. F.
2018-01-01
A method is presented that allows for the prediction of the magnitude of limit cycles due to adverse control-slosh interaction in liquid propelled space vehicles using non-linear slosh damping. Such a method is an alternative to the industry practice of assuming linear damping and relying on: mechanical slosh baffles to achieve desired stability margins; accepting minimal slosh stability margins; or time domain non-linear analysis to accept time periods of poor stability. Sinusoidal-input describing function analysis is used to develop a relationship between the non-linear slosh damping and an equivalent linear damping at a given slosh amplitude. In addition, a more accurate analytical prediction of the danger zone for slosh mass locations in a vehicle under proportional and derivative attitude control is presented. This method is used in the control-slosh stability analysis of the NASA Space Launch System.
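For quadratic slosh damping F = c2·|v|·v, the describing-function step yields an amplitude-dependent equivalent linear damping c_eq = (8/(3π))·c2·ω·A. A minimal numerical check via energy balance over one cycle, with assumed values:

```python
import numpy as np

# Describing-function equivalent damping for F = c2*|v|*v under sinusoidal
# velocity v = w*A*cos(w*t): equal dissipation per cycle gives
#   c_eq = (8 / (3*pi)) * c2 * w * A
c2, w, A = 0.4, 2.0, 0.05                      # assumed values

t = np.linspace(0.0, 2.0 * np.pi / w, 20001)
dt = t[1] - t[0]
v = w * A * np.cos(w * t)
E_nl = np.sum(c2 * np.abs(v) * v * v) * dt     # nonlinear dissipation per cycle
c_eq_num = E_nl / (np.sum(v * v) * dt)         # linear damper with equal dissipation

print(c_eq_num, 8.0 / (3.0 * np.pi) * c2 * w * A)   # the two should agree
```

Because c_eq grows with amplitude A, the limit-cycle amplitude follows from the point where this equivalent damping balances the destabilizing control interaction.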
Mengerink, Y; Peters, R; Kerkhoff, M; Hellenbrand, J; Omloo, H; Andrien, J; Vestjens, M; van der Wal, S
2000-05-05
By separating the first six linear and cyclic oligomers of polyamide-6 on a reversed-phase high-performance liquid chromatographic system after sandwich injection, quantitative determination of these oligomers becomes feasible. Low-wavelength UV detection of the different oligomers and selective post-column reaction detection of the linear oligomers with o-phthalic dicarboxaldehyde (OPA) and 3-mercaptopropionic acid (3-MPA) are discussed. A general methodology for quantification of oligomers in polymers was developed. It is demonstrated that the empirically determined group-equivalent absorption coefficients and quench factors are a convenient way of quantifying linear and cyclic oligomers of nylon-6. The overall long-term performance of the method was studied by monitoring a reference sample and the calibration factors of the linear and cyclic oligomers.
Flatness-based control and Kalman filtering for a continuous-time macroeconomic model
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Ghosh, T.; Busawon, K.; Binns, R.
2017-11-01
The article proposes flatness-based control for a nonlinear macro-economic model of the UK economy. The differential flatness properties of the model are proven. This makes it possible to introduce a transformation (diffeomorphism) of the system's state variables and to express the state-space description of the model in the linear canonical (Brunovsky) form, in which both the feedback control and the state estimation problem can be solved. For the linearized equivalent model of the macroeconomic system, stabilizing feedback control can be achieved using pole placement methods. Moreover, to implement stabilizing feedback control of the system by measuring only a subset of its state vector elements, the Derivative-free nonlinear Kalman Filter is used. This consists of the Kalman Filter recursion applied to the linearized equivalent model of the financial system and of an inverse transformation that is based again on differential flatness theory. The asymptotic stability properties of the control scheme are confirmed.
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
Ellingson, Laura D; Hibbing, Paul R; Kim, Youngwon; Frey-Law, Laura A; Saint-Maurice, Pedro F; Welk, Gregory J
2017-06-01
The wrist is increasingly being used as the preferred site for objectively assessing physical activity but the relative accuracy of processing methods for wrist data has not been determined. This study evaluates the validity of four processing methods for wrist-worn ActiGraph (AG) data against energy expenditure (EE) measured using a portable metabolic analyzer (OM; Oxycon mobile) and the Compendium of physical activity. Fifty-one adults (ages 18-40) completed 15 activities ranging from sedentary to vigorous in a laboratory setting while wearing an AG and the OM. Estimates of EE and categorization of activity intensity were obtained from the AG using a linear method based on Hildebrand cutpoints (HLM), a non-linear modification of this method (HNLM), and two methods developed by Staudenmayer based on a Linear Model (SLM) and using random forest (SRF). Estimated EE and classification accuracy were compared to the OM and Compendium using Bland-Altman plots, equivalence testing, mean absolute percent error (MAPE), and Kappa statistics. Overall, classification agreement with the Compendium was similar across methods ranging from a Kappa of 0.46 (HLM) to 0.54 (HNLM). However, specificity and sensitivity varied by method and intensity, ranging from a sensitivity of 0% (HLM for sedentary) to a specificity of ~99% for all methods for vigorous. None of the methods was significantly equivalent to the OM (p > 0.05). Across activities, none of the methods evaluated had a high level of agreement with criterion measures. Additional research is needed to further refine the accuracy of processing wrist-worn accelerometer data.
Zhan, Yu; Liu, Changsheng; Zhang, Fengpeng; Qiu, Zhaoguo
2016-07-01
The laser ultrasonic generation of Rayleigh surface waves and longitudinal waves in an elastic plate is studied by experiment and by the finite element method. In order to eliminate the measurement error and the time delay of the experimental system, a linear fitting method is applied to the experimental data. The finite element analysis software ABAQUS is used to simulate the propagation of Rayleigh surface waves and longitudinal waves caused by laser excitation on a sheet metal sample surface. The equivalent load method is proposed and applied: the pulsed laser is represented by an equivalent surface load with a Gaussian profile in the time and space domains. The relationship between the physical parameters of the laser and the load is established by a correction factor. The numerical solution is in good agreement with the experimental result. Simple and effective numerical and experimental methods for laser ultrasonic measurement of the elastic constants are demonstrated.
Effects of Optical Blur Reduction on Equivalent Intrinsic Blur
Valeshabad, Ali Kord; Wanek, Justin; McAnany, J. Jason; Shahidi, Mahnaz
2015-01-01
Purpose: To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Methods: Twelve visually normal individuals (age: 31 ± 12 years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) due to high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. Results: σopt and σint were significantly reduced and visual acuity (VA) was significantly improved after AO correction (P ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, P ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although it was marginally significant (P = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, P < 0.001) and the two parameters were related linearly with a slope of 0.46. Conclusions: Reduction in equivalent intrinsic blur was greater than the reduction in optical blur due to AO correction of wavefront error. This finding implies that VA in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone.
Data-Driven Method to Estimate Nonlinear Chemical Equivalence.
Mayo, Michael; Collier, Zachary A; Winton, Corey; Chappell, Mark A
2015-01-01
There is great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of "equivalency factors," which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, therefore introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or "biphasic," responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are "parallel," which has been noted before, but we make formal here by providing mathematical conditions on the validity of this approach.
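A minimal sketch of such a nonlinear equivalence for two chemicals with Hill-type sigmoid concentration-response curves (parameters assumed): the equivalent concentration is obtained by passing through one response curve and back through the inverse of the other.

```python
import numpy as np

# Nonlinear equivalence between chemicals A and B with Hill response curves:
#   c_B = f_B^{-1}(f_A(c_A)),  where  f(c) = c^n / (EC50^n + c^n).
def hill(c, ec50, n):
    return c ** n / (ec50 ** n + c ** n)

def hill_inv(y, ec50, n):
    return ec50 * (y / (1.0 - y)) ** (1.0 / n)

ec50_a, n_a = 2.0, 1.0          # chemical A curve (assumed)
ec50_b, n_b = 5.0, 2.5          # chemical B curve (assumed)

c_a = np.array([0.1, 0.5, 2.0, 8.0])
c_b_equiv = hill_inv(hill(c_a, ec50_a, n_a), ec50_b, n_b)
print(np.round(c_b_equiv, 3))   # manifestly nonlinear in c_a
```

A single linear equivalency factor would predict c_B proportional to c_A, which holds only when the two curves are effectively parallel, matching the condition stated above.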
Comparison of Nonlinear Random Response Using Equivalent Linearization and Numerical Simulation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2000-01-01
A recently developed finite-element-based equivalent linearization approach for the analysis of random vibrations of geometrically nonlinear multiple degree-of-freedom structures is validated. The validation is based on comparisons with results from a finite element based numerical simulation analysis using a numerical integration technique in physical coordinates. In particular, results for the case of a clamped-clamped beam are considered for an extensive load range to establish the limits of validity of the equivalent linearization approach.
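For orientation, the sketch below applies Gaussian equivalent linearization to a single-degree-of-freedom Duffing oscillator under white-noise excitation; the abstract's finite-element formulation generalizes this fixed-point idea to multiple degrees of freedom, and all parameter values here are illustrative.

import numpy as np

# Duffing oscillator under Gaussian white noise (two-sided PSD S0):
#   x'' + 2*zeta*w0*x' + w0**2 * (x + eps*x**3) = w(t)
# Gaussian equivalent linearization replaces the nonlinear stiffness with
# weq^2 * x, where weq^2 = w0^2 * (1 + 3*eps*E[x^2]), and E[x^2] for the
# linear system with damping c = 2*zeta*w0 and stiffness k = weq^2 is
# sigma2 = pi*S0 / (c*k). Iterate to a fixed point.
zeta, w0, eps, S0 = 0.02, 1.0, 0.5, 1e-3

weq2 = w0**2
for _ in range(200):
    sigma2 = np.pi * S0 / (2.0 * zeta * w0 * weq2)
    weq2_next = w0**2 * (1.0 + 3.0 * eps * sigma2)
    if abs(weq2_next - weq2) < 1e-14:
        break
    weq2 = weq2_next

print(np.sqrt(weq2), sigma2)  # equivalent natural frequency and response variance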
NASA Astrophysics Data System (ADS)
Xu, Wenfu; Hu, Zhonghua; Zhang, Yu; Liang, Bin
2017-03-01
After being launched into space to perform some tasks, the inertia parameters of a space robotic system may change due to fuel consumption, hardware reconfiguration, target capturing, and so on. For precise control and simulation, these parameters must be identified on orbit. This paper proposes an effective method for identifying the complete inertia parameters (including the mass, inertia tensor, and center of mass position) of a space robotic system. The key to the method is to identify two types of simple dynamics systems: equivalent single-body and two-body systems. For the former, all of the joints are locked into a designed configuration and the thrusters are used for orbital maneuvering. The objective function for optimization is defined in terms of the acceleration and velocity of the equivalent single body. For the latter, only one joint is unlocked and driven to move along a planned (exciting) trajectory in free-floating mode. The objective function is defined based on the linear and angular momentum equations. The parameter identification problems are then transformed into non-linear optimization problems, and the Particle Swarm Optimization (PSO) algorithm is applied to determine the optimal parameters, i.e. the complete dynamic parameters of the two equivalent systems. By sequentially unlocking the 1st to nth joints (or the nth to 1st joints), the mass properties of bodies 0 to n (or n to 0) are completely identified. The proposed method requires only simple dynamics equations for identification, and the excitation motion (orbital maneuvering and joint motion) is easily realized. Moreover, the method does not require prior knowledge of the mass properties of any body. It is general and practical for identifying a space robotic system on-orbit.
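A minimal particle swarm optimizer of the kind the method relies on can be sketched as follows; the quadratic objective in the usage line is a hypothetical stand-in for the paper's momentum-based residuals, which are not given in the abstract.

import numpy as np

def pso(objective, bounds, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    # Minimal particle swarm optimizer over per-dimension box bounds.
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pval
        pbest[improved], pval[improved] = x[improved], f[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

# Hypothetical use: theta would collect the unknown mass properties of one
# body, and the objective would compare linear/angular momentum predicted
# from theta with values measured along the planned joint motion.
theta, err = pso(lambda t: np.sum((t - 1.0)**2), bounds=[(0, 5)]*3)
print(theta, err)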
van der Vorm, Lisa N; Hendriks, Jan C M; Laarakkers, Coby M; Klaver, Siem; Armitage, Andrew E; Bamberg, Alison; Geurts-Moespot, Anneke J; Girelli, Domenico; Herkert, Matthias; Itkonen, Outi; Konrad, Robert J; Tomosugi, Naohisa; Westerman, Mark; Bansal, Sukhvinder S; Campostrini, Natascia; Drakesmith, Hal; Fillet, Marianne; Olbina, Gordana; Pasricha, Sant-Rayn; Pitts, Kelly R; Sloan, John H; Tagliaro, Franco; Weykamp, Cas W; Swinkels, Dorine W
2016-07-01
Absolute plasma hepcidin concentrations measured by various procedures differ substantially, complicating interpretation of results and rendering reference intervals method dependent. We investigated the degree of equivalence achievable by harmonization and the identification of a commutable secondary reference material to accomplish this goal. We applied technical procedures to achieve harmonization developed by the Consortium for Harmonization of Clinical Laboratory Results. Eleven plasma hepcidin measurement procedures (5 mass spectrometry based and 6 immunochemical based) quantified native individual plasma samples (n = 32) and native plasma pools (n = 8) to assess analytical performance and current and achievable equivalence. In addition, 8 types of candidate reference materials (3 concentrations each, n = 24) were assessed for their suitability, most notably in terms of commutability, to serve as secondary reference material. Absolute hepcidin values and reproducibility (intrameasurement procedure CVs 2.9%-8.7%) differed substantially between measurement procedures, but all were linear and correlated well. The limited current equivalence between the methods (intermeasurement procedure CV 28.6%) was mainly attributable to differences in calibration and could thus be improved by harmonization with a common calibrator. Linear regression analysis and standardized residuals showed that a candidate reference material consisting of native lyophilized plasma with cryolyoprotectant was commutable for all measurement procedures. Mathematically simulated harmonization with this calibrator resulted in a maximum achievable equivalence of 7.7%. The secondary reference material identified in this study has the potential to substantially improve equivalence between hepcidin measurement procedures and contributes to the establishment of a traceability chain that will ultimately allow standardization of hepcidin measurement results. © 2016 American Association for Clinical Chemistry.
Bobo-García, Gloria; Davidov-Pardo, Gabriel; Arroqui, Cristina; Vírseda, Paloma; Marín-Arroyo, María R; Navarro, Montserrat
2015-01-01
Total phenolic content (TPC) and antioxidant activity (AA) assays in microplates save resources and time; they can therefore help overcome the fact that the conventional methods are time-consuming, labour intensive, and use large amounts of reagents. An intra-laboratory validation of the Folin-Ciocalteu microplate method to measure TPC and of the 2,2-diphenyl-1-picrylhydrazyl (DPPH) microplate method to measure AA was performed, and both were compared with the conventional spectrophotometric methods. To compare the TPC methods, the confidence intervals of a linear regression were used; in the range of 10-70 mg L(-1) of gallic acid equivalents (GAE), both methods were equivalent. To compare the AA methodologies, the F-test and t-test were used in a range from 220 to 320 µmol L(-1) of Trolox equivalents. Both methods had homogeneous variances, and the means were not significantly different. The limits of detection and quantification were 0.74 and 2.24 mg L(-1) GAE for the TPC microplate method, and 12.07 and 36.58 µmol L(-1) of Trolox equivalents for the DPPH microplate method. The relative standard deviations of the repeatability and reproducibility for both microplate methods were ≤ 6.1%. The accuracy ranged from 88% to 100%. The microplate and the conventional methods are equivalent at the 95% confidence level. © 2014 Society of Chemical Industry.
Thermospheric dynamics - A system theory approach
NASA Technical Reports Server (NTRS)
Codrescu, M.; Forbes, J. M.; Roble, R. G.
1990-01-01
A system theory approach to thermospheric modeling is developed, based upon a linearization method which is capable of preserving nonlinear features of a dynamical system. The method is tested using a large, nonlinear, time-varying system, namely the thermospheric general circulation model (TGCM) of the National Center for Atmospheric Research. In the linearized version an equivalent system, defined for one of the desired TGCM output variables, is characterized by a set of response functions that is constructed from corresponding quasi-steady state and unit sample response functions. The linearized version of the system runs on a personal computer and produces an approximation of the desired TGCM output field height profile at a given geographic location.
Vitello, Dominic J; Ripper, Richard M; Fettiplace, Michael R; Weinberg, Guy L; Vitello, Joseph M
2015-01-01
Purpose. The gravimetric method of weighing surgical sponges is used to quantify intraoperative blood loss. The wet mass minus the dry mass of the gauze equals the volume of blood lost. This method assumes that the density of blood is equivalent to that of water (1 g/mL). This study's purpose was to validate the assumption that the density of blood is equivalent to water and to correlate density with hematocrit. Methods. 50 µL of whole blood was weighed from eighteen rats. A distilled water control was weighed for each blood sample. The averages of the blood and water masses were compared utilizing a Student's unpaired, one-tailed t-test. The masses of the blood samples and the hematocrits were compared using a linear regression. Results. The average mass of the eighteen blood samples was 0.0489 g and that of the distilled water controls was 0.0492 g. The t-test showed P = 0.2269 and R(2) = 0.03154. The hematocrit values ranged from 24% to 48%. The linear regression R(2) value was 0.1767. Conclusions. The t-test comparing the blood and distilled water masses showed no significant difference between the two populations. Linear regression showed the hematocrit was not proportional to the mass of the blood. The study confirmed that the measured density of blood is similar to that of water.
Recursion Removal as an Instructional Method to Enhance the Understanding of Recursion Tracing
ERIC Educational Resources Information Center
Velázquez-Iturbide, J. Ángel; Castellanos, M. Eugenia; Hijón-Neira, Raquel
2016-01-01
Recursion is one of the most difficult programming topics for students. In this paper, an instructional method is proposed to enhance students' understanding of recursion tracing. The proposal is based on the use of rules to translate linear recursion algorithms into equivalent, iterative ones. The paper has two main contributions: the…
ERIC Educational Resources Information Center
von Davier, Alina A.; Holland, Paul W.; Livingston, Samuel A.; Casabianca, Jodi; Grant, Mary C.; Martin, Kathleen
2006-01-01
This study examines how closely the kernel equating (KE) method (von Davier, Holland, & Thayer, 2004a) approximates the results of other observed-score equating methods--equipercentile and linear equatings. The study used pseudotests constructed of item responses from a real test to simulate three equating designs: an equivalent groups (EG)…
Stochastic stability properties of jump linear systems
NASA Technical Reports Server (NTRS)
Feng, Xiangbo; Loparo, Kenneth A.; Ji, Yuandong; Chizeck, Howard J.
1992-01-01
Jump linear systems are defined as a family of linear systems with randomly jumping parameters (usually governed by a Markov jump process) and are used to model systems subject to failures or changes in structure. The authors study stochastic stability properties in jump linear systems and the relationship among various moment and sample path stability properties. It is shown that all second moment stability properties are equivalent and are sufficient for almost sure sample path stability, and a testable necessary and sufficient condition for second moment stability is derived. The Lyapunov exponent method for the study of almost sure sample stability is discussed, and a theorem which characterizes the Lyapunov exponents of jump linear systems is presented.
Ramsahoi, L; Gao, A; Fabri, M; Odumeru, J A
2011-07-01
Automated electronic milk analyzers for rapid enumeration of total bacteria counts (TBC) are widely used for raw milk testing by many analytical laboratories worldwide. In Ontario, Canada, Bactoscan flow cytometry (BsnFC; Foss Electric, Hillerød, Denmark) is the official anchor method for TBC in raw cow milk. Penalties are levied at the BsnFC equivalent level of 50,000 cfu/mL, the standard plate count (SPC) regulatory limit. This study was conducted to assess the BsnFC for TBC in raw goat milk, to determine the mathematical relationship between the SPC and BsnFC methods, and to identify probable reasons for the difference in the SPC:BsnFC equivalents for goat and cow milks. Test procedures were conducted according to International Dairy Federation Bulletin guidelines. Approximately 115 farm bulk tank milk samples per month were tested for inhibitor residues, SPC, BsnFC, psychrotrophic bacteria count, composition (fat, protein, lactose, lactose and other solids, and freezing point), and somatic cell count from March 2009 to February 2010. Data analysis of the results for the samples tested indicated that the BsnFC method would be a good alternative to the SPC method, providing accurate and more precise results with a faster turnaround time. Although a linear regression model showed good correlation and prediction, tests for linearity indicated that the relationship was linear only beyond log 4.1 SPC. The logistic growth curve best modeled the relationship between the SPC and BsnFC for the entire sample population. The BsnFC equivalent to the SPC 50,000 cfu/mL regulatory limit was estimated to be 321,000 individual bacteria count (ibc)/mL. This estimate differs considerably from the BsnFC equivalent for cow milk (121,000 ibc/mL). Because of the low frequency of bulk tank milk pickups at goat farms, 78.5% of the samples had their oldest milking in the tank to be 6.5 to 9.0 d old when tested, compared with the cow milk samples, which had their oldest milking at 4 d old when tested. This may be one of the major factors contributing to the larger goat milk BsnFC equivalence. Correlations and interactions between various test results were also discussed to further understand differences between the 2 methods for goat and cow milks. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
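A sketch of the logistic-curve fit described above, using scipy and hypothetical paired observations (the study's actual farm bulk tank data are not reproduced here):

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, a, k, x0):
    # Logistic growth curve: predicted log10 BsnFC from log10 SPC.
    return a / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical paired data (log10 SPC in cfu/mL, log10 BsnFC in ibc/mL).
log_spc = np.array([3.2, 3.8, 4.1, 4.5, 5.0, 5.5, 6.0])
log_bsn = np.array([3.9, 4.6, 5.0, 5.5, 6.1, 6.5, 6.8])
popt, _ = curve_fit(logistic, log_spc, log_bsn, p0=[7.0, 1.0, 4.0])
print(10**logistic(np.log10(5e4), *popt))  # BsnFC equivalent of 50,000 cfu/mL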
Discriminative components of data.
Peltonen, Jaakko; Kaski, Samuel
2005-01-01
A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution, which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Rényi entropy-based alternative, while having essentially equivalent computational cost.
Panel Flutter Emulation Using a Few Concentrated Forces
NASA Astrophysics Data System (ADS)
Dhital, Kailash; Han, Jae-Hung
2018-04-01
The objective of this paper is to study the feasibility of panel flutter emulation using a few concentrated forces, which are taken to be equivalent to the aerodynamic forces. The equivalence is established using the surface spline method and the principle of virtual work. The structural modeling of the plate is based on classical plate theory, and the aerodynamic modeling is based on piston theory. The present approach differs from linear panel flutter analysis in how the modal aerodynamic forces are formed, while the structural properties remain unchanged. Solutions of the flutter problem are obtained numerically using the standard eigenvalue procedure. A few concentrated forces were considered, with an optimization procedure used to determine their locations; the optimization minimizes the error between the flutter bounds from the emulated and the linear flutter analysis methods. The emulated flutter results for a square plate with four different boundary conditions, using six concentrated forces, agree with the reference values with minimal error. The results demonstrate the workability and viability of using concentrated forces to emulate real panel flutter. In addition, the paper includes parametric studies of linear panel flutter for which adequate literature is not available.
Parallel But Not Equivalent: Challenges and Solutions for Repeated Assessment of Cognition over Time
Gross, Alden L.; Inouye, Sharon K.; Rebok, George W.; Brandt, Jason; Crane, Paul K.; Parisi, Jeanine M.; Tommet, Doug; Bandeen-Roche, Karen; Carlson, Michelle C.; Jones, Richard N.
2013-01-01
Objective Analyses of individual differences in change may be unintentionally biased when versions of a neuropsychological test used at different follow-ups are not of equivalent difficulty. This study’s objective was to compare mean, linear, and equipercentile equating methods and demonstrate their utility in longitudinal research. Study Design and Setting The Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE, N=1,401) study is a longitudinal randomized trial of cognitive training. The Alzheimer’s Disease Neuroimaging Initiative (ADNI, n=819) is an observational cohort study. Nonequivalent alternate versions of the Auditory Verbal Learning Test (AVLT) were administered in both studies. Results Using visual displays, raw and mean-equated AVLT scores in both studies showed obvious nonlinear trajectories in reference groups that should show minimal change, poor equivalence over time (ps≤0.001), and raw scores demonstrated poor fits in models of within-person change (RMSEAs>0.12). Linear and equipercentile equating produced more similar means in reference groups (ps≥0.09) and performed better in growth models (RMSEAs<0.05). Conclusion Equipercentile equating is the preferred equating method because it accommodates tests more difficult than a reference test at different percentiles of performance and performs well in models of within-person trajectory. The method has broad applications in both clinical and research settings to enhance the ability to use nonequivalent test forms. PMID:22540849
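A minimal sketch of equipercentile equating, the preferred method in this study: a score on one test form is mapped to the score with the same percentile rank on the other form. The score distributions below are hypothetical.

import numpy as np

def equipercentile_equate(form_x_scores, form_y_scores, x):
    # Map a form-X score to the form-Y score with the same percentile rank.
    px = np.searchsorted(np.sort(form_x_scores), x, side='right') / len(form_x_scores)
    return np.quantile(form_y_scores, px)

# Hypothetical AVLT-like raw scores from two nonequivalent versions.
rng = np.random.default_rng(0)
version_a = rng.normal(45, 10, 500).clip(0, 75)
version_b = rng.normal(50, 10, 500).clip(0, 75)  # the easier version
print(equipercentile_equate(version_b, version_a, 50.0))  # roughly 45 on version A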
Wang, Y; Lin, D; Fu, T
1997-03-01
The morphology of inorganic material powders before and after ultrafine crushing was observed by transmission electron microscopy, and the length and diameter of the granules were measured. Inorganic material powders, before and after ultrafine crushing, were combined with polymers to prepare radiologically equivalent materials. The blending compatibility of the inorganic materials with the polymer materials was observed by scanning electron microscopy. CT values of the tissue-equivalent materials were measured by X-ray CT, and the distribution of the inorganic materials was examined. The compactness of the materials was determined by the water absorption method, and the elastic modulus was measured by the laser speckle interferometry method. The results showed that the inorganic material powders treated by ultrafine crushing blended well with the polymer, and that their distribution in the polymer was homogeneous. The equivalence errors of the linear attenuation coefficients and CT values of the equivalent materials were small. Their elastic moduli increased by one order of magnitude, from 6.028 × 10^2 kg/cm² to 9.753 × 10^3 kg/cm². In addition, the inorganic material powders with rod-shaped granules blended easily with the polymer. The present study provides theoretical guidance and an experimental basis for the design and synthesis of radiologically equivalent materials.
The effect of a paraffin screen on the neutron dose at the maze door of a 15 MV linear accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krmar, M.; Kuzmanović, A.; Nikolić, D.
2013-08-15
Purpose: The purpose of this study was to explore the effects of a paraffin screen located at various positions in the maze on the neutron dose equivalent at the maze door. Methods: The neutron dose equivalent was measured at the maze door of a room containing a 15 MV linear accelerator for x-ray therapy. Measurements were performed for several positions of the paraffin screen covering only 27.5% of the cross-sectional area of the maze. The neutron dose equivalent was also measured at all screen positions. Two simple models of the neutron source were considered. The first assumed that the source was the cross-sectional area at the inner entrance of the maze, radiating neutrons in an isotropic manner. In the second model, the reduction in the neutron dose equivalent at the maze door due to the paraffin screen was considered to be a function of the mean values of the neutron fluence and energy at the screen. Results: The results of this study indicate that the equivalent dose at the maze door was reduced by a factor of 3 through the use of a paraffin screen placed inside the maze. It was also determined that the contributions to the dose from areas that were not covered by the paraffin screen, as viewed from the dosimeter, were 2.5 times higher than the contributions from the covered areas. This study also concluded that the contributions of the maze walls, ceiling, and floor to the total neutron dose equivalent were an order of magnitude lower than those from the surface at the far end of the maze. Conclusions: This study demonstrated that a paraffin screen could be used to reduce the neutron dose equivalent at the maze door by a factor of 3. It was also found that the reduction of the neutron dose equivalent was a linear function of the area covered by the maze screen and that the decrease in the dose at the maze door could be modeled as an exponential function of the product φ·E at the screen.
N-person differential games. Part 1: Duality-finite element methods
NASA Technical Reports Server (NTRS)
Chen, G.; Zheng, Q.
1983-01-01
The duality approach, which is motivated by computational needs, is developed by introducing N + 1 Lagrange multipliers. For N-person linear quadratic games, the primal min-max problem is shown to be equivalent to the dual min-max problem.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Anirban; Ganguly, Anindita; Chatterjee, Saumya Deep
2018-04-01
In this paper the authors deal with seven classes of non-linear Volterra and Fredholm equations. They formulate an algorithm for solving these equation types via a piecewise-linear orthogonal approach based on Hybrid Functions (HF) and Triangular Functions (TF). In this approach, the integral equation or integro-differential equation is reduced to an equivalent system of simultaneous non-linear equations, and either Newton's method or Broyden's method is employed to solve that system. The L2-norm error and the max-norm error are calculated for both the HF and TF methods for each class of equations. Through the illustrated examples, the authors show that the HF-based algorithm produces stable results, whereas the TF-based computational method yields stable, anomalous, or unstable results.
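As a pointer to one of the two root-finders mentioned, here is a minimal sketch of Broyden's method for a system of simultaneous non-linear equations; the two-equation system in the usage example is a hypothetical stand-in for the systems produced by the HF/TF discretization.

import numpy as np

def broyden(F, x0, iters=50, tol=1e-10, h=1e-6):
    # Broyden's "good" method: finite-difference initial Jacobian,
    # then rank-one secant updates instead of re-evaluating the Jacobian.
    x = np.asarray(x0, dtype=float)
    f = F(x)
    J = np.column_stack([(F(x + h*np.eye(x.size)[:, j]) - f) / h
                         for j in range(x.size)])
    for _ in range(iters):
        dx = np.linalg.solve(J, -f)
        x = x + dx
        f_new = F(x)
        if np.linalg.norm(f_new) < tol:
            break
        J += np.outer(f_new - f - J @ dx, dx) / (dx @ dx)  # secant update
        f = f_new
    return x

# Hypothetical small system; a solution is (1, 2).
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
print(broyden(F, [1.0, 1.0]))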
2011-01-01
Background Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment. Results Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain. Conclusions A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199
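A minimal sketch of the two-one-sided-tests (TOST) style of equivalence testing that underlies assessments of this kind; the specific equivalence tests and limits proposed in the paper differ in detail, and the data and limits below are hypothetical.

import numpy as np
from scipy import stats

def tost(gm, ref, low, high, alpha=0.05):
    # Two one-sided tests: is mean(gm) - mean(ref) inside (low, high)?
    diff = np.mean(gm) - np.mean(ref)
    se = np.sqrt(np.var(gm, ddof=1)/len(gm) + np.var(ref, ddof=1)/len(ref))
    df = len(gm) + len(ref) - 2          # simple pooled df; Welch df is more careful
    p_lower = 1.0 - stats.t.cdf((diff - low) / se, df)   # H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)        # H0: diff >= high
    p = max(p_lower, p_upper)
    return diff, p, p < alpha            # equivalence declared if both tests reject

# Hypothetical compositional endpoint: GM variety vs reference varieties,
# with equivalence limits derived from reference-variety variation.
rng = np.random.default_rng(1)
print(tost(rng.normal(10.1, 1, 40), rng.normal(10.0, 1, 40), -0.5, 0.5))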
NASA Astrophysics Data System (ADS)
German, Brian Joseph
This research develops a technique for the solution of incompressible equivalents to planar steady subsonic potential flows. Riemannian geometric formalism is used to develop a gauge transformation of the length measure followed by a curvilinear coordinate transformation to map the given subsonic flow into a canonical Laplacian flow with the same boundary conditions. The effect of the transformation is to distort both the immersed profile shape and the domain interior nonuniformly as a function of local flow properties. The method represents the full nonlinear generalization of the classical methods of Prandtl-Glauert and Karman-Tsien. Unlike the classical methods which are "corrections," this method gives exact results in the sense that the inverse mapping produces the subsonic full potential solution over the original airfoil, up to numerical accuracy. The motivation for this research was provided by an observed analogy between linear potential flow and the special theory of relativity that emerges from the invariance of the d'Alembert wave equation under Lorentz transformations. This analogy is well known in an operational sense, being leveraged widely in linear unsteady aerodynamics and acoustics, stemming largely from the work of Kussner. Whereas elements of the special theory can be invoked for compressibility effects that are linear and global in nature, the question posed in this work was whether other mathematical techniques from the realm of relativity theory could be used to similar advantage for effects that are nonlinear and local. This line of thought led to a transformation leveraging Riemannian geometric methods common to the general theory of relativity. A gauge transformation is used to geometrize compressibility through the metric tensor of the underlying space to produce an equivalent incompressible flow that lives not on a plane but on a curved surface. In this sense, forces owing to compressibility can be ascribed to the geometry of space in much the same way that general relativity ascribes gravitational forces to the curvature of space-time. Although the analogy with general relativity is fruitful, it is important not to overstate the similarities between compressibility and the physics of gravity, as the interest for this thesis is primarily in the mathematical framework and not physical phenomenology or epistemology. The thesis presents the philosophy and theory for the transformation method followed by a numerical method for practical solutions of equivalent incompressible flows over arbitrary closed profiles. The numerical method employs an iterative approach involving the solution of the equivalent incompressible flow with a panel method, the calculation of the metric tensor for the gauge transformation, and the solution of the curvilinear coordinate mapping to the canonical flow with a finite difference approach for the elliptic boundary value problem. This method is demonstrated for non-circulatory flow over a circular cylinder and both symmetric and lifting flows over a NACA 0012 profile. Results are validated with accepted subcritical full potential test cases available in the literature. For chord-preserving mapping boundary conditions, the results indicate that the equivalent incompressible profiles thicken with Mach number and develop a leading edge droop with increased angle of attack. Two promising areas of potential applicability of the method have been identified. 
The first is in airfoil inverse design methods leveraging incompressible flow knowledge including heuristics and empirical data for the potential field effects on viscous phenomena such as boundary layer transition and separation. The second is in aerodynamic testing using distorted similarity-scaled models.
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Goetz, Alexander F. H.
1992-01-01
Over the last decade, technological advances in airborne imaging spectrometers, having spectral resolution comparable with laboratory spectrometers, have made it possible to estimate biochemical constituents of vegetation canopies. Wessman estimated lignin concentration from data acquired with NASA's Airborne Imaging Spectrometer (AIS) over Blackhawk Island in Wisconsin. A stepwise linear regression technique was used to determine the single spectral channel or channels in the AIS data that best correlated with measured lignin contents using chemical methods. The regression technique does not take advantage of the spectral shape of the lignin reflectance feature as a diagnostic tool nor the increased discrimination among other leaf components with overlapping spectral features. A nonlinear least squares spectral matching technique was recently reported for deriving both the equivalent water thicknesses of surface vegetation and the amounts of water vapor in the atmosphere from contiguous spectra measured with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The same technique was applied to a laboratory reflectance spectrum of fresh, green leaves. The result demonstrates that the fresh leaf spectrum in the 1.0-2.5 microns region consists of spectral components of dry leaves and the spectral component of liquid water. A linear least squares spectral matching technique for retrieving equivalent water thickness and biochemical components of green vegetation is described.
Multigrid methods in structural mechanics
NASA Technical Reports Server (NTRS)
Raju, I. S.; Bigelow, C. A.; Taasan, S.; Hussaini, M. Y.
1986-01-01
Although the application of multigrid methods to the equations of elasticity has been suggested, few such applications have been reported in the literature. In the present work, multigrid techniques are applied to the finite element analysis of a simply supported Bernoulli-Euler beam, and various aspects of the multigrid algorithm are studied and explained in detail. In this study, six grid levels were used to model half the beam. With linear prolongation and sequential ordering, the multigrid algorithm yielded results which were of machine accuracy with work equivalent to 200 standard Gauss-Seidel iterations on the fine grid. Also with linear prolongation and sequential ordering, the V(1,n) cycle with n greater than 2 yielded better convergence rates than the V(n,1) cycle. The restriction and prolongation operators were derived based on energy principles. Conserving energy during the inter-grid transfers required that the prolongation operator be the transpose of the restriction operator, and led to improved convergence rates. With energy-conserving prolongation and sequential ordering, the multigrid algorithm yielded results of machine accuracy with a work equivalent to 45 Gauss-Seidel iterations on the fine grid. The red-black ordering of relaxations yielded solutions of machine accuracy in a single V(1,1) cycle, which required work equivalent to about 4 iterations on the finest grid level.
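A minimal sketch of the V-cycle structure the abstract describes, applied to a 1D Poisson model problem rather than the Bernoulli-Euler beam; the restriction used is the (scaled) transpose of linear prolongation, mirroring the energy-conserving pairing discussed above.

import numpy as np

def relax(u, f, sweeps):
    # Gauss-Seidel for -u'' = f on [0,1], Dirichlet ends, h = 1/(len(u)-1).
    h2 = (1.0 / (len(u) - 1))**2
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i-1] + u[i+1] + h2 * f[i])
    return u

def v_cycle(u, f, pre=1, post=2):
    # One V(pre, post) cycle with full-weighting restriction and
    # linear prolongation (a transpose pair up to scaling).
    u = relax(u, f, pre)
    if len(u) > 3:
        h2 = (1.0 / (len(u) - 1))**2
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] + (u[:-2] - 2.0*u[1:-1] + u[2:]) / h2   # residual
        rc = np.zeros((len(u) - 1)//2 + 1)
        rc[1:-1] = 0.25*r[1:-3:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]  # restrict
        ec = v_cycle(np.zeros_like(rc), rc, pre, post)            # coarse solve
        u[::2] += ec                                              # prolongate
        u[1::2] += 0.5*(ec[:-1] + ec[1:])
    return relax(u, f, post)

n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f)
print(np.abs(u - np.sin(np.pi * x)).max())  # down to discretization error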
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
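A sketch of the recipe for logistic regression under the stated assumptions: construct two equal groups whose log-odds differ by the slope times twice the standard deviation of the covariate, keep the overall event rate unchanged, then apply a classical two-proportion sample size formula. All parameter values are illustrative.

import numpy as np
from scipy import stats, optimize

def equivalent_groups_logistic(slope, sd_x, p_event):
    # Two equally sized groups whose log-odds differ by slope * 2 * sd_x,
    # chosen so the overall expected event probability is unchanged.
    delta = slope * 2.0 * sd_x
    expit = lambda L: 1.0 / (1.0 + np.exp(-L))
    L = optimize.brentq(
        lambda L: 0.5*(expit(L - delta/2) + expit(L + delta/2)) - p_event,
        -20.0, 20.0)
    return expit(L - delta/2), expit(L + delta/2)

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    # Classical sample size for comparing two proportions.
    za, zb = stats.norm.ppf(1 - alpha/2), stats.norm.ppf(power)
    pbar = 0.5 * (p1 + p2)
    num = (za*np.sqrt(2*pbar*(1 - pbar)) + zb*np.sqrt(p1*(1-p1) + p2*(1-p2)))**2
    return int(np.ceil(num / (p1 - p2)**2))

p1, p2 = equivalent_groups_logistic(slope=0.4, sd_x=1.2, p_event=0.3)
print(p1, p2, n_per_group(p1, p2))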
Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.
Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad
2016-02-01
In the current research, the muscle equivalent linear damping coefficient, which is introduced through the force-velocity relation in a muscle model, and the corresponding time constant are investigated. To reach this goal, a 1D skeletal muscle model was used, with two characterizations: a linear force-stiffness relationship (Hill-type model) and a nonlinear one. The OpenSim platform was used for verification of the model, and isometric activation was used for the simulation. The equivalent linear damping and the time constant of each model were extracted from the simulation results. The results provide better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality than the Hill-type models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, Robert Edward; Coleman, Justin Leigh
2015-08-01
Seismic analysis of nuclear structures is routinely performed using guidance provided in "Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998)." This document, which is currently under revision, provides detailed guidance on linear seismic soil-structure-interaction (SSI) analysis of nuclear structures. To accommodate the linear analysis, soil material properties are typically developed as shear modulus and damping ratio versus cyclic shear strain amplitude. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain SSI analysis. To accommodate the nonlinear analysis, a more appropriate form of the soil material properties includes shear stress and energy absorbed per cycle versus shear strain. Ideally, nonlinear soil model material properties would be established with soil testing appropriate for the nonlinear constitutive model being used. However, much of the soil testing done for SSI analysis is performed for use with linear analysis techniques. Consequently, a method is described in this paper that uses soil test data intended for linear analysis to develop nonlinear soil material properties. To produce nonlinear material properties that are equivalent to the linear material properties, the linear and nonlinear model hysteresis loops are considered. For equivalent material properties, the shear stress at peak shear strain and the energy absorbed per cycle should match when comparing the linear and nonlinear model hysteresis loops. Consequently, nonlinear material properties are selected based on these criteria.
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters, each having a series coupled inductance (L) and, to ground, a reverse biased, voltage dependent varactor diode which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
Method of Conjugate Radii for Solving Linear and Nonlinear Systems
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.
1999-01-01
This paper describes a method to solve a system of N linear equations in N steps. A quadratic form is developed involving the sum of the squares of the residuals of the equations. Equating the quadratic form to a constant yields a surface which is an ellipsoid. For different constants, a family of similar ellipsoids can be generated. Starting at an arbitrary point an orthogonal basis is constructed and the center of the family of similar ellipsoids is found in this basis by a sequence of projections. The coordinates of the center in this basis are the solution of linear system of equations. A quadratic form in N variables requires N projections. That is, the current method is an exact method. It is shown that the sequence of projections is equivalent to a special case of the Gram-Schmidt orthogonalization process. The current method enjoys an advantage not shared by the classic Method of Conjugate Gradients. The current method can be extended to nonlinear systems without modification. For nonlinear equations the Method of Conjugate Gradients has to be augmented with a line-search procedure. Results for linear and nonlinear problems are presented.
Nonlinear Principal Components Analysis: Introduction and Application
ERIC Educational Resources Information Center
Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.
2007-01-01
The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…
Analyses of Multishaft Rotor-Bearing Response
NASA Technical Reports Server (NTRS)
Nelson, H. D.; Meacham, W. L.
1985-01-01
Method works for linear and nonlinear systems. A finite-element-based computer program was developed to analyze the free and forced response of multishaft rotor-bearing systems. The acronym ARDS denotes Analysis of Rotor Dynamic Systems. Systems with nonlinear interconnection or support bearings, or both, are analyzed by numerically integrating a reduced set of coupled system equations. Linear systems are analyzed in closed form for steady excitations and treated as equivalent to nonlinear systems for transient excitation. ARDS is a FORTRAN program developed on an Amdahl 470 (similar to IBM 370).
NASA Astrophysics Data System (ADS)
Brambilla, A.; Gorecki, A.; Potop, A.; Paulus, C.; Verger, L.
2017-08-01
Energy sensitive photon counting X-ray detectors provide energy dependent information which can be exploited for material identification. The attenuation of an X-ray beam as a function of energy depends on the effective atomic number Zeff and the density. However, the measured attenuation is degraded by imperfections of the detector response such as charge sharing or pile-up. These imperfections lead to non-linearities that limit the benefits of energy resolved imaging. This work aims to implement a basis material decomposition method which overcomes these problems. Basis material decomposition is based on the fact that the attenuation of any material or complex object can be accurately reproduced by a combination of equivalent thicknesses of basis materials. Our method is based on a calibration phase to learn the response of the detector for different combinations of thicknesses of the basis materials. The decomposition algorithm finds the thicknesses of basis materials whose spectrum is closest to the measurement, using a maximum likelihood criterion and assuming a Poisson distribution of photon counts in each energy bin. The method was used with an ME100 linear array spectrometric X-ray imager to decompose different plastic materials on a polyethylene and polyvinyl chloride basis. The resulting equivalent thicknesses were used to estimate the effective atomic number Zeff. The results are in good agreement with the theoretical Zeff, regardless of the plastic sample thickness. The linear behaviour of the equivalent thicknesses makes it possible to process overlapped materials. Moreover, the method was tested with a three-material basis by adding gadolinium, whose K-edge is not taken into account by the other two materials. The proposed method has the advantage that it can be used with any number of energy channels, taking full advantage of the high energy resolution of the ME100 detector. Although in principle two channels are sufficient, experimental measurements show that the use of a larger number of channels significantly improves the accuracy of the decomposition by reducing noise and systematic bias.
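The decomposition step can be sketched as follows, assuming a simplified Beer-Lambert forward model in place of the calibrated detector response; the attenuation curves and open-beam spectrum below are made up for illustration.

import numpy as np
from scipy.optimize import minimize

E = np.linspace(20.0, 120.0, 64)                    # keV bin centers
mu_pe = 0.02 + 8.0e2 / E**3                         # made-up attenuation, 1/mm
mu_pvc = 0.05 + 4.0e3 / E**3
n0 = 1.0e4 * np.exp(-((E - 60.0) / 40.0)**2)        # open-beam counts per bin

def expected_counts(t):
    # Expected counts for basis thicknesses t = (t_PE, t_PVC) in mm.
    return n0 * np.exp(-mu_pe*t[0] - mu_pvc*t[1])

def neg_log_likelihood(t, counts):
    m = expected_counts(t)
    return np.sum(m - counts * np.log(m))           # Poisson NLL, constant dropped

rng = np.random.default_rng(1)
counts = rng.poisson(expected_counts([12.0, 3.0]))  # simulated measurement
res = minimize(neg_log_likelihood, x0=[5.0, 5.0], args=(counts,),
               method='Nelder-Mead')
print(res.x)  # recovered equivalent thicknesses, close to (12, 3)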
Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops
NASA Technical Reports Server (NTRS)
Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram
2017-01-01
The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate constitute a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
Cardinal Equivalence of Small Number in Young Children.
ERIC Educational Resources Information Center
Kingma, J.; Roelinga, U.
1982-01-01
Children completed three types of equivalent cardination tasks which assessed the influence of different stimulus configurations (linear, linear-nonlinear, and nonlinear), and density of object spacing. Prior results reported by Siegel, Brainerd, and Gelman and Gallistel were not replicated. Implications for understanding cardination concept…
Computation of nonlinear ultrasound fields using a linearized contrast source method.
Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A
2013-08-01
Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
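The remedy described above can be illustrated on a toy linearized contrast-source equation, with a random dense matrix standing in for the Green's operator and scipy's Bi-CGSTAB solving the system matrix-free; in the actual INCS method the operator is a convolution evaluated over a large spatiotemporal domain.

import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

# Toy linearized contrast-source equation u = u_inc + G(chi * u),
# i.e. (I - G chi) u = u_inc, where chi is a fairly strong contrast.
n = 200
rng = np.random.default_rng(0)
G = 0.02 * rng.standard_normal((n, n))   # stand-in for the Green's operator
chi = 3.0 * np.ones(n)
u_inc = np.ones(n)

A = LinearOperator((n, n), matvec=lambda u: u - G @ (chi * u))
u, info = bicgstab(A, u_inc)
print(info, np.linalg.norm(A.matvec(u) - u_inc))  # info == 0 on convergence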
NASA Astrophysics Data System (ADS)
Torres Cedillo, Sergio G.; Bonello, Philip
2016-01-01
The high pressure (HP) rotor in an aero-engine assembly cannot be accessed under operational conditions because of the restricted space for instrumentation and high temperatures. This motivates the development of a non-invasive inverse problem approach for unbalance identification and balancing, requiring prior knowledge of the structure. Most such methods in the literature necessitate linear bearing models, making them unsuitable for aero-engine applications which use nonlinear squeeze-film damper (SFD) bearings. A previously proposed inverse method for nonlinear rotating systems was highly limited in its application (e.g. assumed circular centered SFD orbits). The methodology proposed in this paper overcomes such limitations. It uses the Receptance Harmonic Balance Method (RHBM) to generate the backward operator using measurements of the vibration at the engine casing, provided there is at least one linear connection between rotor and casing, apart from the nonlinear connections. A least-squares solution yields the equivalent unbalance distribution in prescribed planes of the rotor, which is consequently used to balance it. The method is validated on distinct rotordynamic systems using simulated casing vibration readings. The method is shown to provide effective balancing under hitherto unconsidered practical conditions. The repeatability of the method, as well as its robustness to noise, model uncertainty and balancing errors, are satisfactorily demonstrated and the limitations of the process discussed.
NASA Astrophysics Data System (ADS)
McDonald, Michael C.; Kim, H. K.; Henry, J. R.; Cunningham, I. A.
2012-03-01
The detective quantum efficiency (DQE) is widely accepted as a primary measure of x-ray detector performance in the scientific community. A standard method for measuring the DQE, based on IEC 62220-1, requires the system to have a linear response, meaning that the detector output signals are proportional to the incident x-ray exposure. However, many systems have a non-linear response due to characteristics of the detector, or post-processing of the detector signals, that cannot be disabled and may involve unknown algorithms considered proprietary by the manufacturer. For these reasons, the DQE has not been considered a practical candidate for routine quality assurance testing in a clinical setting. In this article we describe a method that can be used to measure the DQE of both linear and non-linear systems that employ only linear image processing algorithms. The method was validated on a Cesium Iodide based flat panel system that simultaneously stores a raw (linear) and processed (non-linear) image for each exposure. It was found that the resulting DQE was equivalent to a conventional standards-compliant DQE within measurement precision, and that the gray-scale inversion and linear edge enhancement did not affect the DQE result. While not IEC 62220-1 compliant, the method may be adequate for QA programs.
Linear network representation of multistate models of transport.
Sandblom, J; Ring, A; Eisenman, G
1982-01-01
By introducing external driving forces in rate-theory models of transport we show how the Eyring rate equations can be transformed into Ohm's law with potentials that obey Kirchhoff's second law. From such a formalism the state diagram of a multioccupancy multicomponent system can be directly converted into a linear network, with resistors connecting nodal (branch) points and with capacitances connecting each nodal point to a reference point. The external forces appear as emf or current generators in the network. This theory allows the algebraic methods of linear network theory to be used in solving the flux equations for multistate models and is particularly useful for making proper simplifying approximations in models of complex membrane structure. Some general properties of the linear network representation are also deduced. It is shown, for instance, that Maxwell's reciprocity relationships of linear networks lead directly to Onsager's relationships in the near-equilibrium region. Finally, as an example of the procedure, the equivalent circuit method is used to solve the equations for a few transport models. PMID:7093425
A method for the analysis of nonlinearities in aircraft dynamic response to atmospheric turbulence
NASA Technical Reports Server (NTRS)
Sidwell, K.
1976-01-01
An analytical method is developed which combines the equivalent linearization technique for the analysis of the response of nonlinear dynamic systems with the amplitude modulated random process (Press model) for atmospheric turbulence. The method is initially applied to a bilinear spring system. The analysis of the response shows good agreement with exact results obtained by the Fokker-Planck equation. The method is then applied to an example of control-surface displacement limiting in an aircraft with a pitch-hold autopilot.
Hartzell, S.; Leeds, A.; Frankel, A.; Williams, R.A.; Odum, J.; Stephenson, W.; Silva, W.
2002-01-01
The Seattle fault poses a significant seismic hazard to the city of Seattle, Washington. A hybrid, low-frequency, high-frequency method is used to calculate broadband (0-20 Hz) ground-motion time histories for a M 6.5 earthquake on the Seattle fault. High frequencies (>1 Hz) are calculated by a stochastic method that uses a fractal subevent size distribution to give an ω⁻² displacement spectrum. Time histories are calculated for a grid of stations and then corrected for the local site response using a classification scheme based on the surficial geology. Average shear-wave velocity profiles are developed for six surficial geologic units: artificial fill, modified land, Esperance sand, Lawton clay, till, and Tertiary sandstone. These profiles, together with other soil parameters, are used to compare linear, equivalent-linear, and nonlinear predictions of ground motion in the frequency band 0-15 Hz. Linear site-response corrections are found to yield unreasonably large ground motions. Equivalent-linear and nonlinear calculations give peak values similar to those of the 1994 Northridge, California, earthquake and those predicted by regression relationships. Ground-motion variance is estimated for (1) randomization of the velocity profiles, (2) variation in source parameters, and (3) choice of nonlinear model. Within the limits of the models tested, the results are found to be most sensitive to the nonlinear model and soil parameters, notably the overconsolidation ratio.
Weighted least squares phase unwrapping based on the wavelet transform
NASA Astrophysics Data System (ADS)
Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia
2007-01-01
The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. This method usually leads to a large sparse linear equation system, which is typically solved with the Gauss-Seidel relaxation iterative method. However, that approach is not practical due to its extremely slow convergence. The multigrid method is an efficient algorithm for improving the convergence rate, but it needs an additional weight restriction operator which is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels, and an equivalent equation system with a better convergence condition can be obtained. Fast convergence in the separate coarse resolution levels speeds up the overall system convergence rate. Simulated experiments show that the proposed method converges faster and provides better results than the multigrid method.
Fillet Weld Stress Using Finite Element Methods
NASA Technical Reports Server (NTRS)
Lehnhoff, T. F.; Green, G. W.
1985-01-01
Average elastic Von Mises equivalent stresses were calculated along the throat of a single lap fillet weld. The average elastic stresses were compared to initial yield and to plastic instability conditions, and a factor to modify conventional design formulas is presented. The factor is a linear function of the thicknesses of the parent plates attached by the fillet weld.
An approach to checking case-crossover analyses based on equivalence with time-series methods.
Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L
2008-03-01
The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.
On High-Order Upwind Methods for Advection
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2017-01-01
In the fourth installment of the celebrated series of five papers entitled "Towards the ultimate conservative difference scheme", Van Leer (1977) introduced five schemes for advection; the first three are piecewise linear, and the last two piecewise parabolic. Among the five, scheme I, which is the least accurate, extends with relative ease to systems of equations in multiple dimensions. As a result, it became the most popular and is widely known as the MUSCL scheme (monotone upstream-centered schemes for conservation laws). Schemes III and V have the same accuracy, are the most accurate, and are closely related to current high-order methods. Scheme III uses a piecewise linear approximation that is discontinuous across cells, and can be considered a precursor of the discontinuous Galerkin methods. Scheme V employs a piecewise quadratic approximation that is, as opposed to the case of scheme III, continuous across cells. This method is the basis for the ongoing "active flux scheme" developed by Roe and collaborators. Here, schemes III and V are shown to be equivalent in the sense that they yield identical (reconstructed) solutions, provided the initial condition for scheme III is defined from that of scheme V in a manner dependent on the CFL number. This equivalence is counterintuitive, since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The finding also shows a key connection between the approaches of discontinuous and continuous polynomial approximations. In addition to the discussed equivalence, a framework using both projection and interpolation that extends schemes III and V into a single family of high-order schemes is introduced. For these high-order extensions, it is demonstrated via Fourier analysis that schemes with the same number of degrees of freedom n per cell, in spite of the different piecewise polynomial degrees, share the same sets of eigenvalues and thus have the same stability and accuracy. Moreover, these schemes are accurate to order 2n-1, which is higher than the expected order of n.
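For readers unfamiliar with scheme I, a minimal piecewise-linear upwind (MUSCL-type) advection step looks as follows; note that the minmod slope used here is a common variant, not necessarily Van Leer's original slope choice.

import numpy as np

def minmod(a, b):
    return np.where(a*b > 0.0, np.sign(a)*np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, nu):
    # One step of a piecewise-linear upwind scheme for u_t + a u_x = 0
    # with a > 0, nu = a*dt/dx, on a periodic grid.
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited cell slopes
    u_face = u + 0.5*(1.0 - nu)*s                       # right-face states
    flux = nu * u_face
    return u - (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0*(x - 0.3)**2)
for _ in range(100):
    u = muscl_step(u, nu=0.5)   # advects the pulse by 0.25 of the domain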
Cenciani de Souza, Camila Prado; Aparecida de Abreu, Cleide; Coscione, Aline Renée; Alberto de Andrade, Cristiano; Teixeira, Luiz Antonio Junqueira; Consolini, Flavia
2018-01-01
Rapid, accurate, and low-cost alternative analytical methods for micronutrient quantification in fertilizers are fundamental in QC. The purpose of this study was to evaluate whether the zinc (Zn) and copper (Cu) contents in mineral fertilizers and industrial by-products determined by the alternative methods USEPA 3051a, 10% HCl, and 10% H2SO4 are statistically equivalent to those from the standard method, consisting of hot-plate digestion using concentrated HCl. The commercially marketed Zn and Cu sources in Brazil consisted of oxide, carbonate, and sulfate fertilizers and by-products consisting of galvanizing ash, galvanizing sludge, brass ash, and brass or scrap slag. The contents of the sources ranged from 15 to 82% and 10 to 45%, respectively, for Zn and Cu. The Zn and Cu contents refer to the variation of the elements found in the different sources evaluated with the concentrated HCl method as shown in Table 1. A protocol based on the following criteria was used for the statistical assessment of the methods: the F-test modified by Graybill, the t-test for the mean error, and linear correlation coefficient analysis. In terms of equivalence, 10% HCl extraction was equivalent to the standard method for Zn, and the results of the USEPA 3051a and 10% HCl methods indicated that these methods were equivalent for Cu. Therefore, these methods can be considered viable alternatives to the standard method of determination for Cu and Zn in mineral fertilizers and industrial by-products in future research for their complete validation.
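A sketch of that statistical protocol on simulated paired determinations (all numbers invented): regress the alternative method on the standard one, apply a Graybill-type joint F-test of intercept 0 and slope 1, a paired t-test for the mean error, and report the correlation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
std = rng.uniform(10, 45, 25)                      # standard method, % Cu
alt = 0.2 + 0.99 * std + rng.normal(0, 0.8, 25)    # alternative method

# Linear regression alt = b0 + b1 * std and correlation
res = stats.linregress(std, alt)
X = np.column_stack([np.ones_like(std), std])
b = np.array([res.intercept, res.slope])
s2 = np.sum((alt - X @ b) ** 2) / (len(std) - 2)

# Graybill-type F-test of H0: (b0, b1) = (0, 1) jointly
d = b - np.array([0.0, 1.0])
F = d @ (X.T @ X) @ d / (2 * s2)
p_F = 1 - stats.f.cdf(F, 2, len(std) - 2)

# t-test for the mean error (paired differences)
t, p_t = stats.ttest_rel(alt, std)
print(F, p_F, t, p_t, res.rvalue)   # non-significant tests support equivalence
```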
Micromechanical analysis on anisotropy of structured magneto-rheological elastomer
NASA Astrophysics Data System (ADS)
Li, R.; Zhang, Z.; Chen, S. W.; Wang, X. J.
2015-07-01
This paper investigates the equivalent elastic modulus of a structured magneto-rheological elastomer (MRE) in the absence of a magnetic field. We assume that both the matrix and the ferromagnetic particles are linear elastic materials, and that the ferromagnetic particles are embedded in the matrix in a layer-like structure. The structured composite can be divided into a matrix layer and a reinforced layer, in which the reinforced layer is composed of the matrix and ferromagnetic particles homogeneously distributed within it. The equivalent elastic modulus of the reinforced layer is analysed by the Mori-Tanaka method. A Finite Element Method (FEM) analysis is also carried out to illustrate the relationship between the elastic modulus and the volume fraction of ferromagnetic particles. The results show that the anisotropy of the elastic modulus becomes noticeable as the volume fraction of particles increases.
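For an isotropic reinforced layer, a Mori-Tanaka estimate for spherical particles can be written in closed form; it coincides with the Hashin-Shtrikman lower bound when the filler is the stiffer phase. The sketch below is a generic formula, not the paper's derivation, and the matrix/particle moduli are illustrative placeholders.

```python
def mori_tanaka_spheres(Km, Gm, Ki, Gi, f):
    """Mori-Tanaka effective bulk/shear moduli of a matrix (Km, Gm) containing
    a volume fraction f of spherical inclusions (Ki, Gi); equals the
    Hashin-Shtrikman lower bound for a stiff filler in a soft matrix."""
    K = Km + f / (1.0 / (Ki - Km) + 3.0 * (1 - f) / (3 * Km + 4 * Gm))
    G = Gm + f / (1.0 / (Gi - Gm)
                  + 6.0 * (1 - f) * (Km + 2 * Gm) / (5 * Gm * (3 * Km + 4 * Gm)))
    return K, G

def youngs(K, G):
    # isotropic conversion E = 9KG / (3K + G)
    return 9 * K * G / (3 * K + G)

# Illustrative values in GPa: nearly incompressible elastomer matrix, iron filler
Km, Gm = 2.0, 5.0e-4
Ki, Gi = 170.0, 82.0
for f in (0.1, 0.2, 0.3):
    K, G = mori_tanaka_spheres(Km, Gm, Ki, Gi, f)
    print(f, youngs(K, G))   # effective modulus grows with particle fraction
```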
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2015-12-01
Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas of high levels of source signal spatial complexity and non-stationarity, etc. This goal would not be obtainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom is required to represent the observed MT waveforms across the large frequency bandwidth: the time step of an FDTD simulation must be fine enough to resolve the highest frequency, while the number of time steps must also span the lowest frequency. This leads to a linear system that is computationally burdensome to solve. Our implementation addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation time. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. We found that even a previous-generation CPU/GPU combination speeds computations by an order of magnitude over a parallel CPU-only approach. In part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.
Minimally invasive estimation of ventricular dead space volume through use of Frank-Starling curves.
Davidson, Shaun; Pretty, Chris; Pironet, Antoine; Desaive, Thomas; Janssen, Nathalie; Lambermont, Bernard; Morimont, Philippe; Chase, J Geoffrey
2017-01-01
This paper develops a means of more easily and less invasively estimating ventricular dead space volume (Vd), an important, but difficult to measure, physiological parameter. Vd represents a subject- and condition-dependent portion of measured ventricular volume that is not actively participating in ventricular function. It is employed in models based on the time-varying elastance concept, which see widespread use in haemodynamic studies, and may have direct diagnostic use. The proposed method involves linear extrapolation of a Frank-Starling curve (stroke volume vs end-diastolic volume) and its end-systolic equivalent (stroke volume vs end-systolic volume), developed across normal clinical procedures such as recruitment manoeuvres, to their point of intersection with the y-axis (where stroke volume is 0) to determine Vd. To demonstrate the broad applicability of the method, it was validated across a cohort of six sedated and anaesthetised male Pietrain pigs, encompassing a variety of cardiac states from healthy baseline behaviour to circulatory failure due to septic shock induced by endotoxin infusion. Linear extrapolation of the curves was supported by strong average linear correlation coefficients of R = 0.78 and R = 0.80 for pre- and post-endotoxin infusion respectively, as well as good agreement between the two linearly extrapolated y-intercepts (Vd) for each subject (no more than 7.8% variation). Method validity was further supported by the physiologically reasonable Vd values produced, equivalent to 44.3-53.1% and 49.3-82.6% of baseline end-systolic volume before and after endotoxin infusion respectively. This method has the potential to allow Vd to be estimated without a particularly demanding, specialised protocol in an experimental environment. Further, due to the common use of both mechanical ventilation and recruitment manoeuvres in intensive care, this method, subject to the availability of multi-beat echocardiography, has the potential to allow for estimation of Vd in a clinical environment.
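A tiny sketch of the extrapolation step on invented numbers: fit each volume as a linear function of stroke volume and read off the intercepts at SV = 0; agreement of the two intercepts is the paper's consistency check. The synthetic slopes and the 55 ml intercept are arbitrary.

```python
import numpy as np

# Synthetic beats from a recruitment manoeuvre (all values illustrative, ml)
sv  = np.array([40.0, 46.0, 52.0, 58.0, 64.0])   # stroke volume
edv = 55.0 + 1.6 * sv                             # end-diastolic volume
esv = 55.0 + 0.6 * sv                             # end-systolic volume (EDV - SV)

# Extrapolate each volume to SV = 0; the intercepts are two independent
# estimates of dead space volume Vd and should agree.
vd_frank_starling = np.polyfit(sv, edv, 1)[1]
vd_end_systolic   = np.polyfit(sv, esv, 1)[1]
print(vd_frank_starling, vd_end_systolic)
```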
On the Relation between the Linear Factor Model and the Latent Profile Model
ERIC Educational Resources Information Center
Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul
2011-01-01
The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…
Cotton-type and joint invariants for linear elliptic systems.
Aslam, A; Mahomed, F M
2013-01-01
Cotton-type invariants for a subclass of a system of two linear elliptic equations, obtainable from a complex base linear elliptic equation, are derived both by splitting of the corresponding complex Cotton invariants of the base complex equation and from the Laplace-type invariants of the system of linear hyperbolic equations equivalent to the system of linear elliptic equations via linear complex transformations of the independent variables. It is shown that Cotton-type invariants derived from these two approaches are identical. Furthermore, Cotton-type and joint invariants for a general system of two linear elliptic equations are also obtained from the Laplace-type and joint invariants for a system of two linear hyperbolic equations equivalent to the system of linear elliptic equations by complex changes of the independent variables. Examples are presented to illustrate the results.
Study on static and dynamic characteristics of moving magnet linear compressors
NASA Astrophysics Data System (ADS)
Chen, N.; Tang, Y. J.; Wu, Y. N.; Chen, X.; Xu, L.
2007-09-01
With the development of high-strength NdFeB magnetic material, moving magnet linear compressors have been gradually introduced in the fields of refrigeration and cryogenic engineering, especially in Stirling and pulse tube cryocoolers. This paper presents simulation and experimental investigations on the static and dynamic characteristics of a moving magnet linear motor and a moving magnet linear compressor. Both equivalent magnetic circuit and finite element approaches have been used to model the moving magnet linear motor. Subsequently, the force and equilibrium characteristics of the linear motor have been predicted and verified by detailed static experimental analyses. In combination with a harmonic analysis, experimental investigations were conducted on a prototype of a moving magnet linear compressor. The voltage-stroke relationship, the effect of charging pressure on performance, and the dynamic frequency-response characteristics are investigated. Finally, a method to identify optimal operating points of the linear compressor is described, which is indispensable to the design and operation of moving magnet linear compressors.
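The dynamic behavior described here is commonly idealized as a driven mass-spring-damper, with flexure plus gas-spring stiffness acting on the moving magnet assembly. A generic sketch of that idealization (not the authors' model; every parameter value is invented):

```python
import numpy as np

def stroke_amplitude(freq_hz, F0, m, k, c):
    """Steady-state stroke of an idealized linear-motor piston modelled as a
    driven mass-spring-damper: m*x'' + c*x' + k*x = F0*cos(w*t)."""
    w = 2 * np.pi * freq_hz
    return F0 / np.sqrt((k - m * w ** 2) ** 2 + (c * w) ** 2)

m, c = 0.5, 15.0            # moving mass (kg) and gas-load damping (N s/m)
k = 2.0e4 + 0.8e4           # flexure + gas-spring stiffness (N/m)
f_res = np.sqrt(k / m) / (2 * np.pi)   # resonant operating frequency
for f in (20.0, 40.0, f_res, 80.0):
    print(round(float(f), 1), stroke_amplitude(f, F0=30.0, m=m, k=k, c=c))
```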
Kernel PLS-SVC for Linear and Nonlinear Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan
2003-01-01
A new methodology for discrimination is proposed. It is based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection of orthonormalized PLS to Fisher's approach to linear discrimination or, equivalently, to canonical correlation analysis is described. This gives preference to using orthonormalized PLS over principal component analysis. Good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger-movement periods versus non-movement periods based on electroencephalogram signals.
Method of Preparing Polymers with Low Melt Viscosity
NASA Technical Reports Server (NTRS)
Jensen, Brian J. (Inventor)
2001-01-01
This invention is an improvement in standard polymerization procedures, i.e., addition-type and step-growth polymerizations, wherein monomers are reacted to form a growing polymer chain. The improvement includes employing an effective amount of a trifunctional monomer (such as a trifunctional amine, anhydride, or phenol) in the polymerization procedure to form a mixture of polymeric materials consisting of branched polymers, star-shaped polymers, and linear polymers. This mixture of polymeric materials has a lower melt temperature and a lower melt viscosity than corresponding linear polymeric materials of equivalent molecular weight.
Conformal array design on arbitrary polygon surface with transformation optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Li, E-mail: dengl@bupt.edu.cn; Hong, Weijun, E-mail: hongwj@bupt.edu.cn; Zhu, Jianfeng
2016-06-15
A transformation-optics based method to design a conformal antenna array on an arbitrary polygon surface is proposed and demonstrated in this paper. This conformal antenna array can be adjusted to behave equivalently to a uniformly spaced linear array by applying an appropriate transformation medium. A typical example of a general arbitrary-polygon conformal array, not limited to a circular array, is presented, verifying the proposed approach. In summary, the novel arbitrary-polygon surface conformal array can be utilized in array synthesis and beam-forming, maintaining all benefits of a linear array.
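The target of such a design is the far-field pattern of the equivalent uniform linear array, which is easy to write down directly. A quick sketch, with an illustrative element count and half-wavelength spacing assumed:

```python
import numpy as np

def array_factor(theta, n_elem, spacing_wl, weights=None):
    """Far-field array factor of a uniformly spaced linear array
    (element spacing given in wavelengths)."""
    if weights is None:
        weights = np.ones(n_elem)
    k_d = 2 * np.pi * spacing_wl            # phase per element at angle theta
    phase = np.outer(np.sin(theta), np.arange(n_elem) * k_d)
    return np.abs(np.exp(1j * phase) @ weights)

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
af = array_factor(theta, n_elem=8, spacing_wl=0.5)
print(theta[np.argmax(af)])   # broadside main beam at theta = 0
```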
NASA Astrophysics Data System (ADS)
Zhao, Yanlin; Yao, Jun; Wang, Mi
2016-07-01
On-line monitoring of crystal size in the crystallization process is crucial to many pharmaceutical and fine-chemical industrial applications. In this paper, a novel method is proposed for the on-line monitoring of the cooling crystallization process of L-glutamic acid (LGA) using electrical impedance spectroscopy (EIS). The EIS method can monitor the growth of crystal particles by relying on the presence of an electrical double layer on the charged particle surface and the polarization of this double layer under the excitation of an alternating electrical field. The electrical impedance spectra and crystal size were measured on-line simultaneously by an impedance analyzer and focused beam reflectance measurement (FBRM), respectively. The impedance spectra were analyzed using an equivalent circuit model, and the equivalent circuit elements in the model can be obtained by fitting the experimental data. Two equivalent circuit elements, the capacitance (C2) and resistance (R2) arising from the dielectric polarization of the LGA solution and the crystal particle/solution interface, are related to the crystal size. The mathematical relationship between the crystal size and the equivalent circuit elements can be obtained by a non-linear fitting method, and the resulting function can be used to predict the change of crystal size during the crystallization process.
Architecture for one-shot compressive imaging using computer-generated holograms.
Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D
2016-09-10
We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosnitskiy, P., E-mail: pavrosni@yandex.ru; Yuldashev, P., E-mail: petr@acs366.phys.msu.ru; Khokhlova, V., E-mail: vera@acs366.phys.msu.ru
2015-10-28
An equivalent source model was proposed as a boundary condition to the nonlinear parabolic Khokhlov-Zabolotskaya (KZ) equation to simulate high intensity focused ultrasound (HIFU) fields generated by medical ultrasound transducers with the shape of a spherical shell. The boundary condition was set in the initial plane; the aperture, the focal distance, and the initial pressure of the source were chosen based on the best match of the axial pressure amplitude and phase distributions in the Rayleigh integral analytic solution for a spherical transducer and the linear parabolic approximation solution for the equivalent source. Analytic expressions for the equivalent source parameters were derived. It was shown that the proposed approach allowed us to transfer the boundary condition from the spherical surface to the plane and to achieve a very good match between the linear field solutions of the parabolic and full diffraction models even for highly focused sources with F-number less than unity. The proposed method can be further used to expand the capabilities of the KZ nonlinear parabolic equation for efficient modeling of HIFU fields generated by strongly focused sources.
NASA Astrophysics Data System (ADS)
Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Makhno, D. E.; Fedotov, K. V.
2018-03-01
The paper aims to analyze vibrations of the dynamic system equivalent to the suspension system, with regard to the tyre's ability to smooth road irregularities. The research is based on the statistical dynamics of linear automatic control systems and on methods of correlation, spectral, and numerical analysis. Introducing new data on the smoothing effect of the pneumatic tyre, which reflects changes of the contact area between the wheel and the road under vibrations of the suspension, makes the system non-linear, and this requires numerical analysis methods. By taking the variable smoothing ability of the tyre into account when calculating suspension vibrations, one can bring calculated results closer to experimental ones and improve on the assumption of a constant smoothing ability of the tyre.
Recent Progress in the p and h-p Version of the Finite Element Method.
1987-07-01
code PROBE which was developed recently by NOETIC Technologies, St. Louis [54]. PROBE solves two-dimensional problems of linear elasticity, stationary... of the finite element method was studied in detail from various points of view. We will mention here some essential illustrative results. In one... [28] Bathe, K. J., Brezzi, F., Studies of finite element procedures - the INF-SUP condition, equivalent forms and applications, in Reliability of...
NASA Astrophysics Data System (ADS)
Domnisoru, L.; Modiga, A.; Gasparotti, C.
2016-08-01
At the ship design stage, the first step of the hull structural assessment is the longitudinal strength analysis, with head-wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis that considers the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on the non-linearities of the 3D-hull offset lines, and involves three interlinked iterative cycles on the floating, pitch, and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship's girder are obtained. As a numerical case study we have considered a large LPG liquefied petroleum gas carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results of this study point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.
2017-08-01
Self-learning equivalent-convolutional neural structures (SLECNS) for auto-encoding-decoding and image clustering are discussed. SLECNS architectures and their spatially invariant equivalent models (SI EMs), using the corresponding matrix-matrix procedures with basic operations of continuous logic and non-linear processing, are proposed. These SI EMs have several advantages, such as the ability to recognize image fragments with better efficiency and strong cross-correlation. The proposed method for clustering fragments with regard to their structural features is suitable not only for binary but also for color images, and it combines self-learning with the formation of weighted clustered matrix-patterns. Its model is constructed on the basis of recursive processing algorithms and the k-average (k-means) method. The experimental results confirmed that larger images and 2D binary fragments with a large number of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. An experiment was carried out for an image with dimensions of 256x256 (a reference array) and fragments with dimensions of 7x7 and 21x21. The experiments, performed in the Mathcad software environment, showed that the proposed method is universal, converges well within a small number of iterations, maps easily onto the matrix structure, and is promising. Thus, it is very important to understand the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes in neurons, and the principles of neural auto-encoding-decoding and recognition with the use of self-learned cluster patterns; these rely on algorithms and principles of non-linear processing of two-dimensional spatial functions for image comparison. The SI EMs can simply describe signal processing during all training and recognition stages, and they are suitable for unipolar-coded multilevel signals. We show that an implementation of SLECNS based on known equivalentors or traditional correlators is possible if they are based on the proposed equivalental two-dimensional functions of image similarity. The clustering efficiency of such models and their implementation depend on the discriminant properties of the neural elements of the hidden layers. Therefore, the main model and architecture parameters and characteristics depend on the types of non-linear processing applied and on the function used for image comparison or for adaptive-equivalental weighting of input patterns. Real model experiments in Mathcad are demonstrated, confirming that non-linear processing with equivalent functions allows one to determine the winning neurons and to adjust the weight matrix. Experimental results have shown that such models can be successfully used for auto- and hetero-associative recognition. They can also be used to explain some mechanisms known as "focus" and the "competing gain-inhibition concept". The SLECNS architecture and hardware implementations of its basic nodes, based on multi-channel convolvers and correlators with time integration, are proposed. The parameters and performance of such architectures are estimated.
Modification of the USLE K factor for soil erodibility assessment on calcareous soils in Iran
NASA Astrophysics Data System (ADS)
Ostovari, Yaser; Ghorbani-Dashtaki, Shoja; Bahrami, Hossein-Ali; Naderi, Mehdi; Dematte, Jose Alexandre M.; Kerry, Ruth
2016-11-01
The measurement of soil erodibility (K) in the field is tedious, time-consuming and expensive; therefore, its prediction through pedotransfer functions (PTFs) could be far less costly and time-consuming. The aim of this study was to develop new PTFs to estimate the K factor using multiple linear regression, Mamdani fuzzy inference systems, and artificial neural networks. For this purpose, K was measured in 40 erosion plots with natural rainfall. Various soil properties including the soil particle size distribution, calcium carbonate equivalent, organic matter, permeability, and wet-aggregate stability were measured. The results showed that the mean measured K was 0.014 t h MJ-1 mm-1, 2.08 times less than the estimated mean K (0.030 t h MJ-1 mm-1) from the USLE model. Permeability, wet-aggregate stability, very fine sand, and calcium carbonate were selected as independent variables by forward stepwise regression in order to assess the ability of multiple linear regression, Mamdani fuzzy inference systems and artificial neural networks to predict K. The calcium carbonate equivalent, which is not accounted for in the USLE model, had a significant impact on K in multiple linear regression due to its strong influence on the stability of aggregates and soil permeability. Statistical indices on validation and calibration datasets determined that the artificial neural networks method, with the highest R2, lowest RMSE, and lowest ME, was the best model for estimating the K factor. A strong correlation (R2 = 0.81, n = 40, p < 0.05) between the K estimated from multiple linear regression and the measured K indicates that the use of calcium carbonate equivalent as a predictor variable gives a better estimation of K in areas with calcareous soils.
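A minimal sketch of the multiple-linear-regression PTF on synthetic plot data; the coefficients and property ranges are invented, and only the four predictors match the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 40                                   # one row per erosion plot
perm = rng.uniform(2, 30, n)             # permeability (mm/h)
was  = rng.uniform(0.2, 0.9, n)          # wet-aggregate stability
vfs  = rng.uniform(2, 15, n)             # very fine sand (%)
cce  = rng.uniform(5, 60, n)             # calcium carbonate equivalent (%)
K = (0.03 - 0.0005 * perm - 0.02 * was + 0.001 * vfs - 0.0002 * cce
     + rng.normal(0, 0.002, n))          # synthetic erodibility, t h MJ-1 mm-1

X = sm.add_constant(np.column_stack([perm, was, vfs, cce]))
fit = sm.OLS(K, X).fit()
print(fit.params, fit.rsquared)          # fitted PTF coefficients and fit quality
```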
Kholeif, S A
2001-06-01
A new method that belongs to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is presented. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and methods of the equivalence point category, such as those of Gran or Fortuin.
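The core of such a procedure, reduced to a sketch: numerically differentiate the curve, then refine the extremum of the derivative with the analytical vertex of an inverse parabolic interpolation through the three surrounding points. The three-point variant below is a simplification of the paper's four-point non-linear fit, and the sigmoid test curve is invented.

```python
import numpy as np

def end_point(v, ph):
    """Locate a titration end point as the extremum of the first derivative,
    refined by the analytical vertex of a parabola through the three points
    around the discrete maximum (assumes a non-degenerate peak)."""
    d = np.abs(np.gradient(ph, v))             # |dpH/dV|
    i = int(np.argmax(d))
    x0, x1, x2 = v[i - 1], v[i], v[i + 1]
    y0, y1, y2 = d[i - 1], d[i], d[i + 1]
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den                # vertex of the parabola

v = np.linspace(0.0, 20.0, 201)                          # titrant volume (ml)
ph = 7.0 + 4.0 * np.tanh(1.5 * (v - 10.37))              # end point at 10.37 ml
print(end_point(v, ph))                                  # close to 10.37
```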
NASA Technical Reports Server (NTRS)
Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.
1959-01-01
A method based on linearized and slender-body theories, which is easily adapted to electronic-machine computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method to calculate zero-lift wave drag are evaluated relative to another numerical method which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers are comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method only requires up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution and less time for a larger number of bodies.
NASA Astrophysics Data System (ADS)
Shibata, Akenori; Masuno, Hidemasa
2017-10-01
An eleven-story RC apartment building suffered medium damage in the 2011 East Japan earthquake and was retrofitted for re-use. Strong motion records were obtained near the building. This paper discusses the inelastic earthquake response analysis of the building using the equivalent single-degree-of-freedom (1-DOF) system to account for the features of damage. The method of converting the building frame into 1-DOF system with tri-linear reducing-stiffness restoring force characteristics was given. The inelastic response analysis of the building against the earthquake using the inelastic 1-DOF equivalent system could interpret well the level of actual damage.
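As a sketch of the ingredients (not the authors' tri-linear reducing-stiffness model), the following integrates an elastic-perfectly-plastic single-degree-of-freedom oscillator under a synthetic ground motion; a tri-linear rule would replace the one-line force update. All parameter values are illustrative.

```python
import numpy as np

def epp_sdof(ag, dt, m=1.0, f_hz=1.2, zeta=0.05, uy=0.02):
    """Elastic-perfectly-plastic 1-DOF response to ground acceleration ag,
    via semi-implicit Euler with a return-mapped restoring force."""
    k = m * (2 * np.pi * f_hz) ** 2
    c = 2 * zeta * np.sqrt(k * m)
    fy = k * uy                              # yield force
    u = v = fs = 0.0
    hist = []
    for a_g in ag:
        acc = (-m * a_g - c * v - fs) / m    # m*u'' + c*u' + fs = -m*ag
        v += acc * dt
        du = v * dt
        u += du
        fs = np.clip(fs + k * du, -fy, fy)   # elastoplastic force update
        hist.append(u)
    return np.array(hist)

dt = 0.005
t = np.arange(0.0, 20.0, dt)
ag = 3.0 * np.exp(-0.3 * t) * np.sin(2 * np.pi * 1.0 * t)  # synthetic motion
u = epp_sdof(ag, dt)
print(u.max(), u.min(), u[-1])   # peak and residual (damage-like) displacement
```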
Bravo, Isabel; Pirraco, Rui
2011-01-01
Purpose The purpose of this work was the biological comparison between Low Dose Rate (LDR) and Pulsed Dose Rate (PDR) brachytherapy in cervical cancer, prompted by the discontinuation of the afterloading system used for the LDR treatments at our Institution since December 2009. Material and methods In the first phase we studied the influence of the pulse dose and the pulse time on the biological equivalence between LDR and PDR treatments using the Linear Quadratic Model (LQM). In the second phase, the equivalent dose in 2 Gy/fraction (EQD2) for the tumor, rectum and bladder in treatments performed with both techniques was evaluated and statistically compared. All evaluated patients had stage IIB cervical cancer and were treated with External Beam Radiotherapy (EBRT) plus two Brachytherapy (BT) applications. Data were collected from 48 patients (26 patients treated with LDR and 22 patients with PDR). Results In the analysis of the influence of PDR parameters on the biological equivalence between LDR and PDR treatments (Phase 1), it was calculated that if the pulse dose in PDR was kept equal to the LDR dose rate, a small therapeutic loss was expected. If the pulse dose was decreased, the therapeutic window became larger, but a correction in the prescribed dose was necessary. In PDR schemes with a 1 hour interval between pulses, the pulse time did not significantly influence the equivalent dose. In the comparison between the groups treated with LDR and PDR (Phase 2), we concluded that they were not equivalent, because in the PDR group the total EQD2 for the tumor, rectum and bladder was smaller than in the LDR group; the LQM estimated that a correction in the prescribed dose of 6% to 10% was necessary to avoid therapeutic loss. Conclusions A correction in the prescribed dose was necessary; this correction should be achieved by calculating the PDR dose equivalent to the desired LDR total dose. PMID:23346123
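The EQD2 conversion at the heart of such comparisons is a one-liner of the linear quadratic model; the dose-rate and incomplete-repair corrections that distinguish LDR from PDR are a separate layer not shown here. Doses and alpha/beta values below are illustrative.

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2-Gy fractions from the linear quadratic model:
    EQD2 = D * (d + a/b) / (2 + a/b)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Illustrative BT schedule: two applications of 7 Gy each
print(eqd2(14.0, 7.0, alpha_beta=10.0))   # tumor (a/b = 10 Gy)
print(eqd2(14.0, 7.0, alpha_beta=3.0))    # late-responding tissue (a/b = 3 Gy)
```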
Development of a Thiolysis HPLC Method for the Analysis of Procyanidins in Cranberry Products.
Gao, Chi; Cunningham, David G; Liu, Haiyan; Khoo, Christina; Gu, Liwei
2018-03-07
The objective of this study was to develop a thiolysis HPLC method to quantify total procyanidins, the ratio of A-type linkages, and A-type procyanidin equivalents in cranberry products. Cysteamine was utilized as a low-odor substitute of toluene-α-thiol for thiolysis depolymerization. A reaction temperature of 70 °C and reaction time of 20 min, in 0.3 M of HCl, were determined to be optimum depolymerization conditions. Thiolytic products of cranberry procyanidins were separated by RP-HPLC and identified using high-resolution mass spectrometry. Standard curves with good linearity were obtained using thiolyzed procyanidin dimer A2 and B2 external standards. The detection and quantification limits, recovery, and precision of this method were validated. The new method was applied to quantitate total procyanidins, average degree of polymerization, ratio of A-type linkages, and A-type procyanidin equivalents in cranberry products. Results showed that the method was suitable for quantitative and qualitative analysis of procyanidins in cranberry products.
NASA Technical Reports Server (NTRS)
Baker, A. J.
1974-01-01
The finite-element method is used to establish a numerical solution algorithm for the Navier-Stokes equations for two-dimensional flows of a viscous compressible fluid. Numerical experiments confirm the advection property for the finite-element equivalent of the nonlinear convection term for both unidirectional and recirculating flowfields. For linear functionals, the algorithm demonstrates good accuracy using coarse discretizations and h squared convergence with discretization refinement.
The evaluation of the neutron dose equivalent in the two-bend maze.
Tóth, Á Á; Petrović, B; Jovančević, N; Krmar, M; Rutonjski, L; Čudić, O
2017-04-01
The purpose of this study was to explore the effect of the second bend of the maze on the neutron dose equivalent in a 15 MV linear accelerator vault with a two-bend maze. The two bends of the maze were covered by 32 points at which the neutron dose equivalent was measured. One method is available for estimating the neutron dose equivalent at the entrance door of a two-bend maze, and it was tested using the results of the measurements. The results of this study show that the neutron dose equivalent at the door of the two-bend maze was reduced by almost three orders of magnitude. The measured TVD in the first bend (closer to the inner maze entrance) is about 5 m. This measured TVD is close to the TVD values usually used in the proposed models for estimating the neutron dose equivalent at the entrance door of a single-bend maze. The results also show that the TVD in the second bend (next to the maze entrance door) is significantly lower than the TVD values found in the first maze bend.
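The usual bookkeeping behind such estimates is a tenth-value-distance attenuation per maze leg. A sketch with invented leg lengths and TVDs in the spirit of the study (about 5 m in the first leg, smaller in the second):

```python
def maze_dose(H0, legs):
    """Attenuate a neutron dose equivalent H0 (at the inner maze entrance)
    along maze legs given as (length_m, TVD_m) pairs, one TVD per leg."""
    H = H0
    for length, tvd in legs:
        H *= 10.0 ** (-length / tvd)
    return H

# Illustrative numbers only: 7 m leg with TVD 5 m, then 6 m leg with TVD 3 m
H_door = maze_dose(H0=1.0e3, legs=[(7.0, 5.0), (6.0, 3.0)])
print(H_door)   # about 3.4 decades below H0 for these invented values
```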
DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro
2016-10-01
This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
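The model-based reading of MID is easy to demonstrate: fitting an LNP filter by maximum likelihood under a Poisson model is a smooth optimization of the Poisson log-likelihood. A simulation sketch (filter shape, bias, and sizes invented):

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)
T, D = 20000, 12
X = rng.normal(size=(T, D))                  # white-noise stimulus
k_true = np.sin(np.linspace(0, np.pi, D))    # true linear filter
rate = np.exp(X @ k_true - 2.0)              # LNP with exponential nonlinearity
y = rng.poisson(rate)                        # Poisson spike counts

def nll(theta):
    """Negative Poisson log-likelihood of the LNP model (up to a constant)."""
    k, b = theta[:D], theta[D]
    eta = X @ k + b
    return np.sum(np.exp(eta)) - y @ eta

res = optimize.minimize(nll, np.zeros(D + 1), method="L-BFGS-B")
k_hat = res.x[:D]
print(np.corrcoef(k_hat, k_true)[0, 1])      # close to 1: ML recovers the filter
```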
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr; Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics
Using the probabilistic language of conditional expectations, we reformulate the force matching method for coarse-graining of molecular systems as a projection onto spaces of coarse observables. A practical outcome of this probabilistic description is the link of the force matching method with thermodynamic integration. This connection provides a way to systematically construct a local mean force and to optimally approximate the potential of mean force through force matching. We introduce a generalized force matching condition for the local mean force in the sense that allows the approximation of the potential of mean force under both linear and non-linear coarse graining mappings (e.g., reaction coordinates, end-to-end length of chains). Furthermore, we study the equivalence of force matching with relative entropy minimization which we derive for general non-linear coarse graining maps. We present in detail the generalized force matching condition through applications to specific examples in molecular systems.
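In its simplest linear-least-squares form, force matching fits a basis expansion of the mean force to sampled instantaneous forces and integrates it to a potential of mean force. A one-dimensional toy sketch, with an invented double-well potential and noise level, unrelated to the paper's molecular examples:

```python
import numpy as np

rng = np.random.default_rng(4)
# Samples of a coarse coordinate x with instantaneous forces fluctuating
# around the local mean force -dU/dx of U(x) = (x^2 - 1)^2
x = rng.uniform(-1.6, 1.6, 5000)
f = -4 * x * (x ** 2 - 1) + rng.normal(0, 3.0, x.size)

# Force matching: least-squares fit of the mean force in a polynomial basis
deg = 5
coef, *_ = np.linalg.lstsq(np.vander(x, deg + 1), f, rcond=None)

# Potential of mean force by (trapezoidal) integration of the fitted force
xs = np.linspace(-1.6, 1.6, 200)
f_fit = np.polyval(coef, xs)
pmf = -np.concatenate(([0.0],
      np.cumsum(0.5 * (f_fit[1:] + f_fit[:-1]) * np.diff(xs))))
pmf -= pmf.min()
print(pmf[np.argmin(np.abs(xs + 1))], pmf[np.argmin(np.abs(xs - 1))])  # wells ~0
```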
Wang, C. L.
2016-05-17
On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly-resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average position error of 0.03-0.08 pixels, much smaller than that (0.29 pixels) of a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
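The linearization idea for a Gaussian LRF can be shown in a few lines: the log of the photon counts is quadratic in pixel position, so an ordinary least-squares parabola yields the peak analytically. This is a simplified cousin of the FluoroBancroft construction, with invented counts:

```python
import numpy as np

def position_log_parabola(x_pix, counts):
    """Linearized position estimate for a Gaussian light-response function:
    ln N(x) is quadratic in x, so a parabola fit to log-counts gives the
    peak position in closed form."""
    w = counts > 0                       # guard against log(0) in empty pixels
    a, b, _ = np.polyfit(x_pix[w], np.log(counts[w]), 2)
    return -b / (2 * a)

x = np.arange(16.0)
x0_true, sigma = 7.3, 1.8
lam = 200 * np.exp(-((x - x0_true) ** 2) / (2 * sigma ** 2))
counts = np.random.default_rng(5).poisson(lam)
print(position_log_parabola(x, counts))  # close to 7.3, sub-pixel accuracy
```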
Iorgulescu, E; Voicu, V A; Sârbu, C; Tache, F; Albu, F; Medvedovici, A
2016-08-01
The influence of the experimental variability (instrumental repeatability, instrumental intermediate precision and sample preparation variability) and data pre-processing (normalization, peak alignment, background subtraction) on the discrimination power of multivariate data analysis methods (Principal Component Analysis -PCA- and Cluster Analysis -CA-) as well as a new algorithm based on linear regression was studied. Data used in the study were obtained through positive or negative ion monitoring electrospray mass spectrometry (+/-ESI/MS) and reversed phase liquid chromatography/UV spectrometric detection (RPLC/UV) applied to green tea extracts. Extractions in ethanol and heated water infusion were used as sample preparation procedures. The multivariate methods were directly applied to mass spectra and chromatograms, involving strictly a holistic comparison of shapes, without assignment of any structural identity to compounds. An alternative data interpretation based on linear regression analysis mutually applied to data series is also discussed. Slopes, intercepts and correlation coefficients produced by the linear regression analysis applied on pairs of very large experimental data series successfully retain information resulting from high frequency instrumental acquisition rates, obviously better defining the profiles being compared. Consequently, each type of sample or comparison between samples produces in the Cartesian space an ellipsoidal volume defined by the normal variation intervals of the slope, intercept and correlation coefficient. Distances between volumes graphically illustrate (dis)similarities between compared data. The instrumental intermediate precision had the major effect on the discrimination power of the multivariate data analysis methods. Mass spectra produced through ionization from liquid state in atmospheric pressure conditions of bulk complex mixtures resulting from extracted materials of natural origins provided an excellent data basis for multivariate analysis methods, equivalent to data resulting from chromatographic separations. The alternative evaluation of very large data series based on linear regression analysis produced information equivalent to results obtained through application of PCA and CA.
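A sketch of the regression-based comparison of two long data series, with simulated profiles standing in for spectra or chromatograms: similar samples should give slope near 1, intercept near 0, and r near 1, while unrelated ones should not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 5000                                   # high-frequency acquisition points
profile = np.abs(np.sin(np.linspace(0, 30, n))) * np.exp(-np.linspace(0, 3, n))
sample_a = profile + rng.normal(0, 0.01, n)
sample_b = 0.95 * profile + 0.02 + rng.normal(0, 0.01, n)  # similar extract
sample_c = rng.permutation(profile)                        # unrelated pattern

for name, s in [("a-b", sample_b), ("a-c", sample_c)]:
    res = stats.linregress(sample_a, s)
    print(name, round(res.slope, 3), round(res.intercept, 3),
          round(res.rvalue, 3))
```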
On non-autonomous dynamical systems
NASA Astrophysics Data System (ADS)
Anzaldo-Meneses, A.
2015-04-01
In usual realistic classical dynamical systems, the Hamiltonian depends explicitly on time. In this work, a class of classical systems with time-dependent nonlinear Hamiltonians is analyzed. This type of problem allows one to find invariants by a family of Veronese maps. The motivation to develop this method results from the observation that the Poisson-Lie algebra of monomials in the coordinates and momenta is clearly defined in terms of its brackets and leads naturally to an infinite linear set of differential equations, under certain circumstances. To perform explicit analytic and numerical calculations, two examples are presented to estimate the trajectories, the first given by a nonlinear problem and the second by a quadratic Hamiltonian with three time-dependent parameters. In the nonlinear problem, the Veronese approach using jets is shown to be equivalent to a direct procedure using elliptic function identities, and linear invariants are constructed. For the second example, linear and quadratic invariants as well as stability conditions are given. Explicit solutions are also obtained for stepwise constant forces. For the quadratic Hamiltonian, an appropriate set of coordinates relates the geometric setting to that of the three-dimensional manifold of central conic sections. It is shown further that the quantum mechanical problem of scattering in a superlattice leads to mathematically equivalent equations for the wave function, if the classical time is replaced by the space coordinate along the superlattice. The mathematical method used to compute the trajectories for stepwise constant parameters can be applied to both problems; it is the standard method in quantum scattering calculations, as known for locally periodic systems including a space-dependent effective mass.
Moment method analysis of linearly tapered slot antennas
NASA Technical Reports Server (NTRS)
Koeksal, Adnan
1993-01-01
A method of moments (MOM) model for the analysis of the Linearly Tapered Slot Antenna (LTSA) is developed and implemented. The model employs an unequal size rectangular sectioning for conducting parts of the antenna. Piecewise sinusoidal basis functions are used for the expansion of conductor current. The effect of the dielectric is incorporated in the model by using equivalent volume polarization current density and solving the equivalent problem in free-space. The feed section of the antenna including the microstripline is handled rigorously in the MOM model by including slotline short circuit and microstripline currents among the unknowns. Comparison with measurements is made to demonstrate the validity of the model for both the air case and the dielectric case. Validity of the model is also verified by extending the model to handle the analysis of the skew-plate antenna and comparing the results to those of a skew-segmentation modeling results of the same structure and to available data in the literature. Variation of the radiation pattern for the air LTSA with length, height, and taper angle is investigated, and the results are tabulated. Numerical results for the effect of the dielectric thickness and permittivity are presented.
NASA Astrophysics Data System (ADS)
Ferhatoglu, Erhan; Cigeroglu, Ender; Özgüven, H. Nevzat
2018-07-01
In this paper, a new modal superposition method based on a hybrid mode shape concept is developed for the determination of steady state vibration response of nonlinear structures. The method is developed specifically for systems having nonlinearities where the stiffness of the system may take different limiting values. Stiffness variation of these nonlinear systems enables one to define different linear systems corresponding to each value of the limiting equivalent stiffness. Moreover, the response of the nonlinear system is bounded by the confinement of these linear systems. In this study, a modal superposition method utilizing novel hybrid mode shapes which are defined as linear combinations of the modal vectors of the limiting linear systems is proposed to determine periodic response of nonlinear systems. In this method the response of the nonlinear system is written in terms of hybrid modes instead of the modes of the underlying linear system. This provides decrease of the number of modes that should be retained for an accurate solution, which in turn reduces the number of nonlinear equations to be solved. In this way, computational time for response calculation is directly curtailed. In the solution, the equations of motion are converted to a set of nonlinear algebraic equations by using describing function approach, and the numerical solution is obtained by using Newton's method with arc-length continuation. The method developed is applied on two different systems: a lumped parameter model and a finite element model. Several case studies are performed and the accuracy and computational efficiency of the proposed modal superposition method with hybrid mode shapes are compared with those of the classical modal superposition method which utilizes the mode shapes of the underlying linear system.
Capelli bitableaux and Z-forms of general linear Lie superalgebras.
Brini, A; Teolis, A G
1990-01-01
The combinatorics of the enveloping algebra UQ(pl(L)) of the general linear Lie superalgebra of a finite-dimensional Z2-graded Q-vector space is studied. Three non-equivalent Z-forms of UQ(pl(L)) are introduced: one of these Z-forms is a version of the Kostant Z-form and the others are Lie algebra analogs of Rota and Stein's straightening formulae for the supersymmetric algebra Super[L P] and for its dual Super[L* P*]. The method is based on an extension of Capelli's technique of variabili ausiliarie (auxiliary variables) to algebras containing positively and negatively signed elements. PMID:11607048
Molecular electronics in pinnae of Mimosa pudica
Volkov, Alexander G; Foster, Justin C; Markin, Vladislav S
2010-01-01
Bioelectrochemical circuits operate in all plants including the sensitive plant Mimosa pudica Linn. The activation of biologically closed circuits with voltage gated ion channels can lead to various mechanical, hydrodynamical, physiological, biochemical and biophysical responses. Here the biologically closed electrochemical circuit in pinnae of Mimosa pudica is analyzed using the charged capacitor method for electrostimulation at different voltages. Also the equivalent electrical scheme of electrical signal transduction inside the plant's pinna is evaluated. These circuits remain linear at small potentials not exceeding 0.5 V. At higher potentials the circuits become strongly non-linear pointing to the opening of ion channels in plant tissues. Changing the polarity of electrodes leads to a strong rectification effect and to different kinetics of a capacitor. These effects can be caused by a redistribution of K+, Cl−, Ca2+ and H+ ions through voltage gated ion channels. The electrical properties of Mimosa pudica were investigated and equivalent electrical circuits within the pinnae were proposed to explain the experimental data. PMID:20448476
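The charged capacitor method lends itself to a compact sketch: fit the discharge to an exponential and compare time constants across starting voltages; a voltage-dependent time constant indicates the non-linear, voltage-gated regime described above. All values are invented, not measured Mimosa data.

```python
import numpy as np
from scipy.optimize import curve_fit

def discharge(t, v0, tau):
    """Charged-capacitor discharge through the tissue: V = V0 * exp(-t/tau)."""
    return v0 * np.exp(-t / tau)

t = np.linspace(0.0, 2.0, 100)                      # time (s)
rng = np.random.default_rng(7)
for v_start in (0.3, 1.5):                          # below/above ~0.5 V
    tau_true = 0.8 if v_start < 0.5 else 0.35       # channels opening -> faster
    v = discharge(t, v_start, tau_true) + rng.normal(0, 0.005, t.size)
    (v0_fit, tau_fit), _ = curve_fit(discharge, t, v, p0=(v_start, 0.5))
    print(v_start, round(tau_fit, 3))
# A tau that changes with starting voltage signals a non-ohmic circuit.
```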
NASA Astrophysics Data System (ADS)
Giaccu, Gian Felice
2018-05-01
Pre-tensioned cable braces are widely used as bracing systems in various structural typologies. This technology is fundamentally utilized for stiffening purposes in steel and timber structures. The pre-stressing force imparted to the braces provides a remarkable increase in system stiffness. On the other hand, the pre-tensioning force in the braces must be properly calibrated in order to satisfactorily meet both serviceability and ultimate limit states. The dynamic properties of these systems are, however, affected by non-linear behavior due to potential slackening of the pre-tensioned braces. In recent years the author has been working on a similar problem regarding the non-linear response of cables in cable-stayed bridges and braced structures. In the present paper a displacement-based approach is used to examine the non-linear behavior of a building system. The methodology operates through linearization and allows one to obtain an equivalent linearized frequency to approximately characterize, mode by mode, the dynamic behavior of the system. The equivalent frequency depends on the mechanical characteristics of the system, the pre-tensioning level assigned to the braces, and a characteristic vibration amplitude. The proposed approach can be used as a simplified technique, capable of linearizing the response of structural systems characterized by non-linearity induced by the slackening of pre-tensioned braces.
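The amplitude dependence can be reproduced with a toy single-degree-of-freedom oscillator whose brace goes slack beyond the pre-tension elongation; the equivalent frequency is read off the simulated period. This is a generic illustration, not the paper's displacement-based procedure, and every number is invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k_f, k_b = 1.0e4, 4.0e5, 6.0e5      # mass, frame and brace stiffness (SI)
d0 = 0.005                              # brace pre-tension elongation (m)

def restoring(u):
    """Frame spring plus a tension-only brace that slackens at u = -d0
    (static pre-tension force subtracted so u = 0 is equilibrium)."""
    return k_f * u + k_b * (max(u + d0, 0.0) - d0)

def rhs(t, y):
    u, v = y
    return [v, -restoring(u) / m]

for amp in (0.002, 0.01, 0.03):         # below / beyond the slackening limit
    sol = solve_ivp(rhs, (0, 5), [amp, 0.0], max_step=1e-3, dense_output=True)
    u = sol.sol(np.linspace(0, 5, 50001))[0]
    ups = np.where((u[:-1] < 0) & (u[1:] >= 0))[0]   # zero upcrossings
    T = np.mean(np.diff(ups)) * (5 / 50000)          # mean period (s)
    print(amp, 1.0 / T)     # equivalent frequency decreases with amplitude
```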
Modularity-like objective function in annotated networks
NASA Astrophysics Data System (ADS)
Xie, Jia-Rong; Wang, Bing-Hong
2017-12-01
We ascertain the modularity-like objective function whose optimization is equivalent to the maximum likelihood in annotated networks. We demonstrate that the modularity-like objective function is a linear combination of modularity and conditional entropy. In contrast with statistical inference methods, in our method, the influence of the metadata is adjustable; when its influence is strong enough, the metadata can be recovered. Conversely, when it is weak, the detection may correspond to another partition. Between the two, there is a transition. This paper provides a concept for expanding the scope of modularity methods.
Weighted graph cuts without eigenvectors a multilevel approach.
Dhillon, Inderjit S; Guan, Yuqiang; Kulis, Brian
2007-11-01
A variety of clustering algorithms have recently been proposed to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main methods. In this paper, we discuss an equivalence between the objective functions used in these seemingly different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent to a weighted graph clustering objective. We exploit this equivalence to develop a fast, high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria. This eliminates the need for any eigenvector computation for graph clustering problems, which can be prohibitive for very large graphs. Previous multilevel graph partitioning methods, such as Metis, have suffered from the restriction of equal-sized clusters; our multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts. Experimental results show that our multilevel algorithm outperforms a state-of-the-art spectral clustering algorithm in terms of speed, memory usage, and quality. We demonstrate that our algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis and gene network analysis.
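The equivalence exploited here is concrete enough to code: weighted kernel k-means with the graph affinity as kernel and node weights chosen appropriately optimizes a weighted graph-cut objective. A small dense-matrix sketch on toy two-blob data (real uses are sparse and multilevel; the Gaussian kernel and unit weights are illustrative choices):

```python
import numpy as np

def weighted_kernel_kmeans(K, w, labels, n_clusters, n_iter=20):
    """Weighted kernel k-means: assign each point to the cluster whose
    (implicit) weighted centroid is nearest in feature space."""
    for _ in range(n_iter):
        dist = np.full((len(w), n_clusters), np.inf)
        for c in range(n_clusters):
            idx = labels == c
            if not idx.any():
                continue                 # skip emptied clusters
            wc = w[idx]
            s = wc.sum()
            # ||phi(x_i) - m_c||^2 expanded purely in kernel entries
            dist[:, c] = (np.diag(K) - 2 * (K[:, idx] @ wc) / s
                          + (wc @ K[np.ix_(idx, idx)] @ wc) / s ** 2)
        labels = dist.argmin(axis=1)
    return labels

rng = np.random.default_rng(8)
pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
K = np.exp(-((pts[:, None] - pts[None]) ** 2).sum(-1))   # Gaussian kernel
labels = weighted_kernel_kmeans(K, np.ones(len(pts)),
                                rng.integers(0, 2, len(pts)), n_clusters=2)
print(labels)    # the two blobs separate without any eigenvector computation
```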
NASA Technical Reports Server (NTRS)
Heaslet, Max A; Lomax, Harvard
1950-01-01
Following the introduction of the linearized partial differential equation for nonsteady three-dimensional compressible flow, general methods of solution are given for the two and three-dimensional steady-state and two-dimensional unsteady-state equations. It is also pointed out that, in the absence of thickness effects, linear theory yields solutions consistent with the assumptions made when applied to lifting-surface problems for swept-back plan forms at sonic speeds. The solutions of the particular equations are determined in all cases by means of Green's theorem, and thus depend on the use of Green's equivalent layer of sources, sinks, and doublets. Improper integrals in the supersonic theory are treated by means of Hadamard's "finite part" technique.
Feedback-Equivalence of Nonlinear Systems with Applications to Power System Equations.
NASA Astrophysics Data System (ADS)
Marino, Riccardo
The key concept of the dissertation is feedback equivalence among systems affine in control. Feedback equivalence to linear systems in Brunovsky canonical form and the construction of the corresponding feedback transformation are used to: (i) design a nonlinear regulator for a detailed nonlinear model of a synchronous generator connected to an infinite bus; (ii) establish which power system network structures enjoy the feedback linearizability property and design a stabilizing control law for these networks with a constraint on the control space which comes from the use of d.c. lines. It is also shown that the feedback linearizability property allows the use of state feedback to construct a linear controllable system with a positive definite linear Hamiltonian structure for the uncontrolled part if the state space is even-dimensional; a stabilizing control law is derived for such systems. The feedback linearizability property is characterized by the involutivity of certain nested distributions for strongly accessible analytic systems; if the system is defined on a manifold M diffeomorphic to Euclidean space, it is established that the set where the property holds is a submanifold open and dense in M. If an analytic output map is defined, a set of nested involutive distributions can always be defined, which allows the introduction of an observability property that is, in some sense, the dual concept to feedback linearizability: the goal is to investigate when a nonlinear system affine in control with an analytic output map is feedback equivalent to a linear controllable and observable system. Finally, a nested involutive structure of distributions is shown to guarantee the existence of a state feedback that takes a nonlinear system affine in control to a single-input one, both feedback equivalent to linear controllable systems, preserving one controlled vector field.
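The basic feedback-transformation step is easy to illustrate on a scalar-input system already in the chain form x1' = x2, x2' = f(x) + g(x)u: choosing u = (v - f)/g renders the loop exactly linear, after which v is a standard linear design. A pendulum-style toy with invented constants:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 9.81, 1.0
f = lambda x: -a * np.sin(x[0])      # drift term
g = lambda x: b                      # control gain

def closed_loop(t, x):
    v = -4.0 * x[0] - 4.0 * x[1]     # linear design in Brunovsky coordinates
    u = (v - f(x)) / g(x)            # feedback transformation cancels f
    return [x[1], f(x) + g(x) * u]   # net dynamics: x1'' = v (exactly linear)

sol = solve_ivp(closed_loop, (0, 6), [1.0, 0.0], dense_output=True)
print(sol.sol(6.0))                  # state driven near the origin
```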
Linear and nonlinear dynamic analysis of redundant load path bearingless rotor systems
NASA Technical Reports Server (NTRS)
Murthy, V. R.
1985-01-01
The bearingless rotorcraft offers reduced weight, less complexity, and superior flying qualities. Almost all current industrial structural dynamics programs for conventional rotors, whose blades form a single load path, employ the transfer matrix method to determine natural vibration characteristics, because this method is ideally suited for one-dimensional chain-like structures. This method is extended here to multiple-load-path rotor blades without resorting to an equivalent single-load-path approximation. Unlike for conventional blades, it is necessary to introduce the axial degree of freedom into the solution process to account for the differential axial displacements in the different load paths. With the present extension, current rotor dynamics programs can be modified with relative ease to account for multiple load paths without resorting to equivalent single-load-path modeling. The results obtained by the transfer matrix method are validated by comparison with finite element solutions. A differential stiffness matrix due to blade rotation is derived to facilitate the finite element solutions.
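The transfer matrix idea in its simplest setting: propagate the state (displacement, internal force) across segments by 2x2 matrix products and find frequencies where the boundary-condition determinant vanishes. The sketch below treats axial vibration of a single uniform fixed-free rod (one load path only) and checks against the exact frequencies; the multiple-load-path extension enlarges the state vector rather than changing the recipe. Material and geometry values are illustrative.

```python
import numpy as np

E, rho, A, L = 2.1e11, 7800.0, 1e-4, 1.0    # steel rod, illustrative values

def segment_matrix(omega, length):
    """Transfer matrix for the state [u, N] (displacement, axial force)
    across a uniform segment in axial vibration."""
    beta = omega * np.sqrt(rho / E)
    c, s = np.cos(beta * length), np.sin(beta * length)
    return np.array([[c, s / (E * A * beta)],
                     [-E * A * beta * s, c]])

def boundary_residual(omega):
    # fixed-free rod: u(0) = 0 and N(L) = 0 require T[1, 1] = 0
    T = segment_matrix(omega, 0.5) @ segment_matrix(omega, 0.5)  # chain of two
    return T[1, 1]

ws = np.linspace(10.0, 60000.0, 60001)          # frequency sweep (rad/s)
r = np.array([boundary_residual(w) for w in ws])
roots = ws[:-1][np.sign(r[:-1]) != np.sign(r[1:])]
c_bar = np.sqrt(E / rho)
print(roots[:3] / (2 * np.pi))                         # found frequencies (Hz)
print([(2 * n - 1) * c_bar / (4 * L) for n in (1, 2, 3)])   # exact values
```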
Soneja, Sutyajeet; Chen, Chen; Tielsch, James M.; Katz, Joanne; Zeger, Scott L.; Checkley, William; Curriero, Frank C.; Breysse, Patrick N.
2014-01-01
Great uncertainty exists around indoor biomass burning exposure-disease relationships due to a lack of detailed exposure data in large health outcome studies. Passive nephelometers can be used to estimate high particulate matter (PM) concentrations during cooking in low-resource environments. Since passive nephelometers do not have a collection filter, they are not subject to sampler overload. Nephelometric concentration readings can, however, be biased due to particle growth in highly humid environments and differences in compositional and size-dependent aerosol characteristics. This paper explores relative humidity (RH) and gravimetric equivalency adjustment approaches for the pDR-1000 nephelometer used to assess indoor PM concentrations for a cookstove intervention trial in Nepal. Three approaches to humidity adjustment performed equivalently (similar root mean squared error). For gravimetric conversion, a new linear regression equation with log-transformed variables performed better than the traditional linear equation. In addition, gravimetric conversion equations utilizing a spline or quadratic term were examined. We propose a humidity adjustment equation encompassing the entire RH range instead of adjusting for RH above an arbitrary 60% threshold. Furthermore, we propose new integrated RH and gravimetric conversion methods because they have one response variable (gravimetric PM2.5 concentration), do not contain an RH threshold, and are straightforward. PMID:24950062
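As an illustration of the gravimetric conversion step, the sketch below fits both the traditional linear equation and a linear regression with log-transformed variables on synthetic nephelometer data and compares root mean squared errors. The data, coefficients, and the pdr/grav variable names are invented for the sketch, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
pdr = rng.uniform(50, 2000, 200)                 # nephelometer readings (ug/m^3)
grav = 0.8 * pdr**0.95 * rng.lognormal(0.0, 0.1, pdr.size)  # synthetic gravimetric truth

# Traditional linear model: grav = a + b * pdr
b_lin, a_lin = np.polyfit(pdr, grav, 1)

# Linear regression with log-transformed variables: log(grav) = a + b * log(pdr)
b_log, a_log = np.polyfit(np.log(pdr), np.log(grav), 1)

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

print("linear RMSE: ", rmse(grav, a_lin + b_lin * pdr))
print("log-log RMSE:", rmse(grav, np.exp(a_log + b_log * np.log(pdr))))
```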
Hecksel, D; Anferov, V; Fitzek, M; Shahnazi, K
2010-06-01
Conventional proton therapy facilities use double scattering nozzles, which are optimized for delivery of a few fixed field sizes. Similarly, uniform scanning nozzles are commissioned for a limited number of field sizes. However, cases invariably occur where the treatment field is significantly different from these fixed field sizes. The purpose of this work was to determine the impact of the radiation field conformity to the patient-specific collimator on the secondary neutron dose equivalent. Using a WENDI-II neutron detector, the authors experimentally investigated how the neutron dose equivalent at a particular point of interest varied with different collimator sizes, while the beam spreading was kept constant. The measurements were performed for different modes of dose delivery in proton therapy, all of which are available at the Midwest Proton Radiotherapy Institute (MPRI): Double scattering, uniform scanning delivering rectangular fields, and uniform scanning delivering circular fields. The authors also studied how the neutron dose equivalent changes when one changes the amplitudes of the scanned field for a fixed collimator size. The secondary neutron dose equivalent was found to decrease linearly with the collimator area for all methods of dose delivery. The relative values of the neutron dose equivalent for a collimator with a 5 cm diameter opening using 88 MeV protons were 1.0 for the double scattering field, 0.76 for rectangular uniform field, and 0.6 for the circular uniform field. Furthermore, when a single circle wobbling was optimized for delivery of a uniform field 5 cm in diameter, the secondary neutron dose equivalent was reduced by a factor of 6 compared to the double scattering nozzle. Additionally, when the collimator size was kept constant, the neutron dose equivalent at the given point of interest increased linearly with the area of the scanned proton beam. The results of these experiments suggest that the patient-specific collimator is a significant contributor to the secondary neutron dose equivalent to a distant organ at risk. Improving conformity of the radiation field to the patient-specific collimator can significantly reduce secondary neutron dose equivalent to the patient. Therefore, it is important to increase the number of available generic field sizes in double scattering systems as well as in uniform scanning nozzles.
NASA Astrophysics Data System (ADS)
Bisdom, Kevin; Bertotti, Giovanni; Nick, Hamidreza M.
2016-05-01
Predicting equivalent permeability in fractured reservoirs requires an understanding of the fracture network geometry and apertures. There are different methods for defining aperture, based on outcrop observations (power law scaling), fundamental mechanics (sublinear length-aperture scaling), and experiments (Barton-Bandis conductive shearing). Each method predicts heterogeneous apertures, even along single fractures (i.e., intrafracture variations), but most fractured reservoir models assume constant apertures for single fractures. We compare the relative differences in aperture and permeability predicted by the three aperture methods, where permeability is modeled in explicit fracture networks with coupled fracture-matrix flow. Aperture varies along single fractures, and geomechanical relations are used to identify which fractures are critically stressed. The aperture models are applied to real-world large-scale fracture networks. (Sub)linear length scaling predicts the largest average aperture and equivalent permeability. The Barton-Bandis aperture is smaller, predicting on average a sixfold increase compared to matrix permeability. Application of critical stress criteria results in a decrease in the fraction of open fractures. For the applied stress conditions, Coulomb predicts that 50% of the network is critically stressed, compared to 80% for Barton-Bandis peak shear. The impact of the fracture network on equivalent permeability depends on the matrix hydraulic properties: in a low-permeability matrix, intrafracture connectivity, i.e., the opening along a single fracture, controls equivalent permeability, whereas for a more permeable matrix, absolute apertures have a larger impact. Quantification of fracture flow regimes using only the ratio of fracture versus matrix permeability is insufficient, as these regimes also depend on aperture variations within fractures.
Erel, Ozcan
2004-04-01
To develop a novel colorimetric and automated direct measurement method for total antioxidant capacity (TAC). A new-generation, more stable, colored 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) radical cation (ABTS(*+)) was employed. The ABTS(*+) is decolorized by antioxidants according to their concentrations and antioxidant capacities. This change in color is measured as a change in absorbance at 660 nm. The process is applied to an automated analyzer and the assay is calibrated with Trolox. The novel assay is linear up to 6 mmol Trolox equivalent/l, its precision values are lower than 3%, and there is no interference from hemoglobin, bilirubin, EDTA, or citrate. The method is significantly correlated with the Randox total antioxidant status (TAS) assay (r = 0.897, P < 0.0001; n = 91) and with the ferric reducing ability of plasma (FRAP) assay (r = 0.863, P < 0.0001; n = 110). Serum TAC level was lower in patients with major depression (1.69 +/- 0.11 mmol Trolox equivalent/l) than in healthy subjects (1.75 +/- 0.08 mmol Trolox equivalent/l, P = 0.041). This easy, stable, reliable, sensitive, inexpensive, and fully automated method can be used to measure total antioxidant capacity.
NASA Astrophysics Data System (ADS)
Sharaf, J. M.; Saleh, H.
2015-05-01
The shielding properties of three different construction styles and building materials commonly used in Jordan were evaluated using parameters such as attenuation coefficients, equivalent atomic number, penetration depth, and energy buildup factor. The geometric progression (GP) method was used to calculate gamma-ray energy buildup factors of limestone, concrete, bricks, cement plaster, and air for the energy range 0.05-3 MeV and penetration depths up to 40 mfp. It was observed that among the examined building materials, limestone offers the highest values for equivalent atomic number and linear attenuation coefficient and the lowest values for penetration depth and energy buildup factor. The obtained buildup factors were used as basic data to establish the total equivalent energy buildup factors for three different multilayer construction styles using an iterative method. The three styles were then compared in terms of fractional transmission of photons at different incident photon energies. It is concluded that, in case of a nuclear accident, large multistory buildings with five-layer exterior walls (style A) would attenuate radiation more effectively than small dwellings of any construction style.
Stochastic Stability of Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
Modeling Percolation in Polymer Nanocomposites by Stochastic Microstructuring
Soto, Matias; Esteva, Milton; Martínez-Romero, Oscar; Baez, Jesús; Elías-Zúñiga, Alex
2015-01-01
A methodology was developed for the prediction of the electrical properties of carbon nanotube-polymer nanocomposites via Monte Carlo computational simulations. A two-dimensional microstructure that takes into account waviness, fiber length and diameter distributions is used as a representative volume element. Fiber interactions in the microstructure are identified and then modeled as an equivalent electrical circuit, assuming one-third metallic and two-thirds semiconductor nanotubes. Tunneling paths in the microstructure are also modeled as electrical resistors, and crossing fibers are accounted for by assuming a contact resistance associated with them. The equivalent resistor network is then converted into a set of linear equations using nodal voltage analysis, which is then solved by means of the Gauss–Jordan elimination method. Nodal voltages are obtained for the microstructure, from which the percolation probability, equivalent resistance and conductivity are calculated. Percolation probability curves and electrical conductivity values are compared to those found in the literature. PMID:28793594
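The nodal-voltage step lends itself to a compact sketch. Below, an assumed list of resistors (standing in for fiber contacts and tunneling paths) is stamped into a conductance matrix and the resulting linear system is solved; a library solver stands in here for the Gauss-Jordan elimination named in the abstract. The network topology and resistance values are invented for illustration.

```python
import numpy as np

# (node_a, node_b, resistance in ohms); node 0 is the grounded electrode.
resistors = [(0, 1, 1e3), (1, 2, 5e3), (1, 3, 2e3), (2, 3, 1e4), (3, 4, 1e3)]
n_nodes = 5
source_node, i_in = 4, 1.0            # drive 1 A through the network

# Stamp each conductance into the nodal conductance matrix G.
G = np.zeros((n_nodes, n_nodes))
for a, b, r in resistors:
    g = 1.0 / r
    G[a, a] += g
    G[b, b] += g
    G[a, b] -= g
    G[b, a] -= g

i = np.zeros(n_nodes)
i[source_node] = i_in

# Ground node 0 by deleting its row and column, then solve G v = i.
v = np.zeros(n_nodes)
v[1:] = np.linalg.solve(G[1:, 1:], i[1:])

print("node voltages (V):", v)
print("equivalent resistance (ohm):", v[source_node] / i_in)
```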
THE EQUIVALENCE OF AGE IN ANIMALS
Brody, Samuel; Ragsdale, Arthur C.
1922-01-01
1. A method of plotting growth curves is presented which is considered more useful than the usual method in bringing out a number of important phenomena such as the equivalence of age in different animals, differences in the shape and duration of corresponding growth cycles in different animals, and also in determining the ages of maxima without resorting to complicated mathematical computations. 2. It is suggested that after the third cycle is past, the conceptional age of the maximum of the third cycle may be taken as the age of reference for estimating the equivalent physiological ages in different animals. Before the age of the third cycle, the maxima of the second and first cycles are most conveniently used as points of reference. 3. It is shown that the product of the conceptional age of the maximum of the third cycle by 13 gives a value which is, with the possible exception of man, very near to the normal duration of life of animals under the most favorable conditions of life. In other words, the equivalent physiological ages in different animals bear an approximately constant linear relation to the duration of their growth periods. 4. Attention is called to certain differences in the shape and duration of the corresponding growth cycles in different animals and to the effect of sex on these cycles. PMID:19871989
Nonlinear effects of stretch on the flame front propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halter, F.; Tahtouh, T.; Mounaim-Rousselle, C.
2010-10-15
In all experimental configurations, flames are affected by stretch (curvature and/or strain rate). To obtain the unstretched flame speed, independent of the experimental configuration, the measured flame speed needs to be corrected. Usually, a linear relationship linking the flame speed to stretch is used. However, this linear relation is the result of several assumptions, which may be incorrect. The present study aims to evaluate the error in the laminar burning speed induced by using the traditional linear methodology. Experiments were performed in a closed vessel at atmospheric pressure for two different mixtures: methane/air and iso-octane/air. The initial temperatures were 300 K and 400 K for methane and iso-octane, respectively. Both methodologies (linear and nonlinear) are applied, and results in terms of laminar speed and burned-gas Markstein length are compared. Methane and iso-octane were chosen because they present opposite evolutions of the Markstein length as the equivalence ratio is increased. The error induced by the linear methodology is evaluated, taking the nonlinear methodology as the reference. It is observed that the use of the linear methodology starts to induce substantial errors above an equivalence ratio of 1.1 for methane/air mixtures and below an equivalence ratio of 1 for iso-octane/air mixtures. One solution to increase the accuracy of the linear methodology for these critical cases consists in reducing the number of points used in the linear methodology by increasing the initial flame radius used.
Joshi, Sachin; Olsen, Daniel B; Dumitrescu, Cosmin; Puzinauskas, Paulius V; Yalin, Azer P
2009-05-01
In this contribution we present the first demonstration of simultaneous use of laser sparks for engine ignition and laser-induced breakdown spectroscopy (LIBS) measurements of in-cylinder equivalence ratios. A 1064 nm neodymium yttrium aluminum garnet (Nd:YAG) laser beam is used with an optical spark plug to ignite a single-cylinder natural gas engine. The optical emission from the combustion-initiating laser spark is collected through the optical spark plug, and cycle-by-cycle spectra are analyzed for the H(alpha) (656 nm), O (777 nm), and N (742 nm, 744 nm, and 746 nm) neutral atomic lines. The line area ratios of H(alpha)/O(777), H(alpha)/N(746), and H(alpha)/N(tot) (where N(tot) is the sum of areas of the aforementioned N lines) are correlated with equivalence ratios measured by a wide-band universal exhaust gas oxygen (UEGO) sensor. Experiments are performed for input laser energy levels of 21 mJ and 26 mJ, compression ratios of 9 and 11, and equivalence ratios between 0.6 and 0.95. The results show a linear correlation (R(2) > 0.99) of line intensity ratio with equivalence ratio, thereby suggesting an engine diagnostic method for cylinder-resolved equivalence ratio measurements.
A Comparison of Multivariable Control Design Techniques for a Turbofan Engine Control
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Watts, Stephen R.
1995-01-01
This paper compares two previously published design procedures for two different multivariable control design techniques as applied to a linear model of a jet engine. The two multivariable control design techniques compared were Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) and H-Infinity synthesis. The two control design techniques were used with specific previously published design procedures to synthesize controls which would provide equivalent closed-loop frequency responses for the primary control loops while assuring adequate loop decoupling. The resulting controllers were then reduced in order to minimize the programming and data storage requirements for a typical implementation. The reduced-order linear controllers designed by each method were combined with the linear model of an advanced turbofan engine, and the system performance was evaluated for the continuous linear system. Included in the performance analysis are the resulting frequency and transient responses as well as actuator usage and rate capability for each design method. The controls were also analyzed for robustness with respect to structured uncertainties in the unmodeled system dynamics. The two controls were then compared for performance capability and hardware implementation issues.
Space Radiation Organ Doses for Astronauts on Past and Future Missions
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.
2007-01-01
We review methods and data used for determining astronaut organ dose equivalents on past space missions, including Apollo, Skylab, Space Shuttle, NASA-Mir, and the International Space Station (ISS). Expectations for future lunar missions are also described. Physical measurements of space radiation include the absorbed dose, dose equivalent, and linear energy transfer (LET) spectra, or a related quantity, the lineal energy (y) spectra, which are measured by a tissue equivalent proportional counter (TEPC). These data are used in conjunction with space radiation transport models to project organ-specific doses used in cancer and other risk projection models. Biodosimetry data from Mir, STS, and ISS missions provide an alternative estimate of organ dose equivalents based on chromosome aberrations. The physical environments inside spacecraft are currently well understood, with errors in organ dose projections estimated as less than plus or minus 15%; however, understanding the biological risks from space radiation remains a difficult problem because of the many radiation types, including protons, heavy ions, and secondary neutrons, for which there are no human data to estimate risks. The accuracy of the projections of organ dose equivalents described here must be supplemented with research on the health risks of space exposure to properly assess crew safety for exploration missions.
NASA Astrophysics Data System (ADS)
Miyake, Susumu; Kasashima, Takashi; Yamazaki, Masato; Okimura, Yasuyuki; Nagata, Hajime; Hosaka, Hiroshi; Morita, Takeshi
2018-07-01
The high power properties of piezoelectric transducers were evaluated considering a complex nonlinear elastic constant. The piezoelectric LCR equivalent circuit with nonlinear circuit parameters was utilized to measure them. The deformed admittance curve of piezoelectric transducers was measured under a high stress and the complex nonlinear elastic constant was calculated by curve fitting. Transducers with various piezoelectric materials, Pb(Zr,Ti)O3, (K,Na)NbO3, and Ba(Zr,Ti)O3–(Ba,Ca)TiO3, were investigated by the proposed method. The measured complex nonlinear elastic constant strongly depends on the linear elastic and piezoelectric constants. This relationship indicates that piezoelectric high power properties can be controlled by modifying the linear elastic and piezoelectric constants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yongge; Xu, Wei, E-mail: weixu@nwpu.edu.cn; Yang, Guidong
Poisson white noise, as a typical non-Gaussian excitation, has attracted much attention recently. However, little work has addressed stochastic systems with fractional derivatives under Poisson white noise excitation. This paper investigates the stationary response of a class of quasi-linear systems with a fractional derivative excited by Poisson white noise. The equivalent stochastic system of the original stochastic system is obtained. Then, approximate stationary solutions are obtained with the help of the perturbation method. Finally, two typical examples are discussed in detail to demonstrate the effectiveness of the proposed method. The analysis also shows that the fractional order and the fractional coefficient significantly affect the responses of stochastic systems with fractional derivatives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em, E-mail: george_karniadakis@brown.edu
2014-08-01
The Karhunen–Loève (KL) decomposition provides a low-dimensional representation for random fields, as it is optimal in the mean square sense. Although for many stochastic systems of practical interest, described by stochastic partial differential equations (SPDEs), solutions possess this low-dimensional character, they also have a strongly time-dependent form, and to this end a fixed-in-time basis may not describe the solution in an efficient way. Motivated by this limitation of the standard KL expansion, Sapsis and Lermusiaux (2009) [26] developed the dynamically orthogonal (DO) field equations, which allow for the simultaneous evolution of both the spatial basis where uncertainty 'lives' and the stochastic characteristics of uncertainty. Recently, Cheng et al. (2013) [28] introduced an alternative approach, the bi-orthogonal (BO) method, which performs exactly the same tasks, i.e., it evolves the spatial basis and the stochastic characteristics of uncertainty. In the current work we examine the relation of the two approaches, and we prove theoretically and illustrate numerically their equivalence, in the sense that one method is an exact reformulation of the other. We show this by deriving a linear and invertible transformation matrix, described by a matrix differential equation, that connects the BO and the DO solutions. We also examine a pathology of the BO equations that occurs when two eigenvalues of the solution cross, resulting in an instantaneous, infinite-speed, internal rotation of the computed spatial basis. We demonstrate that despite the instantaneous duration of the singularity, this has important implications for the numerical performance of the BO approach. On the other hand, it is observed that the BO method is more stable in nonlinear problems involving a relatively large number of modes. Several examples, linear and nonlinear, are presented to illustrate the DO and BO methods as well as their equivalence.
NASA Technical Reports Server (NTRS)
Tulintseff, A. N.
1993-01-01
Printed dipole elements and their complement, linear slots, are elementary radiators that have found use in low-profile antenna arrays. Low-profile antenna arrays, in addition to their small size and low weight characteristics, offer the potential advantage of low-cost, high-volume production with easy integration with active integrated circuit components. The design of such arrays requires that the radiation and impedance characteristics of the radiating elements be known. The FDTD (Finite-Difference Time-Domain) method is a general, straightforward implementation of Maxwell's equations and offers a relatively simple way of analyzing both printed dipole and slot elements. Investigated in this work is the application of the FDTD method to the analysis of printed dipole and slot elements transversely coupled to an infinite transmission line in a multilayered configuration. Such dipole and slot elements may be used in dipole and slot series-fed-type linear arrays, where element offsets and interelement line lengths are used to obtain the desired amplitude distribution and beam direction, respectively. The design of such arrays is achieved using transmission line theory with equivalent circuit models for the radiating elements. In an equivalent circuit model, the dipole represents a shunt impedance to the transmission line, where the impedance is a function of dipole offset, length, and width. Similarly, the slot represents a series impedance to the transmission line. The FDTD method is applied to single dipole and slot elements transversely coupled to an infinite microstrip line using a fixed rectangular grid with Mur's second-order absorbing boundary conditions. Frequency-dependent circuit and scattering parameters are obtained by saving desired time-domain quantities and using the Fourier transform. A Gaussian pulse excitation is applied to the microstrip transmission line, where the resulting reflected signal due to the presence of the radiating element is used to determine the equivalent element impedance.
On the equivalence of Gaussian elimination and Gauss-Jordan reduction in solving linear equations
NASA Technical Reports Server (NTRS)
Tsao, Nai-Kuan
1989-01-01
A novel general approach to round-off error analysis using the error complexity concepts is described. This is applied to the analysis of the Gaussian Elimination and Gauss-Jordan scheme for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.
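A minimal numerical check in the spirit of that comparison: both schemes, implemented with partial pivoting, solve the same random system, and their answers are compared against a LAPACK reference. This illustrates the two algorithms only; it does not reproduce the paper's error complexity analysis.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Forward elimination with partial pivoting, then back-substitution."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def gauss_jordan(A, b):
    """Full reduction of the augmented matrix; no back-substitution needed."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, -1]

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x_ref = np.linalg.solve(A, b)                 # LAPACK reference
for name, solver in [("Gaussian elimination", gaussian_elimination),
                     ("Gauss-Jordan        ", gauss_jordan)]:
    print(name, "max deviation:", np.max(np.abs(solver(A, b) - x_ref)))
```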
Assessing Measurement Equivalence in Ordered-Categorical Data
ERIC Educational Resources Information Center
Elosua, Paula
2011-01-01
Assessing measurement equivalence in the framework of the common factor linear models (CFL) is known as factorial invariance. This methodology is used to evaluate the equivalence among the parameters of a measurement model among different groups. However, when dichotomous, Likert, or ordered responses are used, one of the assumptions of the CFL is…
Rebelo, M J; Rego, R; Ferreira, M; Oliveira, M C
2013-11-01
A comparative study of the antioxidant capacity and polyphenol content of Douro wines by chemical (ABTS and Folin-Ciocalteu) and electrochemical methods (cyclic voltammetry and differential pulse voltammetry) was performed. A non-linear correlation between cyclic voltammetric results and ABTS or Folin-Ciocalteu data was obtained when all types of wines (white, muscatel, ruby, tawny and red wines) were grouped together in the same correlation plot. In contrast, a very good linear correlation was observed between the electrochemical antioxidant capacity determined by differential pulse voltammetry and the radical scavenging activity of ABTS. It was also found that the antioxidant capacity of wines evaluated by the electrochemical methods (expressed as gallic acid equivalents) depends on the background electrolyte of the gallic acid standards, the type of electrochemical signal (current or charge) and the electrochemical technique. Copyright © 2013 Elsevier Ltd. All rights reserved.
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives in each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter, in comparison to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
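The stated BLUP-ridge equivalence admits a short numerical check under simplifying assumptions (independent subjects, identity error covariance; the dimensions and variance components below are invented): with random SNP effects b ~ N(0, sigma_b^2 I) and noise ~ N(0, sigma_e^2 I), the BLUP of b coincides with the ridge estimator with penalty lambda = sigma_e^2 / sigma_b^2.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 10
Z = rng.standard_normal((n, p))                  # SNP design matrix
y = Z @ rng.standard_normal(p) * 0.3 + rng.standard_normal(n)

sigma_b2, sigma_e2 = 0.09, 1.0
lam = sigma_e2 / sigma_b2

# Ridge estimator: (Z'Z + lam I)^-1 Z'y
b_ridge = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

# BLUP: E[b | y] = sigma_b2 Z' (sigma_b2 Z Z' + sigma_e2 I)^-1 y
V = sigma_b2 * (Z @ Z.T) + sigma_e2 * np.eye(n)
b_blup = sigma_b2 * Z.T @ np.linalg.solve(V, y)

print("max |ridge - BLUP|:", np.max(np.abs(b_ridge - b_blup)))  # ~ machine precision
```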
Complexity-reduced implementations of complete and null-space-based linear discriminant analysis.
Lu, Gui-Fu; Zheng, Wenming
2013-10-01
Dimensionality reduction has become an important data preprocessing step in many applications. Linear discriminant analysis (LDA) is one of the most well-known dimensionality reduction methods. However, classical LDA cannot be used directly in the small sample size (SSS) problem, where the within-class scatter matrix is singular. In the past, many generalized LDA methods have been reported to address the SSS problem. Among these methods, complete linear discriminant analysis (CLDA) and null-space-based LDA (NLDA) provide good performance. The existing implementations of CLDA are computationally expensive. In this paper, we propose a new and fast implementation of CLDA. Our proposed implementation of CLDA, which is the most efficient one, is equivalent to the existing implementations of CLDA in theory. Since CLDA is an extension of null-space-based LDA (NLDA), our implementation of CLDA also provides a fast implementation of NLDA. Experiments on some real-world data sets demonstrate the effectiveness of our proposed new CLDA and NLDA algorithms. Copyright © 2013 Elsevier Ltd. All rights reserved.
Wave propagation in equivalent continuums representing truss lattice materials
Messner, Mark C.; Barham, Matthew I.; Kumar, Mukul; ...
2015-07-29
Stiffness scales linearly with density in stretch-dominated lattice meta-materials, offering the possibility of very light yet very stiff structures. Current additive manufacturing techniques can assemble structures from lattice materials, but the design of such structures will require accurate, efficient simulation methods. Equivalent continuum models have several advantages over discrete truss models of stretch-dominated lattices, including computational efficiency and ease of model construction. However, the development of an equivalent model suitable for representing the dynamic response of a periodic truss in the small deformation regime is complicated by microinertial effects. This study derives a dynamic equivalent continuum model for periodic truss structures suitable for representing long-wavelength wave propagation and verifies it against the full Bloch wave theory and detailed finite element simulations. The model must incorporate microinertial effects to accurately reproduce long-wavelength characteristics of the response, such as anisotropic elastic soundspeeds. Finally, the formulation presented here also improves upon previous work by preserving equilibrium at truss joints for simple lattices and by improving numerical stability by eliminating vertices in the effective yield surface.
Two conditions for equivalence of 0-norm solution and 1-norm solution in sparse representation.
Li, Yuanqing; Amari, Shun-Ichi
2010-07-01
In sparse representation, two important sparse solutions, the 0-norm and 1-norm solutions, have been receiving much attention. The 0-norm solution is the sparsest; however, it is not easy to obtain. Although the 1-norm solution may not be the sparsest, it can be easily obtained by linear programming. In many cases, the 0-norm solution can be obtained through finding the 1-norm solution. Many discussions exist on the equivalence of the two sparse solutions. This paper analyzes two conditions for the equivalence of the two sparse solutions. The first condition is necessary and sufficient, but difficult to verify. The second condition is necessary but not sufficient, yet easy to verify. In this paper, we analyze the second condition within the stochastic framework and propose a variant. We then prove that the equivalence of the two sparse solutions holds with high probability under the variant of the second condition. Furthermore, in the limit case where the 0-norm solution is extremely sparse, the second condition is also a sufficient condition with probability 1.
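A hedged sketch of the linear programming route to the 1-norm solution: min ||x||_1 subject to Ax = y is recast with the standard split x = u - w, u, w >= 0. The sizes and data below are arbitrary; with a sufficiently sparse x, the recovered 1-norm solution typically coincides with the 0-norm solution, as the abstract discusses.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m, n, k = 20, 50, 3                       # measurements, dimension, sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

c = np.ones(2 * n)                        # minimize sum(u) + sum(w) = ||x||_1
A_eq = np.hstack([A, -A])                 # A (u - w) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_l1 = res.x[:n] - res.x[n:]

print("max recovery error:", np.max(np.abs(x_l1 - x_true)))
```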
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, N; Knutson, N; Schmidt, M
Purpose: To verify a method used to automatically acquire jaw, MLC, collimator and couch star shots for a Varian TrueBeam linear accelerator utilizing Developer Mode and an Electronic Portal Imaging Device (EPID). Methods: An XML script was written to automate motion of the jaws, MLC, collimator and couch in TrueBeam Developer Mode (TBDM) to acquire star shot measurements. The XML script also dictates MV imaging parameters to facilitate automatic acquisition and recording of integrated EPID images. Since couch star shot measurements cannot be acquired using a combination of EPID and jaw/MLC collimation alone due to the fixed imager geometry, a method utilizing a 5 mm wide steel ruler placed on the table and centered within a 15×15 cm2 open field to produce a surrogate of the narrow field aperture was investigated. Four individual star shot measurements (X jaw, Y jaw, MLC and couch) were obtained using our proposed as well as the traditional film-based method. Integrated EPID images and scanned measurement films were analyzed and compared. Results: Star shot (X jaw, Y jaw, MLC and couch) measurements were obtained in a single 5-minute delivery using the TBDM XML script method, compared to 60 minutes for the equivalent traditional film measurements. Analysis of the images and films demonstrated comparable isocentricity results, agreeing within 0.3 mm of each other. Conclusion: The presented automatic approach to acquiring star shot measurements using TBDM and EPID has proven to be more efficient than the traditional film approach, with equivalent results.
Estimating linear-nonlinear models using Rényi divergences
Kouh, Minjoon; Sharpee, Tatyana O.
2009-01-01
This paper compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramér-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data. PMID:19568981
Yan, Liang; Peng, Juanjuan; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-01
This paper proposes a novel permanent magnet linear motor possessing two movers and one stator. The two movers are isolated and can interact with the stator poles to generate independent forces and motions. Compared with a conventional multiple-motor driving system, it helps to increase system compactness and thus improve power density and working efficiency. The magnetic field distribution is obtained by using the equivalent magnetic circuit method. Following that, the formulation of the force output considering armature reaction is carried out. Then the inductances are analyzed with the finite element method to investigate the relationships between the two movers. It is found that the mutual inductances are nearly equal to zero, and thus the interaction between the two movers is negligible. A research prototype of the linear motor and an apparatus for measuring thrust force have been developed. Both numerical computation and experimental measurement are conducted to validate the analytical model of the thrust force. Comparison shows that the analytical model matches the numerical and experimental results well.
Numerical Solution of Systems of Loaded Ordinary Differential Equations with Multipoint Conditions
NASA Astrophysics Data System (ADS)
Assanova, A. T.; Imanchiyev, A. E.; Kadirbayeva, Zh. M.
2018-04-01
A system of loaded ordinary differential equations with multipoint conditions is considered. The problem under study is reduced to an equivalent boundary value problem for a system of ordinary differential equations with parameters. A system of linear algebraic equations for the parameters is constructed using the matrices of the loaded terms and the multipoint condition. The conditions for the unique solvability and well-posedness of the original problem are established in terms of the matrix made up of the coefficients of the system of linear algebraic equations. The coefficients and the right-hand side of the constructed system are determined by solving Cauchy problems for linear ordinary differential equations. The solutions of the system are found in terms of the values of the desired function at the initial points of subintervals. The parametrization method is numerically implemented using the fourth-order accurate Runge-Kutta method as applied to the Cauchy problems for ordinary differential equations. The performance of the constructed numerical algorithms is illustrated by examples.
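For reference, a generic classical fourth-order Runge-Kutta integrator of the kind used for the auxiliary Cauchy problems is sketched below; the test system x' = Ax + f(t) is an invented stand-in for the equations solved on each subinterval, not the paper's system.

```python
import numpy as np

def rk4(f, t0, x0, t1, n_steps=100):
    """Integrate x' = f(t, x) from t0 to t1 with the classical RK4 scheme."""
    h = (t1 - t0) / n_steps
    t, x = t0, np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
f = lambda t, x: A @ x + np.array([0.0, np.sin(t)])
print(rk4(f, 0.0, [1.0, 0.0], 1.0))   # O(h^4)-accurate state at t = 1
```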
Impacts analysis of car following models considering variable vehicular gap policies
NASA Astrophysics Data System (ADS)
Xin, Qi; Yang, Nan; Fu, Rui; Yu, Shaowei; Shi, Zhongke
2018-07-01
Because of the important role they play in vehicles' adaptive cruise control systems, variable vehicular gap policies were incorporated into the full velocity difference model (FVDM) to investigate traffic flow properties. In this paper, two new car-following models were put forward by taking the constant time headway (CTH) policy and the variable time headway (VTH) policy into the optimal velocity function, separately. Through steady-state analysis of the new models, an equivalent optimal velocity function was defined. To determine the linear stability conditions of the new models, we introduce equivalent expressions for the safe vehicular gap, and then apply small-amplitude perturbation analysis and long-wave expansion techniques to obtain the new models' linear stability conditions. Additionally, first-order approximate solutions of the new models were derived in the stable region by transforming the models into typical Burgers partial differential equations with the reductive perturbation method. The FVDM-based numerical simulations indicate that variable vehicular gap policies with proper parameters directly contribute to improving the stability of traffic flows and avoiding unstable traffic phenomena.
What is the best method for assessing lower limb force-velocity relationship?
Giroux, C; Rabita, G; Chollet, D; Guilhem, G
2015-02-01
This study determined the concurrent validity and reliability of force, velocity and power measurements provided by accelerometry, a linear position transducer and Samozino's method during loaded squat jumps. 17 subjects performed squat jumps on 2 separate occasions in 7 loading conditions (0-60% of the maximal concentric load). Force, velocity and power patterns were averaged over the push-off phase using accelerometry, the linear position transducer and a method based on key-position measurements during the squat jump, and compared to force plate measurements. Concurrent validity analyses indicated very good agreement with the reference method (CV=6.4-14.5%). Comparison of the force, velocity and power patterns confirmed the agreement, with slight differences for high-velocity movements. The validity of the measurements was equivalent for all tested methods (r=0.87-0.98). Bland-Altman plots showed a lower agreement for velocity and power compared to force. Mean force, velocity and power were reliable for all methods (ICC=0.84-0.99), especially for Samozino's method (CV=2.7-8.6%). Our findings show that the present methods are valid and reliable in different loading conditions and permit between-session comparisons and characterization of training-induced effects. While the linear position transducer and accelerometer allow for examining the whole time-course of kinetic patterns, Samozino's method benefits from better reliability and ease of processing. © Georg Thieme Verlag KG Stuttgart · New York.
A computational algorithm for spacecraft control and momentum management
NASA Technical Reports Server (NTRS)
Dzielski, John; Bergmann, Edward; Paradiso, Joseph
1990-01-01
Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.
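A minimal single-input sketch of the feedback linearization idea underlying such algorithms, on an invented system: for x1' = x2, x2' = f(x) + g(x) u with g nonvanishing, the feedback u = (v - f(x)) / g(x) renders the closed loop the linear double integrator in Brunovsky form, after which a linear law in v suffices. The dynamics f, g and the gains are assumptions for illustration only.

```python
import numpy as np

f = lambda x: -np.sin(x[0]) - 0.5 * x[1] ** 3   # assumed nonlinear drift
g = lambda x: 2.0 + np.cos(x[0])                # control gain, never zero

K = np.array([4.0, 4.0])                        # places both closed-loop poles at -2

def control(x):
    v = -K @ x                                  # linear law in the new coordinates
    return (v - f(x)) / g(x)                    # cancel the nonlinearity

# Forward-Euler simulation of the stabilized closed loop
x, dt = np.array([1.0, 0.0]), 1e-3
for _ in range(8000):
    u = control(x)
    x = x + dt * np.array([x[1], f(x) + g(x) * u])
print("state after 8 s:", x)                    # decays toward the origin
```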
Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods
NASA Astrophysics Data System (ADS)
Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong
2008-12-01
Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kok, H. Petra, E-mail: H.P.Kok@amc.uva.nl; Crezee, Johannes; Franken, Nicolaas A.P.
2014-03-01
Purpose: To develop a method to quantify the therapeutic effect of radiosensitization by hyperthermia; to this end, a numerical method was proposed to convert radiation therapy dose distributions with hyperthermia to equivalent dose distributions without hyperthermia. Methods and Materials: Clinical intensity modulated radiation therapy plans were created for 15 prostate cancer cases. To simulate a clinically relevant heterogeneous temperature distribution, hyperthermia treatment planning was performed for heating with the AMC-8 system. The temperature-dependent parameters α (Gy⁻¹) and β (Gy⁻²) of the linear-quadratic model for prostate cancer were estimated from the literature. No thermal enhancement was assumed for normal tissue. The intensity modulated radiation therapy plans and temperature distributions were exported to our in-house-developed radiation therapy treatment planning system, APlan, and equivalent dose distributions without hyperthermia were calculated voxel by voxel using the linear-quadratic model. Results: The planned average tumor temperatures T90, T50, and T10 in the planning target volume were 40.5°C, 41.6°C, and 42.4°C, respectively. The planned minimum, mean, and maximum radiation therapy doses were 62.9 Gy, 76.0 Gy, and 81.0 Gy, respectively. Adding hyperthermia yielded an equivalent dose distribution with an extended 95% isodose level. The equivalent minimum, mean, and maximum doses reflecting the radiosensitization by hyperthermia were 70.3 Gy, 86.3 Gy, and 93.6 Gy, respectively, for a linear increase of α with temperature. This can be considered similar to a dose escalation with a substantial increase in tumor control probability for high-risk prostate carcinoma. Conclusion: A model to quantify the effect of combined radiation therapy and hyperthermia in terms of equivalent dose distributions was presented. This model is particularly instructive for estimating the potential effects of interaction of different treatment modalities.
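The voxelwise conversion admits a compact sketch under strong simplifications (single dose per voxel, α rising linearly with temperature, no thermal enhancement of β; all parameter values below are illustrative, not the paper's): the biological effect with hyperthermia, E = α(T)·d + β·d², is equated to α₃₇·D_eq + β·D_eq² and solved for the equivalent dose D_eq.

```python
import numpy as np

alpha37, beta = 0.15, 0.05          # Gy^-1, Gy^-2 (illustrative values only)

def alpha_at(T):
    """Assumed linear thermal enhancement of alpha above 37 C."""
    return alpha37 * (1.0 + 0.5 * np.clip(T - 37.0, 0.0, None))

def equivalent_dose(d, T):
    E = alpha_at(T) * d + beta * d ** 2
    # positive root of: beta * Deq^2 + alpha37 * Deq - E = 0
    return (-alpha37 + np.sqrt(alpha37 ** 2 + 4.0 * beta * E)) / (2.0 * beta)

dose = np.array([2.0, 2.0, 2.0])    # Gy per voxel
temp = np.array([37.0, 41.0, 43.0]) # degrees C per voxel
print(equivalent_dose(dose, temp))  # 2 Gy at 37 C; larger where heated
```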
Circuit topology of proteins and nucleic acids.
Mashaghi, Alireza; van Wijk, Roeland J; Tans, Sander J
2014-09-02
Folded biomolecules display a bewildering structural complexity and diversity. They have therefore been analyzed in terms of generic topological features. For instance, folded proteins may be knotted, have beta-strands arranged into a Greek-key motif, or display high contact order. In this perspective, we present a method to formally describe the topology of all folded linear chains and hence provide a general classification and analysis framework for a range of biomolecules. Moreover, by identifying the fundamental rules that intrachain contacts must obey, the method establishes the topological constraints of folded linear chains. We also briefly illustrate how this circuit topology notion can be applied to study the equivalence of folded chains, the engineering of artificial RNA structures and DNA origami, the topological structure of genomes, and the role of topology in protein folding. Copyright © 2014 Elsevier Ltd. All rights reserved.
Yang, Qing; Fan, Liu-Yin; Huang, Shan-Sheng; Zhang, Wei; Cao, Cheng-Xi
2011-04-01
In this paper, we developed a novel method of acid-base titration, viz. the electromigration acid-base titration (EABT), via a moving neutralization boundary (MNB). With HCl and NaOH as the model strong acid and base, respectively, we conducted experiments on the EABT via the method of the moving neutralization boundary for the first time. The experiments revealed that (i) the concentration of agarose gel, the voltage used and the content of background electrolyte (KCl) had an evident influence on the boundary movement; (ii) the movement length was a function of the running time under constant acid and base concentrations; and (iii) there was good linearity between the length and the natural logarithmic concentration of HCl under the optimized conditions, and this linearity could be used to detect the concentration of acid. The experiments further manifested that (i) the RSD values of intra-day and inter-day runs were less than 1.59 and 3.76%, respectively, indicating precision and stability similar to capillary electrophoresis or HPLC; (ii) indicators with different pK(a) values had no obvious effect on the EABT, in contrast to their strong influence on the judgment of the equivalence point in classic titration; and (iii) a constant equivalence-point titration always existed in the EABT, unlike in classic volumetric analysis. Additionally, the EABT could be put to good use for the determination of actual acid concentrations. The experimental results achieved herein provide new general guidance for the development of classic volumetric analysis and element (e.g. nitrogen) content analysis in protein chemistry. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Calculative techniques for transonic flows about certain classes of wing body combinations
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Spreiter, J. R.
1972-01-01
Procedures based on the method of local linearization and the transonic equivalence rule were developed for predicting properties of transonic flows about certain classes of wing-body combinations. The procedures are applicable to transonic flows with free-stream Mach numbers near one, below the lower critical value, and above the upper critical value. Theoretical results are presented for surface and flow-field pressure distributions for both lifting and nonlifting situations.
An incremental strategy for calculating consistent discrete CFD sensitivity derivatives
NASA Technical Reports Server (NTRS)
Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.
1992-01-01
In this preliminary study involving advanced computational fluid dynamics (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
Response of a tissue equivalent proportional counter to neutrons
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Robbins, D. E.; Gibbons, F.; Braby, L. A.
2002-01-01
The absorbed dose as a function of lineal energy was measured at the CERN-EC Reference-field Facility (CERF) using a 512-channel tissue equivalent proportional counter (TEPC), and the neutron dose equivalent response was evaluated. Although there are some differences, the measured dose equivalent is in agreement with that measured by the 16-channel HANDI tissue equivalent counter. A comparison of the TEPC measurements with those made by a silicon solid-state detector for low linear energy transfer particles produced by the same beam is presented. The measurements show that about 4% of the dose equivalent is delivered by particles heavier than protons generated in the conducting tissue equivalent plastic. © 2002 Elsevier Science Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tang, Jiayu; Kayo, Issha; Takada, Masahiro
2011-09-01
We develop a maximum-likelihood-based method of reconstructing the band powers of the density and velocity power spectra at each wavenumber bin from the measured clustering features of galaxies in redshift space, including marginalization over uncertainties inherent in the small-scale, non-linear redshift distortion, the Fingers-of-God (FoG) effect. The reconstruction can be done assuming that the density and velocity power spectra contribute to the redshift-space power spectrum with different angular modulations μ²ⁿ (n = 0, 1, 2) and that the model FoG effect enters as a multiplicative function in the redshift-space spectrum. By using N-body simulations and the halo catalogues, we test our method by comparing the reconstructed power spectra with the spectra directly measured from the simulations. For the spectrum of μ⁰, or equivalently the density power spectrum Pδδ(k), our method recovers the amplitudes to an accuracy of a few per cent up to k ≃ 0.3 h Mpc⁻¹ for both dark matter and haloes. For the power spectrum of μ², which is equivalent to the density-velocity power spectrum Pδθ(k) in the linear regime, our method can recover, within the statistical errors, the input power spectrum for dark matter up to k ≃ 0.2 h Mpc⁻¹ and at both redshifts z = 0 and 1, provided an adequate FoG model is employed in the marginalization. However, for the halo spectrum, which is least affected by the FoG effect, the reconstructed spectrum shows greater amplitudes than the spectrum Pδθ(k) inferred from the simulations over a range of wavenumbers 0.05 ≤ k ≤ 0.3 h Mpc⁻¹. We argue that the disagreement may be ascribed to a non-linearity effect that arises from the cross-bispectra of density and velocity perturbations. Using perturbation theory and assuming Einstein gravity as in the simulations, we derive the non-linear correction term to the redshift-space spectrum, and find that the leading-order correction term is proportional to μ² and increases the μ²-power spectrum amplitudes more significantly at larger k, at lower redshifts and for more massive haloes. We find that adding the non-linearity correction term to the simulation Pδθ(k) can fairly well reproduce the reconstructed Pδθ(k) for haloes up to k ≃ 0.2 h Mpc⁻¹.
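Stripped of the FoG marginalization, the core of the reconstruction at a single k bin is linear: the model P_s(μ) = P0 + P2 μ² + P4 μ⁴ is linear in the three band powers, so they follow from least squares on the measured μ dependence. The sketch below uses synthetic numbers; the paper's actual method is a full maximum-likelihood fit with FoG marginalization.

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.linspace(0.0, 1.0, 25)
P0_true, P2_true, P4_true = 1000.0, 800.0, 200.0       # (Mpc/h)^3, invented
P_obs = (P0_true + P2_true * mu**2 + P4_true * mu**4) \
        * (1 + 0.02 * rng.standard_normal(mu.size))    # 2% synthetic noise

X = np.column_stack([np.ones_like(mu), mu**2, mu**4])  # design matrix in mu^2n
band_powers, *_ = np.linalg.lstsq(X, P_obs, rcond=None)
print("recovered P0, P2, P4:", band_powers)
```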
Malachowski, George C; Clegg, Robert M; Redford, Glen I
2007-12-01
A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques by up to 1000-fold for problems whose solutions consist of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints on the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
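As a small sketch of the linear-algebra core, the snippet below projects a sampled two-exponential decay onto a discrete Chebyshev basis in a single least-squares pass; recovering the exponential parameters from the coefficients, which is the authors' contribution, is not reproduced here. The signal and noise level are invented.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

t = np.linspace(0.0, 1.0, 256)
x = 2.0 * t - 1.0                    # map samples onto the Chebyshev domain [-1, 1]
rng = np.random.default_rng(6)
signal = (0.7 * np.exp(-3.0 * t) + 0.3 * np.exp(-0.5 * t)
          + 0.01 * rng.standard_normal(t.size))

coeffs = C.chebfit(x, signal, 8)     # one linear least-squares pass
smooth = C.chebval(x, coeffs)
print("max fit residual:", np.max(np.abs(smooth - signal)))   # ~ noise level
```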
Baryon Acoustic Oscillations reconstruction with pixels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obuljen, Andrej; Villaescusa-Navarro, Francisco; Castorina, Emanuele
2017-09-01
Gravitational non-linear evolution induces a shift in the position of the baryon acoustic oscillations (BAO) peak together with a damping and broadening of its shape that bias and degrade the accuracy with which the position of the peak can be determined. BAO reconstruction is a technique developed to undo part of the effect of non-linearities. We present and analyse a reconstruction method that consists of displacing pixels instead of galaxies and whose implementation is easier than that of the standard reconstruction method. We show that this method is equivalent to the standard reconstruction technique in the limit where the number of pixels becomes very large. This method is particularly useful in surveys where individual galaxies are not resolved, as in 21 cm intensity mapping observations. We validate this method by reconstructing mock pixelated maps that we build from the distribution of matter and halos in real- and redshift-space, from a large set of numerical simulations. We find that this method is able to decrease the uncertainty in the BAO peak position by 30-50% over the typical angular resolution scales of 21 cm intensity mapping experiments.
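The displacement step can be sketched in a few lines. The code below computes a Zel'dovich-type displacement field from a gridded overdensity map, after which each pixel's mass would be moved by the negative of this displacement; the Gaussian smoothing scale and the grid conventions are illustrative assumptions rather than the paper's exact implementation.

import numpy as np

def zeldovich_displacement(delta, box_size, smooth=10.0):
    # Linear-theory displacement from a gridded overdensity field:
    # psi(k) = -i k / k^2 * delta_s(k), with delta_s the Gaussian-smoothed
    # field (smooth and box_size in Mpc/h; values are placeholders).
    n = delta.shape[0]
    kf = 2 * np.pi / box_size
    k1 = np.fft.fftfreq(n, d=1.0 / n) * kf
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    dk = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smooth**2)
    psi = [np.real(np.fft.ifftn(-1j * ki / k2 * dk)) for ki in (kx, ky, kz)]
    return np.stack(psi)                   # shape (3, n, n, n), in Mpc/h

Each pixel (rather than each galaxy) is then shifted by -psi interpolated at its position, carrying its intensity with it.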
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters differ, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept, the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in a space of equivalent rather than original dimensions, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters have a linear scale on the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The cumulative distributions of earthquake parameters, in general unknown, are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: finding hierarchically closest neighbours in time-space and assessing temporal variations of earthquake clustering in a specific 4-D phase space.
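As a rough sketch, the transformation of a single parameter to its equivalent dimension amounts to evaluating a kernel-estimated cumulative distribution function at each catalogue value. A minimal version using SciPy's Gaussian kernel density estimator (the specific kernel and the synthetic catalogue below are assumptions for illustration) is:

import numpy as np
from scipy.stats import gaussian_kde

def to_equivalent_dimension(values):
    # ED of a parameter = its kernel-estimated CDF evaluated at each value,
    # mapping the parameter onto a linear [0, 1] scale.
    kde = gaussian_kde(values)
    return np.array([kde.integrate_box_1d(-np.inf, v) for v in values])

rng = np.random.default_rng(0)
magnitudes = 0.8 + rng.exponential(0.4, size=300)        # synthetic catalogue
occurrence_times = np.sort(rng.uniform(0, 3650, size=300))
eds = np.column_stack([to_equivalent_dimension(p)
                       for p in (magnitudes, occurrence_times)])
# Distances between events are now ordinary Euclidean distances in ED space.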
Equivalent Young's modulus of composite resin for simulation of stress during dental restoration.
Park, Jung-Hoon; Choi, Nak-Sam
2017-02-01
For shrinkage stress simulation in dental restoration, the elastic properties of composite resins must be acquired beforehand. This study proposes a formula to measure the equivalent Young's modulus of a composite resin through a calculation scheme for the shrinkage stress in dental restoration. Two types of composite resins with remarkably different polymerization shrinkage strains were used for experimental verification: a methacrylate type (Clearfil AP-X) and a silorane type (Filtek P90). The linear shrinkage strains of the composite resins were obtained using the bonded-disk method. A formula to calculate the equivalent Young's modulus of a composite resin was derived on the basis of the restored ring substrate. Equivalent Young's moduli were measured for the two types of composite resins through the formula. These values were applied as input to a finite element analysis (FEA) to validate the calculated shrinkage stress. Both moduli measured through the formula were appropriate for stress simulation of dental restoration, in that the shrinkage stresses calculated by the FEA agreed with the experimental values to within 3.5%. The concept of equivalent Young's modulus so measured can be applied to stress simulation of 2D and 3D dental restoration.
Equivalent circuit-based analysis of CMUT cell dynamics in arrays.
Oguz, H K; Atalar, Abdullah; Köymen, Hayrettin
2013-05-01
Capacitive micromachined ultrasonic transducers (CMUTs) are usually composed of large arrays of closely packed cells. In this work, we use an equivalent circuit model to analyze CMUT arrays with multiple cells. We study the effects of mutual acoustic interactions through the immersion medium caused by the pressure field generated by each cell acting upon the others. To do this, all the cells in the array are coupled through a radiation impedance matrix at their acoustic terminals. An accurate approximation for the mutual radiation impedance is defined between two circular cells, which can be used in large arrays to reduce computational complexity. Hence, a performance analysis of CMUT arrays can be accurately done with a circuit simulator. By using the proposed model, one can very rapidly obtain the linear frequency and nonlinear transient responses of arrays with an arbitrary number of CMUT cells. We performed several finite element method (FEM) simulations for arrays with small numbers of cells and showed that the results are very similar to those obtained by the equivalent circuit model.
Guo, J.; Tsang, L.; Josberger, E.G.; Wood, A.W.; Hwang, J.-N.; Lettenmaier, D.P.
2003-01-01
This paper presents an algorithm that estimates the spatial distribution and temporal evolution of snow water equivalent and snow depth based on passive microwave remote sensing measurements. It combines the inversion of passive microwave remote sensing measurements via dense media radiative transfer modeling with snow accumulation and melt model predictions to yield improved estimates of snow depth and snow water equivalent at a pixel resolution of 5 arc-min. In the inversion, snow grain size evolution is constrained by pattern matching using the local snow temperature history. The algorithm is applied to produce spatial snow maps of the Upper Rio Grande River basin in Colorado. The simulation results are compared with those of the snow accumulation and melt model and a linear regression method. A quantitative comparison with ground truth measurements from four Snowpack Telemetry (SNOTEL) sites in the basin shows that the algorithm improves the estimation of snow parameters.
NASA Astrophysics Data System (ADS)
Avitabile, Peter; O'Callahan, John
2009-01-01
Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may be essentially linear compared to that of the total assembled system. However, joining these linear subsystems with highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.
Pang, Haowen; Sun, Xiaoyang; Yang, Bo; Wu, Jingbo
2018-05-01
To ensure good-quality intensity-modulated radiation therapy (IMRT) planning, we proposed a quality control method based on the generalized equivalent uniform dose (gEUD) that predicts absorbed radiation doses in organs at risk (OAR). We conducted a retrospective analysis of patients who underwent IMRT for the treatment of cervical carcinoma, nasopharyngeal carcinoma (NPC), or non-small cell lung cancer (NSCLC). IMRT plans were randomly divided into data acquisition and data verification groups. OAR in the data acquisition group for cervical carcinoma and NPC were further classified into sub-organs at risk (sOAR). The normalized volume of sOAR and the normalized gEUD (a = 1) were analyzed using multiple linear regression to establish a fitting formula. For NSCLC, the normalized intersection volume of the planning target volume (PTV) and lung, the maximum diameters of the PTV (left-right, anterior-posterior, and superior-inferior), and the normalized gEUD (a = 1) were analyzed using multiple linear regression to establish a fitting formula for the lung gEUD (a = 1). The r-squared and P values indicated that the fitting formulas fit the data well. In the data verification group, IMRT plans were used to verify the accuracy of the fitting formulas and to compare the gEUD (a = 1) for each OAR between the subjective method and the gEUD-based method. In conclusion, the gEUD-based method can be used effectively for quality control and can reduce the influence of subjective factors on IMRT planning optimization.
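For reference, the gEUD itself is a one-line computation over a dose-volume histogram; the sketch below (the function name and example numbers are illustrative) reduces to the mean organ dose for a = 1, the case used in the fitting formulas.

import numpy as np

def geud(doses, volumes, a):
    # Generalized equivalent uniform dose over a DVH:
    # gEUD = (sum_i v_i * d_i**a) ** (1/a), with v_i fractional volumes.
    v = np.asarray(volumes, float)
    v = v / v.sum()
    return (v * np.asarray(doses, float) ** a).sum() ** (1.0 / a)

# a = 1 reduces to the mean dose: 0.2*10 + 0.5*20 + 0.3*30 = 21.0
print(geud([10, 20, 30], [0.2, 0.5, 0.3], a=1))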
ERIC Educational Resources Information Center
Arntzen, Erik; Grondahl, Terje; Eilifsen, Christoffer
2010-01-01
Previous studies comparing groups of subjects have indicated differential probabilities of stimulus equivalence outcome as a function of training structures. One-to-Many (OTM) and Many-to-One (MTO) training structures seem to produce positive outcomes on tests for stimulus equivalence more often than a Linear Series (LS) training structure does.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kılıç, Emre, E-mail: emre.kilic@tum.de; Eibert, Thomas F.
An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allow the utilization of the Lorentz reciprocity and Poynting theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss-Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data, and promising reconstruction results are obtained.
Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.
2014-01-01
This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Solely four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution, and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390
Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł
2007-04-21
A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.
On the use of Lineal Energy Measurements to Estimate Linear Energy Transfer Spectra
NASA Technical Reports Server (NTRS)
Adams, David A.; Howell, Leonard W., Jr.; Adam, James H., Jr.
2007-01-01
This paper examines the error resulting from using a lineal energy spectrum to represent a linear energy transfer (LET) spectrum for applications in the space radiation environment. Lineal energy and linear energy transfer spectra are compared in three diverse but typical space radiation environments. Different detector geometries are also studied to determine how they affect the error. LET spectra are typically used to compute dose equivalent for radiation hazard estimation and single event effect rates to estimate radiation effects on electronics. The errors in the estimates of dose equivalent and single event rates that result from substituting lineal energy spectra for linear energy transfer spectra are examined. It is found that this substitution has little effect on dose equivalent estimates in the quiet-time interplanetary environment, regardless of detector shape. The substitution has more of an effect when the environment is dominated by solar energetic particles or trapped radiation, but even then the errors are minor, especially if a spherical detector is used. For single event estimation, the effect of the substitution can be large if the threshold for the single event effect is near where the linear energy transfer spectrum drops suddenly. It is judged that single event rate estimates made from lineal energy spectra are unreliable, and the use of lineal energy spectra for single event rate estimation should be avoided.
NASA Technical Reports Server (NTRS)
Coen, Peter G.
1991-01-01
A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.
Analysis of high aspect ratio jet flap wings of arbitrary geometry.
NASA Technical Reports Server (NTRS)
Lissaman, P. B. S.
1973-01-01
This paper presents a design technique for rapidly computing the lift, induced drag, and spanwise loading of unswept jet flap wings of arbitrary thickness, chord, twist, blowing, and jet angle, including discontinuities. Linear theory is used, extending Spence's method for elliptically loaded jet flap wings. Curves for uniformly blown rectangular wings are presented for direct performance estimation. Arbitrary planforms require a simple computer program. A method of reducing the wing to an equivalent stretched, twisted, unblown planform for hand calculation is also given. Results correlate with the limited existing data and show that lifting-line theory is reasonable down to aspect ratios of 5.
Study on Determination Method of Fatigue Testing Load for Wind Turbine Blade
NASA Astrophysics Data System (ADS)
Liao, Gaohua; Wu, Jianzhong
2017-07-01
In this paper, the load calculation method for the fatigue test of a wind turbine blade under uniaxial loading was studied. The characteristics of the wind load and the blade equivalent load were analyzed, and the fatigue properties and damage theory of the blade material were studied. The fatigue loads for a 2 MW blade were calculated with Bladed, and the stresses with ANSYS. A Goodman-modified exponential-function S-N curve and the linear cumulative damage rule were used to calculate the fatigue test load of the wind turbine blade. This lays the foundation for the design and experiment of a wind turbine blade fatigue loading system.
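A compact sketch of the damage bookkeeping described here, assuming an exponential-form S-N curve N(s) = C·s^(-m) with placeholder material constants: the Goodman correction converts each mean-stress cycle to an equivalent fully reversed amplitude before the Miner summation.

import numpy as np

def goodman_equivalent_amplitude(s_a, s_m, s_u):
    # Goodman mean-stress correction: equivalent fully reversed amplitude
    # s_ar = s_a / (1 - s_m / s_u), with s_u the ultimate strength.
    return s_a / (1.0 - s_m / s_u)

def miner_damage(amplitudes, counts, C, m):
    # Linear (Palmgren-Miner) cumulative damage with N(s) = C * s**(-m);
    # failure is predicted at damage >= 1. C, m are placeholder constants,
    # not the blade material data from the paper.
    n_allow = C * np.asarray(amplitudes, float) ** (-m)
    return (np.asarray(counts, float) / n_allow).sum()

# A load spectrum can then be collapsed to one constant-amplitude test load
# giving the same Miner damage over n_test cycles:
#   s_eq = (damage * C / n_test) ** (1 / m)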
NASA Technical Reports Server (NTRS)
Choudhury, A. K.; Djalali, M.
1975-01-01
In the proposed recursive method, the gain matrix for the Kalman filter and the covariance of the state vector are computed not via the Riccati equation but from certain other differential equations of Chandrasekhar type. The 'invariant imbedding' idea resulted in the reduction of the basic boundary value problem of transport theory to an equivalent initial value system, a significant computational advance. Initial computational experience showed that the method offers some savings and is less vulnerable to loss of positive definiteness of the covariance matrix.
Deconvolution Methods and Systems for the Mapping of Acoustic Sources from Phased Microphone Arrays
NASA Technical Reports Server (NTRS)
Humphreys, Jr., William M. (Inventor); Brooks, Thomas F. (Inventor)
2012-01-01
Mapping of coherent/incoherent acoustic sources as determined from a phased microphone array. A linear configuration of equations and unknowns is formed by accounting for the reciprocal influence of one or more cross-beamforming characteristics at varying grid locations among the plurality of grid locations. An equation derived from this linear configuration can then be solved iteratively. The equation is obtained by imposing, as a solution requirement, a constraint equivalent to the physical assumption that the coherent sources have only in-phase coherence. The size of the problem may then be reduced using zoning methods. An optimized noise source distribution is then generated over an identified aeroacoustic source region associated with a phased microphone array (microphones arranged in an optimized grid pattern including a plurality of grid locations) in order to compile an output presentation, thereby removing beamforming characteristics from the resulting output presentation.
NASA Technical Reports Server (NTRS)
Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San
1994-01-01
This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.
Deconvolution methods and systems for the mapping of acoustic sources from phased microphone arrays
NASA Technical Reports Server (NTRS)
Brooks, Thomas F. (Inventor); Humphreys, Jr., William M. (Inventor)
2010-01-01
A method and system for mapping acoustic sources determined from a phased microphone array. A plurality of microphones are arranged in an optimized grid pattern including a plurality of grid locations. A linear configuration of N equations and N unknowns can be formed by accounting for the reciprocal influence of one or more beamforming characteristics at varying grid locations among the plurality of grid locations. A full-rank equation derived from the linear configuration of N equations and N unknowns can then be solved iteratively. Full rank can be attained through the solution requirement of a positivity constraint equivalent to the physical assumption of statistically independent noise sources at each of the N locations. An optimized noise source distribution is then generated over an identified aeroacoustic source region associated with the phased microphone array in order to compile an output presentation, thereby removing the beamforming characteristics from the resulting output presentation.
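The iterative solution with the positivity constraint can be illustrated by a small Gauss-Seidel-style sweep in the spirit of the DAMAS algorithm associated with this patent family; here A holds the cross-beamforming responses between grid points and y the beamform map. This is a sketch, not the patented implementation.

import numpy as np

def damas(A, y, n_iter=100):
    # Iteratively solve A x = y subject to x >= 0 (statistically
    # independent sources), sweeping grid points in order.
    n = len(y)
    x = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            r = y[i] - A[i] @ x + A[i, i] * x[i]   # residual excluding x[i]
            x[i] = max(0.0, r / A[i, i])           # positivity constraint
    return x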
Radiative transfer calculated from a Markov chain formalism
NASA Technical Reports Server (NTRS)
Esposito, L. W.; House, L. L.
1978-01-01
The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
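The core linear-algebra step can be sketched abstractly: with Q the matrix of scattering-state transition probabilities and E the escape probabilities into exit channels (both assumed supplied by the phase function and geometry, which the paper derives), the diffuse output follows from the fundamental matrix of the absorbing Markov chain. A minimal sketch:

import numpy as np

def diffuse_escape(Q, E, s):
    # Q[i, j]: probability a photon in scattering state i is next scattered
    # into state j; E[i, k]: probability it instead escapes into exit
    # channel k; s: distribution of first-scattering states.
    # Expected visits n solve n = s + Q^T n, i.e. a linear system built
    # from the fundamental matrix N = (I - Q)^-1.
    n = np.linalg.solve(np.eye(len(s)) - Q.T, s)
    return E.T @ n        # expected escaping flux per exit channel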
A comparison of linear and non-linear data assimilation methods using the NEMO ocean model
NASA Astrophysics Data System (ADS)
Kirchgessner, Paul; Tödter, Julian; Nerger, Lars
2015-04-01
The assimilation behavior of the widely used LETKF is compared with that of the Equivalent Weight Particle Filter (EWPF) in a data assimilation application with an idealized configuration of the NEMO ocean model. The experiments show how the different filter methods behave when applied to a realistic ocean test case. The LETKF is an ensemble-based Kalman filter, which assumes Gaussian error distributions and hence implicitly requires model linearity. In contrast, the EWPF is a fully nonlinear data assimilation method that does not rely on a particular error distribution. The EWPF has been demonstrated to work well in highly nonlinear situations, such as a model solving the barotropic vorticity equation, but it is still unknown how its assimilation performance compares to that of ensemble Kalman filters in realistic situations. For the experiments, twin assimilation experiments with a square basin configuration of the NEMO model are performed. The configuration simulates a double gyre, which exhibits significant nonlinearity. The LETKF and EWPF are both implemented in PDAF (Parallel Data Assimilation Framework, http://pdaf.awi.de), which ensures identical experimental conditions for both filters. To account for the nonlinearity, the assimilation skill of the two methods is assessed using statistical metrics suited to non-Gaussian ensembles, such as the continuous ranked probability score (CRPS) and rank histograms.
NASA Astrophysics Data System (ADS)
Yang, B. D.; Chu, M. L.; Menq, C. H.
1998-03-01
Mechanical systems in which moving components are mutually constrained through contacts often lead to complex contact kinematics involving tangential and normal relative motions. A friction contact model is proposed to characterize this type of contact kinematics, which imposes both friction non-linearity and intermittent separation non-linearity on the system. The stick-slip friction phenomenon is analyzed by establishing analytical criteria that predict the transition between stick, slip, and separation of the interface. The established analytical transition criteria are particularly important to the proposed friction contact model, since the transition conditions of the contact kinematics are complicated by the effect of normal load variation and possible interface separation. With these transition criteria, the induced friction force on the contact plane and the variable normal load perpendicular to the contact plane can be predicted for any given cyclic relative motion at the contact interface, and hysteresis loops can be produced so as to characterize the equivalent damping and stiffness of the friction contact. These non-linear damping and stiffness methods, along with the harmonic balance method, are then used to predict the resonant response of a frictionally constrained two-degree-of-freedom oscillator. The predicted results are compared with those of the time integration method, and the damping effect, the resonant frequency shift, and the jump phenomenon are examined.
A new performance index for the repetitive motion of mobile manipulators.
Xiao, Lin; Zhang, Yunong
2014-02-01
A mobile manipulator is a robotic device composed of a mobile platform and a stationary manipulator fixed to the platform. To achieve repetitive motion control of mobile manipulators, the mobile platform and the manipulator have to realize repetitive motion simultaneously. To this end, a novel quadratic performance index is, for the first time, designed and presented in this paper, and its effectiveness is analyzed by a neural dynamics method. A repetitive motion scheme is then proposed by combining this criterion, the physical constraints, and the integrated kinematic equations of mobile manipulators; the scheme is further reformulated as a quadratic program (QP) subject to equality and bound constraints. In addition, two important bridge theorems are established to prove that such a QP can be converted equivalently into a linear variational inequality, and then equivalently into a piecewise-linear projection equation (PLPE). A real-time numerical algorithm based on the PLPE is thus developed and applied for the online solution of the resultant QP. Two path-tracking tasks demonstrate the effectiveness and accuracy of the repetitive motion scheme. In addition, comparisons between nonrepetitive and repetitive motion further validate the superiority and novelty of the proposed scheme.
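A minimal discrete-time stand-in for solving the resulting PLPE is sketched below, assuming box bounds and a small constant step; the paper's neural-dynamics solver is continuous-time, so the names and step size here are illustrative.

import numpy as np

def solve_plpe(M, q, lo, hi, step=0.05, n_iter=5000):
    # Fixed-point iteration for the piecewise-linear projection equation
    # P(x - (M x + q)) = x, where P is the projection onto the box
    # [lo, hi] (the piecewise-linear part). Converges for suitably
    # monotone M and small enough step.
    x = np.clip(np.zeros_like(q), lo, hi)
    for _ in range(n_iter):
        x = np.clip(x - step * (M @ x + q), lo, hi)   # project each step
    return x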
Ivandini, Tribidasari A; Saepudin, Endang; Wardah, Habibah; Harmesa; Dewangga, Netra; Einaga, Yasuaki
2012-11-20
Gold-modified boron-doped diamond (BDD) electrodes were examined for the amperometric detection of oxygen and as a detector for measuring biochemical oxygen demand (BOD) using Rhodotorula mucilaginosa UICC Y-181. An optimum potential of -0.5 V (vs Ag/AgCl) was applied, and the optimum waiting time was observed to be 20 min. A linear calibration curve for oxygen reduction was achieved with a sensitivity of 1.4 μA mg(-1) L oxygen. Furthermore, a linear calibration curve in the glucose concentration range of 0.1-0.5 mM (equivalent to 10-50 mg L(-1) BOD) was obtained with an estimated detection limit of 4 mg L(-1) BOD. Excellent reproducibility of the BOD sensor was shown, with an RSD of 0.9%. Moreover, the BOD sensor showed good tolerance to the presence of copper ions up to a maximum concentration of 0.80 μM (equivalent to 50 ppb). The sensor was applied to BOD measurements of the water from a lake at the University of Indonesia in Jakarta, Indonesia, with results comparable to those obtained using a standard method for BOD measurement.
NASA Astrophysics Data System (ADS)
Zhang, Shengli; Zhang, Xiangdong
2018-04-01
Photon catalysis is an intriguing quantum mechanical operation during which no photon is added to or subtracted from the relevant optical system. However, we prove that photon catalysis is in essence equivalent to the simpler but more efficient noiseless linear amplifier. This provides a simple and zero-energy-input method for enhancing quantum coherence. We show that the coherence enhancement holds both for a coherent state and for a two-mode squeezed vacuum (TMSV) state. For the TMSV state, two-sided photon catalysis is shown to be equivalent to applying single-sided photon catalysis twice, and applying photon catalysis twice does not provide a substantial enhancement of quantum coherence compared with single-sided catalysis. We further extend our investigation to the performance of coherence enhancement with a more realistic photon catalysis scheme in which a heralded approximate single-photon state and an on-off detector are exploited. Moreover, we investigate the influence of an imperfect photon detector, and the results show that the amplification effect of photon catalysis is insensitive to detector inefficiency. Finally, we apply the coherence measure to quantum illumination and identify the same trend of performance improvement as for coherence enhancement in practical quantum target detection.
Simplified planar model of a car steering system with rack and pinion and McPherson suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-09-01
The paper presents the analysis and optimization of a steering system with rack and pinion and McPherson suspension using a spatial model and an equivalent simplified planar model. The dimensions of the steering linkage that give the minimum steering error can be estimated using the planar model. The steering error is defined as the difference between the actual angle made by the outer front wheel during steering manoeuvres and the calculated angle for the same wheel based on the Ackermann principle. For a given linear rack displacement, the steering-arm angular displacements are determined while simultaneously ensuring the best transmission-angle characteristics, (i) without and (ii) with a linear correlation imposed between input and output. Numerical examples are used to illustrate the proposed method.
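The steering-error objective can be sketched directly from the Ackermann principle; the wheelbase and track values below are illustrative placeholders, not the paper's data.

import numpy as np

def ackermann_outer_angle(delta_inner, wheelbase, track):
    # Ideal outer-wheel angle from the Ackermann condition
    # cot(delta_o) = cot(delta_i) + track / wheelbase (angles in rad,
    # delta_inner assumed nonzero).
    return np.arctan(1.0 / (1.0 / np.tan(delta_inner) + track / wheelbase))

def steering_error(actual_outer, delta_inner, wheelbase=2.6, track=1.5):
    # Error minimized in the linkage optimization: the outer-wheel angle
    # actually produced by the mechanism minus the Ackermann ideal.
    return actual_outer - ackermann_outer_angle(delta_inner, wheelbase, track)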
NASA Technical Reports Server (NTRS)
Shavers, M. R.; Poston, J. W.; Cucinotta, F. A.; Wilson, J. W.
1996-01-01
During manned space missions, high-energy nucleons of cosmic and solar origin collide with atomic nuclei of the human body and produce a broad linear energy transfer spectrum of secondary particles, called target fragments. These nuclear fragments are often more biologically harmful than the direct ionization of the incident nucleon. That these secondary particles increase tissue absorbed dose in regions adjacent to the bone-soft tissue interface was demonstrated in a previous publication. To assess radiological risks to tissue near the bone-soft tissue interface, a computer transport model for nuclear fragments produced by high-energy nucleons was used in this study to calculate integral linear energy transfer spectra and dose equivalents resulting from nuclear collisions of 1-GeV protons traversing bone and red bone marrow. In terms of dose equivalent averaged over trabecular bone marrow, target fragments emitted from interactions in both tissues are predicted to be at least as important as the direct ionization of the primary protons, and twice as important if recently recommended radiation weighting factors and "worst-case" geometry are used. The use of conventional dosimetry (absorbed dose weighted by a linear energy transfer-dependent quality factor) as an appropriate framework for predicting risk from low fluences of high-linear energy transfer target fragments is discussed.
Equivalent reduced model technique development for nonlinear system dynamic response
NASA Astrophysics Data System (ADS)
Thibault, Louis; Avitabile, Peter; Foley, Jason; Wolfson, Janet
2013-04-01
The dynamic response of structural systems commonly involves nonlinear effects. Often, structural systems are made up of several components whose individual behavior is essentially linear compared to that of the total assembled system. However, the assembly of linear components using highly nonlinear connection elements or contact regions causes the entire system to become nonlinear. Conventional transient nonlinear integration of the equations of motion can be extremely computationally intensive, especially when the finite element models describing the components are very large and detailed. In this work, the equivalent reduced model technique (ERMT) is developed to address complicated nonlinear contact problems. ERMT utilizes a highly accurate model reduction scheme, the system equivalent reduction expansion process (SEREP). Extremely reduced-order models that provide the dynamic characteristics of linear components, interconnected with highly nonlinear connection elements, are formulated with SEREP for dynamic response evaluation using direct integration techniques. The full-space solution is compared to the response obtained using drastically reduced models to demonstrate the usefulness of the technique for a variety of analytical cases.
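A minimal sketch of the SEREP step underlying ERMT: given a truncated mode-shape matrix and a chosen set of retained (active) DOFs, the transformation preserves the selected modal dynamics exactly in the reduced system. This is illustrative code under those assumptions, not the authors' implementation.

import numpy as np

def serep_transformation(phi, active_dofs):
    # phi: (n_dof x n_modes) mode-shape matrix of the linear component;
    # T = phi @ pinv(phi_a) maps the reduced (active-DOF) space back to
    # full space while reproducing the retained modes exactly.
    phi_a = phi[active_dofs, :]
    return phi @ np.linalg.pinv(phi_a)

def reduce_system(M, K, T):
    # Reduced mass and stiffness for the linear component; the nonlinear
    # connection forces are then applied at the retained DOFs during
    # direct integration.
    return T.T @ M @ T, T.T @ K @ T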
Oracle estimation of parametric models under boundary constraints.
Wong, Kin Yau; Goldberg, Yair; Fine, Jason P
2016-12-01
In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while otherwise it is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods.
Zheng, Jie; Pritts, Wayne A; Zhang, Shuhong; Wittenberger, Steve
2009-12-05
Dimethyl sulfate (DMS) is an alkylating reagent commonly used in organic syntheses and pharmaceutical manufacturing processes. Due to its potential carcinogenicity, the level of DMS in the API process needs to be carefully monitored. However, in-process testing for DMS is challenging because of its reactivity and polarity as well as complex matrix effects. In this short communication, we report a GC-MS method for the determination of DMS in an API intermediate that is a methyl sulfate salt. To overcome the complex matrix interference, DMS and an internal standard, d6-DMS, were extracted from the matrix with methyl tert-butyl ether. GC separation was conducted on a DB-624 column (30 m long, 0.32 mm ID, 1.8 μm film thickness). MS detection was performed on a single-quadrupole Agilent MSD equipped with an electron impact source, with the MSD signal acquired in selected ion monitoring mode. The GC-MS method showed a linear response for DMS equivalents from 1.0 to 60 ppm. The practical quantitation limit for DMS was 1.0 ppm and the practical detection limit was 0.3 ppm. The relative standard deviation of the analyte response was 0.1% for six injections of a working standard equivalent to 18.6 ppm of DMS. Spike recoveries ranged from 102.1 to 108.5% for a sample of API intermediate spiked with 8.0 ppm of DMS. In summary, the GC-MS method showed adequate specificity, linearity, sensitivity, repeatability, and accuracy for the determination of DMS in the API intermediate. This method has been successfully applied to study the efficiency of removing DMS from the process.
Transport synthetic acceleration with opposing reflecting boundary conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zika, M.R.; Adams, M.L.
2000-02-01
The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.
Application of Fast Multipole Methods to the NASA Fast Scattering Code
NASA Technical Reports Server (NTRS)
Dunn, Mark H.; Tinetti, Ana F.
2008-01-01
The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicate that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
An implementation of the QMR method based on coupled two-term recurrences
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noël M.
1992-01-01
The authors have proposed a new Krylov subspace iteration, the quasi-minimal residual algorithm (QMR), for solving non-Hermitian linear systems. In the original implementation of the QMR method, the Lanczos process with look-ahead is used to generate basis vectors for the underlying Krylov subspaces. In the Lanczos algorithm, these basis vectors are computed by means of three-term recurrences. It has been observed that, in finite precision arithmetic, vector iterations based on three-term recursions are usually less robust than mathematically equivalent coupled two-term vector recurrences. This paper presents a look-ahead algorithm that constructs the Lanczos basis vectors by means of coupled two-term recursions. Implementation details are given, and the look-ahead strategy is described. A new implementation of the QMR method, based on this coupled two-term algorithm, is described. A simplified version of the QMR algorithm without look-ahead is also presented, and the special case of QMR for complex symmetric linear systems is considered. Results of numerical experiments comparing the original and the new implementations of the QMR method are reported.
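For readers who want to experiment, SciPy ships a QMR solver descended from this line of work; a minimal usage sketch on a small non-Hermitian system follows (the diagonally dominant random test matrix is an arbitrary choice, not from the paper).

import numpy as np
from scipy.sparse.linalg import qmr

# Small non-Hermitian test system, made diagonally dominant so the
# iteration converges quickly.
rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

x, info = qmr(A, b)          # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))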
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.
2017-02-01
A novel order reduction method for linear time-invariant systems is described. The method is based on reducing the initial problem to an optimization problem, using the proposed model representation, and solving it with an efficient optimization algorithm. The proposed representation allows all parameters of the lower-order model to be identified and, by construction, gives the model the required steady state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results showed that the proposed approach outperforms other approaches and that the reduced-order model achieves a high level of accuracy.
NASA Technical Reports Server (NTRS)
Mei, Chuh; Dhainaut, Jean-Michel
2000-01-01
The Monte Carlo simulation method, in conjunction with a finite element large-deflection modal formulation, is used to estimate the fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases ranging from 106 dB to 160 dB OASPL with a bandwidth of 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities, and higher statistical moments of the maximum deflection and stress/strain. The method of moments of the PSD with Dirlik's approach is employed to estimate the panel fatigue life.
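The PSD-moments/Dirlik step can be sketched as follows, under the usual assumptions of a one-sided stress PSD and an S-N curve N(S) = C·S^(-b) with placeholder constants; the coefficients follow the standard published form of Dirlik's rainflow-range approximation, not the paper's specific implementation.

import numpy as np
from scipy.integrate import trapezoid

def dirlik_damage_rate(freqs, psd, C, b):
    # Spectral moments m0, m1, m2, m4 of the one-sided stress PSD.
    m0, m1, m2, m4 = (trapezoid(freqs**n * psd, freqs) for n in (0, 1, 2, 4))
    xm = (m1 / m0) * np.sqrt(m2 / m4)
    g = m2 / np.sqrt(m0 * m4)                    # irregularity factor
    d1 = 2 * (xm - g**2) / (1 + g**2)            # Dirlik coefficients
    r = (g - xm - d1**2) / (1 - g - d1 + d1**2)
    d2 = (1 - g - d1 + d1**2) / (1 - r)
    d3 = 1 - d1 - d2
    q = 1.25 * (g - d3 - d2 * r) / d1
    z = np.linspace(1e-6, 10, 4000)              # normalized rainflow range
    pz = (d1 / q * np.exp(-z / q)
          + d2 * z / r**2 * np.exp(-z**2 / (2 * r**2))
          + d3 * z * np.exp(-z**2 / 2))
    s = 2 * np.sqrt(m0) * z                      # physical stress range
    peak_rate = np.sqrt(m4 / m2)                 # expected peaks per second
    return peak_rate * trapezoid(s**b * pz, z) / C   # damage per second

# Estimated fatigue life in seconds is then 1 / dirlik_damage_rate(f, S, C, b).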
NASA Technical Reports Server (NTRS)
Sloss, J. M.; Kranzler, S. K.
1972-01-01
The equivalence of a considered integral equation form with an infinite system of linear equations is proved, and the localization of the eigenvalues of the infinite system is established. Error estimates are derived, and the problems of finding upper and lower bounds for the eigenvalues are solved simultaneously.
Unbalanced and Minimal Point Equivalent Estimation Second-Order Split-Plot Designs
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey
2007-01-01
Restricting the randomization of hard-to-change factors in industrial experiments is often performed by employing a split-plot design structure. From an economic perspective, these designs minimize the experimental cost by reducing the number of resets of the hard-to-change factors. In this paper, unbalanced designs are considered for cases where the subplots are relatively expensive and the experimental apparatus accommodates an unequal number of runs per whole-plot. We provide construction methods for unbalanced second-order split-plot designs that possess the equivalence estimation optimality property, providing best linear unbiased estimates of the parameters independent of the variance components. Unbalanced versions of the central composite and Box-Behnken designs are developed. For cases where the subplot cost approaches the whole-plot cost, minimal point designs are proposed and illustrated with a split-plot Notz design.
Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions
Burke, Timothy P.; Kiedrowski, Brian C.
2017-12-11
Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface, which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history-based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with the iterated fission probability used for adjoint weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.
2018-03-01
Biologically motivated self-learning equivalence-convolutional recurrent multilayer neural structures (BLM_SL_EC_RMNS) for the clustering and recognition of image fragments are discussed. We consider these neural structures and their spatially invariant equivalent models (SIEMs), based on proposed equivalent two-dimensional image-similarity functions and the corresponding matrix-matrix (or tensor) procedures, which use operations of continuous logic and nonlinear processing as their basic operations. These SIEMs give a simple description of the signal processing during all training and recognition stages and are suitable for unipolar-coded multilevel signals. The clustering efficiency of such models and their implementation depend on the discriminant properties of the neural elements in the hidden layers; the main model and architecture parameters and characteristics therefore depend on the types of nonlinear processing applied and on the function used for image comparison or for adaptive-equivalent weighting of input patterns. We show that these SL_EC_RMNSs have several advantages, such as self-learning and self-identification of features and similarity signs of fragments, and the ability to cluster and recognize image fragments efficiently even when the fragments are strongly mutually correlated. The proposed clustering method, which combines learning and recognition and takes the structural features of fragments into account, is suitable not only for binary but also for color images, and combines self-learning with the formation of weighted clustered matrix patterns. The model is constructed and designed on the basis of recursive continuous-logic and nonlinear processing algorithms together with the k-means method or the winner-takes-all (WTA) rule. The experimental results confirmed that fragments with large numbers of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown, and an experiment on clustering reference images and fragments of different dimensions was carried out. Experiments in the Mathcad software environment showed that the proposed method is universal, converges in a small number of iterations, maps easily onto matrix structures, and is promising. Understanding the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes among neurons, and the principles of neural auto-encoding-decoding and recognition using self-learned cluster patterns is therefore very important; these mechanisms rest on nonlinear processing of two-dimensional spatial image-comparison functions. The experimental results show that such models can be successfully used for auto- and hetero-associative recognition, and can help to explain some mechanisms known as the "reinforcement-inhibition concept". We also demonstrate real model experiments confirming that nonlinear processing with the equivalent function allows the winner neurons to be determined and the weight matrix to be tuned. Finally, we show how the obtained results can be used to propose a new, more efficient hardware architecture for SL_EC_RMNS based on matrix-tensor multipliers, and we estimate the parameters and performance of such architectures.
Dong, Jing; Zhang, Zhe-chen; Zhou, Guo-liang
2015-06-01
To analyze the stress distribution in the periodontal ligament of the maxillary first molar during distal movement with nonlinear finite element analysis, to compare it with the result of linear finite element analysis, and consequently to provide biomechanical evidence for clinical application. A 3-D finite element model including a maxillary first molar, periodontal ligament, alveolar bone, cancellous bone, cortical bone, and a buccal tube was built using Mimics, Geomagic, ProE, and Ansys Workbench. The periodontal ligament was modeled as a nonlinear material and as a linear elastic material, respectively. Loads in different combinations were applied to simulate the clinical situation of distalizing the maxillary first molar. In the nonlinear finite element model, channels of low stress appeared in the peak distributions of the von Mises equivalent stress and the compressive stress of the periodontal ligament. The peak von Mises equivalent stress was lower when Mt/F - Mr/F ≈ 2 was satisfied, and the peak compressive stress was lower when Mt/F ≈ Mr/F. In the linear finite element model, the stress in the periodontal ligament was higher and more concentrated, and there were no channels of low stress in the peak distribution. Channels exist in which the stress of the periodontal ligament is lower; the applied M/F should satisfy this low-stress condition during distalization of the maxillary first molar.
Equivalent equations of motion for gravity and entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel
2017-02-01
We demonstrate an equivalence between the wave equation obeyed by the entanglement entropy of CFT subregions and the linearized bulk Einstein equation in Anti-de Sitter space. In doing so, we make use of the formalism of kinematic space and fields on this space. We show that the gravitational dynamics are equivalent to a gauge invariant wave-equation on kinematic space and that this equation arises in natural correspondence to the conformal Casimir equation in the CFT.
Aeroelastic Stability of Rotor Blades Using Finite Element Analysis
NASA Technical Reports Server (NTRS)
Chopra, I.; Sivaneri, N.
1982-01-01
The flutter stability of flap bending, lead-lag bending, and torsion of helicopter rotor blades in hover is investigated using a finite element formulation based on Hamilton's principle. The blade is divided into a number of finite elements. Quasi-steady strip theory is used to evaluate the aerodynamic loads. The nonlinear equations of motion are solved for steady-state blade deflections through an iterative procedure. The equations of motion are linearized assuming blade motion to be a small perturbation about the steady deflected shape. The normal mode method, based on the coupled rotating natural modes, is used to reduce the number of equations in the flutter analysis. First, the formulation is applied to single-load-path blades (articulated and hingeless blades). Numerical results show very good agreement with existing results obtained using the modal approach. The second part of the application concerns multiple-load-path blades, i.e., bearingless blades. Numerical results are presented for several analytical models of the bearingless blade. Results are also obtained using an equivalent beam approach, wherein a bearingless blade is modelled as a single beam with equivalent properties.
Effects of optical blur reduction on equivalent intrinsic blur.
Kord Valeshabad, Ali; Wanek, Justin; McAnany, J Jason; Shahidi, Mahnaz
2015-04-01
To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Twelve visually normal subjects (mean [±SD] age, 31 [±12] years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) caused by high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. σopt and σint were significantly reduced and visual acuity was significantly improved after AO correction (p ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, p ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although the difference was only marginally significant (p = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, p < 0.001), and the two parameters were related linearly with a slope of 0.46. The reduction in equivalent intrinsic blur was greater than the reduction in optical blur after AO correction of wavefront error. This finding implies that visual acuity in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone.
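The σint in this abstract comes from "a previously described model"; a common formulation in the equivalent-blur literature combines blur sources in quadrature, σ_obs² = σ_opt² + σ_int². A minimal sketch under that assumption (all numerical values are illustrative, not the study's data):

```python
import math

def intrinsic_blur(sigma_obs, sigma_opt):
    """Estimate equivalent intrinsic blur assuming blur sources add in
    quadrature: sigma_obs^2 = sigma_opt^2 + sigma_int^2 (a hypothetical
    form; the paper's actual model may differ)."""
    return math.sqrt(max(sigma_obs**2 - sigma_opt**2, 0.0))

# Illustrative (made-up) values in arcmin, before and after AO correction
print(intrinsic_blur(sigma_obs=1.2, sigma_opt=0.8))  # before AO
print(intrinsic_blur(sigma_obs=0.7, sigma_opt=0.2))  # after AO
```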
Wind turbine sound pressure level calculations at dwellings.
Keith, Stephen E; Feder, Katya; Voicescu, Sonia A; Soukhovtsev, Victor; Denning, Allison; Tsang, Jason; Broner, Norm; Leroux, Tony; Richarz, Werner; van den Berg, Frits
2016-03-01
This paper provides calculations of outdoor sound pressure levels (SPLs) at dwellings for 10 wind turbine models, to support Health Canada's Community Noise and Health Study. Manufacturer supplied and measured wind turbine sound power levels were used to calculate outdoor SPL at 1238 dwellings using ISO [(1996). ISO 9613-2-Acoustics] and a Swedish noise propagation method. Both methods yielded statistically equivalent results. The A- and C-weighted results were highly correlated over the 1238 dwellings (Pearson's linear correlation coefficient r > 0.8). Calculated wind turbine SPLs were compared to ambient SPLs from other sources, estimated using guidance documents from the United States and Alberta, Canada.
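As a rough illustration of this kind of calculation, the sketch below applies spherical spreading plus a linear atmospheric absorption term, which is the skeleton shared by ISO 9613-2 and the Swedish method; ground effect, barriers, and meteorological corrections are deliberately omitted, and all numerical values are assumptions:

```python
import numpy as np

def outdoor_spl(sound_power_db, distance_m, alpha_db_per_m=0.005):
    """Simplified point-source outdoor SPL in the spirit of ISO 9613-2:
    geometric divergence for spherical spreading plus a linear atmospheric
    absorption term; alpha is an assumed broadband absorption coefficient."""
    divergence = 20.0 * np.log10(distance_m) + 11.0  # spherical spreading, dB
    absorption = alpha_db_per_m * distance_m          # atmospheric absorption, dB
    return sound_power_db - divergence - absorption

# A 105 dB(A) sound-power turbine heard at a 500 m and a 1000 m dwelling
for d in (500.0, 1000.0):
    print(f"{d:6.0f} m: {outdoor_spl(105.0, d):5.1f} dB(A)")
```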
Solvent effects in time-dependent self-consistent field methods. I. Optical response calculations
Bjorgaard, J. A.; Kuzmenko, V.; Velizhanin, K. A.; ...
2015-01-22
In this study, we implement and examine three excited state solvent models in time-dependent self-consistent field methods using a consistent formalism which unambiguously shows their relationship. These are the linear response, state specific, and vertical excitation solvent models. Their effects on energies calculated with the equivalent of COSMO/CIS/AM1 are given for a set of test molecules with varying excited state charge transfer character. The resulting solvent effects are explained qualitatively using a dipole approximation. It is shown that the fundamental differences between these solvent models are reflected by the character of the calculated excitations.
Cold-air performance of a tip turbine designed to drive a lift fan
NASA Technical Reports Server (NTRS)
Haas, J. E.; Kofskey, M. G.; Hotz, G. M.
1978-01-01
Performance was obtained over a range of speeds and pressure ratios for a 0.4 linear scale version of the LF460 lift fan turbine with the rotor radial tip clearance reduced to about 2.5 percent of the rotor blade height. These tests covered a range of speeds from 60 to 140 percent of design equivalent speed and a range of scroll inlet total to diffuser exit static pressure ratios from 2.6 to 4.2. Results are presented in terms of equivalent mass flow, equivalent torque, equivalent specific work, and efficiency.
Schneiderman, Eva; Colón, Ellen; White, Donald J; St John, Samuel
2015-01-01
The purpose of this study was to compare the abrasivity of commercial dentifrices by two techniques: the conventional gold standard radiotracer-based Radioactive Dentin Abrasivity (RDA) method; and a newly validated technique based on V8 brushing that included a profilometry-based evaluation of dentin wear. This profilometry-based method is referred to as RDA-Profilometry Equivalent, or RDA-PE. A total of 36 dentifrices were sourced from four global dentifrice markets (Asia Pacific [including China], Europe, Latin America, and North America) and tested blindly using both the standard radiotracer (RDA) method and the new profilometry method (RDA-PE), taking care to follow specific details related to specimen preparation and treatment. Commercial dentifrices tested exhibited a wide range of abrasivity, with virtually all falling well under the industry accepted upper limit of 250; that is, 2.5 times the level of abrasion measured using an ISO 11609 abrasivity reference calcium pyrophosphate as the reference control. RDA and RDA-PE comparisons were linear across the entire range of abrasivity (r2 = 0.7102) and both measures exhibited similar reproducibility with replicate assessments. RDA-PE assessments were not just linearly correlated, but were also proportional to conventional RDA measures. The linearity and proportionality of the results of the current study support that both methods (RDA or RDA-PE) provide similar results and justify a rationale for making the upper abrasivity limit of 250 apply to both RDA and RDA-PE.
Nature of size effects in compact models of field effect transistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torkhov, N. A., E-mail: trkf@mail.ru; Scientific-Research Institute of Semiconductor Devices, Tomsk 634050; Tomsk State University of Control Systems and Radioelectronics, Tomsk 634050
Investigations have shown that in the local approximation (for sizes L < 100 μm), AlGaN/GaN high electron mobility transistor (HEMT) structures satisfy all properties of chaotic systems and can be described in the language of fractal geometry of fractional dimensions. For such objects, the values of their electrophysical characteristics depend on the linear sizes of the examined regions, which explains the presence of the so-called size effects: dependences of the electrophysical and instrumental characteristics on the linear sizes of the active elements of semiconductor devices. In the present work, a relationship has been established for the linear model parameters of the equivalent circuit elements of internal transistors with the fractal geometry of the heteroepitaxial structure, manifested through a dependence of its relative electrophysical characteristics on the linear sizes of the examined surface areas. For the HEMTs, this implies dependences of their relative static (A/mm, mA/V/mm, Ω/mm, etc.) and microwave characteristics (W/mm) on the width d of the drain-source channel and on the number of sections n, which leads to a nonlinear dependence of the retrieved parameter values of equivalent circuit elements of linear internal transistor models on n and d. Thus, it has been demonstrated that the size effects in semiconductors determined by the fractal geometry must be taken into account when investigating the properties of semiconductor objects below the local approximation limit and when designing and manufacturing field effect transistors. In general, the suggested approach allows a complex of problems to be solved in designing, optimizing, and retrieving the parameters of equivalent circuits of linear and nonlinear models of not only field effect transistors but also arbitrary semiconductor devices with nonlinear instrumental characteristics.
Ortiz, Rocío; Antilén, Mónica; Speisky, Hernán; Aliaga, Margarita E; López-Alarcón, Camilo; Baugh, Steve
2012-01-01
A method was developed for microplate-based oxygen radical absorbance capacity (ORAC) using pyrogallol red (PGR) as probe (ORAC-PGR). The method was evaluated for linearity, precision, and accuracy. In addition, the antioxidant capacity of commercial beverages, such as wines, fruit juices, and iced teas, was measured. Linearity of the area under the curve (AUC) versus Trolox concentration plots was [AUC = (845 +/- 110) + (23 +/- 2) [Trolox, microM]; R = 0.9961, n = 19]. Analyses showed better precision and accuracy at the highest Trolox concentration (40 microM) with RSD and recovery (REC) values of 1.7 and 101.0%, respectively. The method also showed good linearity for red wine [AUC = (787 +/- 77) + (690 +/- 60) [red wine, microL/mL]; R = 0.9926, n = 17], precision and accuracy with RSD values from 1.4 to 8.3%, and REC values that ranged from 89.7 to 103.8%. Red wines showed higher ORAC-PGR values than white wines, while the ORAC-PGR index of fruit juices and iced teas presented a wide range of results, from 0.6 to 21.6 mM of Trolox equivalents. Product-to-product variability was also observed for juices of the same fruit, showing the differences between brands on the ORAC-PGR index.
NASA Astrophysics Data System (ADS)
Fujibuchi, Toshioh; Kodaira, Satoshi; Sawaguchi, Fumiya; Abe, Yasuyuki; Obara, Satoshi; Yamaguchi, Masae; Kawashima, Hajime; Kitamura, Hisashi; Kurano, Mieko; Uchihori, Yukio; Yasuda, Nakahiro; Koguchi, Yasuhiro; Nakajima, Masaru; Kitamura, Nozomi; Sato, Tomoharu
2015-04-01
We measured the recoil charged particles from secondary neutrons produced by the photonuclear reaction in a water phantom from a 10-MV photon beam from medical linacs. The absorbed dose and the dose equivalent were evaluated from the linear energy transfer (LET) spectrum of recoils using the CR-39 plastic nuclear track detector (PNTD) based on well-established methods in the field of space radiation dosimetry. The contributions and spatial distributions of these in the phantom on nominal photon exposures were verified as the secondary neutron dose and neutron dose equivalent. The neutron dose equivalent normalized to the photon-absorbed dose was 0.261 mSv/100 MU at source to chamber distance 90 cm. The dose equivalent at the surface gave the highest value, and was attenuated to less than 10% at 5 cm from the surface. The dose contribution of the high LET component of ⩾100 keV/μm increased with the depth in water, resulting in an increase of the quality factor. The CR-39 PNTD is a powerful tool that can be used to systematically measure secondary neutron dose distributions in a water phantom from an in-field to out-of-field high-intensity photon beam.
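The conversion from a measured LET spectrum to dose equivalent is H = Σᵢ Q(Lᵢ)·Dᵢ. A sketch using the ICRP 60 quality factor Q(L) is shown below; the spectrum values are invented placeholders, not the paper's measurements:

```python
import numpy as np

def q_factor(let):
    """ICRP 60 quality factor Q(L), with L in keV/um."""
    if let < 10.0:
        return 1.0
    if let <= 100.0:
        return 0.32 * let - 2.2
    return 300.0 / np.sqrt(let)

def dose_equivalent(let_bins, absorbed_dose_per_bin):
    """Dose equivalent H = sum_i Q(L_i) * D_i from a binned LET spectrum."""
    return sum(q_factor(l) * d for l, d in zip(let_bins, absorbed_dose_per_bin))

# Illustrative (made-up) spectrum: dose concentrated at low LET with a small
# high-LET tail, mimicking recoil products registered by a CR-39 PNTD.
let = [5.0, 20.0, 80.0, 150.0]             # keV/um
dose = [0.8e-3, 0.1e-3, 0.05e-3, 0.05e-3]  # Gy
print(dose_equivalent(let, dose), "Sv")
```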
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-12
... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...
Identification of Synchronous Machine Stability - Parameters: AN On-Line Time-Domain Approach.
NASA Astrophysics Data System (ADS)
Le, Loc Xuan
1987-09-01
A time-domain modeling approach is described which enables the stability-study parameters of the synchronous machine to be determined directly from input-output data measured at the terminals of the machine operating under normal conditions. The transient responses due to system perturbations are used to identify the parameters of the equivalent circuit models. The described models are verified by comparing their responses with the machine responses generated from the transient stability models of a small three-generator multi-bus power system and of a single -machine infinite-bus power network. The least-squares method is used for the solution of the model parameters. As a precaution against ill-conditioned problems, the singular value decomposition (SVD) is employed for its inherent numerical stability. In order to identify the equivalent-circuit parameters uniquely, the solution of a linear optimization problem with non-linear constraints is required. Here, the SVD appears to offer a simple solution to this otherwise difficult problem. Furthermore, the SVD yields solutions with small bias and, therefore, physically meaningful parameters even in the presence of noise in the data. The question concerning the need for a more advanced model of the synchronous machine which describes subtransient and even sub-subtransient behavior is dealt with sensibly by the concept of condition number. The concept provides a quantitative measure for determining whether such an advanced model is indeed necessary. Finally, the recursive SVD algorithm is described for real-time parameter identification and tracking of slowly time-variant parameters. The algorithm is applied to identify the dynamic equivalent power system model.
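A minimal sketch of the SVD route to a least-squares parameter solution, with small singular values truncated for numerical stability; the regression matrix and noise level are invented for illustration, not machine data:

```python
import numpy as np

def svd_least_squares(A, b, rcond=1e-10):
    """Least-squares solution x of A x ~ b via the SVD, discarding singular
    values below rcond * s_max; this is the numerically stable route for
    ill-conditioned identification problems."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]
    # Pseudo-inverse restricted to the well-conditioned subspace
    x = Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
    cond = s[0] / s[keep][-1]  # condition number of the retained problem
    return x, cond

# Hypothetical terminal-measurement regression: 100 samples, 4 parameters
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 4))
x_true = np.array([1.5, -0.4, 0.0, 2.0])
b = A @ x_true + 0.01 * rng.normal(size=100)
x_hat, cond = svd_least_squares(A, b)
print(x_hat, cond)
```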
Farney, Robert J.; Walker, Brandon S.; Farney, Robert M.; Snow, Gregory L.; Walker, James M.
2011-01-01
Background: Various models and questionnaires have been developed for screening specific populations for obstructive sleep apnea (OSA) as defined by the apnea/hypopnea index (AHI); however, almost every method is based upon dichotomizing a population, and none function ideally. We evaluated the possibility of using the STOP-Bang model (SBM) to classify severity of OSA into 4 categories ranging from none to severe. Methods: Anthropomorphic data and the presence of snoring, tiredness/sleepiness, observed apneas, and hypertension were collected from 1426 patients who underwent diagnostic polysomnography. Questionnaire data for each patient was converted to the STOP-Bang equivalent with an ordinal rating of 0 to 8. Proportional odds logistic regression analysis was conducted to predict severity of sleep apnea based upon the AHI: none (AHI < 5/h), mild (AHI ≥ 5 to < 15/h), moderate (≥ 15 to < 30/h), and severe (AHI ≥ 30/h). Results: Linear, curvilinear, and weighted models (R2 = 0.245, 0.251, and 0.269, respectively) were developed that predicted AHI severity. The linear model showed a progressive increase in the probability of severe (4.4% to 81.9%) and progressive decrease in the probability of none (52.5% to 1.1%). The probability of mild or moderate OSA initially increased from 32.9% and 10.3% respectively (SBM score 0) to 39.3% (SBM score 2) and 31.8% (SBM score 4), after which there was a progressive decrease in probabilities as more patients fell into the severe category. Conclusions: The STOP-Bang model may be useful to categorize OSA severity, triage patients for diagnostic evaluation or exclude from harm. Citation: Farney RJ; Walker BS; Farney RM; Snow GL; Walker JM. The STOP-Bang equivalent model and prediction of severity of obstructive sleep apnea: relation to polysomnographic measurements of the apnea/hypopnea index. J Clin Sleep Med 2011;7(5):459-465. PMID:22003340
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keck, B D; Ognibene, T; Vogel, J S
2010-02-05
Accelerator mass spectrometry (AMS) is an isotope-based measurement technology that utilizes carbon-14 labeled compounds in the pharmaceutical development process to measure compounds at very low concentrations, empowers microdosing as an investigational tool, and extends the utility of ¹⁴C labeled compounds to dramatically lower levels. It is a form of isotope ratio mass spectrometry that can provide either measurements of total compound equivalents or, when coupled to separation technology such as chromatography, quantitation of specific compounds. The properties of AMS as a measurement technique are investigated here, and the parameters of method validation are shown. AMS, independent of any separation technique to which it may be coupled, is shown to be accurate, linear, precise, and robust. As the sensitivity and universality of AMS is constantly being explored and expanded, this work underpins many areas of pharmaceutical development including drug metabolism as well as absorption, distribution and excretion of pharmaceutical compounds as a fundamental step in drug development. The validation parameters for pharmaceutical analyses were examined for the accelerator mass spectrometry measurement of the ¹⁴C/C ratio, independent of chemical separation procedures. The isotope ratio measurement was specific (owing to the ¹⁴C label), stable across sample storage conditions for at least one year, and linear over 4 orders of magnitude with an analytical range from one tenth Modern to at least 2000 Modern (instrument specific). Further, accuracy was excellent, between 1 and 3 percent, while precision, expressed as coefficient of variation, was between 1 and 6%, determined primarily by radiocarbon content and the time spent analyzing a sample. Sensitivity, expressed as LOD and LLOQ, was 1 and 10 attomoles of carbon-14 (which can be expressed as compound equivalents); for a typical small molecule labeled at 10% incorporation with ¹⁴C this corresponds to 30 fg equivalents. AMS provides a sensitive, accurate, and precise method of measuring drug compounds in biological matrices.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-11
... Methods: Designation of a New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method for monitoring ambient air quality. SUMMARY: Notice is... part 53, a new equivalent method for measuring concentrations of PM 2.5 in the ambient air. FOR FURTHER...
NASA Astrophysics Data System (ADS)
Guo, Mengchao; Zhou, Kan; Wang, Xiaokun; Zhuang, Haiyan; Tang, Dongming; Zhang, Baoshan; Yang, Yi
2018-04-01
In this paper, the impact of coupling between unit cells on the performance of linear-to-circular polarization conversion metamaterial with half transmission and half reflection is analyzed by changing the distance between the unit cells. An equivalent electrical circuit model is then built to explain it based on the analysis. The simulated results show that, when the distance between the unit cells is 23 mm, this metamaterial converts half of the incident linearly-polarized wave into reflected left-hand circularly-polarized wave and converts the other half of it into transmitted left-hand circularly-polarized wave at 4.4 GHz; when the distance is 28 mm, this metamaterial reflects all of the incident linearly-polarized wave at 4.4 GHz; and when the distance is 32 mm, this metamaterial converts half of the incident linearly-polarized wave into reflected right-hand circularly-polarized wave and converts the other half of it into transmitted right-hand circularly-polarized wave at 4.4 GHz. The tunability is realized successfully. The analysis shows that changes in the coupling between unit cells lead to changes in the performance of this metamaterial. The coupling between the unit cells is then considered when building the equivalent electrical circuit model. The built equivalent electrical circuit model can be used to perfectly explain the simulated results, which confirms its validity. It can also aid the design of tunable polarization conversion metamaterials.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-03
... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR part 53, one new equivalent method for measuring concentrations of lead (Pb) in total...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-07
... Monitoring Reference and Equivalent Methods; Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR Part 53, one new equivalent method for measuring concentrations of ozone (O 3 ) in the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-18
... Monitoring Reference and Equivalent Methods: Designation of Two New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of two new equivalent methods for monitoring ambient air... accordance with 40 CFR Part 53, two new equivalent methods for measuring concentrations of PM 10 and sulfur...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-27
... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR Part 53, one new equivalent method for measuring concentrations of ozone (O 3 ) in the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-28
... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR Part 53, one new equivalent method for measuring concentrations of lead (Pb) in total...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-04
... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR part 53, one new equivalent method for measuring concentrations of lead (Pb) in total...
Design and Analysis of Tubular Permanent Magnet Linear Wave Generator
Si, Jikai; Feng, Haichao; Su, Peng; Zhang, Lufeng
2014-01-01
Due to the lack of mature design program for the tubular permanent magnet linear wave generator (TPMLWG) and poor sinusoidal characteristics of the air gap flux density for the traditional surface-mounted TPMLWG, a design method and a new secondary structure of TPMLWG are proposed. An equivalent mathematical model of TPMLWG is established to adopt the transformation relationship between the linear velocity of permanent magnet rotary generator and the operating speed of TPMLWG, to determine the structure parameters of the TPMLWG. The new secondary structure of the TPMLWG contains surface-mounted permanent magnets and the interior permanent magnets, which form a series-parallel hybrid magnetic circuit, and their reasonable structure parameters are designed to get the optimum pole-arc coefficient. The electromagnetic field and temperature field of TPMLWG are analyzed using finite element method. It can be concluded that the sinusoidal characteristics of air gap flux density of the new secondary structure TPMLWG are improved, the cogging force as well as mechanical vibration is reduced in the process of operation, and the stable temperature rise of the generator meets the design requirements when adopting the new secondary structure of the TPMLWG. PMID:25050388
A nonlinear Kalman filtering approach to embedded control of turbocharged diesel engines
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Arsie, Ivan
2014-10-01
The development of efficient embedded control for turbocharged Diesel engines requires the implementation of elaborate nonlinear control and filtering methods. To this end, in this paper nonlinear control for turbocharged Diesel engines is developed with the use of differential flatness theory and the derivative-free nonlinear Kalman filter. It is shown that the dynamic model of the turbocharged Diesel engine is differentially flat and admits dynamic feedback linearization. It is also shown that the dynamic model can be written in the linear Brunovsky canonical form, for which a state feedback controller can be easily designed. To compensate for modeling errors and external disturbances the derivative-free nonlinear Kalman filter is used and redesigned as a disturbance observer. The filter consists of the Kalman filter recursion on the linearized equivalent of the Diesel engine model and of an inverse transformation based on differential flatness theory which enables estimates of the state variables of the initial nonlinear model to be obtained. Once the disturbance variables are identified, it is possible to compensate for them by including an additional control term in the feedback loop. The efficiency of the proposed control method is tested through simulation experiments.
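A minimal sketch of the disturbance-observer idea on a toy flat system (a double integrator in Brunovsky form with an unknown constant disturbance); the plant, gains, and noise levels are all assumptions for illustration, not the Diesel engine model:

```python
import numpy as np

# Toy system x1' = x2, x2' = u + d, with d an unknown disturbance. The
# disturbance is appended to the state and estimated by a linear Kalman
# filter on the linearized (here: exactly linear) equivalent, then
# cancelled in the feedback law.
dt = 0.01
F = np.array([[1, dt, 0], [0, 1, dt], [0, 0, 1]])  # states x1, x2, d
B = np.array([[0.0], [dt], [0.0]])
H = np.array([[1.0, 0.0, 0.0]])                    # only x1 is measured
Qn = np.diag([1e-8, 1e-8, 1e-4])                   # let d wander slowly
Rn = np.array([[1e-4]])

K_fb = np.array([100.0, 20.0])                     # nominal state feedback
x = np.array([1.0, 0.0, 0.5])                      # true [x1, x2, d], d unknown
xh = np.zeros(3)                                   # filter state estimate
P = np.eye(3)
rng = np.random.default_rng(1)

for _ in range(2000):
    u = -K_fb @ xh[:2] - xh[2]                     # cancel estimated disturbance
    x = F @ x + B.flatten() * u                    # plant step (d enters via F)
    z = H @ x + rng.normal(scale=1e-2, size=1)     # noisy position measurement
    xh = F @ xh + B.flatten() * u                  # Kalman predict
    P = F @ P @ F.T + Qn
    S = H @ P @ H.T + Rn                           # Kalman update
    Kg = P @ H.T @ np.linalg.inv(S)
    xh = xh + (Kg @ (z - H @ xh)).flatten()
    P = (np.eye(3) - Kg @ H) @ P

print("final position %.4f, disturbance estimate %.3f" % (x[0], xh[2]))
```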
NASA Astrophysics Data System (ADS)
Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2018-06-01
In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient called costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, so called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
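For concreteness, the value-iteration (VI-mode) special case of these recursions for a discrete-time LQR problem can be sketched as follows; the system matrices are arbitrary examples, not from the paper:

```python
import numpy as np

def lqr_value_iteration(A, B, Q, R, iters=200):
    """Value-iteration recursion for discrete-time LQR:
    P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA, starting from P = 0, which
    converges monotonically upward to the Riccati solution. This is a
    one-step special case of the multirate GPI schemes discussed above."""
    P = np.zeros_like(Q)
    for _ in range(iters):
        G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ G)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return P, K

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
P, K = lqr_value_iteration(A, B, Q=np.eye(2), R=np.array([[1.0]]))
print(P)
print(K)
```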
Singular optimal control and the identically non-regular problem in the calculus of variations
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.
1985-01-01
A small but interesting class of optimal control problems featuring a scalar control appearing linearly is equivalent to the class of identically nonregular problems in the Calculus of Variations. It is shown that a condition due to Mancill (1950) is equivalent to the generalized Legendre-Clebsch condition for this narrow class of problems.
A Complete Multimode Equivalent-Circuit Theory for Electrical Design
Williams, Dylan F.; Hayden, Leonard A.; Marks, Roger B.
1997-01-01
This work presents a complete equivalent-circuit theory for lossy multimode transmission lines. Its voltages and currents are based on general linear combinations of standard normalized modal voltages and currents. The theory includes new expressions for transmission line impedance matrices, symmetry and lossless conditions, source representations, and the thermal noise of passive multiports. PMID:27805153
NASA Astrophysics Data System (ADS)
Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry
2018-06-01
This paper deals with the initialization of a nonlinear identification algorithm used to accurately estimate the physical parameters of a lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery. The diffusion phenomenon related to this modelling is represented using a fractional-order method. The battery model is thus reformulated into a transfer function which can be identified through the Levenberg-Marquardt algorithm to ensure the algorithm's convergence to the physical parameters. An initialization method is proposed in this paper by taking into account previously acquired information about the static and dynamic system behaviour. The method is validated using a noisy voltage response, while the precision of the final identification results is evaluated using the Monte-Carlo method.
NASA Technical Reports Server (NTRS)
Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, S. J.
2011-01-01
The neural mechanisms to resolve ambiguous tilt-translation motion have been hypothesized to be different for motion perception and eye movements. Previous studies have demonstrated differences in ocular and perceptual responses using a variety of motion paradigms, including Off-Vertical Axis Rotation (OVAR), Variable Radius Centrifugation (VRC), translation along a linear track, and tilt about an Earth-horizontal axis. While the linear acceleration across these motion paradigms is presumably equivalent, there are important differences in semicircular canal cues. The purpose of this study was to compare translation motion perception and horizontal slow phase velocity to quantify consistencies, or lack thereof, across four different motion paradigms. Twelve healthy subjects were exposed to sinusoidal interaural linear acceleration between 0.01 and 0.6 Hz at 1.7 m/s² (equivalent to 10° tilt) using OVAR, VRC, roll tilt, and lateral translation. During each trial, subjects verbally reported the amount of perceived peak-to-peak lateral translation and indicated the direction of motion with a joystick. Binocular eye movements were recorded using video-oculography. In general, the gain of translation perception (ratio of reported linear displacement to equivalent linear stimulus displacement) increased with stimulus frequency, while the phase did not significantly vary. However, translation perception was more pronounced during both VRC and lateral translation involving actual translation, whereas perceptions were less consistent and more variable during OVAR and roll tilt, which did not involve actual translation. For each motion paradigm, horizontal eye movements were negligible at low frequencies and showed phase lead relative to the linear stimulus. At higher frequencies, the gain of the eye movements increased and became more in phase with the acceleration stimulus. While these results are consistent with the hypothesis that the neural computational strategies for motion perception and eye movements differ, they also indicate that the specific motion platform employed can have a significant effect on both the amplitude and phase of each response.
Fast calculation of the `ILC norm' in iterative learning control
NASA Astrophysics Data System (ADS)
Rice, Justin K.; van Wingerden, Jan-Willem
2013-06-01
In this paper, we discuss and demonstrate a method for the exploitation of matrix structure in computations for iterative learning control (ILC). In Barton, Bristow, and Alleyne [International Journal of Control, 83(2), 1-8 (2010)], a special insight into the structure of the lifted convolution matrices involved in ILC is used along with a modified Lanczos method to achieve very fast computational bounds on the learning convergence, by calculating the 'ILC norm' in ? computational complexity. In this paper, we show how their method is equivalent to a special instance of the sequentially semi-separable (SSS) matrix arithmetic, and thus can be extended to many other computations in ILC, and specialised in some cases to even faster methods. Our SSS-based methodology will be demonstrated on two examples: a linear time-varying example resulting in the same ? complexity as in Barton et al., and a linear time-invariant example where our approach reduces the computational complexity to ?, thus decreasing the computation time, for an example from the literature, by a factor of almost 100. This improvement is achieved by transforming the norm computation via a linear matrix inequality into a check of positive definiteness, which allows us to further exploit the almost-Toeplitz properties of the matrix, and additionally provides explicit upper and lower bounds on the norm of the matrix, instead of the indirect Ritz estimate. These methods are now implemented in a MATLAB toolbox, freely available on the Internet.
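The norm-to-definiteness reduction mentioned at the end rests on the fact that ||A||_2 < gamma exactly when gamma^2 I - A'A is positive definite, so bisection on gamma with a Cholesky test yields explicit upper and lower bounds. A dense sketch of that idea (without the Toeplitz/SSS structure exploitation that gives the paper its speed):

```python
import numpy as np

def is_pd(M):
    """Positive definiteness via an attempted Cholesky factorization."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

def norm_by_bisection(A, tol=1e-8):
    """Bounds on ||A||_2 by bisection on gamma, using the fact that
    ||A||_2 < gamma iff gamma^2 I - A'A is positive definite."""
    n = A.shape[1]
    # ||A||_2 <= sqrt(||A||_1 * ||A||_inf) gives a safe starting upper bound
    hi = np.sqrt(np.abs(A).sum(axis=0).max() * np.abs(A).sum(axis=1).max())
    lo = 0.0
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        if is_pd(mid**2 * np.eye(n) - A.T @ A):
            hi = mid  # mid is an upper bound on the norm
        else:
            lo = mid
    return lo, hi

A = np.random.default_rng(2).normal(size=(50, 50))
lo, hi = norm_by_bisection(A)
print(lo, hi, np.linalg.norm(A, 2))
```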
Study on Standard Fatigue Vehicle Load Model
NASA Astrophysics Data System (ADS)
Huang, H. Y.; Zhang, J. P.; Li, Y. H.
2018-02-01
Based on measured truck data from three arterial expressways in Guangdong Province, a statistical analysis of truck weight was conducted according to axle number. A standard fatigue vehicle model, applicable to regions in the middle and late stages of industrialization, was obtained using the equivalent damage principle, Miner's linear accumulation law, the water discharge method, and damage ratio theory. Compared with the fatigue vehicle model specified by the current bridge design code, the proposed model has better applicability. It provides a useful reference for the fatigue design of bridges in China.
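Miner's linear accumulation law reduces a measured load spectrum to a damage-equivalent constant-amplitude load, L_eq = (sum n_i L_i^m / sum n_i)^(1/m). A sketch with an assumed S-N exponent and an invented truck-weight spectrum (the paper's exact procedure and constants may differ):

```python
def equivalent_fatigue_load(loads, counts, m=3.0):
    """Damage-equivalent constant-amplitude load by Miner's linear
    accumulation: L_eq = (sum n_i * L_i**m / sum n_i)**(1/m); m = 3 is a
    common assumption for welded steel bridge details."""
    n_total = sum(counts)
    return (sum(n * L**m for L, n in zip(loads, counts)) / n_total) ** (1.0 / m)

# Illustrative load spectrum (kN) with observed cycle counts per class
axle_loads = [60.0, 100.0, 140.0, 180.0]
n_cycles = [5000, 3000, 1500, 500]
print(equivalent_fatigue_load(axle_loads, n_cycles))
```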
On the existence of mosaic-skeleton approximations for discrete analogues of integral operators
NASA Astrophysics Data System (ADS)
Kashirin, A. A.; Taltykina, M. Yu.
2017-09-01
Exterior three-dimensional Dirichlet problems for the Laplace and Helmholtz equations are considered. By applying methods of potential theory, they are reduced to equivalent Fredholm boundary integral equations of the first kind, for which discrete analogues, i.e., systems of linear algebraic equations (SLAEs), are constructed. The existence of mosaic-skeleton approximations for the matrices of the indicated systems is proved. These approximations make it possible to reduce the computational complexity of an iterative solution of the SLAEs. Numerical experiments estimating the capabilities of the proposed approach are described.
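The elementary operation behind a mosaic-skeleton approximation is compressing an admissible (well-separated) block of the discretized integral operator to low rank. A truncated-SVD sketch on a smooth 1/|x - y| kernel, with an invented discretization:

```python
import numpy as np

def low_rank_block(block, tol=1e-6):
    """Compress one off-diagonal block to low rank with a truncated SVD.
    Returns factors U, V with block ~ U @ V and the achieved rank."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return U[:, :r] * s[:r], Vt[:r], r

# Discrete single-layer-type interaction between two well-separated clusters
x = np.linspace(0.0, 1.0, 200)[:, None]  # sources
y = np.linspace(5.0, 6.0, 200)[None, :]  # targets, far from the sources
block = 1.0 / np.abs(x - y)              # smooth kernel => fast singular decay
U, V, r = low_rank_block(block)
print("rank", r, "relative error",
      np.linalg.norm(U @ V - block) / np.linalg.norm(block))
```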
Independent contrasts and PGLS regression estimators are equivalent.
Blomberg, Simon P; Lefevre, James G; Wells, Jessie A; Waterhouse, Mary
2012-05-01
We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of the method of generalized least squares (GLSs) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether or not and how to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is from both frequentist and Bayesian perspectives.
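The stated equivalence is easy to verify numerically. The sketch below does so on a hand-built 3-taxon tree ((A:1,B:1):1,C:2), where both the Brownian-motion tip covariance and Felsenstein's standardized contrasts are available in closed form; the trait values are arbitrary:

```python
import numpy as np

# Tip covariance under Brownian motion = shared path length to the root
V = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
x = np.array([1.0, 2.0, 4.0])  # explanatory trait at tips A, B, C
y = np.array([1.5, 2.1, 3.9])  # response trait

def pics(t):
    """Standardized contrasts for the tree ((A:1,B:1):1, C:2)."""
    c1 = (t[0] - t[1]) / np.sqrt(1.0 + 1.0)
    anc = 0.5 * (t[0] + t[1])                # ancestral value of (A,B)
    v_anc = 1.0 + (1.0 * 1.0) / (1.0 + 1.0)  # branch lengthened by v1*v2/(v1+v2)
    c2 = (anc - t[2]) / np.sqrt(v_anc + 2.0)
    return np.array([c1, c2])

cx, cy = pics(x), pics(y)
slope_pic = (cx @ cy) / (cx @ cx)            # OLS through the origin

X = np.column_stack([np.ones(3), x])         # GLS with intercept
Vi = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
print(slope_pic, beta[1])                    # identical slopes
```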
NASA Astrophysics Data System (ADS)
Zhang, Xi; Lu, Jinling; Yuan, Shifei; Yang, Jun; Zhou, Xuan
2017-03-01
This paper proposes a novel parameter identification method for the lithium-ion (Li-ion) battery equivalent circuit model (ECM) considering the electrochemical properties. An improved pseudo two-dimension (P2D) model is established on basis of partial differential equations (PDEs), since the electrolyte potential is simplified from the nonlinear to linear expression while terminal voltage can be divided into the electrolyte potential, open circuit voltage (OCV), overpotential of electrodes, internal resistance drop, and so on. The model order reduction process is implemented by the simplification of the PDEs using the Laplace transform, inverse Laplace transform, Pade approximation, etc. A unified second order transfer function between cell voltage and current is obtained for the comparability with that of ECM. The final objective is to obtain the relationship between the ECM resistances/capacitances and electrochemical parameters such that in various conditions, ECM precision could be improved regarding integration of battery interior properties for further applications, e.g., SOC estimation. Finally simulation and experimental results prove the correctness and validity of the proposed methodology.
Linear and nonlinear equivalent circuit modeling of CMUTs.
Lohfink, Annette; Eccardt, Peter-Christian
2005-12-01
Using piston radiator and plate capacitance theory, capacitive micromachined ultrasound transducer (CMUT) membrane cells can be described by one-dimensional (1-D) model parameters. This paper describes in detail a new method, which derives a 1-D model for CMUT arrays from finite-element methods (FEM) simulations. A few static and harmonic FEM analyses of a single CMUT membrane cell are sufficient to derive the mechanical and electrical parameters of an equivalent piston as the moving part of the cell area. For an array of parallel-driven cells, the acoustic parameters are derived as a complex mechanical fluid impedance, depending on the membrane shape form. As a main advantage, the nonlinear behavior of the CMUT can be investigated much easier and faster compared to FEM simulations, e.g., for a design of the maximum applicable voltage depending on the input signal. The 1-D parameter model allows an easy description of the CMUT behavior in air and fluids and simplifies the investigation of wave propagation within the connecting fluid represented by FEM or transmission line matrix (TLM) models.
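Once the FEM reduction has produced lumped parameters, the cell is just a driven oscillator loaded by a fluid impedance. A sketch of the resulting 1-D frequency response, with placeholder parameter values rather than CMUT data:

```python
import numpy as np

# Lumped equivalent-piston parameters (all values are illustrative)
m, k, b = 1e-10, 1e3, 1e-7  # mass [kg], stiffness [N/m], damping [N.s/m]
Z_fluid = 1.5e-6            # lumped radiation resistance (illustrative)

f = np.linspace(1e5, 1e7, 2000)  # Hz
w = 2 * np.pi * f
# Mechanical impedance of the piston plus fluid loading
Z = b + Z_fluid + 1j * (w * m - k / w)
F0 = 1e-6                   # drive force amplitude, N
velocity = F0 / Z           # piston velocity per unit drive
f_res = f[np.argmax(np.abs(velocity))]
print("resonance near %.2f MHz" % (f_res / 1e6))
```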
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.
1998-01-01
This paper contains a study of two methods for use in a generic nonlinear simulation tool that could be used to determine achievable control dynamics and control power requirements while performing perfect tracking maneuvers over the entire flight envelope. The two methods are NDI (nonlinear dynamic inversion) and the SOFFT(Stochastic Optimal Feedforward and Feedback Technology) feedforward control structure. Equivalent discrete and continuous SOFFT feedforward controllers have been developed. These equivalent forms clearly show that the closed-loop plant model loop is a plant inversion and is the same as the NDI formulation. The main difference is that the NDI formulation has a closed-loop controller structure whereas SOFFT uses an open-loop command model. Continuous, discrete, and hybrid controller structures have been developed and integrated into the formulation. Linear simulation results show that seven different configurations all give essentially the same response, with the NDI hybrid being slightly different. The SOFFT controller gave better tracking performance compared to the NDI controller when a nonlinear saturation element was added. Future plans include evaluation using a nonlinear simulation.
Chen, Rui; Hyrien, Ollivier
2011-01-01
This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulations studies are conducted to evaluate performance in finite samples. PMID:21552356
Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.
2013-01-01
Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
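The algebraic equivalence invoked here can be seen in the constant-hazard case, where both likelihoods yield the same closed-form rate estimate (events divided by total exposure time); a numerical check:

```python
import numpy as np

rng = np.random.default_rng(3)
rate_true = 0.8
times = rng.exponential(1.0 / rate_true, size=500)  # fully observed sojourns
events = np.ones_like(times)                         # each ends in a transition

# Exponential survival MLE of the rate
rate_surv = events.sum() / times.sum()

# Poisson regression with intercept only and offset log(t):
# log E[count_i] = beta + log(t_i)  =>  beta_hat = log(sum counts / sum t)
beta_hat = np.log(events.sum() / times.sum())
rate_pois = np.exp(beta_hat)

print(rate_surv, rate_pois)  # identical, as the re-derivation implies
```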
Linear strain sensor made of multi-walled carbon nanotube/epoxy composite
NASA Astrophysics Data System (ADS)
Tong, Shuying; Yuan, Weifeng; Liu, Haidong; Alamusi; Hu, Ning; Zhao, Chaoyang; Zhao, Yangzhou
2017-11-01
In this study, a fabrication process was developed to make multi-walled carbon nanotube/epoxy (MWCNT/EP) composite films. The electrical-strain behaviour of the films was tested in both direct and alternating current circuits. It is found that the direct current resistance and the dielectric loss tangent of the MWCNT/EP composite films depend on the strain and the weight fraction of the carbon nanotubes. In an alternating current circuit, the test frequency affects the impedance and the phase angle of the composite film, but it does not affect the change ratio of the dielectric loss tangent of the film in tension. This phenomenon can be interpreted by a proposed equivalent circuit model. Experimental results show that the change rate of the dielectric loss tangent of the MWCNT/EP sensor is linearly proportional to the strain. The findings obtained in the present study provide a promising method to develop ultrasensitive linear strain gauges.
NASA Astrophysics Data System (ADS)
Lipscomb, Dawn; Echchgadda, Ibtissam; Peralta, Xomalin G.; Wilmink, Gerald J.
2013-02-01
Terahertz time-domain spectroscopy (THz-TDS) methods have been utilized in previous studies in order to characterize the optical properties of skin and its primary constituents (i.e., water, collagen, and keratin). However, similar experiments have not yet been performed to investigate whether melanocytes and the melanin pigment that they synthesize contribute to skin's optical properties. In this study, we used THz-TDS methods operating in transmission geometry to measure the optical properties of in vitro human skin equivalents with or without normal human melanocytes. Skin equivalents were cultured for three weeks to promote gradual melanogenesis, and THz time domain data were collected at various time intervals. Frequency-domain analysis techniques were performed to determine the index of refraction (n) and absorption coefficient (μa) for each skin sample over the frequency range of 0.1-2.0 THz. We found that for all samples as frequency increased, n decreased exponentially and the μa increased linearly. Additionally, we observed that skin samples with higher levels of melanin exhibited greater n and μa values than the non-pigmented samples. Our results indicate that melanocytes and the degree of melanin pigmentation contribute in an appreciable manner to the skin's optical properties. Future studies will be performed to examine whether these contributions are observed in human skin in vivo.
Limberg, Brian J; Johnstone, Kevin; Filloon, Thomas; Catrenich, Carl
2016-09-01
Using United States Pharmacopeia-National Formulary (USP-NF) general method <1223> guidance, the Soleris(®) automated system and reagents (Nonfermenting Total Viable Count for bacteria and Direct Yeast and Mold for yeast and mold) were validated, using a performance equivalence approach, as an alternative to plate counting for total microbial content analysis using five representative microbes: Staphylococcus aureus, Bacillus subtilis, Pseudomonas aeruginosa, Candida albicans, and Aspergillus brasiliensis. Detection times (DTs) in the alternative automated system were linearly correlated to CFU/sample (R(2) = 0.94-0.97) with ≥70% accuracy per USP General Chapter <1223> guidance. The LOD and LOQ of the automated system were statistically similar to the traditional plate count method. This system was significantly more precise than plate counting (RSD 1.2-2.9% for DT, 7.8-40.6% for plate counts), was statistically comparable to plate counting with respect to variations in analyst, vial lots, and instruments, and was robust when variations in the operating detection thresholds (dTs; ±2 units) were used. The automated system produced accurate results, was more precise and less labor-intensive, and met or exceeded criteria for a valid alternative quantitative method, consistent with USP-NF general method <1223> guidance.
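The linear detection-time versus log10(CFU) relationship is what makes such an automated system quantitative: a calibration line fitted on spiked standards can be inverted to read counts from detection times. A sketch with invented calibration data, not instrument output:

```python
import numpy as np

# Hypothetical spiked standards: detection time falls linearly in log10(CFU)
log_cfu = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
dt_hours = np.array([18.1, 15.0, 11.9, 9.1, 6.0])
b, a = np.polyfit(log_cfu, dt_hours, 1)  # slope, intercept of DT = a + b*log10(CFU)

def cfu_from_dt(dt):
    """Invert the calibration line to estimate CFU from a detection time."""
    return 10.0 ** ((dt - a) / b)

print("estimated CFU at DT = 10 h: %.0f" % cfu_from_dt(10.0))
```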
Bonmati, Ester; Hu, Yipeng; Sindhwani, Nikhil; Dietz, Hans Peter; D'hooge, Jan; Barratt, Dean; Deprest, Jan; Vercauteren, Tom
2018-04-01
Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics, which are of importance for pelvic floor disorder assessment. We present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a two-dimensional image extracted from a three-dimensional ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalizing activation function, which for the first time has been applied in medical imaging with CNN. SELU has important advantages such as being parameter-free and mini-batch independent, which may help to overcome memory constraints during training. A dataset with 91 images from 35 patients during Valsalva, contraction, and rest, all labeled by three operators, is used for training and evaluation in a leave-one-patient-out cross validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with equivalent performance to the three operators (with a Williams' index of 1.03), and outperforming a U-Net architecture without the need for batch normalization. We conclude that the proposed fully automatic method achieved equivalent accuracy in segmenting the pelvic floor levator hiatus compared to a previous semiautomatic approach.
Navarrete-Benlloch, Carlos; Roldán, Eugenio; Chang, Yue; Shi, Tao
2014-10-06
Nonlinear optical cavities are crucial both in classical and quantum optics; in particular, nowadays optical parametric oscillators are one of the most versatile and tunable sources of coherent light, as well as the sources of the highest quality quantum-correlated light in the continuous variable regime. Being nonlinear systems, they can be driven through critical points in which a solution ceases to exist in favour of a new one, and it is close to these points where quantum correlations are the strongest. The simplest description of such systems consists in writing the quantum fields as the classical part plus some quantum fluctuations, linearizing then the dynamical equations with respect to the latter; however, such an approach breaks down close to critical points, where it provides unphysical predictions such as infinite photon numbers. On the other hand, techniques going beyond the simple linear description become too complicated especially regarding the evaluation of two-time correlators, which are of major importance to compute observables outside the cavity. In this article we provide a regularized linear description of nonlinear cavities, that is, a linearization procedure yielding physical results, taking the degenerate optical parametric oscillator as the guiding example. The method, which we call self-consistent linearization, is shown to be equivalent to a general Gaussian ansatz for the state of the system, and we compare its predictions with those obtained with available exact (or quasi-exact) methods. Apart from its operational value, we believe that our work is valuable also from a fundamental point of view, especially in connection to the question of how far linearized or Gaussian theories can be pushed to describe nonlinear dissipative systems which have access to non-Gaussian states.
Observed Score Linear Equating with Covariates
ERIC Educational Resources Information Center
Branberg, Kenny; Wiberg, Marie
2011-01-01
This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…
Bisimulation equivalence of differential-algebraic systems
NASA Astrophysics Data System (ADS)
Megawati, Noorma Yulia; Schaft, Arjan van der
2018-01-01
In this paper, the notion of bisimulation relation for linear input-state-output systems is extended to general linear differential-algebraic (DAE) systems. Geometric control theory is used to derive a linear-algebraic characterisation of bisimulation relations, and an algorithm for computing the maximal bisimulation relation between two linear DAE systems. The general definition is specialised to the case where the matrix pencil sE - A is regular. Furthermore, by developing a one-sided version of bisimulation, characterisations of simulation and abstraction are obtained.
Rigatos, Gerasimos G
2016-06-01
It is proven that the model of the p53-mdm2 protein synthesis loop is differentially flat, and using a diffeomorphism (change of state variables) proposed by differential flatness theory it is shown that the protein synthesis model can be transformed into the canonical (Brunovsky) form. This enables the design of a feedback control law that maintains the concentration of the p53 protein at desirable levels. To estimate the non-measurable elements of the state vector describing the p53-mdm2 system dynamics, the derivative-free non-linear Kalman filter is used. Moreover, to compensate for modelling uncertainties and external disturbances that affect the p53-mdm2 system, the derivative-free non-linear Kalman filter is re-designed as a disturbance observer. The derivative-free non-linear Kalman filter consists of the Kalman filter recursion applied on the linearised equivalent of the protein synthesis model, together with an inverse transformation based on differential flatness theory that enables estimates of the state variables of the initial non-linear model to be retrieved. The proposed non-linear feedback control and perturbation compensation method for the p53-mdm2 system can result in more efficient chemotherapy schemes where the infusion of medication will be better administered.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
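The incremental (delta or correction) form replaces a direct solve of Ax = b by repeated solves with an approximate operator M against the current residual; upon convergence the exact system is satisfied even though only M was ever inverted. A minimal sketch with a Jacobi-like M (the test matrix is invented, not a flow Jacobian):

```python
import numpy as np

def incremental_iteration(A, M, b, x0, iters=100):
    """Correction form: repeatedly solve M dx = b - A x and set x <- x + dx,
    where M is an easily factorable approximation of A. The fixed point
    satisfies A x = b exactly."""
    x = x0.copy()
    for _ in range(iters):
        residual = b - A @ x
        dx = np.linalg.solve(M, residual)
        x += dx
    return x

n = 50
rng = np.random.default_rng(4)
A = np.eye(n) * 4.0 + 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)
M = np.diag(np.diag(A))  # crude approximate operator (Jacobi-like)
b = rng.normal(size=n)
x = incremental_iteration(A, M, b, np.zeros(n))
print(np.linalg.norm(A @ x - b))
```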
Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G
2004-02-01
Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed in polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acids content and the sum of the peak areas at 1745, 1715, and 1600 cm(-1) was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The above method was compared with an established spectrophotometric method and was found equivalent for accuracy and repeatability (t-test, F-test). This method is applicable in analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.
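A sketch of the curve-fitting deconvolution step: model the congested region as a sum of Gaussian bands fixed near 1745, 1715, and 1600 cm^-1, fit, and sum the fitted areas. The "spectrum" below is synthetic, standing in for real DRIFTS data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, area, center, width):
    """Area-normalized Gaussian band."""
    return area / (width * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - center) / width) ** 2)

def model(x, a1, a2, a3, w1, w2, w3):
    """Three overlapping bands at the wavenumbers named in the abstract."""
    return gauss(x, a1, 1745, w1) + gauss(x, a2, 1715, w2) + gauss(x, a3, 1600, w3)

x = np.linspace(1500, 1850, 700)  # wavenumber axis, cm^-1
rng = np.random.default_rng(5)
y = model(x, 2.0, 1.5, 3.0, 12, 15, 20) + 0.002 * rng.normal(size=x.size)

p0 = [1.0, 1.0, 1.0, 10.0, 10.0, 10.0]
popt, _ = curve_fit(model, x, y, p0=p0)
peak_area_sum = popt[0] + popt[1] + popt[2]  # quantity correlated with uronic acid content
print(peak_area_sum)
```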
Optimal control of LQR for discrete time-varying systems with input delays
NASA Astrophysics Data System (ADS)
Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng
2018-04-01
In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first converted into an equivalent problem subject to a constraint condition. Then, using the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out; its results show that both approaches are feasible and very effective.
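The abstract's contribution is the duality-based reduction itself; for reference, once a delay-free discrete time-varying LQR is reached, the optimal gains follow from the standard backward Riccati recursion. A sketch with illustrative matrices (not taken from the paper):

```python
import numpy as np

def tv_lqr(A_seq, B_seq, Q, R, QT):
    # Standard backward Riccati recursion for a delay-free discrete
    # time-varying LQR, the kind of reduced problem the paper arrives at.
    P = QT
    gains = []
    for A, B in zip(reversed(A_seq), reversed(B_seq)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]              # K_0 ... K_{N-1}, with u_k = -K_k x_k

N = 20
A_seq = [np.array([[1.0, 0.1 * k / N], [0.0, 1.0]]) for k in range(N)]
B_seq = [np.array([[0.0], [1.0]])] * N
K = tv_lqr(A_seq, B_seq, np.eye(2), np.eye(1), 10 * np.eye(2))
print(K[0])
```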
Preserving Symmetry in Preconditioned Krylov Subspace Methods
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Chow, E.; Saad, Y.; Yeung, M. C.
1996-01-01
We consider the problem of solving a linear system Ax = b when A is nearly symmetric and when the system is preconditioned by a symmetric positive definite matrix M. In the symmetric case, one can recover symmetry by using M-inner products in the conjugate gradient (CG) algorithm. This idea can also be used in the nonsymmetric case, and near symmetry can be preserved similarly. Like CG, the new algorithms are mathematically equivalent to split preconditioning, but do not require M to be factored. Better robustness in a specific sense can also be observed. When combined with truncated versions of iterative methods, tests show that this is more effective than the common practice of forfeiting near-symmetry altogether.
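For the symmetric case the idea reduces to ordinary preconditioned CG, which is mathematically equivalent to split preconditioning with M = LL^T but never factors M; only solves with M are needed. A minimal sketch (test matrix and preconditioner are illustrative):

```python
import numpy as np

def pcg(A, b, M_solve, x0=None, tol=1e-10, maxiter=500):
    # Preconditioned CG: equivalent to split preconditioning L^{-1} A L^{-T}
    # with M = L L^T, but M is never factored; only solves with M are
    # required (supplied here as the callable M_solve).
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 200
A = np.diag(np.linspace(1, 100, n)); A[0, 1] = A[1, 0] = 0.5  # SPD test matrix
b = np.ones(n)
M_diag = np.diag(A)                       # diagonal (Jacobi) preconditioner
x = pcg(A, b, lambda r: r / M_diag)
print(np.linalg.norm(b - A @ x))
```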
He, Jiangnan; Lu, Lina; He, Xiangui; Xu, Xian; Du, Xuan; Zhang, Bo; Zhao, Huijuan; Sha, Jida; Zhu, Jianfeng; Zou, Haidong; Xu, Xun
2017-01-01
Purpose To report calculated crystalline lens power and describe the distribution of ocular biometry and its association with refractive error in older Chinese adults. Methods Random cluster sampling was used to identify adults aged 50 years and above in the Xuhui and Baoshan districts of Shanghai. Refraction was determined by subjective refraction that achieved the best corrected vision, based on monocular measurement. Ocular biometry was measured by IOL Master. The crystalline lens power of right eyes was calculated using the modified Bennett-Rabbetts formula. Results We analyzed 6099 normal phakic right eyes. The mean crystalline lens power was 20.34 ± 2.24 D (range: 13.40–36.08). Lens power, spherical equivalent, and anterior chamber depth changed linearly with age; however, axial length, corneal power and the AL/CR ratio did not vary with age. The overall prevalence of hyperopia, myopia, and high myopia was 48.48% (95% CI: 47.23%–49.74%), 22.82% (95% CI: 21.77%–23.88%), and 4.57% (95% CI: 4.05%–5.10%), respectively. The prevalence of hyperopia increased linearly with age while lens power decreased with age. In multivariate models, refractive error was strongly correlated with axial length, lens power, corneal power, and anterior chamber depth; refractive error was slightly correlated with best corrected visual acuity, age and sex. Conclusion Lens power, hyperopia, and spherical equivalent changed linearly with age; moreover, the continuous loss of lens power produced hyperopic shifts in refraction in subjects aged over 50 years. PMID:28114313
Gasser, T C; Nchimi, A; Swedenborg, J; Roy, J; Sakalihasan, N; Böckler, D; Hyhlik-Dürr, A
2014-03-01
To translate the individual abdominal aortic aneurysm (AAA) patient's biomechanical rupture risk profile to risk-equivalent diameters, and to retrospectively test their predictability in ruptured and non-ruptured aneurysms. Biomechanical parameters of ruptured and non-ruptured AAAs were retrospectively evaluated in a multicenter study, using general patient data and high-resolution computed tomography angiography (CTA) images from 203 non-ruptured and 40 ruptured aneurysmal infrarenal aortas. Three-dimensional AAA geometries were semi-automatically derived from CTA images. Finite element (FE) models were used to predict peak wall stress (PWS) and peak wall rupture index (PWRI) according to the individual anatomy, gender, blood pressure, intra-luminal thrombus (ILT) morphology, and relative aneurysm expansion. Average PWS diameter and PWRI diameter responses were evaluated, which allowed the PWS equivalent and PWRI equivalent diameters for any individual aneurysm to be defined. PWS increased linearly and PWRI exponentially with respect to maximum AAA diameter. A size-adjusted analysis showed that PWS equivalent and PWRI equivalent diameters were increased by 7.5 mm (p = .013) and 14.0 mm (p < .001) in ruptured cases when compared to non-ruptured controls, respectively. In non-ruptured cases the PWRI equivalent diameters were increased by 13.2 mm (p < .001) in females when compared with males. Biomechanical parameters like PWS and PWRI allow for a highly individualized analysis by integrating factors that influence the risk of AAA rupture like geometry (degree of asymmetry, ILT morphology, etc.) and patient characteristics (gender, family history, blood pressure, etc.). PWRI and the reported annual risk of rupture increase similarly with the diameter. PWRI equivalent diameter expresses the PWRI through the diameter of the average AAA that has the same PWRI, i.e. is at the same biomechanical risk of rupture. Consequently, PWRI equivalent diameter facilitates a straightforward interpretation of biomechanical analysis and connects to diameter-based guidelines for AAA repair indication. PWRI equivalent diameter reflects an additional diagnostic parameter that may provide more accurate clinical data for AAA repair indication. Copyright © 2013 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Koda, Shin-ichi
2015-05-28
Existing studies have shown that certain linear dynamical systems defined on a dendritic network are, in special cases, equivalent to systems defined on a set of one-dimensional networks, and that this transformation to the simple picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding properties of dendrimers. In this paper, we expand the class of LC decomposable systems with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability, with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. The achievement of this paper makes it easier to utilize the LC decomposition in various cases. This may lead to a further understanding of the relation between the structure and functions of dendrimers in future studies.
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2002-01-01
Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
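For a single-mode model of the kind described, conventional force-error-minimising equivalent linearization reduces to a scalar fixed-point iteration. A sketch for a Duffing-type stiffness under Gaussian white noise (unit modal mass; parameter values illustrative), using the Gaussian-closure identity E[x^4] = 3 sigma_x^4, so that k_eq = k + 3*eps*sigma_x^2 with sigma_x^2 = pi*S0/(c*k_eq):

```python
import numpy as np

# Classical force-error-minimising equivalent linearization for a Duffing
# oscillator x'' + c x' + k x + eps x^3 = w(t) under Gaussian white noise
# with two-sided spectral density S0. Values are illustrative assumptions.
k, c, eps, S0 = 1.0, 0.05, 0.5, 1e-3
k_eq = k
for _ in range(100):
    sigma2 = np.pi * S0 / (c * k_eq)    # mean-square response of linear system
    k_eq_new = k + 3.0 * eps * sigma2   # stiffness minimising the force error
    if abs(k_eq_new - k_eq) < 1e-12:
        break
    k_eq = k_eq_new
print(k_eq, sigma2)
```

The root-mean-square displacement of the linearized system (sigma_x) is the quantity compared against time-domain simulation in studies such as this one.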
Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.
Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z
Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.
Bezombes, Lucie; Gaucherand, Stéphanie; Kerbiriou, Christian; Reinert, Marie-Eve; Spiegelberger, Thomas
2017-08-01
In many countries, biodiversity compensation is required to counterbalance negative impacts of development projects on biodiversity by carrying out ecological measures, called offsets when the goal is to reach "no net loss" of biodiversity. One main issue is to ensure that offset gains are equivalent to impact-related losses. Ecological equivalence is assessed with ecological equivalence assessment methods taking into account a range of key considerations that we summarized as ecological, spatial, temporal, and uncertainty. When equivalence assessment methods take into account all considerations, we call them "comprehensive". Equivalence assessment methods should also aim to be science-based and operational, which is challenging. Many equivalence assessment methods have been developed worldwide but none is fully satisfying. In the present study, we examine 13 equivalence assessment methods in order to identify (i) their general structure and (ii) the synergies and trade-offs between equivalence assessment method characteristics related to operationality, scientific basis and comprehensiveness (called "challenges" in this paper). We evaluate each equivalence assessment method on the basis of 12 criteria describing the level of achievement of each challenge. We observe that all equivalence assessment methods share a general structure, with possible improvements in the choice of target biodiversity, the indicators used, the integration of landscape context and the multipliers reflecting time lags and uncertainties. We show that no equivalence assessment method combines all challenges perfectly. There are trade-offs between and within the challenges: operationality tends to be favored, while scientific bases are integrated heterogeneously into equivalence assessment method development. One way of improving the combination of challenges would be the use of offset-dedicated databases providing scientific feedback on previous offset measures.
NASA Astrophysics Data System (ADS)
Hälg, R. A.; Besserer, J.; Boschung, M.; Mayer, S.; Lomax, A. J.; Schneider, U.
2014-05-01
In radiation therapy, high energy photon and proton beams cause the production of secondary neutrons. This leads to an unwanted dose contribution, which can be considerable for tissues outside of the target volume regarding the long term health of cancer patients. Due to the high biological effectiveness of neutrons in regards to cancer induction, small neutron doses can be important. This study quantified the neutron doses for different radiation therapy modalities. Most of the reports in the literature used neutron dose measurements free in air or on the surface of phantoms to estimate the amount of neutron dose to the patient. In this study, dose measurements were performed in terms of neutron dose equivalent inside an anthropomorphic phantom. The neutron dose equivalent was determined using track etch detectors as a function of the distance to the isocenter, as well as for radiation sensitive organs. The dose distributions were compared with respect to treatment techniques (3D-conformal, volumetric modulated arc therapy and intensity-modulated radiation therapy for photons; spot scanning and passive scattering for protons), therapy machines (Varian, Elekta and Siemens linear accelerators) and radiation quality (photons and protons). The neutron dose equivalent varied between 0.002 and 3 mSv per treatment gray over all measurements. Only small differences were found when comparing treatment techniques, but substantial differences were observed between the linear accelerator models. The neutron dose equivalent for proton therapy was higher than for photons in general and in particular for double-scattered protons. The overall neutron dose equivalent measured in this study was an order of magnitude lower than the stray dose of a treatment using 6 MV photons, suggesting that the contribution of the secondary neutron dose equivalent to the integral dose of a radiotherapy patient is small.
NASA Astrophysics Data System (ADS)
Brekke, Stewart
2010-11-01
Originally Einstein proposed the mass-energy equivalence at low speeds as E = mc^2 + 1/2 mv^2. However, a mass may also be rotating and vibrating as well as moving linearly. Although small, these kinetic energies must be included in formulating a true mathematical statement of the mass-energy equivalence. Also, gravitational, electromagnetic and magnetic potential energies must be included in the mass-energy equivalence mathematical statement. While the kinetic energy factors may differ in each physical situation, such as the types of vibrations and rotations, the basic equation for the mass-energy equivalence is therefore E = m0c^2 + 1/2 m0v^2 + 1/2 Iω^2 + 1/2 kx^2 + W_G + W_E + W_M.
Quality factor and dose equivalent investigations aboard the Soviet Space Station Mir
NASA Astrophysics Data System (ADS)
Bouisset, P.; Nguyen, V. D.; Parmentier, N.; Akatov, Ia. A.; Arkhangel'Skii, V. V.; Vorozhtsov, A. S.; Petrov, V. M.; Kovalev, E. E.; Siegrist, M.
1992-07-01
Since December 1988, the date of the French-Soviet joint space mission 'ARAGATZ', the CIRCE device has recorded dose equivalent and quality factor values inside the Mir station (380-410 km, 51.5 deg). After the initial gas filling two years ago, the low pressure tissue equivalent proportional counter is still in good working condition. Results from three periods are presented. The average dose equivalent rates measured are respectively 0.6, 0.8 and 0.6 mSv/day, with a quality factor equal to 1.9. Detailed measurements show the increase of dose equivalent rates through the South Atlantic Anomaly (SAA) and near the polar horns. The real-time determination of quality factors makes it possible to identify high linear energy transfer events with quality factors in the range 10-20.
NASA Astrophysics Data System (ADS)
Majidinejad, A.; Zafarani, H.; Vahdani, S.
2018-05-01
The North Tehran fault (NTF) is known to be one of the most drastic sources of seismic hazard for the city of Tehran. In this study, we provide broad-band (0-10 Hz) ground motions for the city as a consequence of a probable M7.2 earthquake on the NTF. Low-frequency motions (0-2 Hz) are obtained from spectral element dynamic simulation of 17 scenario models. High-frequency (2-10 Hz) motions are calculated with a physics-based method based on S-to-S backscattering theory. Broad-band ground motions at the bedrock level show amplifications, both at low and high frequencies, due to the existence of the deep Tehran basin in the vicinity of the NTF. By employing soil profiles obtained from regional studies, the effect of shallow soil layers on broad-band ground motions is investigated by both linear and non-linear analyses. While the linear soil response overestimates ground motion prediction equations, the non-linear response predicts plausible results within one standard deviation of empirical relationships. Average Peak Ground Accelerations (PGAs) at the northern, central and southern parts of the city are estimated at about 0.93, 0.59 and 0.4 g, respectively. Increased damping caused by non-linear soil behaviour reduces the linear soil responses considerably, in particular at frequencies above 3 Hz. Non-linear deamplification reduces linear spectral accelerations by up to 63 per cent at stations above soft thick sediments. By performing more general analyses, which exclude source-to-site effects on stations, a correction function is proposed for typical site classes of Tehran. Parameters for the function, which reduces the linear soil response to take non-linear soil deamplification into account, are provided for various frequencies in the range of engineering interest. In addition to fully non-linear analyses, equivalent-linear calculations were also conducted; their comparison revealed the appropriateness of the equivalent-linear method for large peaks and low frequencies, but its shortcomings for small to medium peaks and for motions above 3 Hz.
Relationship of the actual thick intraocular lens optic to the thin lens equivalent.
Holladay, J T; Maverick, K J
1998-09-01
To theoretically derive and empirically validate the relationship between the actual thick intraocular lens and the thin lens equivalent. Included in the study were 12 consecutive adult patients ranging in age from 54 to 84 years (mean +/- SD, 73.5 +/- 9.4 years) with best-corrected visual acuity better than 20/40 in each eye. Each patient had bilateral intraocular lens implants of the same style, placed in the same location (bag or sulcus) by the same surgeon. Preoperatively, axial length, keratometry, refraction, and vertex distance were measured. Postoperatively, keratometry, refraction, vertex distance, and the distance from the vertex of the cornea to the anterior vertex of the intraocular lens (AV(PC1)) were measured. The distance AV(PC1) was also back-calculated from the vergence formula used for intraocular lens power calculations. The average (+/-SD) absolute difference between the two methods was 0.23 +/- 0.18 mm, which translates to approximately 0.46 diopters. There was no statistical difference between the measured and calculated values; the Pearson product-moment correlation coefficient from linear regression was 0.85 (r2 = .72, F = 56). The average intereye difference was -0.030 mm (SD, 0.141 mm; SEM, 0.043 mm) using the measurement method and +0.124 mm (SD, 0.412 mm; SEM, 0.124 mm) using the calculation method. The relationship between the actual thick intraocular lens and the thin lens equivalent has been determined theoretically and demonstrated empirically. This validation provides the manufacturer and surgeon additional confidence in, and utility for, the lens constants used in intraocular lens power calculations.
Jacchia, Sara; Nardini, Elena; Bassani, Niccolò; Savini, Christian; Shim, Jung-Hyun; Trijatmiko, Kurniawan; Kreysa, Joachim; Mazzara, Marco
2015-05-27
This article describes the international validation of the quantitative real-time polymerase chain reaction (PCR) detection method for Golden Rice 2. The method consists of a taxon-specific assay amplifying a fragment of rice Phospholipase D α2 gene, and an event-specific assay designed on the 3' junction between transgenic insert and plant DNA. We validated the two assays independently, with absolute quantification, and in combination, with relative quantification, on DNA samples prepared in haploid genome equivalents. We assessed trueness, precision, efficiency, and linearity of the two assays, and the results demonstrate that both the assays independently assessed and the entire method fulfill European and international requirements for methods for genetically modified organism (GMO) testing, within the dynamic range tested. The homogeneity of the results of the collaborative trial between Europe and Asia is a good indicator of the robustness of the method.
Segmented and "equivalent" representation of the cable equation.
Andrietti, F; Bernardini, G
1984-11-01
The linear cable theory has been applied to a modular structure consisting of n repeating units each composed of two subunits with different values of resistance and capacitance. For n going to infinity, i.e., for infinite cables, we have derived analytically the Laplace transform of the solution by making use of a difference method and we have inverted it by means of a numerical procedure. The results have been compared with those obtained by the direct application of the cable equation to a simplified nonmodular model with "equivalent" electrical parameters. The implication of our work in the analysis of the time and space course of the potential of real fibers has been discussed. In particular, we have shown that the simplified ("equivalent") model is a very good representation of the segmented model for the nodal regions of myelinated fibers in a steady situation and in every condition for muscle fibers. An approximate solution for the steady potential of myelinated fibers has been derived for both nodal and internodal regions. The applications of our work to other cases dealing with repeating structures, such as earthworm giant fibers, have been discussed and our results have been compared with other attempts to solve similar problems.
Observations on personnel dosimetry for radiotherapy personnel operating high-energy LINACs.
Glasgow, G P; Eichling, J; Yoder, R C
1986-06-01
A series of measurements were conducted to determine the cause of a sudden increase in personnel radiation exposures. One objective of the measurements was to determine if the increases were related to changing from film dosimeters exchanged monthly to TLD-100 dosimeters exchanged quarterly. While small increases were observed in the dose equivalents of most employees, the dose equivalents of personnel operating medical electron linear accelerators with energies greater than 20 MV doubled coincidentally with the change in the personnel dosimeter program. The measurements indicated a small thermal neutron radiation component around the accelerators operated by these personnel. This component caused the doses measured with the TLD-100 dosimeters to be overstated. Therefore, the increase in these personnel dose equivalents was not due to changes in work habits or radiation environments. Either film or TLD-700 dosimeters would be suitable for personnel monitoring around high-energy linear accelerators. The final choice would depend on economics and personal preference.
Seismic equivalents of volcanic jet scaling laws and multipoles in acoustics
NASA Astrophysics Data System (ADS)
Haney, Matthew M.; Matoza, Robin S.; Fee, David; Aldridge, David F.
2018-04-01
We establish analogies between equivalent source theory in seismology (moment-tensor and single-force sources) and acoustics (monopoles, dipoles and quadrupoles) in the context of volcanic eruption signals. Although infrasound (acoustic waves < 20 Hz) from volcanic eruptions may be more complex than a simple monopole, dipole or quadrupole assumption, these elementary acoustic sources are a logical place to begin exploring relations with seismic sources. By considering the radiated power of a harmonic force source at the surface of an elastic half-space, we show that a volcanic jet or plume modelled as a seismic force has similar scaling with respect to eruption parameters (e.g. exit velocity and vent area) as an acoustic dipole. We support this by demonstrating, from first principles, a fundamental relationship that ties together explosion, torque and force sources in seismology and highlights the underlying dipole nature of seismic forces. This forges a connection between the multipole expansion of equivalent sources in acoustics and the use of forces and moments as equivalent sources in seismology. We further show that volcanic infrasound monopole and quadrupole sources exhibit scalings similar to seismicity radiated by volume injection and moment sources, respectively. We describe a scaling theory for seismic tremor during volcanic eruptions that agrees with observations showing a linear relation between radiated power of tremor and eruption rate. Volcanic tremor over the first 17 hr of the 2016 eruption at Pavlof Volcano, Alaska, obeyed the linear relation. Subsequent tremor during the main phase of the eruption did not obey the linear relation and demonstrates that volcanic eruption tremor can exhibit other scalings even during the same eruption.
Heidenreich, Elvio A; Ferrero, José M; Doblaré, Manuel; Rodríguez, José F
2010-07-01
Many problems in biology and engineering are governed by anisotropic reaction-diffusion equations with a very rapidly varying reaction term. This usually implies the use of very fine meshes and small time steps in order to accurately capture the propagating wave while avoiding the appearance of spurious oscillations in the wave front. This work develops a family of macro finite elements amenable for solving anisotropic reaction-diffusion equations with stiff reactive terms. The developed elements are incorporated on a semi-implicit algorithm based on operator splitting that includes adaptive time stepping for handling the stiff reactive term. A linear system is solved on each time step to update the transmembrane potential, whereas the remaining ordinary differential equations are solved uncoupled. The method allows solving the linear system on a coarser mesh thanks to the static condensation of the internal degrees of freedom (DOF) of the macroelements while maintaining the accuracy of the finer mesh. The method and algorithm have been implemented in parallel. The accuracy of the method has been tested on two- and three-dimensional examples demonstrating excellent behavior when compared to standard linear elements. The better performance and scalability of different macro finite elements against standard finite elements have been demonstrated in the simulation of a human heart and a heterogeneous two-dimensional problem with reentrant activity. Results have shown a reduction of up to four times in computational cost for the macro finite elements with respect to equivalent (same number of DOF) standard linear finite elements as well as good scalability properties.
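A hedged sketch of the semi-implicit operator-splitting step described above: the stiff reaction term is integrated node-by-node (adaptively sub-stepped in the paper; fixed sub-steps here for brevity), then a single linear solve updates the diffusive part. The mesh, matrices and cubic reaction term below are illustrative stand-ins, not the macro-element formulation itself.

```python
import numpy as np

# Semi-implicit operator-splitting step for u_t = div(D grad u) + f(u):
# explicit sub-stepped reaction (uncoupled per degree of freedom), then one
# implicit linear diffusion solve. All values are illustrative.
def split_step(u, K, M, dt, reaction, n_sub=10):
    h = dt / n_sub
    for _ in range(n_sub):          # stiff reaction, solved node by node
        u = u + h * reaction(u)
    # implicit diffusion: (M + dt*K) u_new = M u
    return np.linalg.solve(M + dt * K, M @ u)

# 1-D example with a cubic (FitzHugh-Nagumo-like) reaction term
n = 100
e = np.ones(n)
K = 0.1 * (np.diag(2 * e) - np.diag(e[:-1], 1) - np.diag(e[:-1], -1))
M = np.eye(n)                       # lumped mass matrix (assumed)
u = np.zeros(n); u[:10] = 1.0       # initial stimulus at one end
for _ in range(50):
    u = split_step(u, K, M, 0.1, lambda v: v * (1 - v) * (v - 0.1))
print(u[:5], u[-5:])
```

In the paper, static condensation of the macro-element internal degrees of freedom lets the linear solve above run on the coarser mesh while retaining fine-mesh accuracy.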
Guo, Ting; Winterburn, Julie L.; Pipitone, Jon; Duerden, Emma G.; Park, Min Tae M.; Chau, Vann; Poskitt, Kenneth J.; Grunau, Ruth E.; Synnes, Anne; Miller, Steven P.; Mallar Chakravarty, M.
2015-01-01
Introduction The hippocampus, a medial temporal lobe structure central to learning and memory, is particularly vulnerable in preterm-born neonates. To date, segmentation of the hippocampus for preterm-born neonates has not yet been performed early-in-life (shortly after birth when clinically stable). The present study focuses on the development and validation of an automatic segmentation protocol that is based on the MAGeT-Brain (Multiple Automatically Generated Templates) algorithm to delineate the hippocampi of preterm neonates on their brain MRIs acquired at not only term-equivalent age but also early-in-life. Methods First, we present a three-step manual segmentation protocol to delineate the hippocampus for preterm neonates and apply this protocol on 22 early-in-life and 22 term images. These manual segmentations are considered the gold standard in assessing the automatic segmentations. MAGeT-Brain, an automatic hippocampal segmentation pipeline, requires only a small number of input atlases and reduces the registration and resampling errors by employing an intermediate template library. We assess the segmentation accuracy of MAGeT-Brain in three validation studies, evaluate the hippocampal growth from early-in-life to term-equivalent age, and study the effect of preterm birth on the hippocampal volume. The first experiment thoroughly validates MAGeT-Brain segmentation in three sets of 10-fold Monte Carlo cross-validation (MCCV) analyses with 187 different groups of input atlases and templates. The second experiment segments the neonatal hippocampi on 168 early-in-life and 154 term images and evaluates the hippocampal growth rate of 125 infants from early-in-life to term-equivalent age. The third experiment analyzes the effect of gestational age (GA) at birth on the average hippocampal volume at early-in-life and term-equivalent age using linear regression. Results The final segmentations demonstrate that MAGeT-Brain consistently provides accurate segmentations in comparison to manually derived gold standards (mean Dice's Kappa > 0.79 and Euclidean distance <1.3 mm between centroids). Using this method, we demonstrate that the average volume of the hippocampus is significantly different (p < 0.0001) in early-in-life (621.8 mm3) and term-equivalent age (958.8 mm3). Using these differences, we generalize the hippocampal growth rate to 38.3 ± 11.7 mm3/week and 40.5 ± 12.9 mm3/week for the left and right hippocampi respectively. Not surprisingly, younger gestational age at birth is associated with smaller volumes of the hippocampi (p = 0.001). Conclusions MAGeT-Brain is capable of segmenting hippocampi accurately in preterm neonates, even at early-in-life. Hippocampal asymmetry with a larger right side is demonstrated on early-in-life images, suggesting that this phenomenon has its onset in the 3rd trimester of gestation. Hippocampal volume assessed at the time of early-in-life and term-equivalent age is linearly associated with GA at birth, whereby smaller volumes are associated with earlier birth. PMID:26740912
Power Supply Fault Tolerant Reliability Study
1991-04-01
…easier to design than for equivalent bipolar transistors. Base circuitry should be designed to drive the transistor into… Sequence the turn-off/turn-on logic in an orderly and controllable… (See SWITCHING REGULATORS (Ref. 28); SWITCHING AND LINEAR POWER SUPPLY DESIGN (Ref. 25).)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghorbani, M; Tabatabaei, Z; Noghreiyan, A Vejdani
Purpose: The aim of this study is to evaluate the effect of soft tissue composition on dose distribution for various soft tissues and various depths in radiotherapy with the 6 MV photon beam of a medical linac. Methods: A phantom and a Siemens Primus linear accelerator were simulated using the MCNPX Monte Carlo code. In a homogeneous cubic phantom, six types of soft tissue and three types of tissue-equivalent materials were defined separately. The soft tissues were muscle (skeletal), adipose tissue, blood (whole), breast tissue, soft tissue (9-component) and soft tissue (4-component). The tissue-equivalent materials included water, A-150 tissue-equivalent plastic and perspex. Photon dose relative to dose in 9-component soft tissue at various depths on the beam's central axis was determined for the 6 MV photon beam. The relative dose was also calculated and compared for various MCNPX tallies, including *F8, F6 and *F4. Results: The results of the relative photon dose in various materials relative to dose in 9-component soft tissue, using the different tallies, are reported in the form of tabulated data. Minor differences between dose distributions in various soft tissues and tissue-equivalent materials were observed. The results from F6 and F4 were practically the same but differed from the *F8 tally. Conclusion: Based on the calculations performed, the differences in dose distributions in various soft tissues and tissue-equivalent materials are minor, but they could be corrected in radiotherapy calculations to upgrade the accuracy of the dosimetric calculations.
Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.
Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar
2012-01-01
Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
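The MVMR-CCA equivalence underlying the test statistic can be made concrete with a small sketch: canonical correlations between the temporal design X and a voxel neighbourhood Y are the singular values of the whitened cross-covariance, and a general linear contrast matrix C would enter simply by replacing X with XC before the decomposition. Data shapes and values below are illustrative.

```python
import numpy as np

# Canonical correlations between a design X (temporal regressors) and a
# voxel neighbourhood Y, via SVD of the whitened cross-covariance.
def canonical_correlations(X, Y):
    X = X - X.mean(0); Y = Y - Y.mean(0)
    # whiten each block with the Cholesky factor of its within-block covariance
    Lx = np.linalg.cholesky(X.T @ X)
    Ly = np.linalg.cholesky(Y.T @ Y)
    Cxy = np.linalg.solve(Lx, X.T @ Y) @ np.linalg.inv(Ly).T
    return np.linalg.svd(Cxy, compute_uv=False)  # canonical correlations

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))        # e.g. contrast-transformed regressors
Y = X @ rng.standard_normal((3, 5)) + rng.standard_normal((200, 5))
print(canonical_correlations(X, Y))
```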
Computational Aeroacoustics by the Space-time CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.
2001-01-01
In recent years, a new numerical methodology for conservation laws, the Space-Time Conservation Element and Solution Element Method (CE/SE), was developed by Dr. Chang of NASA Glenn Research Center and collaborators. By nature, the new method may be categorized as a finite volume method, where the conservation element (CE) is equivalent to a finite control volume (or cell) and the solution element (SE) can be understood as the cell interface. However, due to its rigorous treatment of the fluxes and geometry, it is different from existing schemes. The CE/SE scheme features: (1) space and time treated on the same footing, with the integral equations of the conservation laws solved to second-order accuracy; (2) high resolution, low dispersion and low dissipation; (3) a novel, truly multi-dimensional, simple but effective non-reflecting boundary condition; (4) effortless implementation, with no numerical fix or parameter choice needed; and (5) robustness over a wide spectrum of compressible flows, from weak linear acoustic waves to strong, discontinuous waves (shocks), making it appropriate for linear and nonlinear aeroacoustics. Currently, the CE/SE scheme has been developed to such a stage that a 3-D unstructured CE/SE Navier-Stokes solver is already available. However, in the present paper, as a general introduction to the CE/SE method, only the 2-D unstructured Euler CE/SE solver is chosen as a prototype and is sketched in Section 2. Applications of the CE/SE scheme to linear and nonlinear aeroacoustics and airframe noise are then presented in Sections 3, 4, and 5, respectively, to demonstrate its robustness and capability.
Robust synthetic biology design: stochastic game theory approach.
Chen, Bor-Sen; Chang, Chia-Hung; Lee, Hsiao-Ching
2009-07-15
Synthetic biology aims to engineer artificial biological systems to investigate natural biological phenomena and for a variety of applications. However, the development of synthetic gene networks is still difficult, and most newly created gene networks are non-functioning due to uncertain initial conditions and disturbances from the extra-cellular environment on the host cell. At present, how to design a robust synthetic gene network that works properly under these uncertain factors is the most important topic in synthetic biology. A robust regulation design is proposed for a stochastic synthetic gene network to achieve the prescribed steady states under these uncertain factors from the minimax regulation perspective. This minimax regulation design problem can be transformed into an equivalent stochastic game problem. Since it is not easy to solve the robust regulation design problem of synthetic gene networks by the non-linear stochastic game method directly, the Takagi-Sugeno (T-S) fuzzy model is proposed to approximate the non-linear synthetic gene network via the linear matrix inequality (LMI) technique through the Robust Control Toolbox in Matlab. Finally, an in silico example is given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed robust gene design method. http://www.ee.nthu.edu.tw/bschen/SyntheticBioDesign_supplement.pdf.
Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus
2015-10-01
In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte-Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
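The modified t-test of Crawford and colleagues, which the article shows to be equivalent to the t-test on a dummy-coded predictor in a linear regression (and then generalises via LMMs), is simple to state. A sketch with invented scores:

```python
import numpy as np
from scipy import stats

# Crawford & Howell's modified t-test comparing a single case with a small
# control sample: t = (x - mean) / (s * sqrt(1 + 1/n)), df = n - 1.
def modified_t(case_score, controls):
    n = len(controls)
    m, s = np.mean(controls), np.std(controls, ddof=1)
    t = (case_score - m) / (s * np.sqrt(1 + 1.0 / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)   # two-sided p-value
    return t, p

t, p = modified_t(12.0, np.array([20.0, 22.5, 19.0, 21.0, 23.5, 20.5]))
print(t, p)
```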
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, L. W.; Franceschetti, A.; Zunger, A.
We have developed a "linear combination of bulk bands" method that permits atomistic, pseudopotential electronic structure calculations for ~10^6 atom nanostructures. Application to (GaAs)n/(AlAs)n (001) superlattices (SLs) reveals even-odd oscillations in the Γ-X coupling magnitude V_ΓX(n), which vanishes for n odd and n even for abrupt and segregated SLs, respectively. Surprisingly, in contrast with recent expectations, 0D quantum dots are found here to have a smaller Γ-X coupling than equivalent 2D SLs. Our analysis shows that for large quantum dots this is largely due to the existence of level repulsion from many X states.
NASA Technical Reports Server (NTRS)
Klein, V.
1980-01-01
A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
Analysis of phenolic compounds in Matricaria chamomilla and its extracts by UPLC-UV
Haghi, G.; Hatami, A.; Safaei, A.; Mehran, M.
2014-01-01
Chamomile (Matricaria chamomilla L.) is a widely used medicinal plant possessing several pharmacological effects due to presence of active compounds. This study describes a method of using ultra performance liquid chromatography (UPLC) coupled with photodiode array (PDA) detector for the separation of phenolic compounds in M. chamomilla and its crude extracts. Separation was conducted on C18 column (150 mm × 2 mm, 1.8 μm) using a gradient elution with a mobile phase consisting of acetonitrile and 4% aqueous acetic acid at 25°C. The method proposed was validated for determination of free and total apigenin and apigenin 7-glucoside contents as bioactive compounds in the extracts by testing sensitivity, linearity, precision and recovery. In general, UPLC produced significant improvements in method sensitivity, speed and resolution. Extraction was performed with methanol, 70% aqueous ethanol and water solvents. Total phenolic and total flavonoid contents ranged from 1.77 to 50.75 gram (g) of gallic acid equivalent (GAE)/100 g and 0.82 to 36.75 g quercetin equivalent (QE)/100 g in dry material, respectively. There was a considerable difference from 40 to 740 mg/100 g for apigenin and 210 to 1110 mg/100 g for apigenin 7-glucoside in dry material. PMID:25598797
Variable structure control of nonlinear systems through simplified uncertain models
NASA Technical Reports Server (NTRS)
Sira-Ramirez, Hebertt
1986-01-01
A variable structure control approach is presented for the robust stabilization of feedback-equivalent nonlinear systems whose proposed model lies in the same structural orbit as a linear system in Brunovsky's canonical form. An attempt to exactly linearize the nonlinear plant on the basis of the feedback control law derived for the available model results in a nonlinearly perturbed canonical system for the expanded class of possible equivalent control functions. Conservatism tends to grow as modeling errors become larger. In order to preserve the internal controllability structure of the plant, it is proposed that model simplification be carried out on the open-loop-transformed system. As an example, a controller is developed for a single-link manipulator with an elastic joint.
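A minimal sketch of the variable structure idea on a plant in (perturbed) Brunovsky form, x1' = x2, x2' = u + d: a sliding surface s = c*x1 + x2 and a switching term dominating the bounded mismatch d drive the state to the surface and then to the origin. Gains, perturbation, and integration scheme are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Variable structure (sliding mode) control of a perturbed double integrator.
# On the surface s = c*x1 + x2 = 0, the error decays as x1' = -c*x1; the
# switching gain k > |d| guarantees reaching the surface (s*s' < 0).
def vsc_step(x, dt, c=2.0, k=5.0):
    s = c * x[0] + x[1]                  # sliding surface
    u = -c * x[1] - k * np.sign(s)       # equivalent control + switching term
    d = 0.5 * np.sin(x[0])               # bounded unmodelled perturbation (assumed)
    x_dot = np.array([x[1], u + d])
    return x + dt * x_dot                # explicit Euler, for illustration only

x = np.array([1.0, 0.0])
for _ in range(5000):
    x = vsc_step(x, 1e-3)
print(x)   # state driven to a neighbourhood of the origin despite d
```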
NASA Astrophysics Data System (ADS)
Vu, Q. H.; Brenner, R.; Castelnau, O.; Moulinec, H.; Suquet, P.
2012-03-01
The correspondence principle is customarily used with the Laplace-Carson transform technique to tackle the homogenization of linear viscoelastic heterogeneous media. The main drawback of this method lies in the fact that the whole stress and strain histories have to be considered to compute the mechanical response of the material during a given macroscopic loading. Following a remark of Mandel (1966 Mécanique des Milieux Continus(Paris, France: Gauthier-Villars)), Ricaud and Masson (2009 Int. J. Solids Struct. 46 1599-1606) have shown the equivalence between the collocation method used to invert Laplace-Carson transforms and an internal variables formulation. In this paper, this new method is developed for the case of polycrystalline materials with general anisotropic properties for local and macroscopic behavior. Applications are provided for the case of constitutive relations accounting for glide of dislocations on particular slip systems. It is shown that the method yields accurate results that perfectly match the standard collocation method and reference full-field results obtained with a FFT numerical scheme. The formulation is then extended to the case of time- and strain-dependent viscous properties, leading to the incremental collocation method (ICM) that can be solved efficiently by a step-by-step procedure. Specifically, the introduction of isotropic and kinematic hardening at the slip system scale is considered.
Simple taper: Taper equations for the field forester
David R. Larsen
2017-01-01
"Simple taper" is set of linear equations that are based on stem taper rates; the intent is to provide taper equation functionality to field foresters. The equation parameters are two taper rates based on differences in diameter outside bark at two points on a tree. The simple taper equations are statistically equivalent to more complex equations. The linear...
Millar, W T; Davidson, S E
2013-01-01
Objective: To consider the implications of the use of biphasic rather than monophasic repair in calculations of biologically-equivalent doses for pulsed-dose-rate brachytherapy of cervix carcinoma. Methods: Calculations are presented of pulsed-dose-rate (PDR) doses equivalent to former low-dose-rate (LDR) doses, using biphasic vs monophasic repair kinetics, both for cervical carcinoma and for the organ at risk (OAR), namely the rectum. The linear-quadratic modelling calculations included effects due to varying the dose per PDR cycle, the dose reduction factor for the OAR compared with Point A, the repair kinetics and the source strength. Results: When using the recommended 1 Gy per hourly PDR cycle, different LDR-equivalent PDR rectal doses were calculated depending on the choice of monophasic or biphasic repair kinetics pertaining to the rodent central nervous and skin systems. These differences virtually disappeared when the dose per hourly cycle was increased to 1.7 Gy. This made the LDR-equivalent PDR doses more robust and independent of the choice of repair kinetics and α/β ratios as a consequence of the described concept of extended equivalence. Conclusion: The use of biphasic and monophasic repair kinetics for optimised modelling of the effects on the OAR in PDR brachytherapy suggests that an optimised PDR protocol with the dose per hourly cycle nearest to 1.7 Gy could be used. Hence, the durations of the new PDR treatments would be similar to those of the former LDR treatments and not longer as currently prescribed. Advances in knowledge: Modelling calculations indicate that equivalent PDR protocols can be developed which are less dependent on the different α/β ratios and monophasic/biphasic kinetics usually attributed to normal and tumour tissues for treatment of cervical carcinoma. PMID:23934965
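A hedged sketch of the kind of linear-quadratic bookkeeping behind such comparisons: biologically effective dose for continuous low-dose-rate delivery with monoexponential repair (the protracted-irradiation g-factor); the biphasic case of the article replaces g with a two-component sum. Parameter values are illustrative, not those of the protocols discussed.

```python
import numpy as np

# Biologically effective dose (BED) for continuous irradiation at dose rate R
# over time T, with monoexponential repair half-time t_half and ratio a/b:
#   BED = D * (1 + g * D / (a/b)),  D = R*T,
#   g = 2/(mu*T) * (1 - (1 - exp(-mu*T)) / (mu*T)),  mu = ln(2)/t_half.
def bed_ldr(R, T, ab, t_half):
    mu = np.log(2) / t_half
    g = (2.0 / (mu * T)) * (1 - (1 - np.exp(-mu * T)) / (mu * T))
    D = R * T
    return D * (1 + g * D / ab)

print(bed_ldr(R=0.5, T=48.0, ab=10.0, t_half=1.5))  # tumour-like parameters
print(bed_ldr(R=0.5, T=48.0, ab=3.0, t_half=1.5))   # OAR-like parameters
```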
Influential Nonegligible Parameters under the Search Linear Model.
1986-04-25
lack of fit as SSLOF(i) (12) and the sum of squares due to pure error as SSPE (13). For i = 1, 2, ..., we define F(i) in terms of SSLOF(i) and SSE(i). Noting that the numerator on the RHS of the above expression does not depend on i, we get the equivalence of (a) and (b). Again, SSE(i) = SSPE + SSLOF(i), and SSPE does not depend on i. Therefore (a) and (c) are equivalent. From (14), the equivalence of (c) and (d) is clear. From (3), (6
Wang, Xianli; Kang, Haiyan; Wu, Junfeng
2016-05-01
Given the potential risks of chlorinated polycyclic aromatic hydrocarbons, the analysis of their presence in water is very urgent. We have developed a novel procedure for determining chlorinated polycyclic aromatic hydrocarbons in water based on solid-phase extraction coupled with gas chromatography and mass spectrometry. The extraction parameters of solid-phase extraction were optimized in detail. Under the optimal conditions, the proposed method showed wide linear ranges (1.0-1000 ng/L) with correlation coefficients ranging from 0.9952 to 0.9998. The limits of detection and the limits of quantification were in the range of 0.015-0.591 and 0.045-1.502 ng/L, respectively. Recoveries ranged from 82.5 to 102.6% with relative standard deviations below 9.2%. The method was applied successfully to the determination of chlorinated polycyclic aromatic hydrocarbons in real water samples. Most of the chlorinated polycyclic aromatic hydrocarbons were detected, and 1-monochloropyrene was predominant in the studied water samples. This is the first report of chlorinated polycyclic aromatic hydrocarbons in water samples in China. The toxic equivalency quotient of chlorinated polycyclic aromatic hydrocarbons in the studied tap water was 9.95 ng TEQ m(-3). 9,10-Dichloroanthracene and 1-monochloropyrene accounted for the majority of the total toxic equivalency quotient of chlorinated polycyclic aromatic hydrocarbons in tap water. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
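For context, a toxic equivalency quotient is a TEF-weighted sum of concentrations. A minimal sketch with placeholder TEF values and concentrations (not those of the study):

```python
# Toxic equivalency quotient: concentrations weighted by toxic equivalency
# factors (TEFs) relative to a reference compound. All values are
# illustrative placeholders, not the study's data.
concentrations = {"9,10-dichloroanthracene": 12.0,   # ng/m3, hypothetical
                  "1-monochloropyrene": 30.0}
tef = {"9,10-dichloroanthracene": 0.1,               # assumed TEFs
       "1-monochloropyrene": 0.25}
teq = sum(c * tef[name] for name, c in concentrations.items())
print(f"TEQ = {teq:.2f} ng TEQ/m3")
```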
Schneiderman, Eva; Colón, Ellen L; White, Donald J; Schemehorn, Bruce; Ganovsky, Tara; Haider, Amir; Garcia-Godoy, Franklin; Morrow, Brian R; Srimaneepong, Viritpon; Chumprasert, Sujin
2017-09-01
We have previously reported on progress toward the refinement of profilometry-based abrasivity testing of dentifrices using a V8 brushing machine and tactile or optical measurement of dentin wear. The general application of this technique may be advanced by demonstration of successful inter-laboratory confirmation of the method. The objective of this study was to explore the capability of different laboratories in assessing dentifrice abrasivity using a profilometry-based evaluation technique developed in our Mason laboratories. In addition, we wanted to assess the interchangeability of human and bovine specimens. Participating laboratories were instructed in methods associated with Radioactive Dentin Abrasivity-Profilometry Equivalent (RDA-PE) evaluation, including site visits to discuss critical elements of specimen preparation, masking, profilometry scanning, and procedures. Laboratories were likewise instructed on the requirement for demonstration of proportional linearity as a key condition for validation of the technique. Laboratories were provided with four test dentifrices, blinded for testing, with a broad range of abrasivity. In each laboratory, a calibration curve was developed for varying V8 brushing strokes (0, 4,000, and 10,000 strokes) with the ISO abrasive standard. Proportional linearity was determined as the ratio of standard abrasion mean depths created with 4,000 and 10,000 strokes (a 2.5-fold difference). Criteria for successful calibration within the method (established in our Mason laboratory) were set at proportional linearity = 2.5 ± 0.3. RDA-PE was compared with Radiotracer RDA for the four test dentifrices, with the latter obtained as averages from three independent Radiotracer RDA sites. Individual laboratories and their results were compared by 1) proportional linearity and 2) acquired RDA-PE values for test pastes. Five sites participated in the study. One site did not pass the proportional linearity objectives. Data for this site are not reported at the request of the researchers. Three of the remaining four sites reported herein tested human dentin, and all three met the proportional linearity objectives for human dentin. Three of four sites participated in testing bovine dentin, and all three met the proportional linearity objectives for bovine dentin. RDA-PE values for test dentifrices were similar between sites. All four sites that met the proportional linearity requirement successfully identified the dentifrice formulated above the industry standard 250 RDA (as RDA-PE). The profilometry method showed at least as good reproducibility and differentiation as Radiotracer assessments. It was demonstrated that human and bovine specimens could be used interchangeably. The standardized RDA-PE method was reproduced in multiple laboratories in this inter-laboratory study. Evidence supports that this method is a suitable technique for ISO method 11609 Annex B.
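The proportional-linearity gate described above is a one-line calculation: the ratio of mean standard-abrasion depths at 10,000 and 4,000 strokes should match the 2.5-fold stroke ratio within ±0.3. A sketch with invented depth measurements:

```python
import numpy as np

# Proportional linearity check: ratio of mean abrasion depths from the ISO
# standard at 10,000 vs 4,000 strokes, against the 2.5 +/- 0.3 criterion.
depths_4k = np.array([1.9, 2.1, 2.0])    # um, hypothetical measurements
depths_10k = np.array([5.0, 4.8, 5.1])
ratio = depths_10k.mean() / depths_4k.mean()
print(ratio, 2.2 <= ratio <= 2.8)        # pass/fail against 2.5 +/- 0.3
```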
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, VFG; Xie, HK
2014-07-01
This paper presents the fabrication and characterization of a high-density multilayer stacked metal-insulator-metal (MIM) capacitor based on a novel process of depositing the MIM multilayer on pillars followed by polishing and selective etching steps to form a stacked capacitor with merely three photolithography steps. In this paper, the pillars were made of glass to prevent substrate loss, whereas an oxide-nitride-oxide dielectric was employed for lower leakage, better voltage/frequency linearity, and better stress compensation. MIM capacitors with six dielectric layers were successfully fabricated, yielding a capacitance density of 3.8 fF/μm², a maximum capacitance of 2.47 nF, and linear and quadratic voltage coefficients of capacitance below 21.2 ppm/V and 2.31 ppm/V², respectively. The impedance was measured from 40 Hz to 3 GHz and characterized by an analytically derived equivalent circuit model to verify the radio frequency applicability. The multilayer-stacking-induced plate resistance mismatch and its effect on the equivalent series resistance (ESR) and effective capacitance were also investigated; the mismatch can be counteracted by a corrected metal thickness design. A low ESR of 800 mΩ was achieved, whereas the self-resonance frequency was >760 MHz, successfully demonstrating the feasibility of this method to scale up capacitance densities for high-quality-factor, high-frequency, and large-value MIM capacitors.
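A hedged sketch of the series-RLC equivalent circuit commonly used for such capacitors, Z(f) = ESR + jωL_s + 1/(jωC). C and ESR follow the abstract (2.47 nF, 0.8 Ω); the series inductance is back-calculated here to place the self-resonance near the reported 760 MHz and is not a measured value.

```python
import numpy as np

# Series-RLC equivalent circuit of a capacitor; self-resonance occurs where
# the inductive and capacitive reactances cancel: f_res = 1/(2*pi*sqrt(L*C)).
C, esr = 2.47e-9, 0.8                          # values from the abstract
L_s = 1.0 / ((2 * np.pi * 760e6) ** 2 * C)     # ~17.8 pH, assumed from f_res
f = np.logspace(4, 9.5, 7)                     # 10 kHz to ~3 GHz
Z = esr + 1j * 2 * np.pi * f * L_s + 1.0 / (1j * 2 * np.pi * f * C)
for fi, zi in zip(f, Z):
    print(f"{fi:10.3e} Hz  |Z| = {abs(zi):.3e} ohm")
```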
Multi-temperature state-dependent equivalent circuit discharge model for lithium-sulfur batteries
NASA Astrophysics Data System (ADS)
Propp, Karsten; Marinescu, Monica; Auger, Daniel J.; O'Neill, Laura; Fotouhi, Abbas; Somasundaram, Karthik; Offer, Gregory J.; Minton, Geraint; Longo, Stefano; Wild, Mark; Knap, Vaclav
2016-10-01
Lithium-sulfur (Li-S) batteries are described extensively in the literature, but existing computational models aimed at scientific understanding are too complex for use in applications such as battery management. Computationally simple models are vital for exploitation. This paper proposes a non-linear state-of-charge dependent Li-S equivalent circuit network (ECN) model for a Li-S cell under discharge. Li-S batteries are fundamentally different to Li-ion batteries, and require chemistry-specific models. A new Li-S model is obtained using a 'behavioural' interpretation of the ECN model; as Li-S exhibits a 'steep' open-circuit voltage (OCV) profile at high states-of-charge, identification methods are designed to take into account OCV changes during current pulses. The prediction-error minimization technique is used. The model is parameterized from laboratory experiments using a mixed-size current pulse profile at four temperatures from 10 °C to 50 °C, giving linearized ECN parameters for a range of states-of-charge, currents and temperatures. These are used to create a nonlinear polynomial-based battery model suitable for use in a battery management system. When the model is used to predict the behaviour of a validation data set representing an automotive NEDC driving cycle, the terminal voltage predictions are judged accurate with a root mean square error of 32 mV.
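A first-order Thevenin-style ECN discharge loop illustrates the kind of model the paper parameterizes; the OCV curve and R/C values below are hypothetical placeholders, not the paper's fitted Li-S parameters:

```python
import numpy as np

# Minimal first-order equivalent-circuit-network (ECN) discharge sketch.
def simulate_discharge(i_amps, dt=1.0, q_ah=3.4, r0=0.05, r1=0.03, c1=800.0):
    soc, v1 = 1.0, 0.0
    ocv = lambda s: 2.15 + 0.25 * s  # crude placeholder for the steep Li-S OCV
    voltages = []
    for i in i_amps:
        soc -= i * dt / (q_ah * 3600.0)          # coulomb counting
        v1 += dt * (i / c1 - v1 / (r1 * c1))     # RC branch dynamics
        voltages.append(ocv(soc) - i * r0 - v1)  # terminal voltage
    return np.array(voltages)

v = simulate_discharge(np.full(600, 1.7))  # 10 min at a constant 1.7 A
print(v[:3], v[-1])
```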
Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M
2012-08-01
This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine, and the further analysis of electrophysiological data from the compound action potential of the rodent optic nerve.
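The spreadsheet procedure translates directly to other environments; here is a sketch in Python (scipy standing in for SOLVER, with a synthetic growth curve) of refitting 'virtual' data sets to obtain parameter confidence intervals:

```python
import numpy as np
from scipy.optimize import curve_fit

# Monte-Carlo confidence intervals for non-linear fit parameters,
# mirroring the spreadsheet procedure on synthetic logistic-growth data.
def logistic(t, a, mu, lam):
    return a / (1.0 + np.exp(4 * mu / a * (lam - t) + 2))

rng = np.random.default_rng(0)
t = np.linspace(0, 24, 25)
y = logistic(t, a=9.0, mu=0.8, lam=4.0) + rng.normal(0, 0.2, t.size)

p_hat, _ = curve_fit(logistic, t, y, p0=(8.0, 1.0, 3.0))
resid_sd = np.std(y - logistic(t, *p_hat), ddof=3)

# Refit many 'virtual' data sets built from the fitted curve plus noise.
samples = []
for _ in range(200):
    y_virtual = logistic(t, *p_hat) + rng.normal(0, resid_sd, t.size)
    p, _ = curve_fit(logistic, t, y_virtual, p0=p_hat)
    samples.append(p)
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
print("95% CIs:", list(zip(lo.round(3), hi.round(3))))
```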
Lewis, F L; Vamvoudakis, Kyriakos G
2011-02-01
Approximate dynamic programming (ADP) is a class of reinforcement learning methods that have shown their importance in a variety of applications, including feedback control of dynamical systems. ADP generally requires full information about the system internal states, which is usually not available in practical situations. In this paper, we show how to implement ADP methods using only measured input/output data from the system. Linear dynamical systems with deterministic behavior are considered herein, which are systems of great interest in the control system community. In control system theory, these types of methods are referred to as output feedback (OPFB). The stochastic equivalent of the systems dealt with in this paper is a class of partially observable Markov decision processes. We develop both policy iteration and value iteration algorithms that converge to an optimal controller that requires only OPFB. It is shown that, similar to Q-learning, the new methods have the important advantage that knowledge of the system dynamics is not needed for the implementation of these learning algorithms or for the OPFB control. Only the order of the system, as well as an upper bound on its "observability index," must be known. The learned OPFB controller is in the form of a polynomial autoregressive moving-average controller that has equivalent performance with the optimal state variable feedback gain.
A Novel Approach for Adaptive Signal Processing
NASA Technical Reports Server (NTRS)
Chen, Ya-Chin; Juang, Jer-Nan
1998-01-01
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher order) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable implementation work has been carried out in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and departs significantly from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrangians within a constrained-optimization framework. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report, and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
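For orientation, a bare constant-modulus (CMA) stochastic-gradient update of the kind such blind schemes build on; the channel, step size, and filter length are illustrative:

```python
import numpy as np

# Constant-modulus algorithm (CMA) sketch: minimize E[(|y|^2 - 1)^2]
# with a stochastic gradient; source, channel, and step size are synthetic.
rng = np.random.default_rng(1)
n, taps, mu = 5000, 7, 1e-3
s = rng.choice([-1.0, 1.0], n)                 # unit-modulus source
x = np.convolve(s, [1.0, 0.4, -0.2])[:n]       # simple dispersive channel
w = np.zeros(taps); w[0] = 1.0                 # center-spike initialization

for k in range(taps, n):
    u = x[k - taps:k][::-1]                    # most recent samples first
    y = w @ u
    e = y * (y * y - 1.0)                      # gradient factor of the CM cost
    w -= mu * e * u
print("equalizer taps:", w.round(3))
```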
Mikou, M; Ghosne, N; El Baydaoui, R; Zirari, Z; Kuntz, F
2015-05-01
Performance characteristics of megavoltage photon dose measurements with EPR and table sugar were analyzed. An advantage of sugar as a dosimetric material is its tissue equivalency. The minimal detectable dose was found to be 1.5 Gy for both the 6 and 18 MV photons. The dose response curves are linear up to at least 20 Gy. The energy dependence of the dose response in the megavoltage energy range is very weak and probably statistically insignificant. Reproducibility of measurements of various doses in this range performed with the peak-to-peak and double-integral methods is reported. The method can be used in real-time dosimetry in radiation therapy.
Radio-frequency low-coherence interferometry.
Fernández-Pousa, Carlos R; Mora, José; Maestre, Haroldo; Corral, Pablo
2014-06-15
A method for retrieving low-coherence interferograms, based on the use of a microwave photonics filter, is proposed and demonstrated. The method is equivalent to the double-interferometer technique, with the scanning interferometer replaced by an analog fiber-optics link and the visibility recorded as the amplitude of its radio-frequency (RF) response. As a low-coherence interferometry system, it shows a decrease of resolution induced by the fiber's third-order dispersion (β3). As a displacement sensor, it provides highly linear and slope-scalable readouts of the interferometer's optical path difference in terms of RF, even in the presence of third-order dispersion. In a proof-of-concept experiment, we demonstrate 20-μm displacement readouts using C-band EDFA sources and standard single-mode fiber.
Asymptotic Stability of Interconnected Passive Non-Linear Systems
NASA Technical Reports Server (NTRS)
Isidori, A.; Joshi, S. M.; Kelkar, A. G.
1999-01-01
This paper addresses the problem of stabilization of a class of internally passive non-linear time-invariant dynamic systems. A class of non-linear marginally strictly passive (MSP) systems is defined, which is less restrictive than input-strictly passive systems. It is shown that the interconnection of a non-linear passive system and a non-linear MSP system is globally asymptotically stable. The result generalizes and weakens the conditions of the passivity theorem, which requires one of the systems to be input-strictly passive. In the case of linear time-invariant systems, it is shown that the MSP property is equivalent to the marginally strictly positive real (MSPR) property, which is much simpler to check.
Magnetoelectric Current Sensors
Bichurin, Mirza; Petrov, Roman; Leontiev, Viktor; Semenov, Gennadiy; Sokolov, Oleg
2017-01-01
In this work a magnetoelectric (ME) current sensor design based on the magnetoelectric effect is presented and discussed. Both resonant and non-resonant types of ME current sensors are considered. Theoretical calculations of the ME current sensors by the equivalent circuit method were conducted. The application of different sensors using new effects, for example the ME effect, is made possible by the development of new ME composites. A large number of studies conducted in the field of new composites allowed us to obtain a high magnetostrictive-piezoelectric laminate sensitivity. An optimal ME structure composition was identified. The characterization of a non-resonant current sensor showed that in the operation range up to 5 A, the sensor had a sensitivity of 0.34 V/A and non-linearity less than 1%; for a resonant current sensor in the same operation range, the sensitivity was 0.53 V/A and non-linearity less than 0.5%. PMID:28574486
Highway traffic estimation of improved precision using the derivative-free nonlinear Kalman Filter
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Zervos, Nikolaos; Melkikh, Alexey
2015-12-01
The paper proves that the PDE dynamic model of highway traffic is differentially flat and, by applying spatial discretization, shows that the model can be transformed into an equivalent linear canonical state-space form. For the latter representation of the traffic dynamics, state estimation is performed with the use of the Derivative-free nonlinear Kalman Filter. The proposed filter consists of the Kalman Filter recursion applied to the transformed state-space model of the highway traffic. Moreover, it makes use of an inverse transformation, based again on differential flatness theory, which enables estimates of the state variables of the initial nonlinear PDE model to be obtained. By avoiding approximate linearizations and the truncation of nonlinear terms from the PDE model of the traffic dynamics, the proposed filtering method outperforms, in terms of accuracy, other nonlinear estimators such as the Extended Kalman Filter. The article's theoretical findings are confirmed through simulation experiments.
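A bare Kalman filter recursion of the sort applied to the flatness-transformed linear model; the matrices below are a generic toy system, not the discretized traffic PDE:

```python
import numpy as np

# One predict/update step of the standard Kalman filter recursion.
def kalman_step(x, P, z, A, C, Q, R):
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R                    # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

A = np.array([[1.0, 0.1], [0.0, 1.0]])          # toy linear canonical model
C = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
for z in [0.11, 0.22, 0.35]:                    # toy measurements
    x, P = kalman_step(x, P, np.array([z]), A, C, Q, R)
print(x)
```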
Li, YuHui; Jin, FeiTeng
2017-01-01
The inversion design approach is a very useful tool for complex multiple-input-multiple-output nonlinear systems, such as airplane and spacecraft models, to achieve the decoupling control goal. In this work, the flight control law is proposed using the neural-based inversion design method associated with nonlinear compensation for a general longitudinal model of the airplane. First, the nonlinear mathematical model is converted to an equivalent linear model based on feedback linearization theory. Then, the flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, the inversion control combined with the neural network and nonlinear portion is presented to improve the transient performance and attenuate the uncertain effects of both external disturbances and model errors. Finally, the simulation results demonstrate the effectiveness of this controller. PMID:29410680
Primal/dual linear programming and statistical atlases for cartilage segmentation.
Glocker, Ben; Komodakis, Nikos; Paragios, Nikos; Glaser, Christian; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel approach for automatic segmentation of cartilage using a statistical atlas and efficient primal/dual linear programming. To this end, a novel statistical atlas construction is considered from registered training examples. Segmentation is then solved through registration which aims at deforming the atlas such that the conditional posterior of the learned (atlas) density is maximized with respect to the image. Such a task is reformulated using a discrete set of deformations and segmentation becomes equivalent to finding the set of local deformations which optimally match the model to the image. We evaluate our method on 56 MRI data sets (28 used for the model and 28 used for evaluation) and obtain a fully automatic segmentation of patella cartilage volume with an overlap ratio of 0.84 with a sensitivity and specificity of 94.06% and 99.92%, respectively.
NASA Technical Reports Server (NTRS)
Fasnacht, Zachary; Qin, Wenhan; Haffner, David P.; Loyola, Diego; Joiner, Joanna; Krotkov, Nickolay; Vasilkov, Alexander; Spurr, Robert
2017-01-01
Surface Lambertian-equivalent reflectivity (LER) is important for trace gas retrievals in the direct calculation of cloud fractions and the indirect calculation of the air mass factor. Current trace gas retrievals use climatological surface LERs. Surface properties that impact the bidirectional reflectance distribution function (BRDF), as well as varying satellite viewing geometry, can be important for the retrieval of trace gases. Geometry-Dependent LER (GLER) captures these effects with its calculation of sun-normalized radiances (I/F) and can be used in current LER algorithms (Vasilkov et al. 2016). Pixel-by-pixel radiative transfer calculations are computationally expensive for large datasets. Modern satellite missions such as the Tropospheric Monitoring Instrument (TROPOMI) produce very large datasets as they take measurements at much higher spatial and spectral resolutions. Look-up table (LUT) interpolation improves the speed of radiative transfer calculations, but complexity increases for non-linear functions. Neural networks perform fast calculations and can accurately predict both non-linear and linear functions with little effort.
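A toy illustration of the surrogate idea, training an MLP on a smooth synthetic stand-in for I/F as a function of viewing geometry (the real GLER calculation is far more involved):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train a small neural network to emulate a smooth nonlinear function of
# solar zenith, viewing zenith, and relative azimuth angles; the target
# below is a synthetic placeholder for sun-normalized radiance (I/F).
rng = np.random.default_rng(2)
X = rng.uniform([0, 0, 0], [80, 80, 180], size=(5000, 3))   # SZA, VZA, RAA (deg)
sza, vza, raa = (np.radians(X[:, i]) for i in range(3))
y = 0.1 + 0.05 * np.cos(sza) * np.cos(vza) + 0.01 * np.cos(raa)

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X[:4000], y[:4000])
rmse = np.sqrt(np.mean((net.predict(X[4000:]) - y[4000:]) ** 2))
print("held-out RMSE:", rmse)
```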
The Capacity Gain of Orbital Angular Momentum Based Multiple-Input-Multiple-Output System
Zhang, Zhuofan; Zheng, Shilie; Chen, Yiling; Jin, Xiaofeng; Chi, Hao; Zhang, Xianmin
2016-01-01
Wireless communication using electromagnetic waves carrying orbital angular momentum (OAM) has attracted increasing interest in recent years, and its potential to increase channel capacity has been explored widely. In this paper, we compare the technique of using a uniform linear array consisting of circular traveling-wave OAM antennas for multiplexing with the conventional multiple-input-multiple-output (MIMO) communication method, and numerical results show that the OAM based MIMO system can increase channel capacity when the communication distance is long enough. An equivalent model is proposed to illustrate that the OAM multiplexing system is equivalent to a conventional MIMO system with a larger element spacing, which means OAM waves can decrease the spatial correlation of the MIMO channel. In addition, the effects of some system parameters, such as OAM state interval and element spacing, on the capacity advantage of OAM based MIMO are also investigated. Our results reveal that OAM waves are complementary to the MIMO method. OAM wave multiplexing is suitable for long-distance line-of-sight (LoS) communications or communications in open areas where the multi-path effect is weak, and can be used in massive MIMO systems as well. PMID:27146453
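The capacity comparison rests on the standard narrowband MIMO formula C = log2 det(I + (SNR/Nt) HH^H); a sketch with a random i.i.d. channel rather than the paper's OAM array geometry:

```python
import numpy as np

# Narrowband MIMO capacity C = log2 det(I + (SNR/Nt) * H H^H) in bit/s/Hz.
# H is a random i.i.d. Rayleigh channel here, not an OAM array response.
def mimo_capacity_bits(H, snr_linear):
    nr, nt = H.shape
    G = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
    return float(np.real(np.log2(np.linalg.det(G))))

rng = np.random.default_rng(3)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
print(f"{mimo_capacity_bits(H, snr_linear=10**(20 / 10)):.2f} bit/s/Hz at 20 dB")
```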
Monte Carlo study of neutron-ambient dose equivalent to patient in treatment room.
Mohammadi, A; Afarideh, H; Abbasi Davani, F; Ghergherehchi, M; Arbabi, A
2016-12-01
This paper presents an analytical method for the calculation of the neutron ambient dose equivalent H*(10) to patients, whereby the different concrete types used in the surrounding walls of the treatment room are considered. This work has been performed according to a detailed simulation of the Varian 2300C/D linear accelerator head operated at 18 MV, with a silver activation counter as the neutron detector, for which the Monte Carlo MCNPX 2.6 code is used, with and without the treatment room walls. The results show that, when compared to the neutrons that leak from the LINAC, both the scattered and thermal neutrons are the major contributors to the out-of-field neutron dose. The scattering factors for the limonite-steel, magnetite-steel, and ordinary concretes have been calculated as 0.91±0.09, 1.08±0.10, and 0.371±0.01, respectively, while the corresponding thermal factors are 34.22±3.84, 23.44±1.62, and 52.28±1.99, respectively (both the scattering and thermal factors are for the isocenter region); moreover, when the treatment room is composed of magnetite-steel or limonite-steel concrete, the neutron doses to the patient are 1.79 times and 1.62 times greater, respectively, than that from an ordinary concrete composition. The results also confirm that the scattering and thermal factors do not depend on the details of the chosen linear accelerator head model. It is anticipated that the results of the present work will be of great interest to the manufacturers of medical linear accelerators.
Estimation of Sonic Fatigue by Reduced-Order Finite Element Based Analyses
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Przekop, Adam
2006-01-01
A computationally efficient, reduced-order method is presented for prediction of sonic fatigue of structures exhibiting geometrically nonlinear response. A procedure to determine the nonlinear modal stiffness using commercial finite element codes allows the coupled nonlinear equations of motion in physical degrees of freedom to be transformed to a smaller coupled system of equations in modal coordinates. The nonlinear modal system is first solved using a computationally light equivalent linearization solution to determine if the structure responds to the applied loading in a nonlinear fashion. If so, a higher fidelity numerical simulation in modal coordinates is undertaken to more accurately determine the nonlinear response. Comparisons of displacement and stress response obtained from the reduced-order analyses are made with results obtained from numerical simulation in physical degrees-of-freedom. Fatigue life predictions from nonlinear modal and physical simulations are made using the rainflow cycle counting method in a linear cumulative damage analysis. Results computed for a simple beam structure under a random acoustic loading demonstrate the effectiveness of the approach and compare favorably with results obtained from the solution in physical degrees-of-freedom.
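The equivalent-linearization step can be illustrated on a single-mode Duffing-type oscillator under white noise, iterating an equivalent stiffness with the Gaussian closure E[x⁴] = 3σ⁴; all parameters below are illustrative:

```python
import numpy as np

# Equivalent-linearization iteration for a single-mode Duffing-type equation
# x'' + 2*zeta*w0*x' + w0^2*(x + eps*x^3) = w(t), with w(t) white noise of
# two-sided PSD S0. The cubic term is replaced by an equivalent stiffness
# via the Gaussian closure E[x^4] = 3*sigma^4. Parameters are illustrative.
w0, zeta, eps, S0 = 2 * np.pi * 100.0, 0.02, 1e8, 1e-2

sigma2 = np.pi * S0 / (2 * zeta * w0**3)   # linear (eps = 0) response variance
for _ in range(100):
    weq2 = w0**2 * (1.0 + 3.0 * eps * sigma2)         # equivalent stiffness
    sigma2_new = np.pi * S0 / (2 * zeta * w0 * weq2)  # variance, damping unchanged
    if abs(sigma2_new - sigma2) < 1e-16:
        break
    sigma2 = sigma2_new
print(f"RMS modal response: {np.sqrt(sigma2):.3e}")
```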
Generalised Transfer Functions of Neural Networks
NASA Astrophysics Data System (ADS)
Fung, C. F.; Billings, S. A.; Zhang, H.
1997-11-01
When artificial neural networks are used to model non-linear dynamical systems, the system structure, which can be extremely useful for analysis and design, is buried within the network architecture. In this paper, explicit expressions for the frequency response or generalised transfer functions of both feedforward and recurrent neural networks are derived in terms of the network weights. The derivation of the algorithm is established on the basis of the Taylor series expansion of the activation functions used in a particular neural network. This leads to a representation which is equivalent to the non-linear recursive polynomial model and enables the derivation of the transfer functions to be based on the harmonic expansion method. By mapping the neural network into the frequency domain, information about the structure of the underlying non-linear system can be recovered. Numerical examples are included to demonstrate the application of the new algorithm. These examples show that the frequency response functions appear to be highly sensitive to the network topology and training, and that the time domain properties fail to reveal deficiencies in the trained network structure.
Direct localization of poles of a meromorphic function from measurements on an incomplete boundary
NASA Astrophysics Data System (ADS)
Nara, Takaaki; Ando, Shigeru
2010-01-01
This paper proposes an algebraic method to reconstruct the positions of multiple poles in a meromorphic function field from measurements on an arbitrary simple arc in it. A novel issue is the exactness of the algorithm depending on whether the arc is open or closed, and whether it encloses or does not enclose the poles. We first obtain a differential equation that can equivalently determine the meromorphic function field. From it, we derive linear equations that relate the elementary symmetric polynomials of the pole positions to weighted integrals of the field along the simple arc and end-point terms of the arc when it is an open one. Eliminating the end-point terms based on an appropriate choice of weighting functions and a combination of the linear equations, we obtain a simple system of linear equations for solving the elementary symmetric polynomials. We also show that our algorithm can be applied to a 2D electric impedance tomography problem. The effects of the proximity of the poles, the number of measurements and noise on the localization accuracy are numerically examined.
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map to have an asymptotically equivalent convergence point of estimated parameters, referred to as the vicarious map. As a demonstration of finding vicarious maps, we consider the feature space which limits the length of data, and derive a necessary length for parameter learning in hidden Markov models.
Flux quench in a system of interacting spinless fermions in one dimension
NASA Astrophysics Data System (ADS)
Nakagawa, Yuya O.; Misguich, Grégoire; Oshikawa, Masaki
2016-05-01
We study a quantum quench in a one-dimensional spinless fermion model (equivalent to the XXZ spin chain), where a magnetic flux is suddenly switched off. This quench is equivalent to imposing a pulse of electric field and therefore generates an initial particle current. This current is not a conserved quantity in the presence of a lattice and interactions, and we investigate numerically its time evolution after the quench, using the infinite time-evolving block decimation method. For repulsive interactions or large initial flux, we find oscillations that are governed by excitations deep inside the Fermi sea. At long times we observe that the current remains nonvanishing in the gapless cases, whereas it decays to zero in the gapped cases. Although the linear response theory (valid for a weak flux) predicts the same long-time limit of the current for repulsive and attractive interactions (relation with the zero-temperature Drude weight), larger nonlinearities are observed for repulsive interactions than for attractive ones.
NASA Astrophysics Data System (ADS)
van Horssen, Wim T.; Wang, Yandong; Cao, Guohua
2018-06-01
In this paper, it is shown how characteristic coordinates, or equivalently how the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin type of boundary conditions with time-dependent coefficients. A Robin boundary condition is a condition that specifies a linear combination of the dependent variable and its first order space-derivative on a boundary of the interval. Analytical methods, such as the method of separation of variables (SOV) or the Laplace transform method, are not applicable to those types of problems. The obtained analytical results by applying the proposed method, are in complete agreement with those obtained by using the numerical, finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also completely agree with those as for instance obtained by the method of separation of variables, or by the finite difference method.
Apipunyasopon, Lukkana; Srisatit, Somyot; Phaisangittisakul, Nakorn
2013-09-06
The purpose of the study was to investigate the use of the equivalent square formula for determining the surface dose from a rectangular photon beam. A 6 MV therapeutic photon beam delivered from a Varian Clinac 23EX medical linear accelerator was modeled using the EGS4nrc Monte Carlo simulation package. It was then used to calculate the dose in the build-up region from both square and rectangular fields. The field patterns were defined by various settings of the X- and Y-collimator jaws ranging from 5 to 20 cm. Dose measurements were performed using a thermoluminescence dosimeter and a Markus parallel-plate ionization chamber on the four square fields (5 × 5, 10 × 10, 15 × 15, and 20 × 20 cm²). The surface dose was acquired by extrapolating the build-up doses to the surface. An equivalent square for a rectangular field was determined using the area-to-perimeter formula, and the surface dose of the equivalent square was estimated using the square-field data. The surface dose of the square field increased linearly from approximately 10% to 28% as the side of the square field increased from 5 to 20 cm. The influence of collimator exchange on the surface dose was found to be insignificant. The difference in the percentage surface dose of the rectangular field compared to that of the relevant equivalent square was insignificant and can be clinically neglected. The use of the area-to-perimeter formula for an equivalent square field can provide a clinically acceptable surface dose estimation for a rectangular field from a 6 MV therapy photon beam.
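The area-to-perimeter rule reduces to side = 2XY/(X+Y) for an X × Y field; a sketch using the roughly linear 10%-to-28% surface-dose trend quoted above as an interpolation:

```python
# Equivalent square of an X x Y rectangular field via the area-to-perimeter
# rule: side = 4*Area/Perimeter = 2XY/(X+Y). The surface-dose function just
# encodes the roughly linear 10%-28% trend over 5-20 cm squares quoted above.
def equivalent_square(x_cm, y_cm):
    return 2.0 * x_cm * y_cm / (x_cm + y_cm)

def surface_dose_percent(side_cm):
    return 10.0 + (side_cm - 5.0) * (28.0 - 10.0) / (20.0 - 5.0)

side = equivalent_square(5.0, 20.0)  # elongated field
print(f"equivalent square = {side:.1f} cm -> "
      f"estimated surface dose = {surface_dose_percent(side):.1f}%")
```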
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rotsch, David A.; Brossard, Tom; Roussin, Ethan
Molybdenum-99, the parent of Tc-99m, can be produced from fission of U-235 in nuclear reactors and purified from fission products by the Cintichem process, later modified for low-enriched uranium (LEU) targets. The key step in this process is the precipitation of Mo with α-benzoin oxime (ABO). The stability of this complex to radiation has been examined. Molybdenum-ABO was irradiated with 3 MeV electrons produced by a Van de Graaff generator and 35 MeV electrons produced by a 50 MeV/25 kW electron linear accelerator. Dose equivalents of 1.7–31.2 kCi of Mo-99 were administered to freshly prepared Mo-ABO. Irradiated samples of Mo-ABO were processed according to the LEU Modified-Cintichem process. The Van de Graaff data indicated good radiation stability of the Mo-ABO complex up to ~15 kCi dose equivalents of Mo-99 and nearly complete destruction at doses >24 kCi Mo-99. The linear accelerator data indicate that even at 6.2 kCi of Mo-99 dose equivalence, the sample lost ~20% of Mo-99. The 20% loss of Mo-99 at this low dose may be attributed to thermal decomposition of the product from the heat deposited in the sample during irradiation.
Fabrication and kinetics study of nano-Al/NiO thermite film by electrophoretic deposition.
Zhang, Daixiong; Li, Xueming
2015-05-21
Nano-Al/NiO thermites were successfully prepared as films by electrophoretic deposition (EPD). For the key issue of this EPD, a mixed ethanol-acetylacetone solvent (1:1 by volume) containing 0.00025 M nitric acid proved to be a suitable dispersion system. The kinetics of electrophoretic deposition for both nano-Al and nano-NiO were investigated; a linear relation between deposition weight and deposition time at short times and a parabolic relation at prolonged times were observed in both EPDs. The critical transition times between linear and parabolic deposition kinetics for nano-Al and nano-NiO were 20 and 10 min, respectively. Theoretical calculation of the deposition kinetics revealed that the equivalence ratio of the nano-Al/NiO thermite film is affected by the electrophoretic deposition behavior of nano-Al and nano-NiO. The equivalence ratio remained steady while linear deposition kinetics dominated for both nano-Al and nano-NiO, but changed with deposition time once the kinetics for nano-NiO became parabolic after 10 min. This rule is suggested to apply to the EPD of other bicomposites as well. We also studied the thermodynamic properties of the electrophoretic nano-Al/NiO thermite film, as well as its combustion performance.
Portfolio optimization using fuzzy linear programming
NASA Astrophysics Data System (ADS)
Pandit, Purnima K.
2013-09-01
Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
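One standard way the quadratic PO problem is recast in an equivalent linear form is the mean-absolute-deviation (MAD) formulation; a sketch with illustrative return data:

```python
import numpy as np
from scipy.optimize import linprog

# Mean-absolute-deviation (MAD) portfolio LP: risk is linearized with
# auxiliary deviation variables d_t >= |dev_t . w|. Data are synthetic.
rng = np.random.default_rng(4)
T, n = 60, 4
returns = rng.normal([0.01, 0.012, 0.008, 0.015], 0.03, size=(T, n))
mu = returns.mean(axis=0)
dev = returns - mu                       # deviations around mean returns

# Variables: [w_1..w_n, d_1..d_T]; minimize mean(d) - gamma * mu.w
gamma = 0.5
c = np.concatenate([-gamma * mu, np.ones(T) / T])
# d_t >= dev_t.w and d_t >= -dev_t.w  ->  +/- dev.w - d <= 0
A_ub = np.block([[dev, -np.eye(T)], [-dev, -np.eye(T)]])
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]   # weights sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T))
print("weights:", res.x[:n].round(3))
```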
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Y; Waldron, T; Pennington, E
Purpose: To test the radiobiological impact of hypofractionated choroidal melanoma brachytherapy, we calculated single fraction equivalent doses (SFED) of the tumor equivalent to 85 Gy of I125-BT for 20 patients. Corresponding organ-at-risk (OAR) doses were estimated. Methods: Twenty patients treated with I125-BT were retrospectively examined. The tumor SFED values were calculated from the tumor BED using a conventional linear-quadratic (L-Q) model and a universal survival curve (USC). The opposite retina (α/β = 2.58), macula (2.58), optic disc (1.75), and lens (1.2) were examined. The percentage doses of OARs relative to tumor doses were assumed to be the same as for a single fraction delivery. The OAR SFED values were converted into BED and equivalent dose in 2 Gy fractions (EQD2) using both the L-Q and USC models, then compared to I125-BT. Results: The USC-based BED and EQD2 doses of the macula, optic disc, and lens were on average 118 ± 46% (p < 0.0527), 126 ± 43% (p < 0.0354), and 112 ± 32% (p < 0.0265) higher than those of I125-BT, respectively. The BED and EQD2 doses of the opposite retina were 52 ± 9% lower than I125-BT. The tumor SFED values were 25.2 ± 3.3 Gy and 29.1 ± 2.5 Gy when using the USC and L-Q models, which can be delivered within 1 hour. All BED and EQD2 values using the L-Q model were significantly larger when compared to the USC model (p < 0.0274) due to the large single fraction size (> 14 Gy). Conclusion: The estimated single fraction doses are feasible to deliver within 1 hour using a high dose rate source such as electronic brachytherapy (eBT). However, the estimated OAR doses using eBT were 112–118% higher than when using the I125-BT technique. Continued exploration of alternative dose rates or fractionation schedules should follow.
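The L-Q conversions used above follow BED = nd(1 + d/(α/β)) and EQD2 = BED/(1 + 2/(α/β)); a sketch with an assumed (hypothetical) tumor BED:

```python
# Linear-quadratic conversions (alpha/beta in Gy): BED for n fractions of
# dose d, EQD2, and the single-fraction dose carrying a given BED. The
# tumor BED below is hypothetical, not the study's patient data.
def bed(n, d, ab):
    return n * d * (1.0 + d / ab)

def eqd2(bed_gy, ab):
    return bed_gy / (1.0 + 2.0 / ab)

def sfed_from_bed(bed_gy, ab):
    # positive root of d*(1 + d/ab) = BED
    return 0.5 * ab * ((1.0 + 4.0 * bed_gy / ab) ** 0.5 - 1.0)

tumor_bed = 120.0  # hypothetical BED, Gy
print(f"SFED = {sfed_from_bed(tumor_bed, ab=10.0):.1f} Gy, "
      f"EQD2 = {eqd2(tumor_bed, ab=10.0):.1f} Gy")
```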
NASA Technical Reports Server (NTRS)
Sassen, Kenneth; Zhao, Hongjie; Yu, Bing-Kun
1989-01-01
The optical depolarizing properties of simulated stratospheric aerosols were studied in laboratory laser (0.633 micrometer) backscattering experiments for application to polarization lidar observations. Clouds composed of sulfuric acid solution droplets, some treated with ammonia gas, were observed during evaporation. The results indicate that the formation of minute ammonium sulfate particles from the evaporation of acid droplets produces linear depolarization ratios of beta equivalent to 0.02, but beta equivalent to 0.10 to 0.15 are generated from aged acid cloud aerosols and acid droplet crystallization effects following the introduction of ammonia gas into the chamber. It is concluded that partially crystallized sulfuric acid droplets are a likely candidate for explaining the lidar beta equivalent to 0.10 values that have been observed in the lower stratosphere in the absence of the relatively strong backscattering from homogeneous sulfuric acid droplet (beta equivalent to 0) or ice crystal (beta equivalent to 0.5) clouds.
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
Identification of aerodynamic models for maneuvering aircraft
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Hu, C. C.
1992-01-01
A Fourier analysis method was developed to analyze harmonic forced-oscillation data at high angles of attack as functions of the angle of attack and its time rate of change. The resulting aerodynamic responses at different frequencies are used to build up the aerodynamic models involving time integrals of the indicial type. An efficient numerical method was also developed to evaluate these time integrals for arbitrary motions based on a concept of equivalent harmonic motion. The method was verified by first using results from two-dimensional and three-dimensional linear theories. The developed models for C_L, C_D, and C_M based on high-alpha data for a 70 deg delta wing in harmonic motions showed accurate results in reproducing hysteresis. The aerodynamic models are further verified by comparing with test data using ramp-type motions.
Bateli, Maria; Ben Rahal, Ghada; Christmann, Marin; Vach, Kirstin; Kohal, Ralf-Joachim
2018-01-01
Objective: To test whether or not the modified design of the test implant (intended to increase primary stability) has an equivalent effect on marginal bone loss (MBL) compared to the control. Methods: Forty patients were randomly assigned to receive test or control implants installed in identically dimensioned bony beds. Implants were radiographically monitored at installation, at prosthetic delivery, and after one year. Treatments were considered equivalent if the 90% confidence interval (CI) for the mean difference (MD) in MBL was between −0.25 and 0.25 mm. Additionally, several soft tissue parameters and patient-reported outcome measures (PROMs) were evaluated. Linear mixed models were fitted for each patient to assess time effects on response variables. Results: Thirty-three patients (21 males, 12 females; 58.2 ± 15.2 years old) with 81 implants (47 test, 34 control) were available for analysis after a mean observation period of 13.9 ± 4.5 months (3 dropouts, 3 missed appointments, and 1 missing file). The adjusted MD in MBL after one year was −0.13 mm (90% CI: −0.46 to 0.19; test group: −0.49; control group: −0.36; p = 0.507). Conclusion: Both implant systems can be considered successful after one year of observation. Concerning MBL in the presented setup, equivalence of the treatments cannot be concluded. Registration: This trial is registered with the German Clinical Trials Register (ID: DRKS00007877). PMID:29610765
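The equivalence criterion is a simple containment check on the 90% CI; applied to the study's own interval it reproduces the negative conclusion:

```python
# Equivalence is declared only if the 90% CI for the mean difference in MBL
# lies entirely within the +/-0.25 mm margin (values from the study above).
def is_equivalent(ci_low_mm, ci_high_mm, margin_mm=0.25):
    return -margin_mm <= ci_low_mm and ci_high_mm <= margin_mm

print(is_equivalent(-0.46, 0.19))  # the adjusted 90% CI -> False
```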
Semiempirical Theories of the Affinities of Negative Atomic Ions
NASA Technical Reports Server (NTRS)
Edie, John W.
1961-01-01
The determination of the electron affinities of negative atomic ions by means of direct experimental investigation is limited. To supplement the meager experimental results, several semiempirical theories have been advanced. One commonly used technique involves extrapolating the electron affinities along the isoelectronic sequences. The most recent of these extrapolations is studied by extending the method to include one more member of the isoelectronic sequence. When the results show that this extension does not increase the accuracy of the calculations, several possible explanations for this situation are explored. A different approach to the problem is suggested by the regularities appearing in the electron affinities. Noting that the regular linear pattern that exists for the ionization potentials of the p electrons as a function of Z repeats itself for different degrees of ionization q, the slopes and intercepts of these curves are extrapolated to the case of the negative ion. The method is placed on a theoretical basis by calculating the Slater parameters as functions of q and n, the number of equivalent p-electrons. These functions are no more than quadratic in q and n. The electron affinities are calculated by extending the linear relations that exist for the neutral atoms and positive ions to the negative ions. The extrapolated slopes are apparently correct, but the intercepts must be slightly altered to agree with experiment. For this purpose one or two experimental affinities (depending on the extrapolation method) are used in each of the two short periods. The two extrapolation methods used are: (A) an isoelectronic sequence extrapolation of the linear pattern as such; (B) the same extrapolation of a linearization of this pattern (configuration centers) combined with an extrapolation of the other terms of the ground configurations. The latter method is preferable, since it requires only one experimental point for each period. The results agree within experimental error with all data, except with the most recent value of C, which lies 10% lower.
Phelps, G.A.
2008-01-01
This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientations of the set of vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be preferentially located nearer a set of mapped lineaments than randomly scattered damage, suggesting range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered in the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.
Jendza, J A; Dilger, R N; Sands, J S; Adeola, O
2006-12-01
Two studies were conducted to determine the efficacy of an Escherichia coli-derived phytase (ECP) and its equivalency relative to inorganic phosphorus (iP) from monosodium phosphate (MSP). In Exp. 1, one thousand two hundred 1-d-old male broilers were used in a 42-d trial to assess the effect of ECP and iP supplementation on growth performance and nutrient digestibility. Dietary treatments were based on corn-soybean meal basal diets (BD) containing 239 and 221 g of CP, 8.2 and 6.6 g of Ca, and 2.4 and 1.5 g of nonphytate P (nPP) per kg for the starter and grower phases, respectively. Treatments consisted of the BD; the BD + 0.6, 1.2, or 1.8 g of iP from MSP per kg; and the BD + 250, 500, 750, or 1,000 phytase units (FTU) of ECP per kg. Increasing levels of MSP improved gain, gain:feed, and tibia ash (linear, P < 0.01). Increasing levels of ECP improved gain, gain:feed, tibia ash (linear, P < 0.01), apparent ileal digestibility of P, N, Arg, His, Phe, and Trp at d 21 (linear, P < 0.05), and apparent retention of P at d 21 (linear, P < 0.05). Increasing levels of ECP decreased apparent retention of energy (linear, P < 0.01). Five hundred FTU of ECP per kg was determined to be equivalent to the addition of 0.72, 0.78, and 1.19 g of iP from MSP per kg in broiler diets based on gain, feed intake, and bone ash, respectively. In Exp. 2, forty-eight 10-kg pigs were used in a 28-d trial to assess the effect of ECP and iP supplementation on growth performance and nutrient digestibility. Dietary treatments consisted of a positive control containing 6.1 and 3.5 g of Ca and nPP, respectively, per kg; a negative control (NC) containing 4.8 and 1.7 g of Ca and nPP, respectively, per kg; the NC diet plus 0.4, 0.8, or 1.2 g of iP from MSP per kg; and the NC diet plus 500, 750, or 1,000 FTU of ECP per kg. Daily gain improved (linear, P < 0.05) with ECP addition, as did apparent digestibility of Ca and P (linear, P < 0.01). Five hundred FTU of ECP per kg was determined to be equivalent to the addition of 0.49 and 1.00 g of iP from MSP per kg in starter pig diets, based on ADG and bone ash, respectively.
NASA Astrophysics Data System (ADS)
Liu, Richeng; Li, Bo; Jiang, Yujing; Yu, Liyuan
2018-01-01
Hydro-mechanical properties of rock fractures are core issues for many geoscience and geo-engineering practices. Previous experimental and numerical studies have revealed that shear processes could greatly enhance the permeability of single rock fractures, yet the shear effects on the hydraulic properties of fractured rock masses have received little attention. In most previous fracture network models, single fractures are typically presumed to be formed by parallel plates and flow is presumed to obey the cubic law. However, related studies have suggested that the parallel plate model cannot realistically represent the surface characteristics of natural rock fractures, and the relationship between flow rate and pressure drop will no longer be linear at sufficiently large Reynolds numbers. In the present study, a numerical approach was established to assess the effects of shear on the hydraulic properties of 2-D discrete fracture networks (DFNs) in both the linear and nonlinear regimes. DFNs considering fracture surface roughness and spatial variation of aperture were generated using an originally developed code, DFNGEN. Numerical simulations solving the Navier-Stokes equations were performed to simulate fluid flow through these DFNs. A fracture that cuts through each model was sheared, and by varying the shear and normal displacements, the effects of shear on the equivalent permeability and nonlinear flow characteristics of DFNs were estimated. The results show that the critical condition quantifying the transition from a linear to a nonlinear flow regime is 10⁻⁴ < J < 10⁻³, where J is the hydraulic gradient. When the fluid flow is in the linear regime (i.e., J < 10⁻⁴), the relative deviation of equivalent permeability induced by shear, δ2, is linearly correlated with J with small variations, while for fluid flow in the nonlinear regime (J > 10⁻³), δ2 is nonlinearly correlated with J. A shear process would reduce the equivalent permeability significantly in the orientation perpendicular to the sheared fracture, by as much as 53.86% when J = 1, shear displacement Ds = 7 mm, and normal displacement Dn = 1 mm. By fitting the calculated results, a mathematical expression for δ2 is established to help choose the proper governing equations when solving fluid flow problems in fracture networks.
Characterization of Perovskite Oxide/Semiconductor Heterostructures
NASA Astrophysics Data System (ADS)
Walker, Phillip
The tools developed for investigating dynamical systems have provided critical understanding of a wide range of physical phenomena. Here these tools are used to gain further insight into scalar transport and how it is affected by mixing. The aim of this research is to investigate the efficiency of several different partitioning methods which demarcate flow fields into dynamically distinct regions, and the correlation of finite-time statistics from the advection-diffusion equation with these regions. For autonomous systems, invariant manifold theory can be used to separate the system into dynamically distinct regions. Although there is no equivalent method for nonautonomous systems, a similar analysis can be done. Systems with general time dependence must resort to finite-time transport barriers for partitioning; these barriers are the edges of Lagrangian coherent structures (LCS), the analog of the stable and unstable manifolds of invariant manifold theory. Using the coherent structures of a flow to analyze the statistics of trapping, flight, and residence times, signatures of anomalous diffusion are obtained. This research also investigates the use of linear models for approximating the elements of the covariance matrix of nonlinear flows, applying the covariance matrix approximation over coherent regions. The first- and second-order moments can be used to fully describe an ensemble evolution in linear systems; however, there is no direct method for nonlinear systems. The problem is compounded by the fact that the moments for nonlinear flows typically do not have analytic representations, so direct numerical simulations would be needed to obtain the moments throughout the domain. To circumvent these many computations, the nonlinear system is approximated as many linear systems for which analytic expressions for the moments exist. The parameters introduced in the linear models are obtained locally from the nonlinear deformation tensor.
Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J
2018-05-01
To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data, and to apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply Variable Projection (VP) to convert the fitting problem from a multi-dimensional to a one-dimensional line search, improving computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, robustness of convergence, and computation time. The simulation demonstrated that VP and LM were both accurate in that the medians closely matched assumed values across typical signal-to-noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in 100% of cases. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent in accuracy and robustness to noise, while being more reliably convergent and computationally about 3× (standard Tofts model) and 2× (extended Tofts model) faster than the LM-based method.
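Variable Projection exploits the separable structure of the model: for any value of the nonlinear parameter, the linear coefficients have a closed-form solution, collapsing the search to one dimension. A sketch on a toy exponential model, not the DCE-MRI Tofts models:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Variable-projection sketch for the separable model y = a*exp(-t/tau) + b:
# for each candidate tau, (a, b) are solved by linear least squares, so the
# outer search is one-dimensional in tau. Data are synthetic.
rng = np.random.default_rng(5)
t = np.linspace(0, 10, 50)
y = 3.0 * np.exp(-t / 2.5) + 0.5 + rng.normal(0, 0.05, t.size)

def projected_residual(tau):
    A = np.column_stack([np.exp(-t / tau), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ coef) ** 2)

tau_hat = minimize_scalar(projected_residual, bounds=(0.1, 20.0),
                          method="bounded").x
A = np.column_stack([np.exp(-t / tau_hat), np.ones_like(t)])
a_hat, b_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"tau={tau_hat:.2f}, a={a_hat:.2f}, b={b_hat:.2f}")
```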
Linear energy transfer in water phantom within SHIELD-HIT transport code
NASA Astrophysics Data System (ADS)
Ergun, A.; Sobolevsky, N.; Botvina, A. S.; Buyukcizmeci, N.; Latysheva, L.; Ogul, R.
2017-02-01
The effect of irradiation in tissue is important in hadron therapy for dose measurement and treatment planning. This biological effect is described by an equivalent dose H, which depends on the Linear Energy Transfer (LET). Usually, H can be expressed in terms of the absorbed dose D and the quality factor K of the radiation under consideration. In the literature, various types of transport codes have been used for modeling and simulation of the interaction of beams of protons and heavier ions with tissue-equivalent materials. In this presentation we use the SHIELD-HIT code to simulate the decomposition of the absorbed dose by LET in water for 16O beams. A more detailed description of the capabilities of the SHIELD-HIT code can be found in the literature.
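With a LET-dependent quality factor, the equivalent dose is H = Σᵢ Q(Lᵢ)·Dᵢ over the LET spectrum; a sketch using the ICRP 60 piecewise Q(L) convention and hypothetical dose bins:

```python
# Equivalent dose from a LET-decomposed absorbed dose: H = sum_i Q(L_i)*D_i.
# Q(L) follows the piecewise ICRP 60 convention (L in keV/um).
def quality_factor(let_kev_um):
    if let_kev_um < 10.0:
        return 1.0
    if let_kev_um <= 100.0:
        return 0.32 * let_kev_um - 2.2
    return 300.0 / let_kev_um ** 0.5

lets = [5.0, 20.0, 150.0]        # LET bins, keV/um
doses = [0.8, 0.15, 0.05]        # absorbed dose per bin, Gy (hypothetical)
H = sum(quality_factor(L) * d for L, d in zip(lets, doses))
print(f"H = {H:.2f} Sv")
```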
Universal single level implicit algorithm for gasdynamics
NASA Technical Reports Server (NTRS)
Lombard, C. K.; Venkatapathy, E.
1984-01-01
A single level, effectively explicit, implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point-by-point updating of the data, with local iteration on the solution procedure at each spatial step as the sweeps progress, not only renders the method single level in storage but also improves nonlinear accuracy to accelerate convergence by an order of magnitude over related two-level linearized implicit methods. The method derives robust stability from the combination of an eigenvector-split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.
Soil amplification with a strong impedance contrast: Boston, Massachusetts
Baise, Laurie G.; Kaklamanos, James; Berry, Bradford M; Thompson, Eric M.
2016-01-01
In this study, we evaluate the effect of strong sediment/bedrock impedance contrasts on soil amplification in Boston, Massachusetts, for typical sites along the Charles and Mystic Rivers. These sites can be characterized by artificial fill overlying marine sediments overlying glacial till and bedrock, where the depth to bedrock ranges from 20 to 80 m. The marine sediments generally consist of organic silts, sand, and Boston Blue Clay. We chose these sites because they represent typical foundation conditions in the city of Boston, and the soil conditions are similar to other high impedance contrast environments. The sediment/bedrock interface in this region results in an impedance ratio on the order of ten, which in turn results in a significant amplification of the ground motion. Using stratigraphic information derived from numerous boreholes across the region paired with geologic and geomorphologic constraints, we develop a depth-to-bedrock model for the greater Boston region. Using shear-wave velocity profiles from 30 locations, we develop average velocity profiles for sites mapped as artificial fill, glaciofluvial deposits, and bedrock. By pairing the depth-to-bedrock model with the surficial geology and the average shear-wave velocity profiles, we can predict soil amplification in Boston. We compare linear and equivalent-linear site response predictions for a soil layer of varying thickness over bedrock, and assess the effects of varying the bedrock shear-wave velocity (VSb) and quality factor (Q). In a moderate-seismicity region like Boston, many earthquakes will result in ground motions that can be modeled with linear site response methods. We also assess the effect of bedrock depth on soil amplification for a generic soil profile in artificial fill, using both linear and equivalent-linear site response models. Finally, we assess the accuracy of the model results by comparing the predicted (linear site response) and observed site response at the Northeastern University (NEU) vertical seismometer array during the 2011 M 5.8 Mineral, Virginia, earthquake. Site response at the NEU vertical array results in amplification on the order of 10 times at a period between 0.7 and 0.8 s. The results from this study provide evidence that the mean short-period and mean intermediate-period amplification used in design codes (i.e., from the Fa and Fv site coefficients) may underpredict soil amplification in strong impedance contrast environments such as Boston.
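A first-order sense of the effect comes from the quarter-wavelength approximation, amplification ≈ √(ρbVb/ρsVs) at site period T0 = 4H/Vs; the values below are representative of a strong contrast, not the Boston profiles used in the study:

```python
import numpy as np

# Quarter-wavelength estimate of soil amplification over a strong
# sediment/bedrock impedance contrast (representative values only).
rho_s, vs, h = 1800.0, 200.0, 30.0     # soil density (kg/m^3), Vs (m/s), depth (m)
rho_b, vb = 2500.0, 2500.0             # bedrock density and Vs

impedance_ratio = (rho_b * vb) / (rho_s * vs)
amplification = np.sqrt(impedance_ratio)
t0 = 4.0 * h / vs                      # fundamental site period
print(f"impedance ratio = {impedance_ratio:.1f}, "
      f"amplification ~ {amplification:.1f} at T0 = {t0:.2f} s")
```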
Couto, José Guilherme; Bravo, Isabel; Pirraco, Rui
2011-09-01
The purpose of this work was the biological comparison between Low Dose Rate (LDR) and Pulsed Dose Rate (PDR) brachytherapy in cervical cancer, following the discontinuation of the afterloading system used for LDR treatments at our Institution since December 2009. In the first phase we studied the influence of the pulse dose and the pulse time on the biological equivalence between LDR and PDR treatments using the Linear Quadratic Model (LQM). In the second phase, the equivalent dose in 2 Gy/fraction (EQD(2)) for the tumor, rectum, and bladder in treatments performed with both techniques was evaluated and statistically compared. All evaluated patients had stage IIB cervical cancer and were treated with External Beam Radiotherapy (EBRT) plus two Brachytherapy (BT) applications. Data were collected from 48 patients (26 patients treated with LDR and 22 patients with PDR). In the analysis of the influence of PDR parameters on the biological equivalence between LDR and PDR treatments (Phase 1), it was calculated that if the pulse dose in PDR was kept equal to the LDR dose rate, a small therapeutic loss was expected. If the pulse dose was decreased, the therapeutic window became larger, but a correction in the prescribed dose was necessary. In PDR schemes with a 1 hour interval between pulses, the pulse time did not significantly influence the equivalent dose. In the comparison between the groups treated with LDR and PDR (Phase 2), we concluded that they were not equivalent, because in the PDR group the total EQD(2) for the tumor, rectum, and bladder was smaller than in the LDR group; the LQM estimated that a correction in the prescribed dose of 6% to 10% was necessary to avoid therapeutic loss. A correction in the prescribed dose was therefore necessary; this correction should be achieved by calculating the PDR dose equivalent to the desired LDR total dose.
Nonlinear Aeroacoustics Computations by the Space-Time CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.
2003-01-01
The Space-Time Conservation Element and Solution Element Method, or CE/SE Method for short, is a recently developed numerical method for conservation laws. Despite its second order accuracy in space and time, it possesses low dispersion errors and low dissipation. The method is robust enough to cover a wide range of compressible flows: from weak linear acoustic waves to strong discontinuous waves (shocks). An outstanding feature of the CE/SE scheme is its truly multi-dimensional, simple but effective non-reflecting boundary condition (NRBC), which is particularly valuable for computational aeroacoustics (CAA). By nature, the method may be categorized as a finite volume method, where the conservation element (CE) is equivalent to a finite control volume (or cell) and the solution element (SE) can be understood as the cell interface. However, due to its careful treatment of the surface fluxes and geometry, it is different from existing schemes. The CE/SE scheme has now been developed to a mature stage: a 3-D unstructured CE/SE Navier-Stokes solver is already available. In the present review paper, however, as a general introduction to the CE/SE method, only the 2-D unstructured Euler CE/SE solver is chosen and sketched in section 2. Applications of the 2-D and 3-D CE/SE schemes to linear, and in particular, nonlinear aeroacoustics are then depicted in sections 3, 4, and 5 to demonstrate its robustness and capability.
A high-fidelity method to analyze perturbation evolution in turbulent flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unnikrishnan, S., E-mail: sasidharannair.1@osu.edu; Gaitonde, Datta V., E-mail: gaitonde.3@osu.edu
2016-04-01
Small perturbation propagation in fluid flows is usually examined by linearizing the governing equations about a steady basic state. It is often useful, however, to study perturbation evolution in the unsteady evolving turbulent environment. Such analyses can elucidate the role of perturbations in the generation of coherent structures or the production of noise from jet turbulence. The appropriate equations are still the linearized Navier–Stokes equations, except that the linearization must be performed about the instantaneous evolving turbulent state, which forms the coefficients of the linearized equations. This is a far more difficult problem since in addition to the turbulent state, its rate of change and the perturbation field are all required at each instant. In this paper, we develop and use a novel technique for this problem by using a pair (denoted “baseline” and “twin”) of simultaneous synchronized Large-Eddy Simulations (LES). At each time-step, small disturbances whose propagation characteristics are to be studied are introduced into the twin through a forcing term. At subsequent time steps, the difference between the two simulations is shown to be equivalent to solving the forced Navier–Stokes equations, linearized about the instantaneous turbulent state. The technique does not put constraints on the forcing, which could be arbitrary, e.g., white noise or other stochastic variants. We consider, however, “native” forcing having properties of disturbances that exist naturally in the turbulent environment. The method then isolates the effect of turbulence in a particular region on the rest of the field, which is useful in the study of noise source localization. The synchronized technique is relatively simple to implement into existing codes. In addition to minimizing the storage and retrieval of large time-varying datasets, it avoids the need to explicitly linearize the governing equations, which can be a very complicated task for viscous terms or turbulence closures. The method is illustrated by application to a well-validated Mach 1.3 jet. Specifically, the effects of turbulence on the jet lipline and core collapse regions on the near-acoustic field are isolated. The properties of the method, including linearity and effect of initial transients, are discussed. The results provide insight into how turbulence from different parts of the jet contribute to the observed dominance of low and high frequency content at shallow and sideline angles, respectively.
Spectral Reconstruction Based on SVM for Cross Calibration
NASA Astrophysics Data System (ADS)
Gao, H.; Ma, Y.; Liu, W.; He, H.
2017-05-01
Chinese HY-1C/1D satellites will use a 5nm/10nm-resolution visible-near infrared (VNIR) hyperspectral sensor with a solar calibrator to cross-calibrate with other sensors. The hyperspectral radiance data are composed of the average radiance in the sensor's passbands and bear a spectral smoothing effect, so a transform from the hyperspectral radiance data to the 1-nm-resolution apparent spectral radiance by spectral reconstruction needs to be implemented. In order to solve the problem of noise accumulation and deterioration after several iterations of the iterative algorithm, a novel regression method based on SVM is proposed, which can closely approximate arbitrarily complex non-linear relationships and provides better generalization capability through learning. From a system point of view, the relationship between the apparent radiance and the equivalent radiance is a nonlinear mapping introduced by the spectral response function (SRF); the SVM transforms the low-dimensional non-linear problem into a high-dimensional linear one through a kernel function, obtaining the global optimal solution by virtue of its quadratic form. The experiment is performed using 6S-simulated spectra that account for the SRF and SNR of the hyperspectral sensor, measured reflectance spectra of water bodies, and different atmospheric conditions. The comparative results show: firstly, the proposed method achieves higher reconstruction accuracy, especially for the high-frequency signal; secondly, as the spectral resolution of the hyperspectral sensor decreases, the proposed method performs better than the iterative method; finally, the root mean square relative error (RMSRE), which is used to evaluate the difference between the reconstructed spectrum and the real spectrum over the whole spectral range, is at least halved by the proposed method.
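The abstract does not include an implementation, but the reconstruction step it describes (a kernel regression from band-averaged radiances to fine-resolution radiances) can be sketched with an off-the-shelf support vector regressor. Everything below is illustrative: the data are random stand-ins, and the shapes, names and hyperparameters are assumptions, not details from the paper.

```python
# Illustrative kernel-SVM regression from band-averaged radiances to a
# fine-resolution spectrum. Random arrays stand in for 6S-simulated training
# data; SVR is single-output, so one regressor is fitted per output wavelength.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_samples, n_bands, n_fine = 200, 40, 50

X = rng.random((n_samples, n_bands))   # stand-in band-averaged radiances
Y = rng.random((n_samples, n_fine))    # stand-in fine-resolution radiances

models = [SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, Y[:, j])
          for j in range(n_fine)]

x_new = rng.random((1, n_bands))
reconstructed = np.array([m.predict(x_new)[0] for m in models])
print(reconstructed.shape)             # (50,)
```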
NASA Astrophysics Data System (ADS)
He, Xin; Frey, Eric C.
2007-03-01
Binary ROC analysis has solid decision-theoretic foundations and a close relationship to linear discriminant analysis (LDA). In particular, for the case of Gaussian equal covariance input data, the area under the ROC curve (AUC) value has a direct relationship to the Hotelling trace. Many attempts have been made to extend binary classification methods to multi-class. For example, Fukunaga extended binary LDA to obtain multi-class LDA, which uses the multi-class Hotelling trace as a figure-of-merit, and we have previously developed a three-class ROC analysis method. This work explores the relationship between conventional multi-class LDA and three-class ROC analysis. First, we developed a linear observer, the three-class Hotelling observer (3-HO). For Gaussian equal covariance data, the 3-HO provides equivalent performance to the three-class ideal observer and, under less strict conditions, maximizes the signal-to-noise ratio for classification of all pairs of the three classes simultaneously. The 3-HO templates are not the eigenvectors obtained from multi-class LDA. Second, we show that the three-class Hotelling trace, which is the figure-of-merit in the conventional three-class extension of LDA, has significant limitations. Third, we demonstrate that, under certain conditions, there is a linear relationship between the eigenvectors obtained from multi-class LDA and 3-HO templates. We conclude that the 3-HO based on decision theory has advantages both in its decision-theoretic background and in the usefulness of its figure-of-merit. Additionally, there exists the possibility of interpreting the two linear features extracted by the conventional extension of LDA from a decision-theoretic point of view.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
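The incremental ('delta') form described above is easy to demonstrate on a toy linear system: instead of solving A x = b directly, one repeatedly solves M dx = b - A x with an approximate operator M and applies the correction. In this hedged sketch a diagonal M stands in for the spatially split approximate factorization; nothing here is taken from the paper's code.

```python
# Incremental ("delta") iteration: solve M*dx = b - A@x, then x += dx.
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = 4.0 * np.eye(n) + 0.1 * rng.random((n, n))  # diagonally dominant test matrix
b = rng.random(n)
M = np.diag(np.diag(A))        # crude stand-in for an approximate factorization

x = np.zeros(n)
for _ in range(500):
    residual = b - A @ x       # right-hand side of the correction form
    x += np.linalg.solve(M, residual)
    if np.linalg.norm(residual) < 1e-12:
        break

print(np.linalg.norm(A @ x - b))  # small: the increments have converged
```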
Fall with Linear Drag and Wien's Displacement Law: Approximate Solution and Lambert Function
ERIC Educational Resources Information Center
Vial, Alexandre
2012-01-01
We present an approximate solution for the downward time of travel in the case of a mass falling with a linear drag force. We show how a quasi-analytical solution implying the Lambert function can be found. We also show that solving the previous problem is equivalent to the search for Wien's displacement law. These results can be of interest for…
Code of Federal Regulations, 2010 CFR
2010-07-01
... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 Pt. 53...
Adiabatic dynamics of one-dimensional classical Hamiltonian dissipative systems
NASA Astrophysics Data System (ADS)
Pritula, G. M.; Petrenko, E. V.; Usatenko, O. V.
2018-02-01
A linearized plane pendulum with slowly varying mass and string length, and with the suspension point moving at a slowly varying speed, is presented as an example of a simple 1D mechanical system described by the generalized harmonic oscillator equation, which is a basic model in discussions of adiabatic dynamics and geometric phase. The expression for the pendulum geometric phase is obtained by three different methods. The pendulum is shown to be canonically equivalent to the damped harmonic oscillator. This supports the mathematical conclusion, not widely accepted in the physics community, that there is no difference between dissipative and Hamiltonian 1D systems.
A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.
Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad
2012-01-01
The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of a human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable despite the considerably large deformations that occurred. There was good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be effectively used in real-time applications.
Validation of Milliflex® Quantum for Bioburden Testing of Pharmaceutical Products.
Gordon, Oliver; Goverde, Marcel; Staerk, Alexandra; Roesti, David
2017-01-01
This article reports the validation strategy used to demonstrate that the Milliflex® Quantum yielded non-inferior results to the traditional bioburden method. It was validated according to USP <1223>, European Pharmacopoeia 5.1.6, and Parenteral Drug Association Technical Report No. 33 and comprised the validation parameters robustness, ruggedness, repeatability, specificity, limit of detection and quantification, accuracy, precision, linearity, range, and equivalence in routine operation. For the validation, a combination of pharmacopeial ATCC strains as well as a broad selection of in-house isolates was used. In-house isolates were used in a stressed state. Results were statistically evaluated against the pharmacopeial acceptance criterion of ≥70% recovery compared to the traditional method. Post-hoc test power calculations verified the appropriateness of the sample size used to detect such a difference. Furthermore, equivalence tests verified non-inferiority of the rapid method as compared to the traditional method. In conclusion, the rapid bioburden method based on the Milliflex® Quantum was successfully validated as an alternative to the traditional bioburden test. LAY ABSTRACT: Pharmaceutical drug products must fulfill specified quality criteria regarding their microbial content in order to ensure patient safety. Drugs that are delivered into the body via injection, infusion, or implantation must be sterile (i.e., devoid of living microorganisms). Bioburden testing measures the levels of microbes present in the bulk solution of a drug before sterilization, and thus it provides important information for manufacturing a safe product. In general, bioburden testing has to be performed using the methods described in the pharmacopoeias (membrane filtration or plate count). These methods are well established and validated regarding their effectiveness; however, the incubation time required to visually identify microbial colonies is long. Thus, alternative methods that detect microbial contamination faster will improve control over the manufacturing process and speed up product release. Before alternative methods may be used, they must undergo a side-by-side comparison with pharmacopeial methods. In this comparison, referred to as validation, it must be shown in a statistically verified manner that the effectiveness of the alternative method is at least equivalent to that of the pharmacopeial methods. Here we describe the successful validation of an alternative bioburden testing method based on fluorescent staining of growing microorganisms applying the Milliflex® Quantum system by MilliporeSigma. © PDA, Inc. 2017.
Treatment of constraints in the stochastic quantization method and covariantized Langevin equation
NASA Astrophysics Data System (ADS)
Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji
1993-04-01
We study the treatment of the constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking into account the Ito calculus. We then obtain an improved Langevin equation and Fokker-Planck equation which naturally lead to the correct path integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to an O(N) non-linear σ model and it is shown that singular terms appearing in the improved Langevin equation cancel out the δ^n(0) divergences at one-loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate-transformation covariant and vielbein-rotation invariant formalism.
Estimation of multiple accelerated motions using chirp-Fourier transform and clustering.
Alexiadis, Dimitrios S; Sergiadis, George D
2007-01-01
Motion estimation in the spatiotemporal domain has been extensively studied and many methodologies have been proposed, which, however, cannot handle both time-varying and multiple motions. Extending previously published ideas, we present an efficient method for estimating multiple, linearly time-varying motions. It is shown that the estimation of accelerated motions is equivalent to the parameter estimation of superposed chirp signals. From this viewpoint, one can exploit established signal processing tools such as the chirp-Fourier transform. It is shown that accelerated motion results in energy concentration along planes in the 4-D space of spatial frequencies, temporal frequency and chirp rate. Using fuzzy c-planes clustering, we estimate the plane/motion parameters. The effectiveness of our method is verified on both synthetic and real sequences and its advantages are highlighted.
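To make the chirp connection concrete, the toy sketch below estimates the parameters of a single chirp by matched filtering over a grid, which is the 1-D analogue of locating energy concentrations in frequency/chirp-rate space. It is purely illustrative and not the authors' implementation.

```python
# Toy matched-filter estimate of a chirp's start frequency and chirp rate.
import numpy as np

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
f0_true, rate_true = 50.0, 30.0
x = np.cos(2 * np.pi * (f0_true * t + 0.5 * rate_true * t**2))

freqs = np.linspace(40.0, 60.0, 41)     # candidate start frequencies (Hz)
rates = np.linspace(20.0, 40.0, 41)     # candidate chirp rates (Hz/s)
energy = np.array([[abs(np.sum(x * np.exp(-2j * np.pi * (f * t + 0.5 * r * t**2))))
                    for f in freqs] for r in rates])

i, j = np.unravel_index(np.argmax(energy), energy.shape)
print(rates[i], freqs[j])               # peaks at (30.0, 50.0)
```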
High-energy evolution to three loops
NASA Astrophysics Data System (ADS)
Caron-Huot, Simon; Herranen, Matti
2018-02-01
The Balitsky-Kovchegov equation describes the high-energy growth of gauge theory scattering amplitudes as well as nonlinear saturation effects which stop it. We obtain the three-loop corrections to the equation in planar N = 4 super Yang-Mills theory. Our method exploits a recently established equivalence with the physics of soft wide-angle radiation, so-called non-global logarithms, and thus yields at the same time the three-loop evolution equation for non-global logarithms. As a by-product of our analysis, we develop a Lorentz-covariant method to subtract infrared and collinear divergences in cross-section calculations in the planar limit. We compare our result in the linear regime with a recent prediction for the so-called Pomeron trajectory, and compare its collinear limit with predictions from the spectrum of twist-two operators.
Improved score statistics for meta-analysis in single-variant and gene-level association studies.
Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo
2018-06-01
Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss caused by the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods performing equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
Singular value description of a digital radiographic detector: Theory and measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyprianou, Iacovos S.; Badano, Aldo; Gallas, Brandon D.
The H operator represents the deterministic performance of any imaging system. For a linear, digital imaging system, this system operator can be written in terms of a matrix, H, that describes the deterministic response of the system to a set of point objects. A singular value decomposition of this matrix results in a set of orthogonal functions (singular vectors) that form the system basis. A linear combination of these vectors completely describes the transfer of objects through the linear system, where the respective singular values associated with each singular vector describe the magnitude with which that contribution to the object is transferred through the system. This paper is focused on the measurement, analysis, and interpretation of the H matrix for digital x-ray detectors. A key ingredient in the measurement of the H matrix is the detector response to a single x ray (or infinitesimal x-ray beam). The authors have developed a method to estimate the 2D detector shift-variant, asymmetric ray response function (RRF) from multiple measured line response functions (LRFs) using a modified edge technique. The RRF measurements cover a range of x-ray incident angles from 0 deg. (equivalent location at the detector center) to 30 deg. (equivalent location at the detector edge) for a standard radiographic or cone-beam CT geometric setup. To demonstrate the method, three beam qualities were tested using the inherent, Lu/Er, and Yb beam filtration. The authors show that measures using the LRF, derived from an edge measurement, underestimate the system's performance when compared with the H matrix derived using the RRF. Furthermore, the authors show that edge measurements must be performed at multiple directions in order to capture rotational asymmetries of the RRF. The authors interpret the results of the H matrix SVD and provide correlations with the familiar MTF methodology. Discussion is made about the benefits of the H matrix technique with regards to signal detection theory, and the characterization of shift-variant imaging systems.
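The singular value description lends itself to a very short numerical illustration. The sketch below builds a stand-in H matrix, takes its SVD, and verifies that propagating an object through the singular system (project, scale, resynthesize) reproduces the direct matrix action; the matrix contents are random placeholders, not measured detector responses.

```python
# SVD of a stand-in H matrix: propagate an object through the singular system.
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_points = 128, 64
H = rng.random((n_pixels, n_points))   # placeholder point-object responses

U, s, Vt = np.linalg.svd(H, full_matrices=False)

obj = rng.random(n_points)             # arbitrary object
coeffs = Vt @ obj                      # object expressed in the system basis
image = U @ (s * coeffs)               # scaled resynthesis = system output
print(np.allclose(image, H @ obj))     # True: identical linear mapping
```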
Barrère, Caroline; Hubert-Roux, Marie; Lange, Catherine M; Rejaibi, Majed; Kebir, Nasreddine; Désilles, Nicolas; Lecamp, Laurence; Burel, Fabrice; Loutelier-Bourhis, Corinne
2012-06-15
Polyamides (PA) belong to the most used classes of polymers because of their attractive chemical and mechanical properties. In order to monitor original PA design, it is essential to develop analytical methods for the characterization of these compounds that are mostly insoluble in usual solvents. A low molecular weight polyamide (PA11), synthesized with a chain limiter, has been used as a model compound and characterized by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). In the solvent-based approach, specific solvents for PA, i.e. trifluoroacetic acid (TFA) and hexafluoroisopropanol (HFIP), were tested. Solvent-based sample preparation methods, dried-droplet and thin layer, were optimized through the choice of matrix and salt. Solvent-based (thin layer) and solvent-free methods were then compared for this low solubility polymer. Ultra-high-performance liquid chromatography/electrospray ionization (UHPLC/ESI)-TOF-MS analyses were then used to confirm elemental compositions through accurate mass measurement. Sodium iodide (NaI) and 2,5-dihydroxybenzoic acid (2,5-DHB) are, respectively, the best cationizing agent and matrix. The dried-droplet sample preparation method led to inhomogeneous deposits, but the thin-layer method could overcome this problem. Moreover, the solvent-free approach was the easiest and safest sample preparation method giving equivalent results to solvent-based methods. Linear as well as cyclic oligomers were observed. Although the PA molecular weights obtained by MALDI-TOF-MS were lower than those obtained by (1)H NMR and acido-basic titration, this technique allowed us to determine the presence of cyclic and linear species, not differentiated by the other techniques. TFA was shown to induce modification of linear oligomers that permitted cyclic and linear oligomers to be clearly highlighted in spectra. Optimal sample preparation conditions were determined for the MALDI-TOF-MS analysis of PA11, a model of polyamide analogues. The advantages of the solvent-free and solvent-based approaches were shown. Molecular weight determination using MALDI was discussed. Copyright © 2012 John Wiley & Sons, Ltd.
Milbrath, Meghan O’Grady; Wenger, Yvan; Chang, Chiung-Wen; Emond, Claude; Garabrant, David; Gillespie, Brenda W.; Jolliet, Olivier
2009-01-01
Objective: In this study we reviewed the half-life data in the literature for the 29 dioxin, furan, and polychlorinated biphenyl congeners named in the World Health Organization toxic equivalency factor scheme, with the aim of providing a reference value for the half-life of each congener in the human body and a method of half-life estimation that accounts for an individual’s personal characteristics. Data sources and extraction: We compared data from > 30 studies containing congener-specific elimination rates. Half-life data were extracted and compiled into a summary table. We then created a subset of these data based on defined exclusionary criteria. Data synthesis: We defined values for each congener that approximate the half-life in an infant and in an adult. A linear interpolation of these values was used to examine the relationship between half-life and age, percent body fat, and absolute body fat. We developed predictive equations based on these relationships and adjustments for individual characteristics. Conclusions: The half-life of dioxins in the body can be predicted using a linear relationship with age adjusted for body fat, smoking, and breast-feeding. Data suggest an alternative method based on a linear relationship between half-life and total body fat, but this approach requires further testing and validation with individual measurements. PMID:19337517
Geometry of Conservation Laws for a Class of Parabolic Partial Differential Equations
NASA Astrophysics Data System (ADS)
Clelland, Jeanne Nielsen
1996-08-01
I consider the problem of computing the space of conservation laws for a second-order, parabolic partial differential equation for one function of three independent variables. The PDE is formulated as an exterior differential system $\mathcal{I}$ on a 12-manifold $M$, and its conservation laws are identified with the vector space of closed 3-forms in the infinite prolongation of $\mathcal{I}$ modulo the so-called "trivial" conservation laws. I use the tools of exterior differential systems and Cartan's method of equivalence to study the structure of the space of conservation laws. My main result is the following theorem: any conservation law for a second-order, parabolic PDE for one function of three independent variables can be represented by a closed 3-form in the differential ideal $\mathcal{I}$ on the original 12-manifold $M$. I show that if a nontrivial conservation law exists, then $\mathcal{I}$ has a deprolongation to an equivalent system $\mathcal{J}$ on a 7-manifold $N$, and any conservation law for $\mathcal{I}$ can be expressed as a closed 3-form on $N$ which lies in $\mathcal{J}$. Furthermore, any such system in the real analytic category is locally equivalent to a system generated by a (parabolic) equation of the form $A(u_{xx}u_{yy}-u_{xy}^{2}) + B_1 u_{xx} + 2B_2 u_{xy} + B_3 u_{yy} + C = 0$, where $A$, $B_i$, $C$ are functions of $x$, $y$, $t$, $u$, $u_x$, $u_y$, $u_t$. I compute the space of conservation laws for several examples, and I begin the process of analyzing the general case using Cartan's method of equivalence. I show that the non-linearizable equation $u_t = \tfrac{1}{2}e^{-u}(u_{xx}+u_{yy})$ has an infinite-dimensional space of conservation laws. This stands in contrast to the two-variable case, for which Bryant and Griffiths showed that any equation whose space of conservation laws has dimension 4 or more is locally equivalent to a linear equation, i.e., is linearizable.
Zhang, Da; Mihai, Georgeta; Barbaras, Larry G; Brook, Olga R; Palmer, Matthew R
2018-05-10
Water equivalent diameter (Dw) reflects a patient's attenuation, is a sound descriptor of patient size, and is used to determine the size-specific dose estimate from a CT examination. Calculating Dw from CT localizer radiographs makes it possible to utilize Dw before actual scans and minimizes truncation errors due to limited reconstructed fields of view. One obstacle preventing the user community from implementing this useful tool is the necessity to calibrate localizer pixel values so as to represent water equivalent attenuation. We report a practical method to ease this calibration process. Dw is calculated from the water equivalent area (Aw), which is deduced from the average localizer pixel value (LPV) of the line(s) in the localizer radiograph that correspond(s) to the axial image. The calibration process is conducted to establish the relationship between Aw and LPV. Localizer and axial images were acquired from phantoms of different total attenuation. We developed a program that automates the geometrical association between axial images and localizer lines and manages the measurements of Dw and average pixel values. We tested the calibration method on three CT scanners: a GE CT750HD, a Siemens Definition AS, and a Toshiba Aquilion Prime80, for both posterior-anterior (PA) and lateral (LAT) localizer directions (for all CTs) and with different localizer filters (for the Toshiba CT). The computer program was able to correctly perform the geometrical association between corresponding axial images and localizer lines. Linear relationships between Aw and LPV were observed (with R^2 all greater than 0.998) under all tested conditions, regardless of the direction and image filters used on the localizer radiographs. When comparing LAT and PA directions with the same image filter and for the same scanner, the slope values were close (maximum difference of 0.02 mm), and the intercept values showed larger deviations (maximum difference of 2.8 mm). Water equivalent diameter estimation on phantoms and patients demonstrated high accuracy of the calibration: the percentage difference between Dw from axial images and localizers was below 2%. With five clinical chest examinations and five abdominal-pelvic examinations of varying patient sizes, the maximum percentage difference was approximately 5%. Our study showed that Aw and LPV are highly correlated, providing enough evidence to allow for the Dw determination once the experimental calibration process is established. © 2018 American Association of Physicists in Medicine.
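Under the linear Aw-versus-LPV relationship reported above, the whole pipeline reduces to a one-dimensional calibration followed by a square root. The sketch below uses made-up phantom calibration points; only the functional form (a linear fit, then Dw = 2*sqrt(Aw/pi)) follows the paper.

```python
# Linear Aw-vs-LPV calibration, then Dw = 2*sqrt(Aw/pi). Phantom data are made up.
import numpy as np

lpv = np.array([120.0, 180.0, 240.0, 310.0])     # average localizer pixel values
aw_mm2 = np.array([3.0e4, 5.5e4, 8.0e4, 1.1e5])  # water equivalent areas (mm^2)

slope, intercept = np.polyfit(lpv, aw_mm2, 1)    # the calibration line

def dw_from_lpv(pixel_value):
    """Water equivalent diameter (mm) from an average localizer pixel value."""
    aw = slope * pixel_value + intercept
    return 2.0 * np.sqrt(aw / np.pi)

print(dw_from_lpv(200.0))
```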
The risk equivalent of an exposure to-, versus a dose of radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, V.P.
The long-term potential carcinogenic effects of low-level exposure (LLE) are addressed. The principal point discussed is the linear, no-threshold dose-response curve. That the linear no-threshold, or proportional, relationship is widely used can be seen in the way in which values for cancer risk coefficients are expressed: in terms of new cases, per million persons exposed, per year, per unit exposure or dose. This implies that the underlying relationship is proportional, i.e., ''linear, without threshold''. 12 refs., 9 figs., 1 tab.
Periodic equivalence ratio modulation method and apparatus for controlling combustion instability
Richards, George A.; Janus, Michael C.; Griffith, Richard A.
2000-01-01
The periodic equivalence ratio modulation (PERM) method and apparatus significantly reduces and/or eliminates unstable conditions within a combustion chamber. The method involves modulating the equivalence ratio for the combustion device, such that the combustion device periodically operates outside of an identified unstable oscillation region. The equivalence ratio is modulated between preselected reference points, according to the shape of the oscillation region and operating parameters of the system. Preferably, the equivalence ratio is modulated from a first stable condition to a second stable condition, and, alternatively, the equivalence ratio is modulated from a stable condition to an unstable condition. The method is further applicable to multi-nozzle combustor designs, whereby individual nozzles are alternately modulated from stable to unstable conditions. Periodic equivalence ratio modulation (PERM) is accomplished by active control involving periodic, low frequency fuel modulation, whereby low frequency fuel pulses are injected into the main fuel delivery. Importantly, the fuel pulses are injected at a rate so as not to affect the desired time-average equivalence ratio for the combustion device.
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and prediction intervals, which quantify the uncertainty of model simulated values when the model is not linear.
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
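The nonlinear-regression core described in this report (a weighted least-squares objective minimized by a modified Gauss-Newton method, with sensitivities from forward-difference perturbation) can be sketched generically. The toy model and all numbers below are stand-ins; this is not UCODE_2005 code.

```python
# Toy Gauss-Newton loop for a weighted least-squares objective with
# forward-difference sensitivities. The "process model" is a stand-in.
import numpy as np

def sim(p, x):
    """Toy process model: exponential decay with amplitude p[0], rate p[1]."""
    return p[0] * np.exp(-p[1] * x)

x = np.linspace(0.0, 4.0, 20)
obs = sim(np.array([2.0, 0.7]), x)     # synthetic observations
w = np.ones_like(obs)                  # observation weights

p = np.array([1.0, 1.0])               # starting parameter values
for _ in range(25):
    r = obs - sim(p, x)                # residuals drive the update
    J = np.empty((x.size, p.size))     # forward-difference sensitivity matrix
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = 1e-6 * max(abs(p[k]), 1.0)
        J[:, k] = (sim(p + dp, x) - sim(p, x)) / dp[k]
    step = np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))
    p = p + step
    if np.linalg.norm(step) < 1e-10:
        break

print(p)                               # converges toward [2.0, 0.7]
```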
NASA Astrophysics Data System (ADS)
Behzad, Mehdi; Ghadami, Amin; Maghsoodi, Ameneh; Michael Hale, Jack
2013-11-01
In this paper, a simple method for detection of multiple edge cracks in Euler-Bernoulli beams having two different types of cracks is presented based on energy equations. Each crack is modeled as a massless rotational spring using Linear Elastic Fracture Mechanics (LEFM) theory, and a relationship among natural frequencies, crack locations and stiffness of equivalent springs is demonstrated. In the procedure, for detection of m cracks in a beam, 3m equations and natural frequencies of healthy and cracked beam in two different directions are needed as input to the algorithm. The main accomplishment of the presented algorithm is the capability to detect the location, severity and type of each crack in a multi-cracked beam. Concise and simple calculations along with accuracy are other advantages of this method. A number of numerical examples for cantilever beams including one and two cracks are presented to validate the method.
Correction of scatter in megavoltage cone-beam CT
NASA Astrophysics Data System (ADS)
Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.
2001-03-01
The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
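The kernel-superposition idea can be conveyed with a one-pass toy version: convolve an approximate primary image with a scatter kernel and subtract the estimate. Here a normalized Gaussian stands in for the Monte Carlo generated, spectrum- and phantom-adapted kernels of the paper, and the measured image is used as a first-pass proxy for the primary signal.

```python
# One-pass scatter estimate by kernel superposition: scatter ~ primary (*) kernel.
import numpy as np
from scipy.signal import fftconvolve

def scatter_kernel(size=31, sigma=6.0, fraction=0.15):
    """Normalized Gaussian stand-in for a Monte Carlo scatter kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return fraction * k / k.sum()

measured = 1.0 + np.random.default_rng(3).random((64, 64))  # toy projection
scatter = fftconvolve(measured, scatter_kernel(), mode="same")
primary = measured - scatter          # scatter-corrected transmission estimate
print(measured.mean(), primary.mean())
```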
Thermal quantum time-correlation functions from classical-like dynamics
NASA Astrophysics Data System (ADS)
Hele, Timothy J. H.
2017-07-01
Thermal quantum time-correlation functions are of fundamental importance in quantum dynamics, allowing experimentally measurable properties such as reaction rates, diffusion constants and vibrational spectra to be computed from first principles. Since the exact quantum solution scales exponentially with system size, there has been considerable effort in formulating reliable linear-scaling methods involving exact quantum statistics and approximate quantum dynamics modelled with classical-like trajectories. Here, we review recent progress in the field with the development of methods including centroid molecular dynamics, ring polymer molecular dynamics (RPMD) and thermostatted RPMD (TRPMD). We show how these methods have recently been obtained from 'Matsubara dynamics', a form of semiclassical dynamics which conserves the quantum Boltzmann distribution. We also apply the Matsubara formalism to reaction rate theory, rederiving t → 0+ quantum transition-state theory (QTST) and showing that Matsubara-TST, like RPMD-TST, is equivalent to QTST. We end by surveying areas for future progress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J. C.; Baillet, S.; Jerbi, K.
2001-01-01
We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e. we first search for point current dipoles, then magnetic dipoles, and finally first order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources into an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.
On the rate of convergence of the alternating projection method in finite dimensional spaces
NASA Astrophysics Data System (ADS)
Galántai, A.
2005-10-01
Using the results of Smith, Solmon, and Wagner [K. Smith, D. Solmon, S. Wagner, Practical and mathematical aspects of the problem of reconstructing objects from radiographs, Bull. Amer. Math. Soc. 83 (1977) 1227-1270] and Nelson and Neumann [S. Nelson, M. Neumann, Generalizations of the projection method with application to SOR theory for Hermitian positive semidefinite linear systems, Numer. Math. 51 (1987) 123-141], we derive new estimates for the speed of the alternating projection method and its relaxed version in finite dimensional spaces. These estimates can be computed in at most O(m^3) arithmetic operations, unlike the estimates in the papers mentioned above, which require spectral information. The new and old estimates are equivalent in many practical cases. In cases when the new estimates are weaker, numerical testing indicates that they approximate the original bounds quite well.
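For readers unfamiliar with the method being bounded, the sketch below runs von Neumann's alternating projection between two subspaces of R^n that share a direction; the iterates converge to the projection onto the intersection, which is the setting the convergence-rate estimates above address. The bases are arbitrary stand-ins.

```python
# Von Neumann alternating projections between two subspaces of R^n.
import numpy as np

rng = np.random.default_rng(4)
n = 10
A = rng.random((n, 3))                              # basis of subspace U (columns)
B = np.column_stack([A[:, 0], rng.random((n, 2))])  # shares one direction with U

PU = A @ np.linalg.pinv(A)     # orthogonal projector onto range(A)
PV = B @ np.linalg.pinv(B)     # orthogonal projector onto range(B)

x = rng.random(n)
for _ in range(1000):
    x = PV @ (PU @ x)          # one alternating-projection sweep

# x now approximates the projection of the start vector onto the intersection
print(np.linalg.norm(PU @ x - x), np.linalg.norm(PV @ x - x))  # both near 0
```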
Compact double-bunch x-ray free electron lasers for fresh bunch self-seeding and harmonic lasing
Emma, C.; Feng, Y.; Nguyen, D. C.; ...
2017-03-03
This study presents a novel method to improve the longitudinal coherence, efficiency and maximum photon energy of x-ray free electron lasers (XFELs). The method is equivalent to having two separate concatenated XFELs. The first uses one bunch of electrons to reach the saturation regime, generating a high power self-amplified spontaneous emission x-ray pulse at the fundamental and third harmonic. The x-ray pulse is filtered through an attenuator/monochromator and seeds a different electron bunch in the second FEL, using the fundamental and/or third harmonic as an input signal. In our method we combine the two XFELs operating with two bunches, separated by one or more rf cycles, in the same linear accelerator. We discuss the advantages and applications of the proposed system for present and future XFELs.
Characterizing hydrochemical properties of springs in Taiwan based on their geological origins.
Jang, Cheng-Shin; Chen, Jui-Sheng; Lin, Yun-Bin; Liu, Chen-Wuing
2012-01-01
This study was performed to characterize the hydrochemical properties of springs in Taiwan based on their geological origins. Stepwise discriminant analysis (DA) was used to establish a linear classification model of springs using hydrochemical parameters. Two hydrochemical datasets, ion concentrations and relative proportions of equivalents per liter of major ions, were used to predict the geological origins of the springs. The results reveal that DA using relative proportions of equivalents per liter of major ions yields 95.6% correct assignment, which is superior to DA using ion concentrations. This result indicates that relative proportions of equivalents of major hydrochemical parameters in spring water are more strongly associated with the geological origins than ion concentrations are. Low percentages of Na(+) equivalents are common properties of springs emerging from acid-sulfate and neutral-sulfate igneous rock. Springs emerging from metamorphic rock show low percentages of Cl(-) equivalents and high percentages of HCO3(-) equivalents, and springs emerging from sedimentary rock exhibit high Cl(-)/SO4(2-) ratios.
A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications
Austin, Peter C.
2017-01-01
Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack, implemented in three statistical programming languages (R, SAS and Stata). PMID:29307954
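The piecewise-exponential/Poisson equivalence mentioned above is easy to verify numerically: within each interval, the maximum-likelihood hazard is simply events divided by person-time, which is what a Poisson model with a log person-time offset estimates. The sketch below uses synthetic data purely to show the bookkeeping.

```python
# Interval-specific hazards: events / person-time, the Poisson-GLM estimate.
import numpy as np

rng = np.random.default_rng(5)
follow_up = rng.exponential(2.0, size=500)   # synthetic follow-up times (years)
event = rng.random(500) < 0.7                # synthetic event indicator
cuts = [0.0, 1.0, 2.0, 4.0, np.inf]          # interval boundaries

for lo, hi in zip(cuts[:-1], cuts[1:]):
    person_time = np.sum(np.clip(follow_up, lo, hi) - lo)  # exposure in [lo, hi)
    deaths = np.sum(event & (follow_up >= lo) & (follow_up < hi))
    if person_time > 0:
        print(f"[{lo:g}, {hi:g}): hazard = {deaths / person_time:.3f}")
```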
A two-step, fourth-order method with energy preserving properties
NASA Astrophysics Data System (ADS)
Brugnano, Luigi; Iavernaro, Felice; Trigiante, Donato
2012-09-01
We introduce a family of fourth-order two-step methods that preserve the energy function of canonical polynomial Hamiltonian systems. As is the case with linear multistep and one-leg methods, a prerogative of the new formulae is that the associated nonlinear systems to be solved at each step of the integration procedure have the very same dimension as the underlying continuous problem. The key tools in the new methods are the line integral associated with a conservative vector field (such as the one defined by a Hamiltonian dynamical system) and its discretization obtained by the aid of a quadrature formula. Energy conservation is equivalent to the requirement that the quadrature is exact, which turns out to be always the case in the event that the Hamiltonian function is a polynomial and the degree of precision of the quadrature formula is high enough. The non-polynomial case is also discussed and a number of test problems are finally presented in order to compare the behavior of the new methods to the theoretical results.
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
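As a small illustration of the preferred reference model, the sketch below fits a lognormal distribution to synthetic area-equivalent diameters with the location fixed at zero; the synthetic parameters loosely mimic the reported 27.6 nm mean but are not the interlaboratory data.

```python
# Fit a lognormal size-distribution model to synthetic area-equivalent diameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
diameters_nm = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=500)

shape, loc, scale = stats.lognorm.fit(diameters_nm, floc=0)  # location fixed at 0
print(f"geometric mean = {scale:.1f} nm, geometric std = {np.exp(shape):.3f}")
```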
The Short Form 36 English and Chinese versions were equivalent in a multiethnic Asian population.
Tan, Maudrene L S; Wee, Hwee-Lin; Lee, Jeannette; Ma, Stefan; Heng, Derrick; Tai, E-Shyong; Thumboo, Julian
2013-07-01
The primary aim of this article was to evaluate the measurement equivalence of the English and Chinese versions of the Short Form 36 version 2 (SF-36v2) and Short Form 6D (SF-6D). In this cross-sectional study, health-related quality of life (HRQoL) was measured in 4,973 ethnic Chinese subjects using the SF-36v2 questionnaire. Measurement equivalence of domain and utility scores for the English- and Chinese-language SF-36v2 and SF-6D was assessed by examining the score differences between the two languages using linear regression models, with and without adjustment for known determinants of HRQoL. Equivalence was achieved if the 90% confidence interval (CI) of the differences in scores due to language fell within a predefined equivalence margin. Compared with English-speaking Chinese, Chinese-speaking Chinese were significantly older (47.6 vs. 55.5 years). All SF-36v2 domains were equivalent after adjusting for known determinants of HRQoL. The SF-6D utility/items had 90% CIs that either fully or partially overlapped their predefined equivalence margins. The English- and Chinese-language versions of the SF-36v2 and SF-6D demonstrated equivalence. Copyright © 2013 Elsevier Inc. All rights reserved.
A Note on Equivalence Among Various Scalar Field Models of Dark Energies
NASA Astrophysics Data System (ADS)
Mandal, Jyotirmay Das; Debnath, Ujjal
2017-08-01
In this work, we have investigated similarities between the various available models of scalar field dark energy (e.g., quintessence, k-essence, tachyon, phantom, quintom, dilatonic dark energy, etc.). We have defined an equivalence relation from elementary set theory between scalar field models of dark energies and used fundamental ideas from linear algebra to set up our model. Consequently, we have obtained mutually disjoint subsets of scalar field dark energies with similar properties and discussed our observations.
Virasoro constraints and polynomial recursion for the linear Hodge integrals
NASA Astrophysics Data System (ADS)
Guo, Shuai; Wang, Gehao
2017-04-01
The Hodge tau-function is a generating function for the linear Hodge integrals. It is also a tau-function of the KP hierarchy. In this paper, we first present the Virasoro constraints for the Hodge tau-function in the explicit form of the Virasoro equations. The expression of our Virasoro constraints is simply a linear combination of the Virasoro operators, where the coefficients are restored from a power series for the Lambert W function. Then, using this result, we deduce a simple version of the Virasoro constraints for the linear Hodge partition function, where the coefficients are restored from the Gamma function. Finally, we establish the equivalence relation between the Virasoro constraints and polynomial recursion formula for the linear Hodge integrals.
Rollinson, Njal; Holt, Sarah M; Massey, Melanie D; Holt, Richard C; Nancekivell, E Graham; Brooks, Ronald J
2018-05-01
Temperature has a strong effect on ectotherm development rate. It is therefore possible to construct predictive models of development that rely solely on temperature, which have applications in a range of biological fields. Here, we leverage a reference series of development stages for embryos of the turtle Chelydra serpentina, which was described at a constant temperature of 20 °C. The reference series acts to map each distinct developmental stage onto embryonic age (in days) at 20 °C. By extension, an embryo taken from any given incubation environment, once staged, can be assigned an equivalent age at 20 °C. We call this concept "Equivalent Development", as it maps the development stage of an embryo incubated at a given temperature to its equivalent age at a reference temperature. In the laboratory, we used the concept of Equivalent Development to estimate the development rate of embryos of C. serpentina across a series of constant temperatures. Using these estimates of development rate, we created a thermal performance curve measured in units of Equivalent Development (TPC_ED). We then used the TPC_ED to predict the developmental stage of embryos in several natural turtle nests across six years, and found that the model explained 85% of the variation in developmental stage in natural nests. Further, we compared the predictive accuracy of the model based on the TPC_ED to the predictive accuracy of a degree-day model, where development is assumed to be linearly related to temperature and the amount of accumulated heat is summed over time. Information theory suggested that the model based on the TPC_ED describes variation in developmental stage in wild nests better than the degree-day model. We suggest the concept of Equivalent Development has several strengths and can be broadly applied. In particular, studies on temperature-dependent sex determination may be facilitated by the concept of Equivalent Development, as development age maps directly onto the developmental series of the organism, allowing critical periods of sex determination to be delineated without invasive sampling, even under fluctuating temperatures. Copyright © 2018 Elsevier Ltd. All rights reserved.
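The degree-day benchmark mentioned above is simple enough to state in a few lines. A minimal sketch, assuming an arbitrary development threshold and synthetic nest temperatures (neither taken from the study):

```python
# Sketch of the degree-day benchmark: development is assumed linear in
# temperature above a threshold, and accumulated heat is summed over time.
import numpy as np

def degree_days(temps_c, threshold_c=10.0):
    """Sum daily heat accumulation above a development threshold."""
    temps = np.asarray(temps_c, dtype=float)
    return np.sum(np.clip(temps - threshold_c, 0.0, None))

nest_temps = [18.5, 22.0, 25.3, 19.8, 24.1]   # daily mean nest temps (degC), synthetic
print(f"accumulated degree-days: {degree_days(nest_temps):.1f}")
```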
Design and application of quadrature compensation patterns in bulk silicon micro-gyroscopes.
Ni, Yunfang; Li, Hongsheng; Huang, Libin
2014-10-29
This paper focuses on the detailed design issues of a particular quadrature reduction method, system stiffness matrix diagonalization, whose key technology is the design and application of quadrature compensation patterns. For bulk silicon micro-gyroscopes, a complete design and application case was presented. The compensation principle was described first. In the mechanical design, four types of basic structure units were presented to obtain the basic compensation function. A novel layout design was proposed to eliminate the additional disturbing static forces and torques. Parameter optimization was carried out to maximize the available compensation capability in a limited layout area. Two types of voltage loading methods were presented, and their influences on the sense mode dynamics were analyzed. The proposed design was applied to a dual-mass silicon micro-gyroscope developed in our laboratory, with a theoretical compensation capability corresponding to a quadrature equivalent angular rate of up to 412 °/s. In experiments, an actual quadrature equivalent angular rate of 357 °/s was compensated successfully. The actual compensation voltages were slightly larger than the theoretical ones, verifying the correctness of the design and the theoretical analyses. The patterns can be commonly used in planar linear vibratory silicon micro-gyroscopes for quadrature compensation purposes.
Heritability of myopia and ocular biometrics in Koreans: the healthy twin study.
Kim, Myung Hun; Zhao, Di; Kim, Woori; Lim, Dong-Hui; Song, Yun-Mi; Guallar, Eliseo; Cho, Juhee; Sung, Joohon; Chung, Eui-Sang; Chung, Tae-Young
2013-05-01
To estimate the heritabilities of myopia and ocular biometrics across different family types in a Korean population. We studied 1508 adults in the Healthy Twin Study. Spherical equivalent, axial length, anterior chamber depth, and corneal astigmatism were measured by refraction, corneal topography, and A-scan ultrasonography. To assess the degree of resemblance among different types of family relationships, intraclass correlation coefficients (ICCs) were calculated. Variance-component methods were applied to estimate the genetic contributions to eye phenotypes as heritability, based on maximum likelihood estimation. Narrow-sense heritability was calculated as the proportion of the total phenotypic variance explained by additive genetic effects, with adjustment for linear and nonlinear effects of age, sex, and interactions between age and sex. A total of 240 monozygotic twin pairs, 45 dizygotic twin pairs, and 938 singleton adult family members who were first-degree relatives of twins in 345 families were included in the study. ICCs for spherical equivalent from monozygotic twins, pooled first-degree pairs, and spouse pairs were 0.83, 0.34, and 0.20, respectively. The ICCs of other ocular biometrics were also significantly higher in monozygotic twins compared with other relative pairs, with greater consistency and conformity. The estimated narrow-sense heritability (95% confidence interval) was 0.78 (0.71-0.84) for spherical equivalent, 0.86 (0.82-0.90) for axial length, 0.83 (0.76-0.91) for anterior chamber depth, and 0.70 (0.63-0.77) for corneal astigmatism. The estimated heritability of spherical equivalent and ocular biometrics in this Korean population provides compelling evidence that all of these traits are highly heritable.
Weigold, Arne; Weigold, Ingrid K; Russell, Elizabeth J
2013-03-01
Self-report survey-based data collection is increasingly carried out using the Internet, as opposed to the traditional paper-and-pencil method. However, previous research on the equivalence of these methods has yielded inconsistent findings. This may be due to methodological and statistical issues present in much of the literature, such as nonequivalent samples in different conditions due to recruitment, participant self-selection to conditions, and data collection procedures, as well as incomplete or inappropriate statistical procedures for examining equivalence. We conducted 2 studies examining the equivalence of paper-and-pencil and Internet data collection that accounted for these issues. In both studies, we used measures of personality, social desirability, and computer self-efficacy, and, in Study 2, we used personal growth initiative to assess quantitative equivalence (i.e., mean equivalence), qualitative equivalence (i.e., internal consistency and intercorrelations), and auxiliary equivalence (i.e., response rates, missing data, completion time, and comfort completing questionnaires using paper-and-pencil and the Internet). Study 1 investigated the effects of completing surveys via paper-and-pencil or the Internet in both traditional (i.e., lab) and natural (i.e., take-home) settings. Results indicated equivalence across conditions, except for auxiliary equivalence aspects of missing data and completion time. Study 2 examined mailed paper-and-pencil and Internet surveys without contact between experimenter and participants. Results indicated equivalence between conditions, except for auxiliary equivalence aspects of response rate for providing an address and completion time. Overall, the findings show that paper-and-pencil and Internet data collection methods are generally equivalent, particularly for quantitative and qualitative equivalence, with nonequivalence only for some aspects of auxiliary equivalence. PsycINFO Database Record (c) 2013 APA, all rights reserved.
40 CFR 53.11 - Cancellation of reference or equivalent method designation.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), Environmental Protection Agency, Ambient Air Monitoring Reference and Equivalent Methods, General Provisions, § 53.11: Cancellation of reference or equivalent method designation.
A pocket-sized metabolic analyzer for assessment of resting energy expenditure.
Zhao, Di; Xian, Xiaojun; Terrera, Mirna; Krishnan, Ranganath; Miller, Dylan; Bridgeman, Devon; Tao, Kevin; Zhang, Lihua; Tsow, Francis; Forzani, Erica S; Tao, Nongjian
2014-04-01
The assessment of metabolic parameters related to energy expenditure has proven value for weight management; however, these measurements remain too difficult and costly for monitoring individuals at home. The objective of this study is to evaluate the accuracy of a new pocket-sized metabolic analyzer device for assessing energy expenditure at rest (REE) and during sedentary activities (EE). The new device performs indirect calorimetry by measuring an individual's oxygen consumption (VO2) and carbon dioxide production (VCO2) rates, which allows the determination of resting- and sedentary activity-related energy expenditure. VO2 and VCO2 values of 17 volunteer adult subjects were measured during resting and sedentary activities in order to compare the metabolic analyzer with the Douglas bag method, which is considered the gold standard for indirect calorimetry. Metabolic parameters of VO2, VCO2, and energy expenditure were compared using linear regression analysis, paired t-tests, and Bland-Altman plots. Linear regression analysis of the measured VO2 and VCO2 values, as well as the calculated energy expenditure assessed with the new analyzer and the Douglas bag method, gave the following parameters (linear regression slope, LRS0, and R-squared coefficient, r²), each with p ≈ 0: LRS0 (SD) = 1.00 (0.01), r² = 0.9933 for VO2; LRS0 (SD) = 1.00 (0.01), r² = 0.9929 for VCO2; and LRS0 (SD) = 1.00 (0.01), r² = 0.9942 for energy expenditure. In addition, results from paired t-tests did not show a statistically significant difference between the methods at a significance level of α = 0.05 for VO2, VCO2, REE, and EE. Furthermore, the Bland-Altman plot for REE showed good agreement between methods, with 100% of the results within ±2SD, which was equivalent to ≤10% error. The findings demonstrate that the new pocket-sized metabolic analyzer device is accurate for determining VO2, VCO2, and energy expenditure. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
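A minimal sketch of the Bland-Altman agreement check used to compare the analyzer with the Douglas bag method; the paired REE values below are synthetic placeholders, and the ±2SD limits mirror the criterion quoted in the abstract.

```python
# Sketch: Bland-Altman analysis for two paired measurement methods.
import numpy as np

rng = np.random.default_rng(2)
ree_douglas = rng.normal(1600, 200, 17)            # kcal/day, synthetic
ree_device = ree_douglas + rng.normal(0, 40, 17)   # new analyzer, synthetic

diff = ree_device - ree_douglas
bias, sd = diff.mean(), diff.std(ddof=1)
limits = (bias - 2 * sd, bias + 2 * sd)            # +/- 2SD agreement limits
inside = np.mean((diff > limits[0]) & (diff < limits[1]))
print(f"bias = {bias:.1f}, limits = {limits[0]:.1f}..{limits[1]:.1f}, within = {inside:.0%}")
```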
Methods for the accurate estimation of confidence intervals on protein folding ϕ-values
Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.
2006-01-01
ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
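A minimal sketch of the first-order error propagation at issue, with ϕ written as a ratio of two correlated free-energy estimates; the variances and covariance are illustrative numbers, not values from the paper. With a positive covariance, the independence assumption overstates the standard error on ϕ, i.e. understates its precision, which is the pitfall the abstract identifies.

```python
# Sketch: SE of phi = ddG_ts / ddG_eq with and without the covariance term.
import numpy as np

a, b = 1.2, 2.0                  # ddG_ts, ddG_eq (kcal/mol), illustrative
var_a, var_b, cov_ab = 0.04, 0.05, 0.03

phi = a / b
rel_var = var_a / a**2 + var_b / b**2 - 2 * cov_ab / (a * b)
se_phi = abs(phi) * np.sqrt(rel_var)
se_naive = abs(phi) * np.sqrt(var_a / a**2 + var_b / b**2)  # independence assumed
print(f"phi = {phi:.2f}, SE with covariance = {se_phi:.3f}, naive SE = {se_naive:.3f}")
```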
2013-01-01
Background: In statistical modeling, finding the most favorable coding for an exploratory quantitative variable involves many tests. This process involves multiple testing problems and requires the correction of the significance level. Methods: For each coding, a test on the nullity of the coefficient associated with the newly coded variable is computed. The selected coding corresponds to that associated with the largest statistical test (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probab Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results: The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using the data from a study of the relationship between cholesterol and dementia. Conclusion: The algorithms were implemented using R, and the associated CPMCGLM R package is available on the CRAN. PMID:23758852
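The correction can be illustrated with a generic max-statistic permutation scheme; this is a sketch of the idea only, not the CPMCGLM implementation (which is an R package). The reference distribution of the largest statistic over all candidate codings yields a corrected p-value.

```python
# Sketch: permutation-based correction for testing several codings of a
# quantitative covariate. Codings and data are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = rng.normal(size=n)          # simulate under the null: no effect of x

# candidate dichotomous codings (cut-points at three quantiles)
codings = [(x > c).astype(float) for c in np.quantile(x, [0.25, 0.5, 0.75])]

def max_stat(y_vec):
    """Largest absolute correlation statistic over all candidate codings."""
    return max(abs(stats.pearsonr(c, y_vec)[0]) for c in codings)

observed = max_stat(y)
perm = np.array([max_stat(rng.permutation(y)) for _ in range(2000)])
print(f"corrected p-value: {np.mean(perm >= observed):.3f}")
```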
Datamining approaches for modeling tumor control probability.
Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D
2010-11-01
Tumor control probability (TCP) in radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction, with rs = 0.68 on leave-one-out testing compared to logistic regression (rs = 0.4), Poisson-based TCP (rs = 0.33), and the cell kill equivalent uniform dose model (rs = 0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
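A minimal scikit-learn sketch of the evaluation described: a kernel SVM scored by leave-one-out testing and a Spearman rank correlation. The two features echo GTV volume and V75, but all numbers are synthetic placeholders.

```python
# Sketch: kernel SVM with leave-one-out testing and Spearman rs scoring.
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(56, 2))             # e.g., GTV volume and V75 (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 56) > 0).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
pred = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
rs, _ = spearmanr(pred, y)
print(f"leave-one-out Spearman rs = {rs:.2f}")
```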
Preti, Robert A; Chan, Wai Shun; Kurtzberg, Joanne; Dornsife, Ronna E.; Wallace, Paul K.; Furlange, Rosemary; Lin, Anna; Omana-Zapata, Imelda; Bonig, Halvard; Tonn, Thorsten
2018-01-01
Background Evaluation of the BD™ Stem Cell Enumeration (SCE) Kit was conducted at four clinical sites with flow cytometry CD34+ enumeration, to assess agreement between two investigational methods, the BD FACSCanto™ II and BD FACSCalibur™ systems, and the predicate method (Beckman Coulter Stem-Kit™ reagents). Methods Leftover and delinked specimens (n = 1,032) from clinical flow cytometry testing were analyzed on the BD FACSCanto II (n = 918) and BD FACSCalibur (n = 905) in normal and mobilized blood, frozen and thawed bone marrow, and leucopheresis and cord blood anticoagulated with CPD, ACD-A, heparin, and EDTA alone or in combination. Fresh leucopheresis analysis addressed site equivalency for sample preparation, testing, and analysis. Results The mean relative bias showed agreement within predefined parameters for the BD FACSCanto II (−2.81 to 4.31 ±7.1) and BD FACSCalibur (−2.69 to 5.2 ±7.9). Results are reported as absolute and relative differences compared to the predicate for viable CD34+, percentage of CD34+ in CD45+, and viable CD45+ populations (or gates). Bias analyses of the distribution of the predicate low, mid, and high bin values were done using BD FACSCanto II optimal gating and BD FACSCalibur manual gating for viable CD34+, percentage of CD34+ in CD45+, and viable CD45+. Bias results from both investigational methods show agreement. Deming regression analyses showed a linear relationship with R2 >0.92 for both investigational methods. Discussion In conclusion, the results from both investigational methods demonstrated agreement and equivalence with the predicate method for enumeration of absolute viable CD34+, percentage of viable CD34+ in CD45+, and absolute viable CD45+ populations. PMID:24927716
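A minimal sketch of a Deming regression of the kind used to compare an investigational method against the predicate; it assumes an error-variance ratio λ = 1, and the paired counts are synthetic placeholders.

```python
# Sketch: Deming regression (errors in both variables) for method comparison.
import numpy as np

def deming(x, y, lam=1.0):
    """Slope and intercept for errors-in-both-variables regression."""
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
             + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

rng = np.random.default_rng(5)
predicate = rng.uniform(5, 500, 200)                  # CD34+ cells/uL, synthetic
candidate = 1.02 * predicate + rng.normal(0, 8, 200)  # investigational method
slope, intercept = deming(predicate, candidate)
print(f"slope = {slope:.3f}, intercept = {intercept:.2f}")
```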
Lin, Weilu; Wang, Zejian; Huang, Mingzhi; Zhuang, Yingping; Zhang, Siliang
2018-06-01
Isotopically non-stationary 13C labelling experiments, an emerging experimental technique, can estimate the intracellular fluxes of a cell culture during an isotopic transient period. However, to the best of our knowledge, the structural identifiability of non-stationary isotope experiments is not well addressed in the literature. In this work, a local structural identifiability analysis for non-stationary cumomer balance equations is conducted based on the Taylor series approach. The numerical rank of the Jacobian matrices of the finite extended time derivatives of the measured fractions with respect to the free parameters is taken as the criterion. It turns out that a single time point is sufficient for the structural identifiability analysis of the cascaded linear dynamic system of non-stationary isotope experiments. The equivalence between the local structural identifiability of the cascaded linear dynamic systems and the local optimum condition of the nonlinear least squares problem is elucidated in this work. Optimal measurement sets can then be determined for the metabolic network. Two simulated metabolic networks are adopted to demonstrate the utility of the proposed method. Copyright © 2018 Elsevier Inc. All rights reserved.
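The rank criterion described can be sketched in a few lines: build the Jacobian of the measured outputs (and, in the full method, their extended time derivatives) with respect to the free parameters, and check its numerical rank. The two-parameter cascade below is a toy stand-in for the cumomer balance equations.

```python
# Sketch: local structural identifiability via numerical Jacobian rank.
import numpy as np

def outputs(theta, t=1.0):
    """Toy cascaded linear dynamics: two measured fractions at time t."""
    k1, k2 = theta
    y1 = 1.0 - np.exp(-k1 * t)
    y2 = 1.0 - np.exp(-k2 * t) - (k2 / (k2 - k1)) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    return np.array([y1, y2])

def numerical_jacobian(f, theta, eps=1e-6):
    base = f(theta)
    J = np.zeros((base.size, len(theta)))
    for j in range(len(theta)):
        pert = np.array(theta, dtype=float)
        pert[j] += eps
        J[:, j] = (f(pert) - base) / eps
    return J

theta = [0.5, 1.5]
J = numerical_jacobian(outputs, theta)
rank = np.linalg.matrix_rank(J, tol=1e-8)
print(f"Jacobian rank {rank} of {len(theta)} parameters -> "
      f"{'locally identifiable' if rank == len(theta) else 'not identifiable'}")
```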
NASA Astrophysics Data System (ADS)
Lancellotti, V.; de Hon, B. P.; Tijhuis, A. G.
2011-08-01
In this paper we present the application of linear embedding via Green's operators (LEGO) to the solution of electromagnetic scattering from clusters of arbitrary (both conducting and penetrable) bodies randomly placed in a homogeneous background medium. In the LEGO method the objects are enclosed within simple-shaped bricks, described in turn via scattering operators of equivalent surface current densities. Such operators have to be computed only once for a given frequency, and hence they can be re-used to study many distributions comprising the same objects located in different positions. The surface integral equations of LEGO are solved via the Method of Moments combined with Adaptive Cross Approximation (to save memory) and Arnoldi basis functions (to compress the system). By means of purposefully selected numerical experiments we discuss the time requirements with respect to the geometry of a given distribution. In addition, we derive an approximate relationship between the (near-field) accuracy of the computed solution and the number of Arnoldi basis functions used to obtain it. This result endows LEGO with a handy practical criterion for both estimating the error and keeping it in check.
40 CFR 53.14 - Modification of a reference or equivalent method.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), Environmental Protection Agency, Ambient Air Monitoring Reference and Equivalent Methods, General Provisions, § 53.14: Modification of a reference or equivalent method.
40 CFR 53.8 - Designation of reference and equivalent methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), Environmental Protection Agency, Ambient Air Monitoring Reference and Equivalent Methods, General Provisions, § 53.8: Designation of reference and equivalent methods.
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Cucinotta, F. A.; Wilson, J. W. (Principal Investigator)
1998-01-01
A matched set of five tissue-equivalent proportional counters (TEPCs), embedded at the centers of 0 (bare), 3, 5, 8 and 12-inch-diameter polyethylene spheres, were flown on the Shuttle flight STS-81 (inclination 51.65 degrees, altitude approximately 400 km). The data obtained were separated into contributions from trapped protons and galactic cosmic radiation (GCR). From the measured linear energy transfer (LET) spectra, the absorbed dose and dose-equivalent rates were calculated. The results were compared to calculations made with the radiation transport model HZETRN/NUCFRG2, using the GCR free-space spectra, orbit-averaged geomagnetic transmission function and Shuttle shielding distributions. The comparison shows that the model fits the dose rates to a root mean square (rms) error of 5%, and dose-equivalent rates to an rms error of 10%. Fairly good agreement between the LET spectra was found; however, differences are seen at both low and high LET. These differences can be understood as due to the combined effects of chord-length variation and detector response function. These results rule out a number of radiation transport/nuclear fragmentation models. Similar comparisons of trapped-proton dose rates were made between calculations made with the proton transport model BRYNTRN using the AP-8 MIN trapped-proton model and Shuttle shielding distributions. The predictions of absorbed dose and dose-equivalent rates are fairly good. However, the prediction of the LET spectra below approximately 30 keV/microm shows the need to improve the AP-8 model. These results have strong implications for shielding requirements for an interplanetary manned mission.
Development of a new lattice physics code robin for PWR application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Chen, G.
2013-07-01
This paper presents a description of the methodologies and preliminary verification results of a new lattice physics code, ROBIN, being developed for PWR application at Shanghai NuStar Nuclear Power Technology Co., Ltd. The methods used in ROBIN to fulfill the various tasks of lattice physics analysis are an integration of historical methods and methods that have emerged very recently. Not only are methods like equivalence theory for resonance treatment and the method of characteristics for neutron transport calculation adopted, as they are applied in many of today's production-level LWR lattice codes, but very useful new methods like the enhanced neutron current method for Dancoff correction in large and complicated geometry and the log linear rate constant power depletion method for Gd-bearing fuel are also implemented in the code. A small sample of verification results is provided to illustrate the type of accuracy achievable using ROBIN. It is demonstrated that ROBIN is capable of satisfying most of the needs for PWR lattice analysis and has the potential to become a production quality code in the future. (authors)
Quantile equivalence to evaluate compliance with habitat management objectives
Cade, Brian S.; Johnson, Pamela R.
2011-01-01
Equivalence estimated with linear quantile regression was used to evaluate compliance with habitat management objectives at Arapaho National Wildlife Refuge, based on monitoring data collected in upland (5,781 ha; n = 511 transects) and riparian and meadow (2,856 ha; n = 389 transects) habitats from 2005 to 2008. Quantiles were used because the management objectives specified proportions of the habitat area that needed to comply with vegetation criteria. The linear model was used to obtain estimates that were averaged across 4 y. The equivalence testing framework allowed us to interpret confidence intervals for estimated proportions with respect to intervals of vegetative criteria (equivalence regions) in either a liberal, benefit-of-doubt or a conservative, fail-safe approach associated with minimizing alternative risks. Simple Boolean conditional arguments were used to combine the quantile equivalence results for individual vegetation components into a joint statement for the multivariable management objectives. For example, management objective 2A required at least 809 ha of upland habitat with a shrub composition ≥0.70 sagebrush (Artemisia spp.), 20–30% canopy cover of sagebrush ≥25 cm in height, ≥20% canopy cover of grasses, and ≥10% canopy cover of forbs, on average over 4 y. Shrub composition and canopy cover of grass each were readily met on >3,000 ha under either conservative or liberal interpretations of sampling variability. However, there were only 809–1,214 ha (conservative to liberal) with ≥10% forb canopy cover and 405–1,098 ha with 20–30% canopy cover of sagebrush ≥25 cm in height. Only 91–180 ha of uplands simultaneously met the criteria for all four components, primarily because canopy cover of sagebrush and forbs was inversely related when considered at the spatial scale (30 m) of a sample transect. We demonstrate how the quantile equivalence analyses can also help refine the numerical specification of habitat objectives and explore the specification of spatial scales for objectives with respect to the sampling scales used to evaluate those objectives.
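A minimal sketch of the quantile-equivalence idea, assuming synthetic cover data and an illustrative 20-30% criterion: estimate a quantile by linear quantile regression and ask whether its 90% confidence interval sits inside the equivalence region.

```python
# Sketch: quantile regression estimate compared against an equivalence region.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "cover": rng.normal(22, 6, 400),   # sagebrush canopy cover (%), synthetic
})

# median (0.5 quantile) via an intercept-only quantile regression
fit = smf.quantreg("cover ~ 1", df).fit(q=0.5)
lo, hi = fit.conf_int(alpha=0.10).loc["Intercept"]
region = (20.0, 30.0)                  # illustrative management criterion
print(f"90% CI {lo:.1f}-{hi:.1f}; within criterion: {region[0] <= lo and hi <= region[1]}")
```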
Sertić, Josip; Kozak, Dražan; Samardžić, Ivan
2014-01-01
The values of the reaction forces in the boiler supports are the basis for dimensioning the load-bearing steel structure of a steam boiler. In this paper, the method of equivalent stiffness of the membrane wall is proposed for the calculation of these reaction forces. The method of equalizing displacements was applied as the method of homogenizing the membrane wall stiffness. Using the example of the "Milano" boiler, the reactions in the supports were calculated by the finite element method for the real geometry discretized with shell finite elements. A second calculation was performed under the assumption of ideally stiff membrane walls, and a third using the method of equivalent stiffness of the membrane wall. In the third case, the membrane walls are approximated by an equivalent orthotropic plate; the approximation of the membrane wall stiffness is achieved through the elasticity matrix of the equivalent orthotropic plate at the level of the finite element. The obtained results were compared, and the advantages of using the method of equivalent stiffness of the membrane wall for calculating the reactions in the boiler supports were emphasized.
Effective Hamiltonians for phosphorene and silicene
Lew Yan Voon, L. C.; Lopez-Bezanilla, A.; Wang, J.; ...
2015-02-04
Here, we derived the effective Hamiltonians for silicene and phosphorene with strain, electric field and magnetic field using the method of invariants. Our paper extends the work on silicene, and on phosphorene. Our Hamiltonians are compared to an equivalent one for graphene. For silicene, the expression for band warping is obtained analytically and found to be of different order than for graphene. We prove that a uniaxial strain does not open a gap, resolving contradictory numerical results in the literature. For phosphorene, it is shown that the bands near the Brillouin zone center only have terms in even powers of the wave vector. We predict that the energies change quadratically in the presence of a perpendicular external electric field but linearly in a perpendicular magnetic field, as opposed to those for silicene, which vary linearly in both cases. Preliminary ab initio calculations for the intrinsic band structures have been carried out in order to evaluate some of the k · p parameters.
Evaluation of antioxidant capacity of Chinese five-spice ingredients.
Bi, Xinyan; Soong, Yean Yean; Lim, Siang Wee; Henry, Christiani Jeyakumar
2015-05-01
Phenolic compounds in spices have been reported to possess high antioxidant capacities (AOCs), which may prevent or reduce the risk of human diseases such as cardiovascular disease, cancer and diabetes. The potential AOC of Chinese five-spice powder (consisting of Szechuan pepper, fennel seed, cinnamon, star anise and clove) with varying proportions of the individual spice ingredients was investigated using four standard methods. Our results suggest that clove is the major contributor to the AOC of the five-spice powder, whereas the other four ingredients contribute to the flavour. For example, the total phenolic content as well as the ferric reducing antioxidant power (FRAP), Trolox equivalent antioxidant capacity (TEAC) and oxygen radical absorbance capacity (ORAC) values increased linearly with the clove percentage in five-spice powder. This observation opens the door to using clove in other spice mixtures to increase their AOC and flavour. Moreover, linear relationships were also observed between the AOC and the total phenolic content of the 32 tested spice samples.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
Proton Linear Energy Transfer measurement using Emulsion Cloud Chamber
NASA Astrophysics Data System (ADS)
Shin, Jae-ik; Park, Seyjoon; Kim, Haksoo; Kim, Meyoung; Jeong, Chiyoung; Cho, Sungkoo; Lim, Young Kyung; Shin, Dongho; Lee, Se Byeong; Morishima, Kunihiro; Naganawa, Naotaka; Sato, Osamu; Kwak, Jungwon; Kim, Sung Hyun; Cho, Jung Sook; Ahn, Jung Keun; Kim, Ji Hyun; Yoon, Chun Sil; Incerti, Sebastien
2015-04-01
This study proposes to determine the correlation between the Volume Pulse Height (VPH) measured by nuclear emulsion and the Linear Energy Transfer (LET) calculated by Monte Carlo simulation based on Geant4. The nuclear emulsion was irradiated at the National Cancer Center (NCC) with a therapeutic proton beam and was installed at a distance of 5.2 m from the beam nozzle structure, with various thicknesses of water-equivalent material (PMMA) blocks used to set specific positions along the Bragg curve. After beam exposure and development of the emulsion films, the films were scanned by the S-UTS system developed at Nagoya University. The proton tracks in the scanned films were reconstructed using the 'NETSCAN' method. Through this procedure, the VPH can be derived for each reconstructed proton track at each position along the Bragg curve. The VPH value indicates the magnitude of energy loss along the proton track. By comparison with the simulation results obtained using Geant4, we found the correlation between the LET calculated by Monte Carlo simulation and the VPH measured by the nuclear emulsion.
An extension of the Laplace transform to Schwartz distributions
NASA Technical Reports Server (NTRS)
Price, D. R.
1974-01-01
A characterization of the Laplace transform is developed which extends the transform to the Schwartz distributions. The class of distributions includes the impulse functions and other singular functions which occur as solutions to ordinary and partial differential equations. The standard theorems on analyticity, uniqueness, and invertibility of the transform are proved by using the characterization as the definition of the Laplace transform. The definition uses sequences of linear transformations on the space of distributions which extends the Laplace transform to another class of generalized functions, the Mikusinski operators. It is shown that the sequential definition of the transform is equivalent to Schwartz' extension of the ordinary Laplace transform to distributions but, in contrast to Schwartz' definition, does not use the distributional Fourier transform. Several theorems concerning the particular linear transformations used to define the Laplace transforms are proved. All the results proved in one dimension are extended to the n-dimensional case, but proofs are presented only for those situations that require methods different from their one-dimensional analogs.
A phase space approach to wave propagation with dispersion.
Ben-Benjamin, Jonathan S; Cohen, Leon; Loughlin, Patrick J
2015-08-01
A phase space approximation method for linear dispersive wave propagation with arbitrary initial conditions is developed. The results expand on a previous approximation in terms of the Wigner distribution of a single mode. In contrast to this previously considered single-mode case, the approximation presented here is for the full wave and is obtained by a different approach. This solution requires one to obtain (i) the initial modal functions from the given initial wave, and (ii) the initial cross-Wigner distribution between different modal functions. The full wave is the sum of modal functions. The approximation is obtained for general linear wave equations by transforming the equations to phase space, and then solving in the new domain. It is shown that each modal function of the wave satisfies a Schrödinger-type equation where the equivalent "Hamiltonian" operator is the dispersion relation corresponding to the mode and where the wavenumber is replaced by the wavenumber operator. Application to the beam equation is considered to illustrate the approach.
Randomly Sampled-Data Control Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model stochastic information exchange among decentralized controllers, to name just a few examples. A practical suboptimal controller is proposed with the desirable property of mean-square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear; because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite-horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean-square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
Cardenas, Carlos E; Nitsch, Paige L; Kudchadker, Rajat J; Howell, Rebecca M; Kry, Stephen F
2016-07-08
Out-of-field doses from radiotherapy can cause harmful side effects or eventually lead to secondary cancers. Scattered doses outside the applicator field, neutron source strength values, and neutron dose equivalents have not been broadly investigated for high-energy electron beams. To better understand the extent of these exposures, we measured out-of-field dose characteristics of electron applicators for high-energy electron beams on two Varian 21iXs, a Varian TrueBeam, and an Elekta Versa HD operating at various energy levels. Out-of-field dose profiles and percent depth-dose curves were measured in a Wellhofer water phantom using a Farmer ion chamber. Neutron dose was assessed using a combination of moderator buckets and gold activation foils placed on the treatment couch at various locations in the patient plane on both the Varian 21iX and Elekta Versa HD linear accelerators. Our findings showed that out-of-field electron doses were highest for the highest electron energies. These doses typically decreased with increasing distance from the field edge but showed substantial increases over some distance ranges. The Elekta linear accelerator had higher electron out-of-field doses than the Varian units examined, and the Elekta dose profiles exhibited a second dose peak about 20 to 30 cm from central-axis, which was found to be higher than typical out-of-field doses from photon beams. Electron doses decreased sharply with depth before becoming nearly constant; the dose was found to decrease to a depth of approximately E(MeV)/4 in cm. With respect to neutron dosimetry, Q values and neutron dose equivalents increased with electron beam energy. Neutron contamination from electron beams was found to be much lower than that from photon beams. Even though the neutron dose equivalent for electron beams represented a small portion of neutron doses observed under photon beams, neutron doses from electron beams may need to be considered for special cases.
Dose Equivalents for Antipsychotic Drugs: The DDD Method.
Leucht, Stefan; Samara, Myrto; Heres, Stephan; Davis, John M
2016-07-01
Dose equivalents of antipsychotics are an important but difficult-to-define concept, because all methods have weaknesses and strengths. We calculated dose equivalents based on defined daily doses (DDDs) presented by the World Health Organisation's Collaborative Center for Drug Statistics Methodology. Doses equivalent to 1 mg olanzapine, 1 mg risperidone, 1 mg haloperidol, and 100 mg chlorpromazine were presented and compared with the results of 3 other methods to define dose equivalence (the "minimum effective dose method," the "classical mean dose method," and an international consensus statement). We presented dose equivalents for 57 first-generation and second-generation antipsychotic drugs, available as oral, parenteral, or depot formulations. Overall, the identified equivalent doses were comparable with those of the other methods, but there were also outliers. The major strength of this method to define dose equivalence is that DDDs are available for most drugs, including old antipsychotics, that they are based on a variety of sources, and that DDDs are an internationally accepted measure. The major limitations are that the information used to estimate DDDs is likely to differ between the drugs and that this information is not publicly available, so it cannot be reviewed. The WHO stresses that DDDs are mainly a standardized measure of drug consumption, and their use as a measure of dose equivalence can therefore be misleading. We therefore recommend that if alternative, more "scientific" dose equivalence methods are available for a drug, they should be preferred to DDDs. Moreover, our summary can be a useful resource for pharmacovigilance studies. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.
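The DDD arithmetic itself is a one-line conversion. A minimal sketch, with DDD values that should be treated as illustrative placeholders rather than the WHO-assigned figures:

```python
# Sketch: converting a dose across drugs at an equal fraction of the DDD.
DDD_MG = {"olanzapine": 10.0, "risperidone": 5.0, "haloperidol": 8.0}  # placeholders

def equivalent_dose(dose_mg, source, target):
    """Dose of `target` occupying the same fraction of its DDD as `source`."""
    return dose_mg * DDD_MG[target] / DDD_MG[source]

# e.g., the risperidone dose with the same DDD share as 20 mg olanzapine:
print(f"{equivalent_dose(20, 'olanzapine', 'risperidone'):.1f} mg risperidone")
```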
Enhancement of Electrical Conductivity in Multicomponent Nanocomposites.
NASA Astrophysics Data System (ADS)
Ni, Xiaojuan; Hui, Chao; Su, Ninghai; Liu, Feng
To date, very limited theoretical or numerical analyses have been carried out to understand the electrical percolation properties of multicomponent nanocomposite systems. In this work, a disk-stick percolation model was developed to investigate the electrical percolation behavior of an electrically insulating matrix reinforced with one-dimensional (1D) and two-dimensional (2D) conductors via Monte Carlo simulation. The effective electrical conductivity was evaluated through Kirchhoff's current law by transforming the system into an equivalent resistor network. The percolation threshold, equivalent resistance and conductivity were obtained from the distribution of nodal voltages by solving a system of linear equations with the Gaussian elimination method. The effects of size, aspect ratio, relative concentration and contact patterns of 1D/2D inclusions on conductivity performance were examined. Our model is able to predict the electrical percolation threshold and evaluate the conductivity of hybrid systems with multiple components. The results suggest that carbon-based nanocomposites have high potential for applications where favorable electrical properties and low specific weight are required. We acknowledge the financial support from DOE-BES (No. DE-FG02-04ER46148).
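The nodal-voltage step described can be sketched directly: assemble the conductance (Laplacian) matrix of a resistor network, ground one node, and solve the Kirchhoff current-law system for the voltages; the equivalent resistance follows. The 4-node network below is an illustrative stand-in for a percolation cluster, and np.linalg.solve performs the elimination.

```python
# Sketch: nodal analysis of a resistor network via a linear solve.
import numpy as np

# (node_i, node_j, conductance in siemens)
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 0.5), (2, 3, 1.0)]
n = 4
G = np.zeros((n, n))
for i, j, g in edges:
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

I = np.zeros(n)
I[0] = 1.0          # inject 1 A at node 0, extract at grounded node 3
v = np.zeros(n)
v[:3] = np.linalg.solve(G[:3, :3], I[:3])   # node 3 held at 0 V (ground)

r_equiv = v[0] / 1.0                        # equivalent resistance node 0 -> 3
print(f"nodal voltages: {np.round(v, 3)}, R_eq = {r_equiv:.3f} ohm")
```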
Nonlocal torque operators in ab initio theory of the Gilbert damping in random ferromagnetic alloys
NASA Astrophysics Data System (ADS)
Turek, I.; Kudrnovský, J.; Drchal, V.
2015-12-01
We present an ab initio theory of the Gilbert damping in substitutionally disordered ferromagnetic alloys. The theory rests on introduced nonlocal torques which replace traditional local torque operators in the well-known torque-correlation formula and which can be formulated within the atomic-sphere approximation. The formalism is sketched in a simple tight-binding model and worked out in detail in the relativistic tight-binding linear muffin-tin orbital method and the coherent potential approximation (CPA). The resulting nonlocal torques are represented by nonrandom, non-site-diagonal, and spin-independent matrices, which simplifies the configuration averaging. The CPA-vertex corrections play a crucial role for the internal consistency of the theory and for its exact equivalence to other first-principles approaches based on the random local torques. This equivalence is also illustrated by the calculated Gilbert damping parameters for binary NiFe and FeCo random alloys, for pure iron with a model atomic-level disorder, and for stoichiometric FePt alloys with a varying degree of L 10 atomic long-range order.
Impedimetric method for measuring ultra-low E. coli concentrations in human urine.
Settu, Kalpana; Chen, Ching-Jung; Liu, Jen-Tsai; Chen, Chien-Lung; Tsai, Jang-Zern
2015-04-15
In this study, we developed an interdigitated gold microelectrode-based impedance sensor to detect Escherichia coli (E. coli) in human urine samples for urinary tract infection (UTI) diagnosis. E. coli growth in human urine samples was successfully monitored during a 12-h culture, and the results showed that the maximum relative changes could be measured at 10 Hz. An equivalent electrical circuit model was used for evaluating the variations in the impedance characteristics of bacterial growth. The equivalent circuit analysis indicated that the change in impedance values at low frequencies was caused by double-layer capacitance due to bacterial attachment and the formation of biofilm on the electrode surface in urine. A linear relationship between the impedance change and the initial E. coli concentration was obtained, with a coefficient of determination R² > 0.90, at growth times of 1, 3, 5, 7, 9 and 12 h in urine. Thus our sensor is capable of detecting a wide range of E. coli concentrations, 7×10^0 to 7×10^8 cells/ml, in urine samples with high sensitivity. Copyright © 2014 Elsevier B.V. All rights reserved.
21 CFR 610.9 - Equivalent methods and processes.
Code of Federal Regulations, 2011 CFR
2011-04-01
Title 21 (Food and Drugs), Food and Drug Administration, Biologics: General Biological Products Standards, General Provisions, § 610.9: Equivalent methods and processes.
21 CFR 610.9 - Equivalent methods and processes.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 21 (Food and Drugs), Food and Drug Administration, Biologics: General Biological Products Standards, General Provisions, § 610.9: Equivalent methods and processes.
Revisiting the extended spring indices using gridded weather data and machine learning
NASA Astrophysics Data System (ADS)
Mehdipoor, Hamed; Izquierdo-Verdiguier, Emma; Zurita-Milla, Raul
2016-04-01
The extended spring indices, or SI-x [1], have been successfully used to predict the timing of spring onset at continental scales. The SI-x models were created by combining volunteered phenological observations of lilac and honeysuckle, temperature data (from weather stations) and latitudinal information. More precisely, these models use a linear regression to predict the day of year of first leaf and first bloom for these two indicator species. In this contribution we revisit both the data and the method used to calibrate the SI-x models, to check whether the addition of new input data or the use of non-linear regression methods could lead to improvements in the model outputs. In particular, we use a recently published dataset [2] of volunteered observations on cloned and common lilac over a longer period of time (1980-2014), and we replace the weather station data with 54 features derived from Daymet [3], which provides 1 km by 1 km gridded estimates of daily weather parameters (maximum and minimum temperatures, precipitation, water vapor pressure, solar radiation, day length, snow water equivalent) for North America. These features consist of daily weather values, their long- and short-term accumulations, and elevation. We also replace the original linear regression with a non-linear method: specifically, we use random forests both to identify the most important features and to predict the day of year of first leaf for cloned and common lilacs. Preliminary results confirm the importance of the SI-x features (maximum and minimum temperatures and day length). However, our results show that snow water equivalent and water vapor pressure are also necessary to properly model leaf onset. Regarding the predictions, our results indicate that random forests yield results comparable to those produced by the SI-x models in terms of root mean square error (RMSE): for cloned and common lilac, the models predict the day of year of leafing with 16 and 15 days of accuracy, respectively. Further research should focus on extensively comparing the features used by both modelling approaches and on analyzing spring onset patterns over the continental United States. References: 1. Schwartz, M.D., T.R. Ault, and J.L. Betancourt, Spring onset variations and trends in the continental United States: past and regional assessment using temperature-based indices. International Journal of Climatology, 2013, 33(13): p. 2917-2922. 2. Rosemartin, A.H., et al., Lilac and honeysuckle phenology data 1956-2014. Scientific Data, 2015, 2: p. 150038. 3. Thornton, P.E., et al., Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 2, 2014.
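A minimal scikit-learn sketch of the non-linear alternative described: a random forest predicting leaf-onset day of year from gridded-weather-style features, scored by RMSE, with feature importances standing in for the feature-identification step. All feature names and data are assumptions.

```python
# Sketch: random forest regression of leaf-onset day of year.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
X = rng.normal(size=(n, 4))   # e.g., tmin, tmax, day length, snow water equiv.
doy = 100 + 8 * X[:, 0] - 5 * X[:, 1] + 3 * X[:, 2] + rng.normal(0, 10, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, doy, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
print(f"RMSE = {rmse:.1f} days; importances = {np.round(rf.feature_importances_, 2)}")
```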
There's a Green Glob in Your Classroom.
ERIC Educational Resources Information Center
Dugdale, Sharon
1983-01-01
Discusses computer games (called intrinsic models) focusing on mathematics rather than on unrelated motivations (flashing lights or sounds). Games include "Green Globs," (equations/linear functions), "Darts"/"Torpedo" (fractions), "Escape" (graphing), and "Make-a-Monster" (equivalent fractions and…
GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING
Liu, Hongcheng; Yao, Tao; Li, Runze
2015-01-01
This paper is concerned with solving nonconvex learning problems with folded concave penalties. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems is equivalent to general quadratic programs. This equivalence enables us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
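For reference, the two folded concave penalties named above, in their standard forms from the cited papers (t ≥ 0, with a > 2 for SCAD and γ > 1 for MCP; SCAD is usually specified through its derivative):

```latex
\[
  p'_{\lambda,\mathrm{SCAD}}(t)
    = \lambda \left\{ I(t \le \lambda)
      + \frac{(a\lambda - t)_{+}}{(a-1)\lambda}\, I(t > \lambda) \right\},
  \qquad
  p_{\lambda,\mathrm{MCP}}(t)
    = \lambda \int_{0}^{t} \left(1 - \frac{x}{\gamma\lambda}\right)_{+} dx .
\]
```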
Testing the Einstein's equivalence principle with polarized gamma-ray bursts
NASA Astrophysics Data System (ADS)
Yang, Chao; Zou, Yuan-Chuan; Zhang, Yue-Yang; Liao, Bin; Lei, Wei-Hua
2017-07-01
The Einstein equivalence principle can be tested by using parametrized post-Newtonian parameters, of which the parameter γ has been constrained by comparing the arrival times of photons with different energies. It has been constrained by a variety of astronomical transient events, such as gamma-ray bursts (GRBs), fast radio bursts and pulses of pulsars, with the most stringent constraint being Δγ ≲ 10^-15. In this Letter, we consider the arrival times of light with different circular polarizations. A linearly polarized light is the combination of two circularly polarized components; if the arrival time difference between the two circularly polarized components is too large, their combination may lose its linear polarization. We constrain Δγ_p < 1.6 × 10^-27 using the measured polarization of GRB 110721A, which is the most stringent constraint ever achieved.
NASA Astrophysics Data System (ADS)
Lahaie, Sébastien; Parkes, David C.
We consider the problem of fair allocation in the package assignment model, where a set of indivisible items, held by a single seller, must be efficiently allocated to agents with quasi-linear utilities. A fair assignment is one that is efficient and envy-free. We consider a model where bidders have superadditive valuations, meaning that items are pure complements. Our central result is that core outcomes are fair and even coalition-fair over this domain, while fair distributions may not even exist for general valuations. Of relevance to auction design, we also establish that the core is equivalent to the set of anonymous-price competitive equilibria, and that superadditive valuations are a maximal domain guaranteeing the existence of anonymous-price competitive equilibrium. Our results are analogs of core equivalence results for linear prices in the standard assignment model, and for nonlinear, non-anonymous prices in the package assignment model with general valuations.
Analysis and modeling of a family of two-transistor parallel inverters
NASA Technical Reports Server (NTRS)
Lee, F. C. Y.; Wilson, T. G.
1973-01-01
A family of five static dc-to-square-wave inverters, each employing a square-loop magnetic core in conjunction with two switching transistors, is analyzed using piecewise-linear models for the nonlinear characteristics of the transistors, diodes, and saturable-core devices. Four of the inverters are analyzed in detail for the first time. These analyses show that, by proper choice of a frame of reference, each of the five quite differently appearing inverter circuits can be described by a common equivalent circuit. This equivalent circuit consists of a five-segment nonlinear resistor, a nonlinear saturable reactor, and a linear capacitor. Thus, by proper interpretation and identification of the parameters in the different circuits, the results of a detailed solution for one of the inverter circuits provide similar information and insight into the local and global behavior of each inverter in the family.
A linear polarization converter with near unity efficiency in microwave regime
NASA Astrophysics Data System (ADS)
Xu, Peng; Wang, Shen-Yun; Geyi, Wen
2017-04-01
In this paper, we present a linear polarization converter in the reflective mode with near unity conversion efficiency. The converter is designed in an array form on the basis of a pair of orthogonally arranged three-dimensional split-loop resonators sharing a common terminal coaxial port and a continuous metallic ground slab. It converts the linearly polarized incident electromagnetic wave at resonance to its orthogonal counterpart upon the reflection mode. The conversion mechanism is explained by an equivalent circuit model, and the conversion efficiency can be tuned by changing the impedance of the terminal port. Such a scheme of the linear polarization converter has potential applications in microwave communications, remote sensing, and imaging.
A canonical form of the equation of motion of linear dynamical systems
NASA Astrophysics Data System (ADS)
Kawano, Daniel T.; Salsa, Rubens Goncalves; Ma, Fai; Morzfeld, Matthias
2018-03-01
The equation of motion of a discrete linear system has the form of a second-order ordinary differential equation with three real and square coefficient matrices. It is shown that, for almost all linear systems, such an equation can always be converted by an invertible transformation into a canonical form specified by two diagonal coefficient matrices associated with the generalized acceleration and displacement. This canonical form of the equation of motion is unique up to an equivalence class for non-defective systems. As an important by-product, a damped linear system that possesses three symmetric and positive definite coefficients can always be recast as an undamped and decoupled system.
Acoustic energy transmission in cast iron pipelines
NASA Astrophysics Data System (ADS)
Kiziroglou, Michail E.; Boyle, David E.; Wright, Steven W.; Yeatman, Eric M.
2015-12-01
In this paper we propose acoustic power transfer as a method for the remote powering of pipeline sensor nodes. A theoretical framework of acoustic power propagation in the ceramic transducers and the metal structures is drawn, based on the Mason equivalent circuit. The effect of mounting on the electrical response of piezoelectric transducers is studied experimentally. Using two identical transducer structures, power transmission of 0.33 mW through a 1 m long, 118 mm diameter cast iron pipe with 8 mm wall thickness is demonstrated, at 1 V received voltage amplitude. A near-linear relationship between input and output voltage is observed. These results show that it is possible to deliver significant power to sensor nodes through acoustic waves in solid structures. The proposed method may enable the implementation of acoustic-powered wireless sensor nodes for structural and operational monitoring of pipeline infrastructure.
Use of borated polyethylene to improve low energy response of a prompt gamma based neutron dosimeter
NASA Astrophysics Data System (ADS)
Priyada, P.; Ashwini, U.; Sarkar, P. K.
2016-05-01
The feasibility of using a combined sample of borated polyethylene and normal polyethylene to estimate the neutron ambient dose equivalent from measured prompt gamma emissions is investigated theoretically, to demonstrate improvements in the low energy neutron dose response compared to polyethylene alone. Monte Carlo simulations have been carried out using the FLUKA code to calculate the response of boron, hydrogen and carbon prompt gamma emissions to monoenergetic neutrons. The weighted least-squares method is employed to arrive at the best linear combination of these responses that approximates the ICRP fluence-to-dose conversion coefficients well in the energy range of 10^-8 MeV to 14 MeV. The configuration of the combined system is optimized through FLUKA simulations. The proposed method is validated theoretically with five different workplace neutron spectra, with satisfactory outcomes.
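A minimal sketch of the fitting step described: choose weights on the boron, hydrogen and carbon prompt-gamma responses so that their linear combination approximates fluence-to-dose conversion coefficients in a weighted least-squares sense. All response numbers are synthetic, and the non-negativity constraint is an added assumption for physical plausibility, not stated in the abstract.

```python
# Sketch: weighted (non-negative) least-squares fit of a linear
# combination of detector responses to dose conversion coefficients.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(8)
n_energies = 40
R = rng.uniform(0.1, 1.0, size=(n_energies, 3))   # responses: B, H, C gammas
h = R @ np.array([0.6, 1.2, 0.3]) + rng.normal(0, 0.02, n_energies)  # target h*(10)

w = 1.0 / np.linspace(1.0, 4.0, n_energies)       # per-energy weights (assumed)
W = np.sqrt(np.diag(w))                           # row scaling implements the weights
coef, resid = nnls(W @ R, W @ h)                  # weighted non-negative LSQ
print(f"combination weights: {np.round(coef, 3)}")
```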
Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Park, Michael A.
2006-01-01
An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
The generation of gravitational waves. 2: The post-linear formalism revisited
NASA Technical Reports Server (NTRS)
Crowley, R. J.; Thorne, K. S.
1975-01-01
Two different versions of the Green's function for the scalar wave equation in weakly curved spacetime (one due to DeWitt and DeWitt, the other to Thorne and Kovacs) are compared and contrasted; and their mathematical equivalence is demonstrated. The DeWitt-DeWitt Green's function is used to construct several alternative versions of the Thorne-Kovacs post-linear formalism for gravitational-wave generation. Finally it is shown that, in calculations of gravitational bremsstrahlung radiation, some of our versions of the post-linear formalism allow one to treat the interacting bodies as point masses, while others do not.
Lin, Tiger W.; Das, Anup; Krishnan, Giri P.; Bazhenov, Maxim; Sejnowski, Terrence J.
2017-01-01
With our growing ability to record from more neurons simultaneously, making sense of these data is a challenge. Functional connectivity is one popular way to study the relationship between multiple neural signals. Correlation-based methods are a set of currently well-used techniques for functional connectivity estimation. However, due to explaining away and unobserved common inputs (Stevenson, Rebesco, Miller, & Körding, 2008), they produce spurious connections. The general linear model (GLM), which models spike trains as Poisson processes (Okatan, Wilson, & Brown, 2005; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005; Pillow et al., 2008), avoids these confounds. We develop here a new class of methods using differential signals based on simulated intracellular voltage recordings; it is equivalent to a regularized AR(2) model. We also extend the method to simulated local field potential recordings and calcium imaging. In all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals. PMID:28777719
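A minimal sketch of a differential-covariance-style estimator on toy traces; the published methods add regularization and sparsification steps that are omitted here, and all data below are synthetic:

import numpy as np

def differential_covariance(V, dt):
    """Connectivity estimate from the covariance between dV/dt and V.

    This is the core of the differential-signal idea; the paper relates it
    to a regularized AR(2) model, whose regularization is omitted here.
    """
    dV = np.gradient(V, dt, axis=1)
    Vc = V - V.mean(axis=1, keepdims=True)
    dVc = dV - dV.mean(axis=1, keepdims=True)
    return dVc @ Vc.T / V.shape[1]

rng = np.random.default_rng(1)
V = np.cumsum(rng.standard_normal((2, 5000)), axis=1)   # toy "voltage" traces
print(differential_covariance(V, dt=1e-3))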
Li, Dan; Jiang, Jia; Han, Dandan; Yu, Xinyu; Wang, Kun; Zang, Shuang; Lu, Dayong; Yu, Aimin; Zhang, Ziwei
2016-04-05
A new method is proposed for measuring antioxidant capacity by electron spin resonance spectroscopy, based on the loss of the electron spin resonance signal when Cu²⁺ is reduced to Cu⁺ by an antioxidant. Cu⁺ was removed by precipitation in the presence of SCN⁻. The remaining Cu²⁺ was coordinated with diethyldithiocarbamate, extracted into n-butanol and determined by electron spin resonance spectrometry. Eight standards widely used in antioxidant capacity determination, including Trolox, ascorbic acid, ferulic acid, rutin, caffeic acid, quercetin, chlorogenic acid, and gallic acid, were investigated. The standard curves for determining the eight standards were plotted, and the results showed that the linear regression correlation coefficients were all sufficiently high (r > 0.99). Trolox equivalent antioxidant capacity values for the antioxidant standards were calculated, and a good correlation (r > 0.94) between the values obtained by the present method and the cupric reducing antioxidant capacity method was observed. The present method was applied to the analysis of real fruit samples and the evaluation of the antioxidant capacity of these fruits.
Realization of the medium and high vacuum primary standard in CENAM, Mexico
NASA Astrophysics Data System (ADS)
Torres-Guzman, J. C.; Santander, L. A.; Jousten, K.
2005-12-01
A medium and high vacuum primary standard, based on the static expansion method, has been set up at Centro Nacional de Metrología (CENAM), Mexico. This system has four volumes and covers a measuring range of 1 × 10⁻⁵ Pa to 1 × 10³ Pa of absolute pressure. As part of its realization, a characterization was performed, which included volume calibrations, several tests and a bilateral key comparison. To determine the expansion ratios, two methods were applied: the gravimetric method and the method with a linearized spinning rotor gauge. The outgassing ratios for the whole system were also determined. A comparison was performed with the Physikalisch-Technische Bundesanstalt (comparison SIM-Euromet.M.P-BK3). By means of this comparison, a link has been achieved with the Euromet comparison (Euromet.M.P-K1.b). As a result, it is concluded that the value obtained at CENAM is equivalent to the Euromet reference value, and therefore the design, construction and operation of CENAM's SEE-1 vacuum primary standard were successful.
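The static expansion principle reduces to one line of arithmetic; a schematic calculation with invented stage ratios (not CENAM's calibrated values):

# Static expansion: gas at pressure p0 in a small volume V1 expands into an
# evacuated larger volume, giving p1 = p0 * V1 / (V1 + V2). Chaining calibrated
# stages reaches very low pressures traceably. The ratios below are invented.
def expand(p0, stage_ratios):
    for f in stage_ratios:       # f = V_small / (V_small + V_large) per stage
        p0 *= f
    return p0

print(expand(1.0e3, [1 / 100, 1 / 100, 1 / 50]))  # Pa -> 2e-3 Pa after three stages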
The dose-response of Harshaw TLD-700H.
Velbeck, K J; Luo, L Z; Ramlo, M J; Rotunda, J E
2006-01-01
Harshaw TLD-700H (⁷LiF:Mg,Cu,P) was previously characterised for low- to high-dose ranges from 1 microGy to 20 Gy. This paper describes the studies and results of dose-response and linearity at much higher doses. TLD-700H is a near perfect dosimetric material with near tissue equivalence, a flat energy response, and the ability to measure beta, gamma and X rays. These new results extend the applicability of Harshaw TLD-700H into more dosimetric measurement environments. The simple glow curve structure exhibits insignificant fade, eliminating the special oven preparation methods required by other materials. The work presented in this paper quantifies the performance of Harshaw TLD-700H in extended ranges.
Dwivedi, Prashant Povel; Choi, Hee Joo; Kim, Byoung Joo; Cha, Myoungsik
2013-12-16
Random duty-cycle errors (RDE) in ferroelectric quasi-phase-matching (QPM) devices not only affect the frequency conversion efficiency, but also generate non-phase-matched parasitic noise that can be detrimental to some applications. We demonstrate an accurate but simple method for measuring the RDE in periodically poled lithium niobate. Due to the equivalence between the undepleted harmonic generation spectrum and the diffraction pattern from the QPM grating, we employed linear diffraction measurement, which is much simpler than tunable harmonic generation experiments [J. S. Pelc, et al., Opt. Lett. 36, 864-866 (2011)]. As a result, we could relate the RDE for the QPM device to the relative noise intensity between the diffraction orders.
Coupled vibration of isotropic metal hollow cylinders with large geometrical dimensions
NASA Astrophysics Data System (ADS)
Lin, Shuyu
2007-08-01
In this paper, the coupled vibration of isotropic metal hollow cylinders with large geometrical dimensions is studied by using an approximate analytic method. According to this method, when the equivalent mechanical coupling coefficient, defined as the stress ratio, is introduced, the coupled vibration of a metal hollow cylinder is reduced to two equivalent one-dimensional vibrations: an equivalent longitudinal extensional vibration in the height direction of the cylinder, and an equivalent plane radial vibration in the radial direction. These two equivalent vibrations are coupled to each other by the equivalent mechanical coupling coefficient. The resonance frequency equation of metal hollow cylinders in coupled vibration is derived, and the longitudinal and radial resonance frequencies are computed. For comparison, the resonance frequencies of the hollow cylinders are also computed by using a numerical method. The analysis shows that the results from these two methods are in good agreement with each other.
Knight, Michael J.; Smith-Collins, Adam; Newell, Sarah; Denbow, Mark; Kauppinen, Risto A.
2017-01-01
Background and Purpose: Preterm birth is associated with worse neurodevelopmental outcome, but brain maturation in preterm infants is poorly characterised with standard methods. We evaluated white matter (WM) of infant brains at term-equivalent age, as a function of gestational age at birth, using multi-modal MRI. Methods: Infants born very pre-term (<32 weeks gestation) and late pre-term (33-36 weeks gestation) were scanned at 3T at term-equivalent age using diffusion tensor imaging (DTI) and T2 relaxometry. MRI data were analysed using tract-based spatial statistics, and anisotropy of T2 relaxation was also determined. Principal component analysis and linear discriminant analysis were applied to seek the variables best distinguishing the very pre-term and late pre-term groups. Results: Across widespread regions of WM, T2 is longer in very pre-term infants than in late pre-term ones. These effects are more prevalent in regions of WM which myelinate earlier and faster. Similar effects are obtained from DTI, showing that fractional anisotropy (FA) is lower and radial diffusivity higher in the very pre-term group, with a bias towards earlier myelinating regions. Discriminant analysis shows high sensitivity and specificity of combined T2 relaxometry and DTI for the detection of a distinct WM development pathway in very preterm infants. T2 relaxation is anisotropic, depending on the angle between the WM fibre and the magnetic field, and this effect is modulated by FA. Conclusions: Combined T2 relaxometry and DTI characterises specific patterns of retarded WM maturation, at term-equivalent age, in infants born very pre-term relative to late pre-term. PMID:29205635
Guevara, V R
2004-02-01
A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. The optimum metabolizable energy level and performance were found by using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function (income over feed cost) in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints, as sketched below. National Research Council nutrient requirements and feeding program were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices. Prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%. Formulations were identical in all other respects. Energy density, margin, and diet cost changed compared with the conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming to optimize performance response to energy density in broiler feed formulation because an energy level does not need to be set.
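A sketch of the two ideas in the abstract: a quadratic income-over-feed-cost objective in energy density, and a nutrient:energy ratio constraint rewritten as a linear one. All coefficients and functions below are invented for illustration, not the study's fitted equations:

import numpy as np
from scipy.optimize import minimize

# Hypothetical quadratic fit: income over feed cost M(x) vs energy density x.
def neg_margin(x):
    return -(-0.002 * x[0] ** 2 + 1.2 * x[0] - 50.0)   # negate: minimize = maximize

# A nutrient:energy ratio constraint  n(x)/x >= r  is equivalent to the
# linear constraint  n(x) - r*x >= 0  (the transformation cited above).
r = 0.35
def nutrient(x):          # hypothetical nutrient supply as a function of x
    return 0.5 * x + 10.0

cons = [{"type": "ineq", "fun": lambda x: nutrient(x[0]) - r * x[0]}]
res = minimize(neg_margin, x0=[290.0], bounds=[(250.0, 340.0)], constraints=cons)
print("optimum energy density:", res.x[0], "margin:", -res.fun)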
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klink, W.H.; Wickramasekara, S., E-mail: wickrama@grinnell.edu; Department of Physics, Grinnell College, Grinnell, IA 50112
2014-01-15
In previous work we have developed a formulation of quantum mechanics in non-inertial reference frames. This formulation is grounded in a class of unitary cocycle representations of what we have called the Galilean line group, the generalization of the Galilei group that includes transformations amongst non-inertial reference frames. These representations show that in quantum mechanics, just as is the case in classical mechanics, the transformations to accelerating reference frames give rise to fictitious forces. A special feature of these previously constructed representations is that they all respect the non-relativistic equivalence principle, wherein the fictitious forces associated with linear acceleration can equivalently be described by gravitational forces. In this paper we exhibit a large class of cocycle representations of the Galilean line group that violate the equivalence principle. Nevertheless the classical mechanics analogue of these cocycle representations all respect the equivalence principle. Highlights: •A formulation of Galilean quantum mechanics in non-inertial reference frames is given. •The key concept is the Galilean line group, an infinite dimensional group. •A large class of general cocycle representations of the Galilean line group is constructed. •These representations show violations of the equivalence principle at the quantum level. •At the classical limit, no violations of the equivalence principle are detected.
Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes
Zhang, Hong; Pei, Yun
2016-01-01
Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
NASA Technical Reports Server (NTRS)
Summers, Geoffrey P.; Burke, Edward A.; Shapiro, Philip; Statler, Richard; Messenger, Scott R.; Walters, Robert J.
1994-01-01
It has been found useful in the past to use the concept of 'equivalent fluence' to compare the radiation response of different solar cell technologies. Results are usually given in terms of an equivalent 1 MeV electron or an equivalent 10 MeV proton fluence. To specify cell response in a complex space-radiation environment in terms of an equivalent fluence, it is necessary to measure damage coefficients for a number of representative electron and proton energies. However, at the last Photovoltaic Specialists Conference we showed that nonionizing energy loss (NIEL) could be used to correlate damage coefficients for protons, using measurements for GaAs as an example. This correlation means that damage coefficients for all proton energies except near threshold can be predicted from a measurement made at one particular energy. NIEL is, for displacement damage, the exact analogue of linear energy transfer (LET) for ionizing energy loss. The use of NIEL in this way leads naturally to the concept of a 10 MeV equivalent proton fluence. The situation for electron damage is more complex, however. It is shown that the concept of 'displacement damage dose' gives a more general way of unifying damage coefficients. It follows that the 1 MeV electron equivalent fluence is a special case of a more general quantity for unifying electron damage coefficients which we call the 'effective 1 MeV electron equivalent dose'.
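The correlation described implies a simple scaling rule: a proton fluence at energy E produces the same displacement damage as a 10 MeV fluence scaled by the NIEL ratio. A sketch with a made-up NIEL table (the real GaAs values must come from the literature):

import numpy as np

# Hypothetical NIEL table for protons in GaAs (values are placeholders).
E_tab = np.array([1.0, 3.0, 10.0, 30.0, 100.0])      # proton energy, MeV
niel_tab = np.array([9e-3, 4e-3, 2e-3, 1e-3, 5e-4])  # MeV cm^2 / g

def equivalent_10mev_fluence(phi, E):
    """10 MeV equivalent proton fluence: phi_eq = phi * NIEL(E) / NIEL(10 MeV)."""
    return phi * np.interp(E, E_tab, niel_tab) / np.interp(10.0, E_tab, niel_tab)

print(equivalent_10mev_fluence(1e11, 3.0))  # cm^-2; twice the damage per proton at 3 MeV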
Sertić, Josip; Kozak, Dražan; Samardžić, Ivan
2014-01-01
The values of the reaction forces in the boiler supports are the basis for the dimensioning of the load-bearing steel structure of a steam boiler. In this paper, the application of the method of equivalent stiffness of the membrane wall is proposed for the calculation of reaction forces. The method of equalizing displacements was applied as the method of homogenizing the membrane wall stiffness. Using the “Milano” boiler as an example, the reactions in the supports were calculated by the finite element method for the real geometry discretized with shell finite elements. A second calculation was performed under the assumption of ideally stiff membrane walls, and a third using the method of equivalent stiffness of the membrane wall. In the third case, the membrane walls are approximated by an equivalent orthotropic plate. The approximation of the membrane wall stiffness is achieved using the elasticity matrix of the equivalent orthotropic plate at the level of the finite element. The obtained results were compared, and the advantages of using the method of equivalent stiffness of the membrane wall for the calculation of reactions in the boiler supports were emphasized. PMID:24959612
Francisco, Fabiane Lacerda; Saviano, Alessandro Morais; Almeida, Túlia de Souza Botelho; Lourenço, Felipe Rebello
2016-05-01
Microbiological assays are widely used to estimate the relative potencies of antibiotics in order to guarantee the efficacy, safety, and quality of drug products. Despite the advantages of turbidimetric bioassays when compared to other methods, they have limitations concerning the linearity and range of the dose-response curve determination. Here, we propose the use of partial least squares (PLS) regression to overcome these limitations and to improve the prediction of relative potencies of antibiotics. Kinetic-reading microplate turbidimetric bioassays for apramycin and vancomycin were performed using Escherichia coli (ATCC 8739) and Bacillus subtilis (ATCC 6633), respectively. Microbial growth was measured as absorbance up to 180 and 300 min for the apramycin and vancomycin turbidimetric bioassays, respectively. Conventional dose-response curves (absorbances or area under the microbial growth curve vs. log of antibiotic concentration) showed significant regression; however, there were significant deviations from linearity. Thus, they could not be used for relative potency estimations. PLS regression allowed us to construct a predictive model for estimating the relative potencies of apramycin and vancomycin without over-fitting, and it improved the linear range of the turbidimetric bioassay. In addition, PLS regression provided predictions of relative potencies equivalent to those obtained from official agar diffusion methods. Therefore, we conclude that PLS regression may be used to estimate the relative potencies of antibiotics with significant advantages when compared to conventional dose-response curve determination. Copyright © 2016 Elsevier B.V. All rights reserved.
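A minimal sketch of the PLS idea: regress log concentration (a proxy for potency) on whole kinetic growth curves rather than a single endpoint. The data below are synthetic, and the preprocessing and validation in the paper are more involved:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 60)                     # kinetic reading times (h)
log_conc = rng.uniform(-1.0, 1.0, 48)             # log antibiotic concentration
# Synthetic growth curves: higher concentration -> slower absorbance rise.
X = np.exp(0.2 * np.outer(1.0 - 0.5 * log_conc, t))
X += 0.05 * rng.standard_normal(X.shape)

pls = PLSRegression(n_components=3).fit(X, log_conc)
print("training R^2:", pls.score(X, log_conc))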
Analytical parameters of the microplate-based ORAC-pyrogallol red assay.
Ortiz, Rocío; Antilén, Mónica; Speisky, Hernán; Aliaga, Margarita E; López-Alarcón, Camilo
2011-01-01
The analytical parameters of the microplate-based oxygen radicals absorbance capacity (ORAC) method using pyrogallol red (PGR) as probe (ORAC-PGR) are presented. In addition, the antioxidant capacity of commercial beverages, such as wines, fruit juices, and iced teas, is estimated. A good linearity of the area under the curve (AUC) versus Trolox concentration plots was obtained [AUC = (845 ± 110) + (23 ± 2) [Trolox, microM], R = 0.9961, n = 19]. QC experiments showed better precision and accuracy at the highest Trolox concentration (40 microM), with RSD and REC (recovery) values of 1.7 and 101.0%, respectively. When red wine was used as the sample, the method also showed good linearity [AUC = (787 ± 77) + (690 ± 60) [red wine, microL/mL]; R = 0.9926, n = 17], precision and accuracy, with RSD values from 1.4 to 8.3% and REC values that ranged from 89.7 to 103.8%. Additivity assays using solutions containing gallic acid and Trolox (or red wine) showed an additive protection of PGR given by the samples. Red wines showed higher ORAC-PGR values than white wines, while the ORAC-PGR index of fruit juices and iced teas presented great variability, ranging from 0.6 to 21.6 mM of Trolox equivalents. This variability was also observed for juices of the same fruit, showing the influence of the brand on the ORAC-PGR index. The ORAC-PGR methodology can be applied in a microplate reader with good linearity, precision, and accuracy.
Jain, Amit; Kuhls-Gilcrist, Andrew T; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen
2010-03-01
The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks.
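One simplified way to assemble a generalized MTF from its pieces; this additive primary/scatter cascade is a common textbook approximation, not necessarily the exact formulation used in the study, and all kernels and parameters below are hypothetical:

import numpy as np

def gmtf(f, mtf_det, mtf_focal, mtf_scatter, m, sf):
    """Simplified generalized MTF at detector-plane frequency f (cycles/mm).

    m:  geometric magnification (focal-spot blur enters scaled by (m-1)/m);
    sf: scatter fraction.
    """
    primary = (1.0 - sf) * mtf_det(f) * mtf_focal(f * (m - 1.0) / m)
    scatter = sf * mtf_det(f) * mtf_scatter(f)
    return primary + scatter

f = np.linspace(0.0, 10.0, 200)
g = gmtf(f,
         mtf_det=lambda f: np.abs(np.sinc(0.194 * f)),   # 194-micron pixel aperture
         mtf_focal=lambda f: np.exp(-(f / 3.0) ** 2),    # hypothetical focal spot
         mtf_scatter=lambda f: np.exp(-f / 0.2),         # hypothetical scatter kernel
         m=1.2, sf=0.3)
print(g[:3])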
Ionizing radiation measurements on LDEF: A0015 Free flyer biostack experiment
NASA Technical Reports Server (NTRS)
Benton, E. V.; Frank, A. L.; Benton, E. R.; Csige, I.; Frigo, L. A.
1995-01-01
This report covers the analysis of passive radiation detectors flown as part of the A0015 Free Flyer Biostack on LDEF (Long Duration Exposure Facility). LET (linear energy transfer) spectra and track density measurements were made with CR-39 and polycarbonate plastic nuclear track detectors. Measurements of total absorbed dose were carried out using thermoluminescent detectors. Thermal and resonance neutron dose equivalents were measured with LiF/CR-39 detectors. High energy neutron and proton dose equivalents were measured with fission foil/CR-39 detectors.
Investigation of Cepstrum Analysis for Seismic/Acoustic Signal Sensor Range Determination.
1981-01-01
distorted by transmission through a linear system. For example, the effect of multipath and reverberation may be modeled in terms of a signal that is ... called the short time averaged cepstrum. To derive some analytical expressions for short time averaged cepstrums we choose some functions of interest ... a linear process applied to the time series or any equivalent time function. Period: the amount of time required for one cycle of a time series. Saphe: [definition truncated in source].
2003-01-01
ambient conditions prior to testing. A masterbatch for hydrosilylation-curable model systems was prepared by combining 200 g of hexamethyldisilazane-treated ... fumed silica and 800 g of vinyl-terminated polydimethylsiloxane (equivalent weight = 4111). The masterbatch was combined with additional vinyl polymer ... followed by 10 ml of Karstedt's catalyst (10.9% Pt, 4.8 mmol Pt). The amounts of masterbatch, linear vinyl, linear hydride, and crosslinkable hydride ...
Agent based reasoning for the non-linear stochastic models of long-range memory
NASA Astrophysics Data System (ADS)
Kononovicius, A.; Gontis, V.
2012-02-01
We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. The stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, which provides a matching macroscopic description, serves as a microscopic justification of the earlier proposed stochastic model exhibiting power-law statistics.
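A sketch of a Kirman-type herding simulation with a state-dependent event rate standing in for the variable trading-activity time scale; the parameter values and the specific rate modulation are arbitrary illustrations, not the authors' calibration:

import numpy as np

rng = np.random.default_rng(3)
N, eps, h = 100, 0.02, 0.5          # agents, idiosyncratic switching, herding strength
n = N // 2                           # agents currently in one of the two states
t, T, path = 0.0, 1000.0, []

while t < T:
    x = n / N
    up = (N - n) * (eps + h * n / N)       # rate of one agent switching in
    dn = n * (eps + h * (N - n) / N)       # rate of one agent switching out
    total = up + dn
    rate = total * (0.1 + x * (1.0 - x))   # variable event time scale (illustrative)
    t += rng.exponential(1.0 / rate)       # Gillespie-style waiting time
    if rng.random() < up / total:
        n = min(n + 1, N)
    else:
        n = max(n - 1, 0)
    path.append((t, n / N))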
Kim, Changsun; Kim, Hansol
2017-12-09
The aim was to compare a point-of-care (POC) test using capillary blood obtained from skin puncture with conventional laboratory tests. In this study, which was conducted at the emergency department of a tertiary care hospital in April-July 2017, 232 patients were enrolled, and three types of blood samples (capillary blood from skin puncture, arterial and venous blood from blood vessel puncture) were simultaneously collected. Each blood sample was analyzed using a POC analyzer (epoc® system, USA), an arterial blood gas analyzer (pHOx®Ultra, Nova Biomedical, USA) and venous blood analyzers (AU5800, DxH2401, Beckman Coulter, USA). Twelve parameters were compared between the epoc and the reference analyzers, with an equivalence test, Bland-Altman plot analysis and linear regression employed to show the agreement or correlation between the two methods. The pH, HCO₃⁻, Ca²⁺, Na⁺, K⁺, Cl⁻, glucose, Hb and Hct measured by the epoc were equivalent to the reference values (95% confidence interval of the mean difference within the range of the agreement target), with clinically inconsequential mean differences and narrow limits of agreement. All of them, except pH, had clinically acceptable agreement between the two methods (results within target value ≥80%). Of the remaining three parameters (pCO₂, pO₂ and lactate), the epoc pCO₂ and lactate values were highly correlated with the reference device values, whereas pO₂ was not (pCO₂: R² = 0.824, y = -1.411 + 0.877·x; lactate: R² = 0.902, y = -0.544 + 0.966·x; pO₂: R² = 0.037, y = 61.6 + 0.431·x). Most parameters, with the sole exception of pO₂, measured by the epoc were equivalent to or correlated with those from the reference method. Copyright © 2017 Elsevier Inc. All rights reserved.
Obuchowski, N A
2001-10-15
Electronic medical images are an efficient and convenient format in which to display, store and transmit radiographic information. Before electronic images can be used routinely to screen and diagnose patients, however, it must be shown that readers have the same diagnostic performance with this new format as with traditional hard-copy film. Currently, there exist no suitable definitions of diagnostic equivalence. In this paper we propose two criteria for diagnostic equivalence. The first criterion ('population equivalence') considers the variability between and within readers, as well as the mean reader performance. This criterion is useful for most applications. The second criterion ('individual equivalence') involves a comparison of the test results for individual patients and is necessary when patients are followed radiographically over time. We present methods for testing both individual and population equivalence. The properties of the proposed methods are assessed in a Monte Carlo simulation study. Data from a mammography screening study are used to illustrate the proposed methods and compare them with results from more conventional methods of assessing equivalence and inter-procedure agreement. Copyright 2001 John Wiley & Sons, Ltd.
Guo, Ting; Winterburn, Julie L; Pipitone, Jon; Duerden, Emma G; Park, Min Tae M; Chau, Vann; Poskitt, Kenneth J; Grunau, Ruth E; Synnes, Anne; Miller, Steven P; Mallar Chakravarty, M
2015-01-01
The hippocampus, a medial temporal lobe structure central to learning and memory, is particularly vulnerable in preterm-born neonates. To date, segmentation of the hippocampus for preterm-born neonates has not yet been performed early-in-life (shortly after birth when clinically stable). The present study focuses on the development and validation of an automatic segmentation protocol that is based on the MAGeT-Brain (Multiple Automatically Generated Templates) algorithm to delineate the hippocampi of preterm neonates on their brain MRIs acquired at not only term-equivalent age but also early-in-life. First, we present a three-step manual segmentation protocol to delineate the hippocampus for preterm neonates and apply this protocol on 22 early-in-life and 22 term images. These manual segmentations are considered the gold standard in assessing the automatic segmentations. MAGeT-Brain, an automatic hippocampal segmentation pipeline, requires only a small number of input atlases and reduces the registration and resampling errors by employing an intermediate template library. We assess the segmentation accuracy of MAGeT-Brain in three validation studies, evaluate the hippocampal growth from early-in-life to term-equivalent age, and study the effect of preterm birth on the hippocampal volume. The first experiment thoroughly validates MAGeT-Brain segmentation in three sets of 10-fold Monte Carlo cross-validation (MCCV) analyses with 187 different groups of input atlases and templates. The second experiment segments the neonatal hippocampi on 168 early-in-life and 154 term images and evaluates the hippocampal growth rate of 125 infants from early-in-life to term-equivalent age. The third experiment analyzes the effect of gestational age (GA) at birth on the average hippocampal volume at early-in-life and term-equivalent age using linear regression. The final segmentations demonstrate that MAGeT-Brain consistently provides accurate segmentations in comparison to manually derived gold standards (mean Dice's kappa > 0.79 and Euclidean distance < 1.3 mm between centroids). Using this method, we demonstrate that the average volume of the hippocampus is significantly different (p < 0.0001) at early-in-life (621.8 mm³) and term-equivalent age (958.8 mm³). Using these differences, we generalize the hippocampal growth rate to 38.3 ± 11.7 mm³/week and 40.5 ± 12.9 mm³/week for the left and right hippocampi, respectively. Not surprisingly, younger gestational age at birth is associated with smaller volumes of the hippocampi (p = 0.001). MAGeT-Brain is capable of segmenting hippocampi accurately in preterm neonates, even early-in-life. Hippocampal asymmetry with a larger right side is demonstrated on early-in-life images, suggesting that this phenomenon has its onset in the 3rd trimester of gestation. Hippocampal volume assessed at early-in-life and term-equivalent age is linearly associated with GA at birth, whereby smaller volumes are associated with earlier birth.
Review of Recent Development of Dynamic Wind Farm Equivalent Models Based on Big Data Mining
NASA Astrophysics Data System (ADS)
Wang, Chenggen; Zhou, Qian; Han, Mingzhe; Lv, Zhan’ao; Hou, Xiao; Zhao, Haoran; Bu, Jing
2018-04-01
Recently, the big data mining method has been applied in dynamic wind farm equivalent modeling. In this paper, its recent development is reviewed, covering present research both domestic and overseas. Firstly, studies of wind speed prediction, equivalence and its distribution in the wind farm are summarized. Secondly, two typical approaches used in the big data mining method are introduced. For single wind turbine equivalent modeling, the focus is on how to choose and identify the equivalent parameters. For multiple wind turbine equivalent modeling, the focus is on the following three aspects: aggregation of different wind turbine clusters, identification of the parameters within the same cluster, and equivalence of the collector system. Thirdly, an outlook on the development of dynamic wind farm equivalent models in the future is given.
Equivalent Quantum Equations in a System Inspired by Bouncing Droplets Experiments
NASA Astrophysics Data System (ADS)
Borghesi, Christian
2017-07-01
In this paper we study a classical and theoretical system which consists of an elastic medium carrying transverse waves and one point-like region of high elastic medium density, called a concretion. We compute the equation of motion for the concretion as well as the wave equation of this system. Thereafter we consider only the case where the concretion is no longer the wave source. The concretion then obeys a general and covariant guidance formula, which leads in the low-velocity approximation to an equivalent de Broglie-Bohm guidance formula. The concretion then moves as if an equivalent quantum potential existed. A strictly equivalent free Schrödinger equation is retrieved, as well as the quantum stationary states in a linear or spherical cavity. We compute the energy (and momentum) of the concretion, naturally defined from the energy (and momentum) density of the vibrating elastic medium. Provided one condition on the amplitude of oscillation is fulfilled, it strikingly appears that the energy and momentum of the concretion are not only written in the same form as in quantum mechanics, but also encapsulate equivalent relativistic formulas.
NASA Astrophysics Data System (ADS)
Engdahl, N.
2017-12-01
Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward in time models and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent but that the linear-flood BIT models are reasonable approximations. Point based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in both the forward and backward models. This allows particles to enter low-velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of the existing Lagrangian methods.
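A sketch of the forward/backward particle random walk at the heart of such comparisons: reversing time flips the advective velocity while the diffusive step keeps the same sign, which is one way to see why the two directions are not exact mirror images. Uniform flow and all parameters are invented for illustration:

import numpy as np

rng = np.random.default_rng(4)

def random_walk(x0, v, D, dt, nsteps, backward=False):
    """Advective-diffusive particle walk; backward in time negates advection only."""
    x = np.full(10000, x0, dtype=float)
    s = -1.0 if backward else 1.0
    for _ in range(nsteps):
        x += s * v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)
    return x

fwd = random_walk(0.0, v=1.0, D=0.1, dt=0.01, nsteps=500)
bwd = random_walk(fwd.mean(), v=1.0, D=0.1, dt=0.01, nsteps=500, backward=True)
print(fwd.mean(), bwd.mean())   # the backward plume re-centres near the source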
Minkkinen, Mikko; Nieminen, Tuomo; Verrier, Richard L; Leino, Johanna; Lehtimäki, Terho; Viik, Jari; Lehtinen, Rami; Nikus, Kjell; Kööbi, Tiit; Turjanmaa, Väinö; Kähönen, Mika
2015-09-01
Exercise capacity, heart rate recovery and T-wave alternans are independent predictors of cardiovascular mortality. We tested whether these parameters contain supplementary prognostic information. A total of 3609 consecutive patients (2157 men) referred for a routine, clinically indicated bicycle exercise test were enrolled in the Finnish Cardiovascular Study (FINCAVAS). Exercise capacity was measured in metabolic equivalents, heart rate recovery as the decrease in heart rate from maximum to one minute post-exercise, and T-wave alternans by the time-domain Modified Moving Average method. During the 57-month median follow-up (interquartile range 35-78 months), 96 patients died of cardiovascular causes (primary endpoint) and 233 from any cause. All three parameters were independent predictors of cardiovascular mortality when analysed as continuous variables. Adding metabolic equivalents (p < 0.001), heart rate recovery (p = 0.002) or T-wave alternans (p = 0.01) to the linear model improved its predictive power for cardiovascular mortality. The combination of low exercise capacity (<6 metabolic equivalents), reduced heart rate recovery (≤12 beats/min) and elevated T-wave alternans (≥60 μV) yielded the highest hazard ratio for cardiovascular mortality of 16.5 (95% confidence interval 4.0-67.7, p < 0.001). Harrell's C index was 0.719 (confidence interval 0.665-0.772) for cardiovascular mortality with previously defined cutpoints (<8 units for metabolic equivalents, ≤18 beats/min for heart rate recovery and ≥60 μV for T-wave alternans). The prognostic capacity of the clinical exercise test is enhanced by combined analysis of exercise capacity, heart rate recovery and T-wave alternans. © The European Society of Cardiology 2014.
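The combined criterion reported above reduces to three simple cutpoints; a sketch encoding them exactly as quoted in the abstract (this is a restatement, not a validated clinical tool):

def high_risk(mets, hrr_bpm, twa_uv):
    """Combination reported above: exercise capacity < 6 METs,
    heart rate recovery <= 12 beats/min, T-wave alternans >= 60 microV."""
    return mets < 6 and hrr_bpm <= 12 and twa_uv >= 60

print(high_risk(5.0, 10, 72))   # True: all three adverse markers present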
Linear decentralized systems with special structure. [for twin lift helicopters
NASA Technical Reports Server (NTRS)
Martin, C. F.
1982-01-01
Certain fundamental structures associated with linear systems having internal symmetries are outlined. It is shown that the theory of finite-dimensional algebras and their representations are closely related to such systems. It is also demonstrated that certain problems in the decentralized control of symmetric systems are equivalent to long-standing problems of linear systems theory. Even though the structure imposed arose in considering the problems of twin-lift helicopters, any large system composed of several identical intercoupled control systems can be modeled by a linear system that satisfies the constraints imposed. Internal symmetry can be exploited to yield new system-theoretic invariants and a better understanding of the way in which the underlying structure affects overall system performance.
Comparison of commercial systems for extraction of nucleic acids from DNA/RNA respiratory pathogens.
Yang, Genyan; Erdman, Dean E; Kodani, Maja; Kools, John; Bowen, Michael D; Fields, Barry S
2011-01-01
This study compared six automated nucleic acid extraction systems and one manual kit for their ability to recover nucleic acids from human nasal wash specimens spiked with five respiratory pathogens, representing Gram-positive bacteria (Streptococcus pyogenes), Gram-negative bacteria (Legionella pneumophila), DNA viruses (adenovirus), segmented RNA viruses (human influenza virus A), and non-segmented RNA viruses (respiratory syncytial virus). The robots and kit evaluated represent major commercially available methods that are capable of simultaneous extraction of DNA and RNA from respiratory specimens, and included platforms based on magnetic-bead technology (KingFisher mL, Biorobot EZ1, easyMAG, KingFisher Flex, and MagNA Pure Compact) or glass fiber filter technology (Biorobot MDX and the manual kit Allprep). All methods yielded extracts free of cross-contamination and RT-PCR inhibition. All automated systems recovered L. pneumophila and adenovirus DNA equivalently. However, the MagNA Pure protocol demonstrated more than 4-fold higher DNA recovery from the S. pyogenes than other methods. The KingFisher mL and easyMAG protocols provided 1- to 3-log wider linearity and extracted 3- to 4-fold more RNA from the human influenza virus and respiratory syncytial virus. These findings suggest that systems differed in nucleic acid recovery, reproducibility, and linearity in a pathogen specific manner. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuzmina, L.K.
The research deals with different aspects of mathematical modelling and the analysis of complex dynamic non-linear systems arising from applied problems in mechanics (in particular, gyrosystems, stabilization and orientation systems, and control systems of movable objects, including aviation and aerospace systems). The non-linearity, multi-connectedness and high dimensionality of the dynamical problems that occur in the initial full statement lead to the need to narrow the problem and to decompose the full model, while preserving its main properties and qualitative equivalence. The elaboration of regular methods for modelling problems in dynamics and the generalization of the reduction principle are the main aims of the investigations. Here, a uniform methodology based on Lyapunov's methods, founded by N. G. Chetayev, is developed. The objects of the investigations are treated from a distinctive standpoint, as systems of the singularly perturbed class, regarded as systems with singular parametrical perturbations. This is the natural extension of the statements of N. G. Chetayev and P. A. Kuzmin on parametrical stability. In the paper, systematic procedures for the construction of correct simplified models (comparison models) are developed, the validity conditions for the transition are determined, estimates are derived, and regular algorithms at an engineering level are obtained. As applied to stabilization and orientation systems with gyroscopic controlling subsystems, these methods make it possible to build a hierarchical sequence of admissible simplified models and to determine the conditions of their correctness.
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
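A sketch of the "explode the data" step that makes the equivalence usable: each survival time is split over the pieces of the baseline hazard, with log exposure time entering as the offset of a Poisson model. The mixed-model fit itself, with a log-normal frailty per cluster, would then be done in GLMM software such as the authors' %PCFrailty SAS macro; everything here is illustrative:

import numpy as np
import pandas as pd

cuts = np.array([0.0, 1.0, 2.0, 4.0])    # piece boundaries of the baseline hazard

def explode(time, event, cuts):
    """Split each survival time over the hazard pieces; log exposure is the offset."""
    rows = []
    for i, (t, d) in enumerate(zip(time, event)):
        for j in range(len(cuts) - 1):
            a, b = cuts[j], cuts[j + 1]
            if t <= a:
                break
            exposure = min(t, b) - a
            rows.append({"id": i, "piece": j, "offset": np.log(exposure),
                         "y": int(bool(d) and t <= b)})
    return pd.DataFrame(rows)

print(explode(time=[0.7, 3.2], event=[1, 0], cuts=cuts))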
van der Voet, Hilko; Goedhart, Paul W; Schmidt, Kerstin
2017-11-01
An equivalence testing method is described to assess the safety of regulated products using relevant data obtained in historical studies with reference products assumed to be safe. The method is illustrated using data from a series of animal feeding studies with genetically modified and reference maize varieties. Several criteria for quantifying equivalence are discussed, and study-corrected distribution-wise equivalence is selected as being appropriate for the example case study. An equivalence test is proposed based on a high probability of declaring equivalence in a simplified situation, where there is no between-group variation, where the historical and current studies have the same residual variance, and where the current study is assumed to have a sample size as set by a regulator. The method makes use of generalized fiducial inference methods to integrate uncertainties from both the historical and the current data. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Parvini, T. S.; Tehranchi, M. M.; Hamidi, S. M.
2017-07-01
An effective method is proposed to design finite one-dimensional photonic crystal cavities (PhCCs) as robust, highly efficient frequency converters. For this purpose, we consider two groups of PhCCs which are constructed by stacking m nonlinear (LiNbO3) and n linear (air) layers with variable thicknesses. In the first group, the number of linear layers is less than the number of nonlinear layers by one, and in the second group by two. The conversion efficiency is calculated as a function of the arrangement and thicknesses of the linear and nonlinear layers using the nonlinear transfer matrix method. Our numerical simulations show that for each group of PhCCs, there is a structural formula by which the configurations with the highest efficiency can be constructed for any values of m and n (i.e. any number of layers). The efficient configurations are equivalent to Fabry-Pérot cavities that depend on the relationship between m and n, and the mirrors on the two sides of these cavities can be periodic or nonperiodic. The conversion efficiencies of these designed PhCCs are more than 5 orders of magnitude higher than those of the perfect periodic structures which satisfy the photonic bandgap edge and quasi-phase-matching conditions. Moreover, the results reveal that the conversion efficiencies of Fabry-Pérot cavities with non-periodic mirrors are one order of magnitude higher than those with periodic mirrors. The major physical mechanisms of the enhancement are the quasi-phase-matching effect, the cavity effect induced by dispersive mirrors, and double resonance for the pump and harmonic fields in the defect state. We believe that this method is very beneficial to the design of highly efficient compact optical frequency converters.
NASA Astrophysics Data System (ADS)
Zheng, Youqi; Choi, Sooyoung; Lee, Deokjung
2017-12-01
A new approach based on the method of characteristics (MOC) is proposed to solve the neutron transport equation. A new three-dimensional (3D) spatial discretization is applied to avoid the instability issue of the transverse leakage iteration of the traditional 2D/1D approach. In this new approach, the axial and radial variables are discretized in two different ways: a linear expansion is performed in the axial direction, and the 3D solution of the angular flux is then transformed into a planar solution for the 2D angular expansion moments, which are solved by planar MOC sweeping. Based on the boundary and interface continuity conditions, the 2D expansion moment solution is equivalently transformed into the solution of the axially averaged angular flux. Using the piecewise-averaged angular flux at the top and bottom surfaces of the 3D meshes, the planes are coupled to give the 3D angular flux distribution. The 3D CMFD linear system is established from the surface net currents of every 3D pin-mesh to accelerate the convergence of the power iteration. The STREAM code is extended to handle 3D problems based on the new approach. Several benchmarks are tested to verify its feasibility and accuracy, including 3D homogeneous and heterogeneous benchmarks. The computational sensitivity is discussed. The results show good accuracy in all tests. With the CMFD acceleration, the convergence is stable. In addition, a pin-cell problem with a void gap is calculated. This shows the advantage compared to the traditional 2D/1D MOC methods.
NASA Astrophysics Data System (ADS)
Petric, Martin Peter
This thesis describes the development and implementation of a novel method for the dosimetric verification of intensity modulated radiation therapy (IMRT) fields with several advantages over current techniques. Through the use of a tissue equivalent plastic scintillator sheet viewed by a charge-coupled device (CCD) camera, this method provides a truly tissue equivalent dosimetry system capable of efficiently and accurately performing field-by-field verification of IMRT plans. This work was motivated by an initial study comparing two IMRT treatment planning systems. The clinical functionality of BrainLAB's BrainSCAN and Varian's Helios IMRT treatment planning systems were compared in terms of implementation and commissioning, dose optimization, and plan assessment. Implementation and commissioning revealed differences in the beam data required to characterize the beam prior to use with the BrainSCAN system requiring higher resolution data compared to Helios. This difference was found to impact on the ability of the systems to accurately calculate dose for highly modulated fields, with BrainSCAN being more successful than Helios. The dose optimization and plan assessment comparisons revealed that while both systems use considerably different optimization algorithms and user-control interfaces, they are both capable of producing substantially equivalent dose plans. The extensive use of dosimetric verification techniques in the IMRT treatment planning comparison study motivated the development and implementation of a novel IMRT dosimetric verification system. The system consists of a water-filled phantom with a tissue equivalent plastic scintillator sheet built into the top surface. Scintillation light is reflected by a plastic mirror within the phantom towards a viewing window where it is captured using a CCD camera. Optical photon spread is removed using a micro-louvre optical collimator and by deconvolving a glare kernel from the raw images. Characterization of this new dosimetric verification system indicates excellent dose response and spatial linearity, high spatial resolution, and good signal uniformity and reproducibility. Dosimetric results from square fields, dynamic wedged fields, and a 7-field head and neck IMRT treatment plan indicate good agreement with film dosimetry distributions. Efficiency analysis of the system reveals a 50% reduction in time requirements for field-by-field verification of a 7-field IMRT treatment plan compared to film dosimetry.
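The glare-removal step mentioned above is, in essence, a deconvolution of the camera image by a measured glare kernel; a bare-bones frequency-domain version with a small regularization term (the kernel, sizes and constant are invented, and the thesis's exact procedure may differ):

import numpy as np

def remove_glare(image, glare_kernel, eps=1e-3):
    """Deconvolve a centred, image-sized glare kernel from a CCD dose image.

    Plain regularized Fourier division; the thesis pairs deconvolution with a
    micro-louvre collimator that suppresses optical photon spread physically.
    """
    F_img = np.fft.rfft2(image)
    F_ker = np.fft.rfft2(np.fft.ifftshift(glare_kernel))
    return np.fft.irfft2(F_img * np.conj(F_ker) / (np.abs(F_ker) ** 2 + eps),
                         s=image.shape)

img = np.random.rand(64, 64)
ker = np.zeros((64, 64)); ker[32, 32] = 1.0   # identity kernel sanity check
print(np.allclose(remove_glare(img, ker, eps=0.0), img))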
Exact folded-band chaotic oscillator.
Corron, Ned J; Blakely, Jonathan N
2012-06-01
An exactly solvable chaotic oscillator with folded-band dynamics is shown. The oscillator is a hybrid dynamical system containing a linear ordinary differential equation and a nonlinear switching condition. Bounded oscillations are provably chaotic, and successive waveform maxima yield a one-dimensional piecewise-linear return map with segments of both positive and negative slopes. Continuous-time dynamics exhibit a folded-band topology similar to Rössler's oscillator. An exact solution is written as a linear convolution of a fixed basis pulse and a discrete binary sequence, from which an equivalent symbolic dynamics is obtained. The folded-band topology is shown to be dependent on the symbol grammar.
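A sketch of a hybrid oscillator of this general type: a linear ODE plus a switching condition, integrated guard-to-guard with event detection. The coefficients and the specific switching rule are illustrative choices, not necessarily the exact system in the paper:

import numpy as np
from scipy.integrate import solve_ivp

beta, omega = 0.1, 2 * np.pi   # illustrative growth rate and frequency
s = [1.0]                       # binary symbol driving the switching condition

def rhs(t, y):
    u, v = y
    return [v, 2 * beta * v - (omega ** 2 + beta ** 2) * (u - s[0])]

def guard(t, y):                # switching at waveform maxima (v = 0, falling)
    return y[1]
guard.terminal = True
guard.direction = -1.0

t, y, maxima = 0.0, [0.5, 0.1], []
for _ in range(200):            # integrate guard-to-guard, updating the symbol
    sol = solve_ivp(rhs, (t, t + 10.0), y, events=guard, max_step=0.01)
    if not sol.t_events[0].size:
        break
    t, y = sol.t_events[0][0], sol.y_events[0][0]
    maxima.append(y[0])
    s[0] = 1.0 if y[0] >= 0 else -1.0

# Successive maxima approximate the one-dimensional return map described above.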
NASA Technical Reports Server (NTRS)
Lee, F. C. Y.; Wilson, T. G.
1974-01-01
A family of four dc-to-square-wave LC tuned inverters is analyzed using singular-point analysis. Limit cycles and waveshape characteristics are given for three modes of oscillation: quasi-harmonic, relaxation, and discontinuous. An inverter in which avalanche breakdown of the transistor emitter-to-base junction occurs is discussed, and the starting characteristics of this family of inverters are presented. The LC tuned inverters are shown to belong to a family of inverters with a common equivalent circuit consisting of only three 'series' elements: a five-segment piecewise-linear current-controlled resistor, a linear inductor, and a linear capacitor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fager, Marcus, E-mail: Marcus.Fager@UPHS.UPenn.edu; Medical Radiation Physics, Stockholm University, Stockholm; Toma-Dasu, Iuliana
Purpose: The purpose of this study was to propose a proton treatment planning method that trades physical dose (D) for dose-averaged linear energy transfer (LET{sub d}) while keeping the radiobiologically weighted dose (D{sub RBE}) to the target the same. Methods and Materials: The target is painted with LET{sub d} by using 2, 4, and 7 fields aimed at the proximal segment of the target (split target planning [STP]). As the LET{sub d} within the target increases with increasing number of fields, D decreases to maintain the D{sub RBE} the same as in the conventional treatment planning method using beams treating the full target (full target planning [FTP]). Results: The LET{sub d} increased 61% for 2-field STP (2STP) compared to FTP, 72% for 4STP, and 82% for 7STP inside the target. This increase in LET{sub d} led to a decrease of D by 5.3 ± 0.6 Gy for 2STP, 4.4 ± 0.7 Gy for 4STP, and 5.3 ± 1.1 Gy for 7STP, keeping the D{sub RBE} at 90% of the volume (D{sub RBE,90}) the same as for FTP. Conclusions: LET{sub d} painting offers a method to reduce the prescribed dose at no cost to the biological effectiveness of the treatment.
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Wilson, J. W.
2003-01-01
The tissue equivalent proportional counter (TEPC) was intended to provide the energy absorbed from a radiation field together with an estimate of the corresponding linear energy transfer (LET), so that radiation quality could be evaluated and the result converted to dose equivalent. It was the recognition of the limitations in estimating LET that led to a new approach to dosimetry, microdosimetry, with its emphasis on the energy deposited in a small tissue volume as the driver of biological response and its defined quantity of lineal energy. In many circumstances the average lineal energy and the LET are closely related, which has provided a basis for estimating dose equivalent. In many other cases, however, the lineal energy is poorly related to LET, calling into question the usefulness of the TEPC as a general-purpose device. These relationships are examined in this paper.
A decentralized linear quadratic control design method for flexible structures
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1990-01-01
A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or the mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis (SCS) is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the SCS method is that it relieves the computational burden associated with dimensionality. In addition, SCS is a highly adaptable controller synthesis method for structures with varying configuration or varying mass and stiffness properties.
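As a rough illustration of the subcontroller design step, the sketch below solves one LQ problem for a reduced two-mode substructure model using SciPy's Riccati solver. The modal frequencies, damping, actuator placement, and weighting matrices are invented for illustration and are not part of the EAL/EIREC procedure itself.

```python
# Sketch: LQ subcontroller design for one reduced substructure.
# All model data and weights are assumptions for illustration.
import numpy as np
from scipy.linalg import solve_continuous_are

# Reduced substructure model x' = Ax + Bu (2 modes -> 4 states, 1 actuator).
w = np.array([1.0, 3.2])                       # modal frequencies (assumed)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.diag(w**2), -0.02 * np.eye(2)]])   # light modal damping
B = np.vstack([np.zeros((2, 1)), np.ones((2, 1))])

Q = np.eye(4)          # state weight (design choice)
R = np.array([[0.1]])  # control weight (design choice)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal feedback gain: u = -K x
```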
Approaches to linear local gauge-invariant observables in inflationary cosmologies
NASA Astrophysics Data System (ADS)
Fröb, Markus B.; Hack, Thomas-Paul; Khavkine, Igor
2018-06-01
We review and relate two recent complementary constructions of linear local gauge-invariant observables for cosmological perturbations in generic spatially flat single-field inflationary cosmologies. After briefly discussing their physical significance, we give explicit, covariant and mutually invertible transformations between the two sets of observables, thus resolving any doubts about their equivalence. In this way, we get a geometric interpretation and show the completeness of both sets of observables, while previously each of these properties was available only for one of them.
Application of Logic to Integer Sequences: A Survey
NASA Astrophysics Data System (ADS)
Makowsky, Johann A.
Chomsky and Schützenberger showed in 1963 that the sequence d_L(n), which counts the number of words of a given length n in a regular language L, satisfies a linear recurrence relation with constant coefficients in n; equivalently, the generating function g_L(x) = Σ_n d_L(n) xⁿ is a rational function. In this talk we survey results concerning sequences a(n) of natural numbers which satisfy linear recurrence relations over ℤ or ℤ_m, and …
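A minimal sketch of the transfer-matrix argument behind the Chomsky-Schützenberger result: for a DFA with transition-count matrix M, d_L(n) = uᵀMⁿv, so d_L(n) obeys the linear recurrence given by M's characteristic polynomial. The example language (binary strings with no two consecutive 1s) is an assumption chosen for illustration.

```python
# Sketch: counting words of length n in a regular language via the DFA
# transfer matrix; the resulting counts satisfy a linear recurrence.
import numpy as np

# DFA states: 0 = last symbol was 0 (or start), 1 = last symbol was 1.
# M[i, j] = number of input symbols taking state i to state j.
M = np.array([[1, 1],    # from state 0: read 0 -> 0, read 1 -> 1
              [1, 0]])   # from state 1: read 0 -> 0, reading 1 rejects
start = np.array([1, 0])
accept = np.array([1, 1])          # both states accepting

d = [start @ np.linalg.matrix_power(M, n) @ accept for n in range(10)]
print(d)                           # 1, 2, 3, 5, 8, ... (shifted Fibonacci)
# char. poly of M is x^2 - x - 1, hence d(n) = d(n-1) + d(n-2):
assert all(d[n] == d[n - 1] + d[n - 2] for n in range(2, 10))
```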
Detection of Bioaerosols Using Single Particle Thermal Emission Spectroscopy (First-year Report)
2012-02-01
… cooled MCT detector with a noise equivalent power (NEP) of 7×10⁻¹³ W/Hz yields a detection S/N > 13 (assuming a sufficiently cooled background). … dispersively resolved using a 190-mm Horiba spectrometer that houses a time-gated 32-element mercury cadmium telluride (MCT) linear array. In this report … to 10.0 ms. Minimum integration (and readout) periods for the time-gated 32-element MCT linear array are 10 µs. …
Some New Results in Astrophysical Problems of Nonlinear Theory of Radiative Transfer
NASA Astrophysics Data System (ADS)
Pikichyan, H. V.
2017-07-01
In the interpretation of observed astrophysical spectra, nonlinear problems of radiative transfer play a decisive role, because processes of multiple interaction between the matter of the cosmic medium and intense exciting radiation occur ubiquitously in astrophysical objects and their vicinities. The exciting radiation changes the physical properties of the medium and is itself simultaneously modified, in a self-consistent manner, under the medium's influence. In the present report we show that the consistent application of the principle of invariance to the nonlinear problem of bilateral external illumination of a scattering/absorbing one-dimensional anisotropic medium of finite geometrical thickness allows simplifications that were previously considered the prerogative of linear problems only. The nonlinear problem is analyzed through three forms of the principle of invariance: (i) adding of layers, (ii) its limiting form, described by the differential equations of invariant imbedding, and (iii) a transition to the so-called functional equations of "Ambartsumyan's complete invariance". Thereby, as an alternative to the Boltzmann equation, a new type of equation, the so-called "kinetic equations of equivalence", is obtained. By introducing new functions, the so-called "linear images" of the solution of the nonlinear radiative transfer problem, the linear structure of the solution of the nonlinear problem is further revealed. Linear images make it possible to carry over naturally the statistical characteristics of the random walk of a "single quantum", or of a "beam of unit intensity", as well as the widely known "probabilistic interpretation of transfer phenomena", to the field of nonlinear problems. The structure of the equations obtained for determining the linear images is typical of linear problems.
NASA Astrophysics Data System (ADS)
Farmer, Jenny; Manning, Frances; Smith, Jo; Arn Teh, Yit
2017-04-01
The effects of drainage and deforestation of South East Asian peat swamp forests for the development of oil palm plantations have received considerable attention in both mainstream media and academia, and are the source of significant discussion and debate. However, data on the long-term carbon losses from these peat soils as a result of this land use change are still limited, and the methods with which to collect these data are still developing. Here we present the ongoing evolution and implementation of a method for separating autotrophic and heterotrophic respiration by sampling carbon dioxide emissions at increasing distance from palm trees. We present the limitations of the method, modelling approaches, and results from our studies. In 2011 we trialled this method in Sumatra, Indonesia, and collected rate measurements over a six-day period in three ages of oil palm. In the four-year-old site, thirteen collars had no roots present, and from these the peat-based carbon losses were recorded as 0.44 g CO2 m⁻² hr⁻¹ [0.34; 0.57] (equivalent to 39 t CO2 ha⁻¹ yr⁻¹ [30; 50]) with a mean water table depth of 0.40 m, or 63% of the measured total respiration across the plot. In the two older sites of six and seven years, only one collar out of 100 had no roots present, and thus a linear random-effects model was developed to calculate heterotrophic emissions at different distances from the palm tree. This model suggested that heterotrophic respiration was between 37-59% of total respiration in the six-year-old plantation and 39-56% in the seven-year-old plantation. In 2014 we applied this method to a seven-year-old plantation in Sarawak, Malaysia, modifying it to include the heterotrophic contribution from beneath frond piles and weed-covered areas. These results indicated peat-based carbon losses of 0.42 g CO2 m⁻² hr⁻¹ [0.27; 0.59] (equivalent to 37 t CO2 ha⁻¹ yr⁻¹ [24; 52]) at an average water table depth of 0.35 m, or 47% of the measured total respiration of the plot. We conclude that, despite a few limitations, it is possible to use a linear modelling approach to partition heterotrophic respiration from total respiration in oil palm plantations.
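A minimal sketch of the kind of linear random-effects model described above, fitted with statsmodels using plot as the random grouping factor. All column names, values, and the extrapolation step are synthetic assumptions, not the study's measurements or its exact model.

```python
# Sketch: soil respiration vs. distance from the palm, with a random
# intercept per plot; synthetic stand-in data, illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "distance_m": np.tile(np.repeat([0.5, 1.5, 2.5, 3.5], 10), 3),
    "plot": np.repeat(["p1", "p2", "p3"], 40),
})
# Flux declines with distance as root (autotrophic) contribution fades.
df["co2_flux"] = (0.4 + 0.15 * (4 - df["distance_m"]) / 4
                  + rng.normal(0, 0.05, len(df)))   # g CO2 m-2 hr-1, synthetic

m = smf.mixedlm("co2_flux ~ distance_m", df, groups=df["plot"]).fit()
# Heterotrophic estimate: predicted flux at the most root-free distance.
hetero = m.params["Intercept"] + m.params["distance_m"] * df["distance_m"].max()
```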
Wissmann, F; Reginatto, M; Möller, T
2010-09-01
The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of possible functional forms and parametrisations. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases of solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the evaluated dose rate. The resulting function agrees well with new measurements made at fixed geographic positions and during long-haul flights covering a wide range of latitudes.
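The sketch below fits one plausible member of the model family described above: a dose rate linear in barometric altitude and solar activity and exponential in vertical cut-off rigidity. The exact parametrisation and the synthetic data are assumptions, and ordinary least squares stands in for the paper's Bayesian/Monte Carlo treatment.

```python
# Sketch: fitting an assumed dose-rate model, linear in altitude h and solar
# index s, exponential in cut-off rigidity r. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def dose_rate(X, a0, a1, a2, b):
    h, s, r = X
    return (a0 + a1 * h + a2 * s) * np.exp(-b * r)

# Synthetic stand-in data (altitude km, solar index, rigidity GV).
rng = np.random.default_rng(0)
h = rng.uniform(8, 12, 500)
s = rng.uniform(0, 1, 500)
r = rng.uniform(0, 17, 500)
y = dose_rate((h, s, r), 4.0, 0.5, -1.0, 0.08) + rng.normal(0, 0.2, 500)

popt, pcov = curve_fit(dose_rate, (h, s, r), y, p0=(1, 0, 0, 0.1))
perr = np.sqrt(np.diag(pcov))   # crude (non-Bayesian) coefficient uncertainties
```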
Clearwater, Michael J; Luo, Zhiwei; Mazzeo, Mariarosaria; Dichio, Bartolomeo
2009-12-01
The external heat ratio method is described for measurement of low rates of sap flow in both directions through stems and other plant organs, including fruit pedicels, with diameters up to 5 mm and flows less than 2 g h⁻¹. Calibration was empirical, with heat pulse velocity (v_h) compared to gravimetric measurements of sap flow. In the four stem types tested (Actinidia sp. fruit pedicels, Schefflera arboricola petioles, Pittosporum crassifolium stems and Fagus sylvatica stems), v_h was linearly correlated with sap velocity (v_s) up to a v_s of approximately 0.007 cm s⁻¹, equivalent to a flow of 1.8 g h⁻¹ through a 3-mm-diameter stem. The minimum detectable v_s was approximately 0.0001 cm s⁻¹, equivalent to 0.025 g h⁻¹ through a 3-mm-diameter stem. Sensitivity increased with bark removal. Girdling had no effect on short-term measurements of in vivo sap flow, suggesting that phloem flows were too low to be separated from xylem flows. Fluctuating ambient temperatures increased variability in outdoor sap flow measurements. However, a consistent diurnal time-course of fruit pedicel sap flow was obtained, with flows towards 75-day-old kiwifruit lagging behind evaporative demand and peaking at 0.3 g h⁻¹ in the late afternoon.
NASA Astrophysics Data System (ADS)
Saha, Suman; Das, Saptarshi; Das, Shantanu; Gupta, Amitava
2012-09-01
A novel conformal-mapping-based fractional order (FO) methodology is developed in this paper for tuning existing classical (integer order) Proportional Integral Derivative (PID) controllers, especially for sluggish and oscillatory second-order systems. The conventional pole placement tuning via the Linear Quadratic Regulator (LQR) method is extended to open-loop oscillatory systems as well. The locations of the open-loop zeros of a fractional order PID (FOPID or PIλDμ) controller are approximated vis-à-vis an LQR-tuned conventional integer-order PID controller to achieve an equivalent integer-order PID control system. This approach eases the analog/digital realization of a FOPID controller with its integer-order counterpart while preserving the advantages of the fractional-order controller. It is shown that a decrease in the integro-differential operators of the FOPID/PIλDμ controller pushes the open-loop zeros of the equivalent PID controller towards regions of greater damping, tracing a trajectory of the controller zeros and dominant closed-loop poles. This trajectory is termed the "M-curve". This phenomenon is used to design a two-stage tuning algorithm which significantly reduces the existing PID controller's effort compared with a single-stage LQR-based pole placement method at a desired closed-loop damping and frequency.
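The following sketch shows the flavor of the integer-order LQR tuning stage: PID gains obtained from an LQR design on error dynamics augmented with an integral state, for an oscillatory second-order plant. The plant parameters, weights, and sign conventions are assumptions, and the fractional-order mapping stage of the paper is not shown.

```python
# Sketch: LQR-based PID gain selection for a second-order oscillatory plant,
# via an augmented integral-of-error state. Illustrative assumptions only.
import numpy as np
from scipy.linalg import solve_continuous_are

wn, zeta, k = 2.0, 0.2, 1.0            # oscillatory plant (assumed)
# Error dynamics with integral state: x = [int(e), e, de/dt].
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -wn**2, -2 * zeta * wn]])
B = np.array([[0.0], [0.0], [-k]])

Q = np.diag([10.0, 1.0, 1.0])          # state weights (design choice)
R = np.array([[0.5]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # LQR law u = -K x
Ki, Kp, Kd = (-K).ravel()              # PID gains: u = Ki*int(e) + Kp*e + Kd*de/dt
```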
A high speed model-based approach for wavefront sensorless adaptive optics systems
NASA Astrophysics Data System (ADS)
Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing
2018-02-01
To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a modal aberration by applying only one perturbation to the deformable mirror (one correction per perturbation); the mode set is reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO correction under various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which requires N perturbations of the deformable mirror for each aberration correction (one correction per N perturbations).
LP01 to LP11 mode convertor based on side-polished small-core single-mode fiber
NASA Astrophysics Data System (ADS)
Liu, Yan; Li, Yang; Li, Wei-dong
2018-03-01
An all-fiber LP01-LP11 mode convertor based on side-polished small-core single-mode fibers (SMFs) is numerically demonstrated. The linearly polarized incident beam in one arm experiences a π phase shift through a fiber half waveplate, and the side-polished parts merge into an equivalent twin-core fiber (TCF) which spatially shapes the incident LP01 modes into the LP11 mode supported by the step-index few-mode fiber (FMF). Optimum conditions for the highest conversion efficiency are investigated using the beam propagation method (BPM), with an efficiency as high as 96.7%. The proposed scheme can operate within a wide wavelength range from 1.3 μm to 1.7 μm with overall conversion efficiency greater than 95%. The effective mode area and coupling loss are also characterized in detail by the finite element method (FEM).
Modelling, analyses and design of switching converters
NASA Technical Reports Server (NTRS)
Cuk, S. M.; Middlebrook, R. D.
1978-01-01
A state-space averaging method for modelling switching dc-to-dc converters for both continuous and discontinuous conduction mode is developed. In each case the starting point is the unified state-space representation, and the end result is a complete linear circuit model, for each conduction mode, which correctly represents all essential features, namely, the input, output, and transfer properties (static dc as well as dynamic ac small-signal). While the method is generally applicable to any switching converter, it is extensively illustrated for the three common power stages (buck, boost, and buck-boost). The results for these converters are then easily tabulated owing to the fixed equivalent circuit topology of their canonical circuit model. The insights that emerge from the general state-space modelling approach lead to the design of new converter topologies through the study of generic properties of the cascade connection of basic buck and boost converters.
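State-space averaging is easy to reproduce for a concrete power stage. The sketch below averages the two switched topologies of an ideal buck-boost converter and recovers the dc operating point; the component values are illustrative assumptions, and parasitics and the small-signal ac expansion are omitted.

```python
# Sketch: state-space averaging for an ideal buck-boost converter.
# States x = [iL, vC], input Vg; topologies A1 (switch on) and A2 (switch
# off) are averaged with duty ratio D. Component values are illustrative.
import numpy as np

L, C, R, Vg, D = 1e-3, 100e-6, 10.0, 12.0, 0.4

A1 = np.array([[0, 0], [0, -1 / (R * C)]])        # on: source magnetizes L
B1 = np.array([[1 / L], [0]])
A2 = np.array([[0, 1 / L], [-1 / C, -1 / (R * C)]])  # off: L feeds C and R
B2 = np.array([[0], [0]])

A = D * A1 + (1 - D) * A2                         # averaged dynamics
B = D * B1 + (1 - D) * B2
X = -np.linalg.solve(A, B * Vg)                   # dc operating point
iL, vC = X.ravel()                                # vC = -D/(1-D)*Vg = -8 V here
```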
NASA Technical Reports Server (NTRS)
Martin, Carl J., Jr.
1996-01-01
This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors, written in FORTRAN, generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These structural sections were selected for use in aircraft design, but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase the execution speed and reduce the storage requirements. An approximate sensitivity update method is also included, which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine in a sequential linear programming procedure.
The solution of non-linear hyperbolic equation systems by the finite element method
NASA Technical Reports Server (NTRS)
Loehner, R.; Morgan, K.; Zienkiewicz, O. C.
1984-01-01
A finite-element method for the solution of nonlinear hyperbolic systems of equations, such as those encountered in non-self-adjoint problems of transient phenomena in convection-diffusion or in the mixed representation of wave problems, is developed and demonstrated. The problem is rewritten in moving coordinates and reinterpolated to the original mesh by a Taylor expansion prior to a standard Galerkin spatial discretization, and it is shown that this procedure is equivalent to the time-discretization approach of Donea (1984). Numerical results for sample problems are presented graphically, including such shallow-water problems as the breaking of a dam, the shoaling of a wave, and the outflow of a river; compressible flows such as the isothermal flow in a nozzle and the Riemann shock-tube problem; and the two-dimensional scalar-advection, nonlinear-shallow-water, and Euler equations.
Application of closed-form solutions to a mesh point field in silicon solar cells
NASA Technical Reports Server (NTRS)
Lamorte, M. F.
1985-01-01
A computer simulation method is discussed that provides equivalent simulation accuracy but exhibits significantly lower CPU running time per bias point compared to other techniques. This new method is applied to a mesh point field, as is customary in numerical integration (NI) techniques. The assumption of a linear approximation for the dependent variable, which is typically used in the finite difference and finite element NI methods, is not required. Instead, the set of device transport equations is applied to, and closed-form solutions are obtained for, each mesh point. The mesh point field is generated so that the coefficients in the set of transport equations exhibit small changes between adjacent mesh points. Application of this method to high-efficiency silicon solar cells is described, along with the treatment of Auger recombination, ambipolar considerations, built-in and induced electric fields, bandgap narrowing, carrier confinement, and carrier diffusivities. Bandgap narrowing has been investigated using Fermi-Dirac statistics, and these results show that bandgap narrowing is more pronounced, and temperature-dependent, in contrast to the results based on Boltzmann statistics.
NASA Astrophysics Data System (ADS)
Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo
2018-04-01
Simultaneous estimation of the asthenosphere's viscosity and of coseismic slip/afterslip is expected to substantially improve the consistency of estimation results with crustal deformation data collected at widely distributed observation points, compared to estimating the slips alone. Such an estimation can be formulated as a nonlinear inverse problem for the viscosity, a material property, and for an input force equivalent to the fault slips, based on large-scale finite-element (FE) modeling of crustal deformation in which the number of degrees of freedom is on the order of 10⁹. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enables the targeted estimation to be completed with a moderate amount of computational resources.
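The adjoint idea that makes such large-scale estimation affordable can be shown on a toy problem: one forward solve plus one adjoint solve yields the misfit gradient with respect to a material parameter, regardless of the parameter count. The dense 50×50 system below is a stand-in for the paper's roughly 10⁹-DOF finite-element problem, and the sensitivity matrix dA/dm is assumed known.

```python
# Sketch: adjoint-based gradient of a data misfit J(m) with A(m) u = f.
# Tiny dense stand-in for a large FE problem; illustrative only.
import numpy as np

rng = np.random.default_rng(4)
n = 50
A0 = 4 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
dA = 0.01 * rng.normal(size=(n, n))   # dA/dm (assumed known sensitivity)
f, d = rng.normal(size=n), rng.normal(size=n)

def misfit_and_grad(m):
    A = A0 + m * dA
    u = np.linalg.solve(A, f)            # forward problem
    lam = np.linalg.solve(A.T, u - d)    # adjoint problem
    J = 0.5 * np.sum((u - d) ** 2)
    dJdm = -lam @ (dA @ u)               # adjoint gradient formula
    return J, dJdm
```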
Pfannkoch, Edward A; Stuff, John R; Whitecavage, Jacqueline A; Blevins, John M; Seely, Kathryn A; Moran, Jeffery H
2015-01-01
National Oceanic and Atmospheric Administration (NOAA) Method NMFS-NWFSC-59 2004 is currently used to quantitatively analyze seafood for polycyclic aromatic hydrocarbon (PAH) contamination, especially following events such as the Deepwater Horizon oil rig explosion that released millions of barrels of crude oil into the Gulf of Mexico. This method has limited throughput capacity; hence, alternative methods are necessary to meet analytical demands after such events. Stir bar sorptive extraction (SBSE) is an effective technique for extracting trace PAHs from water, and the quick, easy, cheap, effective, rugged, and safe (QuEChERS) extraction strategy effectively extracts PAHs from complex food matrices. This study uses SBSE to concentrate PAHs and eliminate matrix interference from QuEChERS extracts of seafood, specifically oysters, fish, and shrimp. The method provides acceptable recoveries (65-138%) and linear calibrations, and is sensitive (LOD = 0.02 ppb, LOQ = 0.06 ppb), while providing higher throughput and maintaining equivalency with the NOAA 2004 method, as determined by analysis of NIST SRM 1974b mussel tissue.
Relativistic Ionization with Intense Linearly Polarized Light
NASA Astrophysics Data System (ADS)
Crawford, Douglas Plummer
The Strong Field Approximation (SFA) method is used to derive relativistic ionization rate expressions for ground state hydrogen-like atoms in the presence of an intense electromagnetic field. The emitted particle, which is initially bound to a hydrogen nucleus, is either an electron described by the Dirac equation, with spin effects fully included, or a spinless "electron" described by the Klein-Gordon equation. The derivations and subsequent calculations for both particles are made assuming a linearly polarized electromagnetic field which is monochromatic and which exhibits neither diffraction nor temporal dependence. From each of the relativistic ionization rate expressions, the corresponding expression in the nonrelativistic limit is derived. The resultant expressions are found to be equivalent to those derived using the SFA with the nonrelativistic formalism. This comparison provides the first check of the validity for the core results of this dissertation. Intensity-dependent ionization rates are then calculated for two ultraviolet frequencies using a numerical implementation of the derived expressions. Calculations of ionization rates and related phenomena demonstrate that there are negligible differences between relativistic and nonrelativistic predictions for low intensities. In addition, the differences in behavior between linearly and circularly polarized ionizing fields and between particles with and without spin are explored. The spin comparisons provide additional confidence in the derivations by showing negligible differences between ionization rates for Dirac and Klein -Gordon particles in strong linearly-polarized fields. Also of interest are the differential transition rates which exhibit dynamic profiles as the intensity is increased. This behavior is interpreted as an indication of more atomic influence for linearly polarized electromagnetic (em) fields than for circularly polarized em fields.
Flühs, Dirk; Flühs, Andrea; Ebenau, Melanie; Eichmann, Marion
2015-01-01
Background: Dosimetric measurements in small radiation fields with large gradients, such as eye plaque dosimetry with β or low-energy photon emitters, require dosimetrically almost water-equivalent detectors with volumes of <1 mm³ and linear responses over several orders of magnitude. Polyvinyltoluene-based scintillators fulfil these conditions and are hence a standard for such applications. However, they show disadvantages with regard to certain material properties and their dosimetric behaviour towards low-energy photons. Purpose, Materials and Methods: Polyethylene naphthalate, recently recognized as a scintillator, offers chemical, physical and basic dosimetric properties superior to polyvinyltoluene. Its general applicability as a clinical dosimeter, however, has not yet been shown. To prove this applicability, extensive measurements at several clinical photon and electron radiation sources, ranging from ophthalmic plaques to a linear accelerator, were performed. Results: For all radiation qualities under investigation, covering a wide range of dose rates, a linear detector response to dose was shown. Conclusion: Polyethylene naphthalate proved to be a suitable detector material for the dosimetry of ophthalmic plaques, including low-energy photon emitters and other small radiation fields. Due to its superior properties, it has the potential to replace polyvinyltoluene as the standard scintillator for such applications. PMID:27171681
Stability Results for Idealized Shear Flows on a Rectangular Periodic Domain
NASA Astrophysics Data System (ADS)
Dullin, Holger R.; Worthington, Joachim
2018-06-01
We present a new linearly stable solution of the Euler fluid flow on a torus. On a two-dimensional rectangular periodic domain [0, 2π) × [0, 2π/κ) for κ ∈ ℝ⁺, the Euler equations admit a family of stationary solutions given by the vorticity profiles Ω*(x) = Γ cos(p₁x₁ + κp₂x₂). We show linear stability for such flows when p₂ = 0 and κ ≥ |p₁| (equivalently, p₁ = 0 and κ|p₂| ≤ 1). The classical result due to Arnold is that for p₁ = 1, p₂ = 0 and κ ≥ 1 the stationary flow is nonlinearly stable via the energy-Casimir method. We show that for κ ≥ |p₁| ≥ 2, p₂ = 0 the flow is linearly stable, but one cannot expect a similar nonlinear stability result. Finally, we prove nonlinear instability for all steady states satisfying p₁² + κ²p₂² > 3(κ² + 1)/(4(7 − 4√3)). The modification and application of a structure-preserving Hamiltonian truncation is discussed for the anisotropic case κ ≠ 1. This leads to an explicit Lie-Poisson integrator for the approximate system, which is used to illustrate our analytical results.
Modeling of second order space charge driven coherent sum and difference instabilities
NASA Astrophysics Data System (ADS)
Yuan, Yao-Shuo; Boine-Frankenheim, Oliver; Hofmann, Ingo
2017-10-01
Second order coherent oscillation modes in intense particle beams play an important role for beam stability in linear or circular accelerators. In addition to the well-known second order even envelope modes and their instability, coupled even envelope modes and odd (skew) modes have recently been shown in [Phys. Plasmas 23, 090705 (2016), 10.1063/1.4963851] to lead to parametric instabilities in periodic focusing lattices with sufficiently different tunes. While that work used partly the usual envelope equations and partly particle-in-cell (PIC) simulation, we revisit these modes here and show that the complete set of second order even and odd mode phenomena can be obtained in a unifying approach by using a single set of linearized rms moment equations based on "Chernin's equations." This has the advantage that accurate information on growth rates can be obtained and gathered in a "tune diagram." In periodic focusing we retrieve the parametric sum instabilities of coupled even and of odd modes. The stop bands obtained from these equations are compared with results from PIC simulations for waterbag beams and found to show very good agreement. The "tilting instability" obtained in constant focusing confirms the equivalence of this method with the linearized Vlasov-Poisson system evaluated in second order.
Dual RBFNNs-Based Model-Free Adaptive Control With Aspen HYSYS Simulation.
Zhu, Yuanming; Hou, Zhongsheng; Qian, Feng; Du, Wenli
2017-03-01
In this brief, we propose a new data-driven model-free adaptive control (MFAC) method with dual radial basis function neural networks (RBFNNs) for a class of discrete-time nonlinear systems. The main novelty lies in providing a systematic design method for the controller structure through the direct use of I/O data, rather than a first-principles model or an offline-identified plant model. The controller structure is determined by the equivalent-dynamic-linearization representation of the ideal nonlinear controller, and the controller parameters are tuned using the pseudo-gradient information extracted from the I/O data of the plant, which makes it possible to deal with the unknown nonlinear system. The stability of the closed-loop control system and the stability of the training process for the RBFNNs are guaranteed by rigorous theoretical analysis. Meanwhile, the effectiveness and applicability of the proposed method are further demonstrated by a numerical example and an Aspen HYSYS simulation of a distillation column in a crude styrene production process.
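For context, the sketch below implements the classical compact-form dynamic-linearization MFAC loop (a pseudo-gradient estimate plus a control update) on a toy nonlinear plant. It omits the paper's dual-RBFNN controller-structure design, and all gains and the plant are assumptions for illustration.

```python
# Sketch: compact-form dynamic-linearization MFAC on a toy unknown plant.
# Gains, penalties, and the plant are illustrative assumptions.
import numpy as np

eta, mu, rho, lam = 0.5, 1.0, 0.6, 0.1   # step sizes / penalties (assumed)
phi, u_prev, du = 1.0, 0.0, 0.0          # pseudo-gradient and past input

def plant(y, u):
    return y / (1 + y**2) + u**3          # unknown nonlinear plant (toy)

y_prev, y, ref = 0.0, 0.0, 1.0            # constant setpoint
for k in range(200):
    dy = y - y_prev
    # Pseudo-gradient estimate from I/O increments.
    if abs(du) > 1e-8:
        phi += eta * du / (mu + du**2) * (dy - phi * du)
    # Control update driven by the tracking error.
    u = u_prev + rho * phi / (lam + phi**2) * (ref - y)
    du, u_prev = u - u_prev, u
    y_prev, y = y, plant(y, u)
```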
Pare, Guillaume; Mao, Shihong; Deng, Wei Q
2016-06-08
Despite considerable efforts, known genetic associations only explain a small fraction of predicted heritability. Regional associations combine information from multiple contiguous genetic variants and can improve variance explained at established association loci. However, regional associations are not easily amenable to estimation using summary association statistics because of sensitivity to linkage disequilibrium (LD). We now propose a novel method, LD Adjusted Regional Genetic Variance (LARGV), to estimate phenotypic variance explained by regional associations using summary statistics while accounting for LD. Our method is asymptotically equivalent to a multiple linear regression model when no interaction or haplotype effects are present. It has several applications, such as ranking of genetic regions according to variance explained or comparison of variance explained by two or more regions. Using height and BMI data from the Health Retirement Study (N = 7,776), we show that most genetic variance lies in a small proportion of the genome and that previously identified linkage peaks have higher than expected regional variance.
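A generic reconstruction of the core computation: with standardized genotypes, marginal summary effects b and LD matrix R give an LD-adjusted regional variance bᵀR⁻¹b, which matches multiple linear regression in the absence of interaction or haplotype effects. The simulation below is illustrative and is not the authors' exact LARGV estimator.

```python
# Sketch: LD-adjusted regional variance explained from summary statistics.
# Synthetic correlated genotypes; illustrative reconstruction only.
import numpy as np

rng = np.random.default_rng(1)
n, m = 5000, 20                            # samples, variants in the region
G = rng.normal(size=(n, m)) @ np.linalg.cholesky(
    0.3 * np.ones((m, m)) + 0.7 * np.eye(m)).T   # correlated genotypes
G = (G - G.mean(0)) / G.std(0)
beta_true = np.zeros(m)
beta_true[:3] = 0.15
y = G @ beta_true + rng.normal(size=n)
y = (y - y.mean()) / y.std()

b = G.T @ y / n                            # marginal (summary) effects
R = G.T @ G / n                            # LD matrix
var_explained = b @ np.linalg.solve(R, b)  # LD-adjusted regional variance
```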
NASA Technical Reports Server (NTRS)
Lee, Timothy J.; Langhoff, Stephen R. (Technical Monitor)
1995-01-01
The quality of fundamental vibrational frequencies determined using the CCSD(T) method (singles and doubles coupled-cluster theory plus a perturbational estimate of the effects of connected triple excitations) is shown to be very good, usually predicting band centers to within ±8 cm⁻¹. This approach is applied to several molecules of interest in atmospheric chemistry, such as HNO, cis-FONO, cis-ClONO, and ClOOH. The HNO molecule displays a large and unusual anharmonicity in the H-N stretch. For the calculation of ultraviolet (UV) spectra, the linear response CCSD (LRCCSD) approach (which is equivalent to EOM-CCSD) has been shown to yield vertical excitation energies that are accurate to approximately 0.1 eV for singly excited electronic states. This method, together with more approximate methods, is used to examine the UV spectra of several molecules important in stratospheric chemistry, including HOCl, Cl2O, ClONO2, HONO2, ClOOCl, ClOOH, and HOOH.
Code of Federal Regulations, 2010 CFR
2010-07-01
[Truncated table: reference and equivalent methods, manual or automated, for air monitoring of criteria pollutants (SO2, CO, O3, …), listing for each method the applicable 40 CFR part 50 appendix and the applicable subparts A-F of part 53.]
Research on the time-temperature-damage superposition principle of NEPE propellant
NASA Astrophysics Data System (ADS)
Han, Long; Chen, Xiong; Xu, Jin-sheng; Zhou, Chang-sheng; Yu, Jia-quan
2015-11-01
To describe the relaxation behavior of NEPE (Nitrate Ester Plasticized Polyether) propellant, we analyzed the equivalent relationships between time, temperature, and damage. We conducted a series of uniaxial tensile tests and employed a cumulative damage model to calculate the damage values for relaxation tests at different strain levels. The damage evolution curve of the tensile test at 100 mm/min was obtained through numerical analysis. Relaxation tests were conducted over a range of temperature and strain levels, and the equivalent relationship between time, temperature, and damage was deduced based on free volume theory. The equivalent relationship was then used to generate predictions of the long-term relaxation behavior of the NEPE propellant. Subsequently, the equivalent relationship between time and damage was introduced into the linear viscoelastic model to establish a nonlinear model which is capable of describing the mechanical behavior of composite propellants under a uniaxial tensile load. The comparison between model prediction and experimental data shows that the presented model provides a reliable forecast of the mechanical behavior of propellants.
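As an illustration of the superposition machinery being extended here, the sketch below applies classical WLF-type time-temperature shift factors to slide isothermal relaxation data onto a master curve. The constants are the commonly quoted "universal" WLF values, not fitted NEPE parameters, and the paper's damage-dependent shift is not modelled.

```python
# Sketch: time-temperature superposition with a WLF shift factor.
# Constants are illustrative assumptions, not fitted NEPE values.
import numpy as np

C1, C2, T_ref = 8.86, 101.6, 293.15   # "universal" WLF constants (assumed)

def log_aT(T):
    # WLF shift factor relative to the reference temperature T_ref (K).
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

t = np.logspace(-1, 3, 50)            # test times, s
master = {T: t / 10 ** log_aT(T)      # reduced times: each isotherm slides
          for T in (253.15, 293.15, 333.15)}   # onto the master curve
```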
Dust in a compact, cold, high-velocity cloud: A new approach to removing foreground emission
NASA Astrophysics Data System (ADS)
Lenz, D.; Flöer, L.; Kerp, J.
2016-02-01
Context. Because isolated high-velocity clouds (HVCs) are found at great distances from the Galactic radiation field and because they have subsolar metallicities, there have been no detections of dust in these structures. A key problem in this search is the removal of foreground dust emission. Aims: Using the Effelsberg-Bonn H I Survey and the Planck far-infrared data, we investigate a bright, cold, and clumpy HVC. This cloud apparently undergoes an interaction with the ambient medium and thus has great potential to form dust. Methods: To remove the local foreground dust emission we used a regularised, generalised linear model and we show the advantages of this approach with respect to other methods. To estimate the dust emissivity of the HVC, we set up a simple Bayesian model with mildly informative priors to perform the line fit instead of an ordinary linear least-squares approach. Results: We find that the foreground can be modelled accurately and robustly with our approach and is limited mostly by the cosmic infrared background. Despite this improvement, we did not detect any significant dust emission from this promising HVC. The 3σ-equivalent upper limit to the dust emissivity is an order of magnitude below the typical values for the Galactic interstellar medium.
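A minimal sketch of the foreground-removal idea: regress the far-infrared map on HI column-density templates of the local foreground components with a regularised linear model, then search the residual for HVC dust. The data shapes, ridge penalty, and synthetic maps are assumptions, and the sketch omits the paper's Bayesian emissivity fit.

```python
# Sketch: regularised linear foreground model for FIR emission.
# Synthetic stand-in maps; illustrative only.
import numpy as np
from sklearn.linear_model import Ridge

npix, ncomp = 10_000, 5
rng = np.random.default_rng(2)
NHI = rng.lognormal(size=(npix, ncomp))        # foreground HI components
fir = NHI @ rng.uniform(0.1, 0.5, ncomp) + rng.normal(0, 0.05, npix)

model = Ridge(alpha=1.0).fit(NHI, fir)         # regularised foreground fit
residual = fir - model.predict(NHI)            # search this for HVC dust
```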
Rigatos, Gerasimos G; Rigatou, Efthymia G; Djida, Jean Daniel
2015-10-01
A method for early diagnosis of parametric changes in intracellular protein synthesis models (e.g. the p53 protein - mdm2 inhibitor model) is developed with the use of a nonlinear Kalman filtering approach (Derivative-free nonlinear Kalman Filter) and of statistical change detection methods. The intracellular protein synthesis dynamic model is described by a set of coupled nonlinear differential equations. It is shown that such a dynamical system satisfies differential flatness properties, which allows it to be transformed, through a change of variables (diffeomorphism), to the so-called linear canonical form. For the linearized equivalent of the dynamical system, state estimation can be performed using the Kalman filter recursion. Moreover, by applying an inverse transformation based on the previous diffeomorphism, it also becomes possible to obtain estimates of the state variables of the initial nonlinear model. By comparing the output of the Kalman filter (which is assumed to correspond to the undistorted dynamical model) with measurements obtained from the monitored protein synthesis system, a sequence of differences (residuals) is obtained. Statistical processing of the residuals with χ² change detection tests can provide an indication, within specific confidence intervals, of parametric changes in the considered biological system, and consequently of the appearance of specific diseases (e.g. malignancies).
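The change-detection step can be sketched independently of the filter: sum the normalized squared Kalman innovations over a window and compare against a χ² threshold at a chosen confidence level. The scalar innovation model, window length, and threshold below are generic stand-ins, not the paper's p53-mdm2 setting.

```python
# Sketch: chi-square change detection on Kalman-filter residuals.
# Scalar innovations with known covariance; illustrative stand-in.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
window = 50
threshold = chi2.ppf(0.98, df=window)    # 98% confidence threshold

# Pretend these are filter innovations e_k with covariance S (scalar here).
S = 0.2
residuals_nominal = rng.normal(0.0, np.sqrt(S), window)   # healthy case
residuals_changed = rng.normal(0.3, np.sqrt(S), window)   # parametric change

for name, e in [("nominal", residuals_nominal), ("changed", residuals_changed)]:
    stat = np.sum(e**2 / S)              # sum of normalized squared residuals
    print(name, stat > threshold)        # True flags a parametric change
```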